A series of blog posts on the AI debate
I recently finished reading Norbert Wiener’s incredible book ‘Cybernetics.’ Interestingly, soon after that, Pankaj (my advisor) sent me an inspiring article by Michael I. Jordan in which he warned of the risks of letting “AI” serve as a placeholder term going forward. In it, Jordan referenced John McCarthy’s vision of intelligent systems grounded in logic, as well as his attempt to sideline the intellectual field known as ‘cybernetics,’ a term Wiener coined in 1948 to refer to
the scientific study of control and communication in the animal and the machine.
Jordan pointed out,
In an interesting reversal, it is Wiener’s intellectual agenda that has come to dominate in the current era under the banner of McCarthy’s terminology.
The confluence of ideas and technology trends has been rebranded as “AI” over the past few years. This rebranding is worthy of some scrutiny.
To be honest, I was thrilled by these bold yet lucid statements. I believe the narrative of an emergent silicon-based intelligence that rivals and challenges our intellectual superiority, gushed about mostly by befuddled academics and (potentially) malicious technocrats and venture capitalists, is, at least as far as I can extrapolate from current technology, fairly misguided and irresponsible. In a similar vein, I’d like to blog about my take on the matter. However, instead of diving right into the AI debate, I’d like to focus first on a few marginal yet mind-boggling points I stumbled upon while forming my opinion. Here’s the first one: the neural mechanism of idea formation.
Arguably, the association of ideas is one of the defining features of intelligence. We humans, for example, naturally and causally relate one idea to another in consciousness, and one standard way of doing so is by associating ideas according to certain principles. In this first blog post, I’ll focus on whether a neural mechanism can be assigned to John Locke’s theory of the association of ideas. Specifically, I’d like to invoke two ideologies that (most of) the fields of psychology and neuroscience giddily embrace. The first is the belief that once we decipher the structure of the brain and the biochemistry under which it operates, we will be close to unraveling how consciousness and cognition came into being. The second pertains to the promise of building a silicon-based prototype of ‘intelligence,’ one that mimics the human capacity for logic, learning, reasoning, self-awareness, and so on, with the profound ambition of comprehending human cognition and harnessing the resulting technology to solve problems. I’ll argue that even if both beliefs develop pari passu, they are still not compatible with Locke’s theory. To see why, we first need to talk about Locke’s theory of association and why it is pertinent to our discussion of intelligence.
According to Locke, the association of ideas operates according to three principles: contiguity, similarity, and cause and effect. Since he (and, even more emphatically, Hume) later reduced the third principle to nothing more than constant conjunction, thereby folding it into the first, the following discussion will focus on the first two: contiguity and similarity.