The confused history of AI - by Dr. Kunal Singh Berwar
Looking back at the history of AI, we can see that perhaps it began at the wrong end of the spectrum. If AI had been tackled logically, it would perhaps have begun as an artificial biology, looking at living things and asking, "Can we model these with machines?" The working hypothesis would have been that living things are physical systems, so let's try to see where the modeling takes us and where it breaks down. Artificial biology would look at the evolution of physical systems in general: development from infant to adult, self-organization, complexity, and so on. Then, as a subfield of that, would come a sort of artificial zoology that looks at sensorimotor behavior: vision and navigation; recognizing, avoiding, and manipulating objects; basic, pre-linguistic learning and planning; and the simplest forms of internal representations of external objects. And finally, as a further subfield of this, an artificial psychology that looks at human behavior, where we deal with abstract reasoning, language, speech and social culture, and all those philosophical conundrums like consciousness, free will and so forth.

That would have been a logical progression, and it is what should have happened. But what actually happened was that people took intelligence to mean the stuff that impresses us. Our peers are impressed by things like doing complex mathematics and playing a good game of chess. The ability to walk, in contrast, impresses no one. You can't say to your friends, "Look, I can walk", because your friends can walk too.

So all those problems that toddlers grapple with every day were seen as unglamorous, boring, and probably pretty easy anyway. The really hard problems, clearly, were things demanding abstract thought, like chess and mathematical theorem proving. Everyone ignored the animal and went straight to the human, and the adult human at that, not even the child. And this is what "AI" has come to mean: artificial adult human intelligence. But what has happened over the last 40-50 years, to the disappointment of all those who made breathless predictions about where AI would go, is that things such as playing chess have turned out to be incredibly easy for computers, whereas learning to walk and learning to get around in the world without falling over have proved unbelievably difficult.

And it is not as if we can ignore the latter skills and just carry on with human-level AI. It has proved very difficult to endow machines with "common sense", emotions and those other intangibles which seem to drive much intelligent human behavior, and it does seem that these may come more from our long history of interactions with the world and other humans than from any abstract reasoning and logical deduction. That is, the animal and child levels may be the key to making really convincing, well-rounded forms of intelligence, rather than the intelligence of chess-playing machines like Deep Blue, which are too easy to dismiss as "mindless".
