net.wars: The elephant in the dark

by Wendy M Grossman | posted on 01 September 2006


John McCarthy has been trying to get Stanford to make a robotic pickpocket.

Yesterday, August 31, was the actual 50th anniversary of the first artificial intelligence conference, held at Dartmouth in 1956 and recently celebrated with a kind of rerun. And John McCarthy, who convened the original conference, spent yesterday giving a talk to a crowd of students at Imperial College, London, on challenges for machine learning – in particular, recounting some recent progress made with Stephen Muggleton and Ramon Otero on a puzzle he proposed in 1999.

Here is the puzzle, which expresses the problem of determining an underlying reality from an outward appearance.

Most machine learning research, he noted, has concerned the classification of appearance. But this isn't enough for a robot – or a human – to function in the real world. "Robots will have to infer relations between reality and appearance."

One of his examples was John Dalton's work developing atomic theory. "Computers need to be able to propose theories," he said – and later modify them according to new information. (Though I note that there are plenty of humans who are unable to do this and who will, despite all evidence and common sense to the contrary, cling desperately to their theories.)

Human common sense reasons in terms of those underlying realities, not just appearances. Some research suggests, for example, that babies are born with some understanding of the permanence of objects – that is, that when an object is hidden by a screen and reappears it is the same object.

Take, as McCarthy did, the simple (for a human) problem of identifying objects without being able to see them; his example was reaching into your pocket and correctly identifying and pulling out your Swiss Army knife (assuming you live in a country where it's legal to carry one). Or identifying the coin you want from a collection of similar coins. You have some idea of what the knife looks and feels like, and you choose the item by its texture and what you can feel of the shape.

McCarthy also cited an informal experiment in which people were asked to draw a statuette hidden in a paper bag – they could reach into the bag to feel the statuette but not see it. People can actually do this with little difference from what they would draw if they could see the object.

But, he said, "You never form an image of the contents of the pocket as a whole. You might form a list." This is where he revealed he has been trying to get Stanford to make a robotic pickpocket.

You can, of course, have a long argument about whether there is such a thing as any kind of objective reality. I've been reading a lot of Philip K. Dick lately, and he had robots that were indistinguishable from humans, even to themselves; yet in Dick's work reality is a fluid, subjective concept that can be disrupted and turned back on itself at any time. You can't trust reality.

But even if you – or philosophers in general – reject the notion of "reality" as a fundamental concept, "You may still accept the notion of relative reality for the design and debugging of robots." Seems a practical approach.

But the more important aspect may be the amount of pre-existing knowledge. "The common view," he said, "is that a computer should solve everything from scratch." His own view is that it's best to provide computers with "suitably formalised" common sense concepts – and that formalising context is a necessary step.
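
To give a flavour of what "suitably formalised" might mean – this is my gloss, not an example McCarthy gave in the talk – his later work on formalising context writes assertions in the form ist(c, p), read as "proposition p is true in context c". A hypothetical pocket-sized instance would be something like ist(MyTrousersToday, Contains(LeftPocket, SwissArmyKnife)): the fact is asserted only relative to a particular context, which the program can itself reason about and revise.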

For example: when you reach into your pocket you have some idea of what the contents are likely to be. Partly, of course, because you put them there. But you could make a reasonable guess even about other people's pockets, because you have some idea of the usual size of pockets and the kinds of things people are likely to put in them. We often call that "common sense", but a lot of common sense is experience. Other concepts have been built into human and most animal infants through evolution.

Although McCarthy never mentioned it, that puzzle and these other examples all remind me of the story of the elephant and the blind men, which I first came across in the writings of Idries Shah, who attributed it to the Persian poet Rumi. Depending on which piece of the elephant a blind man got hold of, he diagnosed the object as a fan (ear), pillar (leg), hose (trunk), or throne (back). It seems to me a useful analogy to explain why, 50 years on, human-level artificial intelligence still seems so far off. Computers don't have our physical advantages in interacting with the world.

An amusing sidelight seemed to reinforce that point. After the talk, there was some discussion of building the three-dimensional reality behind McCarthy's puzzle. The longer it went on, the more confused I got about what the others thought they were building; they insisted there was no difficulty in getting around the construction problem I had, which was how to make the underlying arcs turn one and only one stop in each direction. How do you make it stop? I asked. Turns out: they were building it mentally with Meccano. I was using cardboard circles with a hole and a fastener in the middle, and marking pens.

When I was a kid, girls didn't have Meccano. Though, I tell you, I'm going to get some now.


Wendy M. Grossman's Web site has an extensive archive of her books, articles, and music, and an archive of all the earlier columns in this series. Readers are welcome to post here, at the net.wars home page, follow on Twitter, or send email to netwars(at)skeptic.demon.co.uk (but please turn off HTML).