Chapter 2




David G. Stork: You -- along with John McCarthy, Claude Shannon, Nathaniel Rochester, and others -- are credited with founding the field of artificial intelligence (AI) at the famous Dartmouth conference in 1956. A decade later, in the mid-sixties, when Clarke and Kubrick began work on 2001 where was the field of AI? What were you trying to do?

Marvin Minsky: Well, we were trying to make intelligent machines. There were lots of good students working on interesting and important problems. In 1964 Tom Evans's program ANALOGY had excellent results on automated analogies -- you know, figure A is to figure B as figure C is to what ...

Jim Slagle wrote a program that could get an A on an MIT calculus exam. This is a tricky domain because, unlike simple arithmetic, to solve a calculus problem -- and in particular to perform integration -- you have to be smart about which integration technique should be used: integration by partial fractions, integration by parts, and so on. Around 1967 Dan Bobrow wrote a program to do algebra problems based on symbols rather than numbers.
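Symbolic integration of the kind Slagle's program tackled is now routine in computer-algebra systems. As an illustration only (using the modern SymPy library, not Slagle's original program), here is an integral whose solution requires choosing integration by parts, one of the techniques mentioned above:

```python
# Illustration: a modern computer-algebra system (SymPy) solving the kind
# of problem Slagle's program handled symbolically. Integrating x*exp(x)
# requires recognizing that integration by parts applies.
import sympy as sp

x = sp.Symbol("x")
antiderivative = sp.integrate(x * sp.exp(x), x)
print(antiderivative)  # (x - 1)*exp(x)
```

The hard part Minsky points to is not applying a technique but selecting the right one; SymPy internally dispatches among several integration strategies, much as Slagle's program had to.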

It was somewhat later, in 1974, that Eugene Charniak wrote a program to try to do word problems in algebra -- of the sort, "Mary bought two pounds of flour at fifty cents a pound, and, three days later, bought a pound of sugar at two dollars. How much did she spend altogether?" He found this extremely difficult, and basically his program didn't work.

Stork: Was this because of insufficiently sophisticated language understanding, or instead lack of common sense or world knowledge?

Minsky: World knowledge. For instance, in the flour problem, how would the computer know that Mary didn't buy three days? In fact, somewhat later, around 1970, Terry Winograd addressed a simple Blocks World environment made up of simple objects and actions -- "put the big block next to the smallest block," and so on -- and showed that one could have a fairly seamless transition from syntax to semantics. So language, as difficult a domain as it is, was not the obstacle to Charniak's program.

Stork: In the late sixties did you really think that toy world domains such as Blocks World captured all the essential aspects of intelligence?

Minsky: I did then, and I still do! In fact, I think it was the move away from such problems that is the main reason for lack of progress in AI. The problem is that in working on specific problems (such as chess, character recognition, and so on), there is no depth.

I think a key to AI is the need for several representations of the knowledge, such that when the system is stuck (using one representation) it can jump to use another. When David Marr at MIT moved into computer vision, he generated a lot of excitement, but he hit up against the problem of knowledge representation; he had no good representations for knowledge in his vision systems. Bit by bit, people recognized the severity of the knowledge-representation problem, but only Doug Lenat took it seriously enough to base a research program on it. What I find astonishing is that Lenat -- who is working on this now, in the nineties -- is still considered a pioneer. Doug renounced trying to make intelligence in a particular domain, and I think this is a huge advance. I think Lenat is headed in the right direction, but someone needs to include a knowledge base about learning.

If the group at SRI hadn't built Shakey, the first autonomous robot, we would have had more progress. Shakey should never have been built. There was a failure to recognize the deep problems in AI; for instance, those captured in Blocks World. The people building physical robots learned nothing.

Stork: Oh, but doesn't the physical world force you to confront problems you might otherwise try to sneak around or overlook -- for instance, friction?

Minsky: If it was an aspect of the problem that you overlooked in a simulation, then you would have overlooked it in a physical robot too. There was great effort expended building real walking machines, but we learned much more from pure simulations. For the real systems, you never knew if it worked or if it didn't work because a cloud went over the window and changed the lighting. When Marc Raibert realized that, and went into simulations of walking machines, then we started making progress. By the way, it was his simulations that helped out in Jurassic Park -- without them, there would have been only a few dinosaurs. Based on his techniques, Industrial Light and Magic could make whole herds of dinosaurs race across the screen.

Stork: Could you give a very broad overview of the techniques in AI?

Minsky: There are three basic approaches to AI: case-based, rule-based, and connectionist reasoning.

The basic idea in case-based reasoning, or CBR, is that the program has stored problems and their solutions. When a new problem comes up, the program tries to find a similar problem in its database by finding analogous aspects between the two. The difficulty is knowing which aspects of the new problem should match which aspects of a stored case, especially when some of the features are absent.
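The retrieval step can be sketched in a few lines. This is a minimal illustration under invented assumptions -- the cases, feature names, and overlap-counting similarity measure are all hypothetical, not drawn from any particular CBR system:

```python
# Minimal case-based reasoning retrieval sketch. Each stored case pairs a
# set of descriptive features with a remembered solution; retrieval picks
# the case that shares the most features with the new problem.
# (All cases and features here are invented for illustration.)
cases = [
    ({"engine", "no_start", "clicking"}, "replace battery"),
    ({"engine", "no_start", "fuel_smell"}, "check flooded carburetor"),
    ({"brakes", "squealing"}, "replace brake pads"),
]

def retrieve(new_problem_features):
    # Similarity = count of shared features. Features absent from a case
    # simply don't count, which is exactly the matching difficulty
    # Minsky describes.
    return max(cases, key=lambda case: len(case[0] & new_problem_features))

best_features, best_solution = retrieve({"engine", "no_start", "clicking", "cold"})
print(best_solution)  # replace battery
```

A real CBR system would also adapt the retrieved solution to the new situation; this sketch shows only the retrieval step the passage describes.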

In rule-based reasoning, or expert systems, the programmer enters a large number of rules. The problem here is that you cannot anticipate every possible input; it is extremely tricky to be sure your rules cover everything. Thus these systems often break down when unanticipated problems are presented -- they are very "brittle."
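The brittleness described here can be shown with a toy rule set (the rules and facts below are invented for illustration, not from any actual expert system): any input the rule author failed to anticipate produces no answer at all.

```python
# Toy rule-based (expert) system: a list of (condition, conclusion) rules
# scanned in order. Rules and facts are hypothetical.
rules = [
    (lambda facts: "fever" in facts and "cough" in facts, "possible flu"),
    (lambda facts: "rash" in facts, "possible allergy"),
]

def diagnose(facts):
    for condition, conclusion in rules:
        if condition(facts):
            return conclusion
    return None  # brittleness: no rule fires on an unanticipated input

print(diagnose({"fever", "cough"}))  # possible flu
print(diagnose({"dizziness"}))       # None -- no rule covers this case
```

The second call is the failure mode Minsky means: the system does not degrade gracefully but simply falls off the edge of its rule coverage.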

