Behind Artificial Intelligence, a Squadron of Bright Real People

SAN FRANCISCO, Oct. 12 - The five robots that successfully navigated a 132-mile course in the Nevada desert last weekend demonstrated the re-emergence of artificial intelligence, a technology field that for decades has overpromised and underdelivered.

At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.

But the work of a small team of researchers at the Stanford Artificial Intelligence Laboratory is helping to restore credibility to the field. The team's winning robotic Volkswagen, named Stanley, covered the unpaved course in just 6 hours and 53 minutes without human intervention and guided only by global positioning satellite waypoints.

The feat, which won a $2 million prize from the Pentagon's Defense Advanced Research Projects Agency, was compared by exuberant Darpa officials to the Wright brothers' accomplishment at Kitty Hawk. And it was clearly not a fluke: twenty-two of the 23 vehicles that started this year went as far as or farther than the seven miles covered by the best vehicle last year.

The ability of the vehicles to complete a complex everyday task -- driving -- underscores how artificial intelligence may at last be moving beyond the research laboratory.

While artificial intelligence technology is already in use in telephone answering systems with speech recognition and in popular household gadgets like iRobot's Roomba vacuum cleaner, none of those systems approaches the ambition of Darpa's Grand Challenge road race.

This leap was possible, in large part, because researchers are moving away from an approach that relied principally on logic and rule-based systems and toward software technologies oriented around probability and statistics.
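The difference is easy to see in a toy example. The short Python sketch below is purely illustrative; the obstacle-height cutoff, the noise figure and the function names are assumptions invented for this article, not anything taken from the competing teams' software. The rule-based version commits to a hard yes-or-no threshold, while the statistical version averages several noisy readings and reports a probability.

    from math import erf, sqrt
    from statistics import mean

    def rule_based_is_obstacle(height_cm):
        # Symbolic, rule-based approach: one fixed cutoff, brittle when a
        # single reading happens to be noisy.
        return height_cm > 15.0

    def probabilistic_is_obstacle(height_readings_cm, noise_std_cm=5.0):
        # Statistical approach: average several noisy readings and report the
        # probability that the true height exceeds the cutoff, assuming
        # Gaussian measurement noise.
        avg = mean(height_readings_cm)
        std_err = noise_std_cm / sqrt(len(height_readings_cm))
        z = (avg - 15.0) / std_err
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    print(rule_based_is_obstacle(16.0))                                   # True, even if that one reading was noise
    print(round(probabilistic_is_obstacle([16.0, 9.0, 12.0, 14.0]), 2))   # about 0.18 -- probably not an obstacle

The probabilistic version degrades gracefully: one errant reading changes its confidence, not its verdict.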

"In the past A.I. has been dominated by symbolic systems and now the world is gray," said Terrence J. Sejnowski, head of the computational neurobiology laboratory at the Salk Institute in La Jolla, Calif. "That's what it's like to deal with the real world."

This crucial shift, Mr. Sejnowski said, "grew out of the recognition that the human brain is very good at this, why not have machines do the same thing?"

New artificial intelligence systems -- like that embodied in Stanley -- are now capable of evaluating a huge amount of data from sensors and then making probabilistic decisions.
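As a rough sketch of that style of reasoning, consider the following Python fragment. It is hypothetical, not drawn from the Stanford team's software, and the sensor-error rates and variable names are assumptions made for the example. It fuses repeated, noisy laser readings of a patch of ground into a single probability that the patch is drivable, using a Bayesian log-odds update.

    from math import exp, log

    # Assumed sensor model, invented for this example: a laser scan flags truly
    # undrivable ground as "rough" 80 percent of the time, but also mislabels
    # good ground as rough 10 percent of the time.
    P_ROUGH_IF_BAD = 0.8
    P_ROUGH_IF_GOOD = 0.1

    def log_odds(p):
        return log(p / (1.0 - p))

    def to_probability(l):
        return 1.0 / (1.0 + exp(-l))

    def update_drivability(prior_p_drivable, scans):
        # scans[i] is True when the i-th laser return reports the cell as rough.
        l = log_odds(prior_p_drivable)
        for rough in scans:
            if rough:
                # A rough reading shifts the odds toward "undrivable."
                l += log(P_ROUGH_IF_GOOD / P_ROUGH_IF_BAD)
            else:
                # A smooth reading shifts them back toward "drivable."
                l += log((1.0 - P_ROUGH_IF_GOOD) / (1.0 - P_ROUGH_IF_BAD))
        return to_probability(l)

    print(round(update_drivability(0.7, [True]), 2))                # about 0.23 after one rough return
    print(round(update_drivability(0.7, [True, False, False]), 2))  # about 0.86 -- smooth returns outweigh it

Applied across many patches of terrain, updates of this general flavor let a vehicle weigh conflicting evidence rather than trust any single measurement.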

"The prior opinion of many informed observers, based on decades of disappointing experimental results, was that the problems were so hard that they would remain unsolved for many decades yet," said Hans Moravec, a Carnegie Mellon University robotics researcher who was one of the nation's first developers of autonomous vehicles during the 1970's. "But now everyone knows differently," he said. "The interest, effort and investment in the broader field is sure to skyrocket."

The Stanford lab has long been at the forefront of A.I. research. The first autonomous vehicle, based on a vehicle salvaged from the NASA lunar landing program, was created at the lab and took its first baby steps in 1975. By the late 1970's, the robotic vehicle was capable of moving about two feet at a time in one-second spurts, pausing for half a minute to compute before attempting the next movement.

Until recently, progress in artificial intelligence lagged so far behind computing technology that some in the field talked about an "A.I. winter," after commercial and government funding evaporated in the mid-1980's.

Now there is talk about an A.I. spring among researchers like Sebastian Thrun, the director of the Stanford lab.

"The amount of journalistic interest and investor interest has fluctuated wildly," said John McCarthy, a pioneer in the field and now professor emeritus in the computer science department at Stanford University. "A.I. has continued all along, thanks to the interest among researchers and the continued support of government agencies, especially Darpa."

The enthusiasm is already spreading. Researchers point out that an obvious and powerful application for A.I. technology is in automobile safety systems.

"Any time you create a technology that has the potential of saving 20,000 to 30,000 lives in a year, one has to sit up and take notice," said Raj Reddy, a professor of computer science and robotics at Carnegie Mellon University. "If you look at automotive accidents in the United States, the repair bill is about $55 billion each year."

That potential is directly relevant to Volkswagen, the German car manufacturer that was one of the research sponsors of the Stanford team.

The company has put a high priority on what it refers to as driver-assistance systems, which are now capable of providing intelligent cruise control and lane-departure warnings, two systems that will be crucial for driver safety in coming years.

"We can take a lot of the approaches used in Stanley and adapt them," Sven Strohband, senior research engineer at the Volkswagen Electronics Research Laboratory in Palo Alto, Calif. "It's a nice fresh wind of ideas."

The public visibility of the Grand Challenge is a big boost for Darpa, but it may also raise questions about the agency's current funding strategy, which has shifted money away from universities and experimental projects and toward work that is classified or done through military contractors. The victory of the Stanford team is evidence of what a motivated team of scientific researchers can accomplish on a relatively small budget.

"This is consistent with the history of our field," said David A. Patterson, a computer scientist at the University of California, Berkeley, who is president of the Association for Computing Machinery. "This demonstrates the importance of the participation of government-funded academics."