One of the most profound questions of engineering, arguably, is whether we will ever create human-level consciousness in a machine. In the meantime, robots continue to take tiny little bot steps in the direction of faux humanity. Take Quasi, for instance, a robot dreamed up by Carnegie Mellon students that mimics the behavior of a 12-year-old boy [see "Heart of a New Machine" by Kim Krieger, in this issue]. Quasi's "moods" depend on what's been happening in his environment, but rather than being driven by prepubescent biology, they are architected by an elaborately scripted software-based behavioral model that triggers his responses. Quasi lets you know how he's "feeling" through the changing colors of his LED eyes and his body language.
Other technologies are emulating more straightforward human traits. In the 9 June issue of Science, Vivek Maheshwari and Ravi F. Saraf of the University of Nebraska-Lincoln described their invention of a sensor that could allow robots to perceive temperature, pressure, and texture with exquisite sensitivity. Their sensor can register pressures of about 10 kilopascals and distinguish surface features as small as 40 micrometers across--a sensitivity comparable to that of a human finger.
The Nebraska team is working on medical applications for the sensor. But it's the idea of covering portions of a robot's surface, particularly its "hands," with these sensors that's been making headlines.
Right now there are robots with increasingly sophisticated perceptual abilities and small behavioral repertoires operating in real-life environments. There are underwater vehicles that can map large swaths of sea bottom with total autonomy. There are computers chewing through big problems at blazing speeds. But we still seem to be far away from that moment when our computational devices become autonomous entities with minds and brains--or the machine equivalent--of their own.
People have speculated about such a moment for decades, and most recently, ideas surrounding the questions of whether and when machine intelligence could equal and then surpass our own biological braininess have been subsumed into something called the Singularity. Popularized by science-fiction author and computer scientist Vernor Vinge in a 1983 article in Omni magazine, it has its early roots in the ideas of such cyberneticists as John von Neumann and Alan Turing. Notions about the Singularity--when it will happen, how it will happen, what it means for human beings and human civilization--come in several flavors. Its best-known champions are roboticist Hans Moravec and computer scientist Raymond Kurzweil, who argue that when machine sapience kicks in, the era of human supremacy will be over. But it will be a good-news/bad-news situation: Moravec sees an era of indulgent leisure and an end to poverty and want; Kurzweil looks forward to uploading his brain into a computer memory and living on, in effect, indefinitely. But ultimately there's also a good chance we'll be booted off our little planet. Moravec goes so far as to predict that this massive machine intelligence will absorb the entire universe and everything in it, and that we will become part of the contents of this greater-than-human intelligence's infinite knowledge database.
How would it work? According to Vinge's vision, once computer performance and storage capacity rival those of animals--a phase we are beginning to enter--superhumanly intelligent machines capable of producing ever more intelligent machines will simply take over. This intellectual runaway, writes Vinge, "will probably occur faster than any technical revolution seen so far. The precipitating event will likely be unexpected--perhaps even to the researchers involved. ('But all our previous models were catatonic! We were just tweaking some parameters....') If networking is widespread enough (into ubiquitous embedded systems), it may seem as if our artifacts as a whole had suddenly wakened."
Some thinkers dismiss the Singularity as "the rapture of the nerds." Others believe it's just a matter of time. Picking up on the good-news/bad-news theme, the Institute for the Future's Paul Saffo has remarked: "If we have superintelligent robots, the good news is that they will view us as pets; the bad news is they will view us as food." What do you think? Write to us at
The editorial content of IEEE Spectrum does not represent official positions of the IEEE or its organizational units. Please address comments to Forum at