By now it's been all over the news that IBM has received DARPA funding to create "cognitive" computers-- "systems that simulate the human brain's abilities for sensation, perception, action, interaction and cognition."
DARPA's brain-on-a-chip venture, called Systems of Neuromorphic Adaptive Plastic Scalable Electronics ("SyNAPSE"), seeks to "develop a brain-inspired electronic 'chip' that mimics the function, size, and power consumption of a biological cortex."
To what end? They want something that can quickly analyze massive amounts of data. "For example," the press release explains, "bankers must make split-second decisions based on constantly changing data that flows at an ever-dizzying rate."
So... what are we building here? A brain in a shoebox that can analyze the world's financial data?
"No," mulls Jim Olds, a neuroscientist at George Mason University, where researchers are also competing for a spot on the SyNAPSE team. "But if we did, it would probably do a lot better than Hank Paulson."
What DARPA is building is easily confused with any number of other projects that trumpet their intention to "reverse engineer the brain." Misunderstandings are inevitable. The New York Times, for example, described the effort as "the quest to engineer the mind by reverse-engineering the brain."
Perhaps the NYT was thinking of Blue Brain, because that description doesn't fit SyNAPSE. (But given that the lead investigator for IBM/SyNAPSE is the same person who has been involved with IBM's other Blue Gene brain effort, the confusion is understandable.)
The SyNAPSE project is almost exactly the opposite of engineering a mind. Instead of a neuroscience basic research effort, SyNAPSE is an applied physics endeavor that seeks to cherry-pick only the most useful elements of the brain--and that most certainly does not include the mind or consciousness--and use that to augment a machine.
DARPA has distanced itself from morally thorny projects that look into psychology-based and neurobiology-based cognitive architectures.
But they do want to cull the finer qualities of cognition and use them to make smarter machines.
Jim Olds was kind enough to take me through some possible applications.
A lot of soldiers are getting killed in Iraq because of bombs blowing up under their trucks. "It'd be great if instead of soldiers driving the trucks, the trucks drove themselves," Olds says, "like what they were trying to do with the DARPA Grand Challenge." Sadly, though, the Grand Challenge didn't work out so well, because autonomous cars kinda suck. Here's why: "Our brains have all these capabilities that digital computers and robots don't," Olds says. "Basically our brains can multitask. You can be talking to someone on the phone, eating your lunch and looking something up on Wikipedia, all at the same time and without breaking a sweat. That ability to multiprocess complex data streams is nearly impossible for a computer. Well, it's possible, but only in a very specific, pre-rigged situation."
The bottom line: a computer can't deal with surprises.
In controlled situations, computers win hands down. A computer can land a plane far better than any captain-- in fact, any smooth landing you've had recently has probably been executed by the on-board computer. The problem is exemplified by the recent near-catastrophe when a British Airways 777 crash-landed at Heathrow after ice clogged its engines. 300 feet above the ground, the plane's engines cut out. This was an event that Boeing's engineers had not anticipated and therefore not programmed into the computer's frame of reference. So the computer's response was essentially, "Oh hai! I can has engine restart?" The pilots put the kibosh on that immediately. They understood that 300 feet from the ground, that would have killed everyone. Instead, their best hope was to pancake the plane down by the seat of their pants. And they did. It wasn't pretty, but no one got killed.
"Human brains can react in real time to low-likelihood events and generate a range of possible responses that computers just can't," Olds says. Those pilots saved a lot of lives-- and that's what human brains can do.
So the goal is to keep the reasoned logic of a computer and cherry-pick the things you like about the way the human cortex does business.
Imbuing UAVs with some cognitive abilities would make them more accurate and more useful. Taking the pilots out of the equation would reduce errors, and anecdotal evidence suggests that remote pilots suffer the same levels of post-traumatic stress as pilots in theater. So there's really nothing good about having remote pilots operating these things.
(Readers of Asimov, you may stop reading here and go crawl under a desk. The rest of you, go watch Battlestar Galactica and meditate on Cylon raiders.)
The best (read: non-creepy military autonomous things blowing up defenseless humans) application is for deep space exploration. The lag time for a radio signal between us and Mars is about 10 minutes. 10 minutes is an eternity when it lies between an earth-bound "Hey! There's a steep cliff! Should I keep rolling?" and the subsequent Mars-bound "Nooo! stopstopstopstop!"
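That 10-minute figure is easy to sanity-check: radio waves travel at light speed, and the one-way delay is just distance divided by that speed. A minimal sketch, assuming a representative Earth-Mars distance of 180 million km (the actual range swings from roughly 55 to 400 million km as the planets orbit):

```python
# One-way radio signal delay from Earth to Mars.
# Assumed distance: 180 million km (a mid-range value; the real
# Earth-Mars distance varies from ~55 to ~400 million km).
SPEED_OF_LIGHT_KM_S = 299_792.458  # speed of light in km/s

distance_km = 180e6
delay_minutes = distance_km / SPEED_OF_LIGHT_KM_S / 60

print(f"One-way delay: {delay_minutes:.1f} minutes")
# -> One-way delay: 10.0 minutes
```

At closest approach the lag drops to about three minutes, and near conjunction it stretches past twenty -- but even the best case is far too long for a joystick on Earth to keep a rover out of a crater.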
"We want our rovers to be smart enough to decide it's probably not a good idea to get near that cliff," says Olds.
If you could create a chip that takes advantage of the kind of seat-of-your-pants multitasking humans are (generally) better at than computers, you could have a Mars Rover that does not accidentally commit hara-kiri. You can't really argue with that.
Though I admit I remain confused about one thing: the project's apparent goal to replicate the neural structure of a cat. According to Danger Room, which had the goods on this story about a month before it went public,
"the follow-on phases of the project will create a technology that functions like the brain of a cat, which comprises 10^8 neurons and 10^12 synapses," Dr. Narayan Srinivasa, SyNAPSE Program Manager and Senior Scientist, said. "The human brain has roughly 10^11 neurons and 10^15 synapses."
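Those exponents are easy to gloss over, so here's the scale gap spelled out, using only the neuron and synapse counts quoted above:

```python
# Scale gap between the SyNAPSE cat-brain target and a human brain,
# using the counts quoted by Dr. Srinivasa above.
cat = {"neurons": 10**8, "synapses": 10**12}
human = {"neurons": 10**11, "synapses": 10**15}

for part in cat:
    factor = human[part] // cat[part]
    print(f"Human brain has {factor:,}x more {part} than a cat's")
# -> Human brain has 1,000x more neurons than a cat's
# -> Human brain has 1,000x more synapses than a cat's
```

In other words, even the "follow-on" cat-scale chip would sit a full three orders of magnitude below human scale on both counts.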
Certainly no one is counting on even a cyborg-kitteh to properly operate a HMMWV in Fallujah?
But I digress: the point is that you want a chip with the best of both worlds: a computer's inability to panic, succumb to attention deficit disorder, or fall asleep at the wheel combined with a human being's ability to deal with a completely surprising, out-of-left-field scenario. That, in a nutshell, is SyNAPSE: reverse engineering the good parts of our brains while leaving the rest, well, to us.