I was telling someone how intelligent my dog was. He shrugged dismissively and said, “Dogs are just really good pattern detectors.”
Afterward, I looked at my dog a little differently. “Are you intelligent, or just a pattern detector?” I asked her. She just wagged her tail and said nothing, and I suppose that’s open to interpretation. She swims in a sea of data from vision, sounds, and smells. From this data, she forms a model of the world—a dog’s world, one that is unknowable to us and yet seems to have commonalities with our own. She knows the objects and inhabitants of her world and the patterns of everyday experience, and she is keenly aware of any anomalies. I once heard a speaker on intellectual property say that “your dog knows where your property ends.” I’m not sure that my dog does, but if so, it would be an example of deriving an abstract rule from patterns of behavioral data.
Humans are pretty good at pattern detection too. There is a scene early in the movie A Beautiful Mind in which the mathematician John Nash, played by Russell Crowe, is taken to a room in the Pentagon and shown a wall filled with seemingly random digits. “The computer can’t detect a pattern, but I’m sure it’s code,” says a general. Nash stares for a long time at the digits, and some of them seem to glow brighter than others. He turns to the general. “I need a map,” he says. He has found geographic information in the patterns. Later in the movie, however, Nash starts seeing patterns that are delusion rather than deduction.
Today we’re increasingly using computers as pattern detectors. Back in the 1980s I had a neural-network research department in the organization that I managed.
At that time neural networking was a hot topic, riding quickly up the hype cycle. But my CEO was unimpressed. “It’s the second-best solution to any problem,” he said. It seemed a damning comment—whatever you were trying to do, there would be a dedicated approach that would be better than the generalized solution enabled by the structure of a neural network. But that was then, and now is different.
Since those early days of neural networks, computers have gotten so much more powerful, big-data sets have become ubiquitous, and neural networks have been enhanced with more layers and given a sophisticated mix of art and elegant mathematics for training. Breakthroughs have been made in long-standing problems such as the recognition of handwriting, faces, and speech, while new areas have opened up in the labeling of images and in the navigation and control of autonomous vehicles. Suddenly it seems that neural networks are being used everywhere. Wherever there are patterns and relevant data, deep learning is being applied. Neural networks are no longer the second-best solution to any problem. Often they are the best, and in many instances it is we humans who have taken second place. It is the computer that has the beautiful mind.
It is an exciting time in this evolution, but in one aspect the situation reminds me of looking at my dog. Just as with my dog’s inner world, we don’t always understand what is inside the black box of the deep neural network. What is the network “looking at”? What is it “thinking”? We could ask it to explain its decisions. Are you intelligent or just a pattern detector? But not only doesn’t it talk, it doesn’t even wag its tail.
But this is a fast-moving technology. We may get to that tail-wagging thing soon.
This article appears in the January 2018 print issue as “The Mind of Neural Networks.”