Neurotechnology is one of the hottest areas of engineering, and the technological achievements sound miraculous: Paralyzed people have controlled robotic limbs and computer cursors with their brains, while blind people are receiving eye implants that send signals to their brains’ visual centers. Researchers are figuring out how to make better implantable devices and scalp electrodes to record brain signals or to send electricity into the brain to change the way it functions.
While many of these systems are intended to help people with serious disabilities or illnesses, there’s growing interest in using neurotech to augment the abilities of everyday people. Companies like Facebook and Elon Musk’s Neuralink are developing consumer devices that may be used for brain-based communication, while some startups are exploring applications in entertainment. But what are the ethical implications of meddling with our brains? And how far will we take it?
Anders Sandberg is “not technically a philosopher,” he tells IEEE Spectrum, although it is his job to think deeply about technological utopias and dystopias, the future of AI, and the possible consequences of human enhancement via genetic tweaks or implanted devices. In fact, he has a PhD in computational neuroscience. So who better to consult regarding the ethics of neurotech and brain enhancement?
Sandberg works as a senior research fellow at Oxford’s Future of Humanity Institute (which is helmed by Nick Bostrom, a leading AI scholar and author of the book Superintelligence that explores the AI threat). In a wide-ranging phone interview with Spectrum, Sandberg discussed today’s state-of-the-art neurotech, whether it will ever see widespread adoption, and how it could reshape society.
Anders Sandberg on . . .
- Whether people will want elective brain surgery
- Cognitive, emotional, and moral enhancements
- A neurotech love potion
- How moral enhancement could be used in the justice system
- Who would benefit most from neural enhancement?
- Improving on evolution
- Elon Musk's and Facebook's big plans in brain tech
IEEE Spectrum: We currently need to use invasive technology, electrodes that are implanted in the brain tissue, if we want to get really precise signals out of the brain or into the brain. Will the need for brain surgery keep this tech from being widely adopted?
Anders Sandberg: That might hold people back a bit, because there’s a scariness factor. There’s also an important practical factor: If I want to upgrade my cellphone, I go to the store. If I want a neural upgrade, I’d have to go to the hospital and have something removed from my brain. Maybe this will work if surgery becomes very simple and painless, and you can just drop by and get a quick upgrade. Maybe nanomachines will do the surgery, or the technology is so small you can take it as a pill. But that’s not going to happen anytime soon.
The important question about brain-computer interfaces is, “What is the killer app?” Right now it’s replacing biological function that has been lost. That’s a killer app for a very small percentage of the population (such as people dealing with paralysis or blindness). For the larger population, it has to be something we really want to have, and can’t get any other way. If the implant is just supplying information, it has to be a lot better than smartphones or computer screens or even virtual reality. Maybe it would be an implant that controls your weight setpoint, so you can say, “I want to be 20 kilos.” This might actually make you want to put some electrodes in your head.
Spectrum: Some researchers and companies are focused on brain-computer interfaces (BCIs) that “read out” brain signals and use them to control something in the external world, while others are building BCIs that “write in” information into the brain. Which do you think is more likely to take off for enhancement purposes?
Sandberg: Read-out seems to be much easier to achieve than write-in. And in some applications you’re well off with just one or the other: For paralysis, read-out is really useful, and an implant for artificial vision would just need write-in. But most of the really important applications will be about enhancing communication, either between people or between people and machines, and communication is usually two ways. If we had perfect read-out but bad write-in, we would be somewhat stuck.
Spectrum: Should we view brain enhancements achieved through hardware and software as fundamentally different from enhancements achieved via drugs? Are the technological enhancements more alarming, or are they just new?
Sandberg: I think it’s mostly that they’re new. We tend to think new technology is scary and problematic, whereas old technology we take for granted—and “old” means it arrived before you were a teenager. But there’s no philosophical reason to treat neurotechnology as fundamentally different from anything else. Putting an electrode in the brain doesn’t change the brain’s mode of operation. If you take a piano lesson or take a drug or use a brain implant—none of these give you the instant ability to play piano, but they might all make it easier to learn.
Cognitive, emotional, and moral enhancements
Spectrum: You’ve studied brain enhancements that cause changes to people’s cognitive, emotional, and moral systems. Which of these do you think is most likely to become real? Do any give you qualms?
Sandberg: There was an interesting study that asked students about various mental traits and whether they’d be willing to use an enhancement technology to improve them. The students were very willing to use an enhancement to improve cognitive traits like attention, alertness, and rote memory. But they were loath to enhance other traits like empathy and kindness. Only 9 percent said they were willing to have their kindness enhanced.
The authors had a theory to explain their results. They also asked how central these traits are to the person’s sense of self, their sense of who they are. With traits like memory and language ability, the students said they’re part of me, but rather remote from my sense of self. But emotions, those are close to my heart. If this holds true—and I think this is a great study, it should be replicated—it tells us something very cool about how we think about ourselves. So I think cognitive enhancement will be seen as pretty acceptable. And it’s no secret that in academia there are a number of students interested in cognitive enhancement.
Spectrum: What kind of society would that bring about?
Sandberg: It’s interesting to ask which kinds of enhancements would be good for the world. I can get numbers for how society would benefit if people were a bit smarter. But it’s really hard to find numbers for what would happen if people were happier, or if they were more able to trust other people.
You can look at the effect of lead in drinking water, which does impact intelligence and cause worse school performance. We can imagine a brain implant that acts like an anti-lead, and say that an IQ point might be worth about 1 percent of GDP. Other researchers are trying to look at IQ and lifespan and life outcomes. There’s a correlation between being smart and doing better in school and getting better jobs. It’s not always the case, and not every smart person is a happy person. But that’s what we see overall. And people with lower intelligence are much more likely to be victims of a crime.
And people with high intelligence cooperate better. So overall, a society where everybody is a bit smarter would likely be a much better place. And even people who aren’t enhanced would be better off, because they would be surrounded by people who are good at cooperating and being nice. So maybe everyone has a rational reason for not wanting to be enhanced themselves, but wanting everyone else to be enhanced.
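Sandberg’s “an IQ point might be worth about 1 percent of GDP” figure lends itself to a quick back-of-envelope calculation. The sketch below is purely illustrative: the baseline GDP, the 3-point gain, and the `gdp_gain` helper are hypothetical, not numbers from the interview.

```python
def gdp_gain(baseline_gdp: float, iq_point_gain: float,
             value_per_iq_point: float = 0.01) -> float:
    """Estimate extra annual GDP from a population-wide IQ increase,
    assuming each average IQ point is worth `value_per_iq_point`
    (here 1 percent) of GDP, per Sandberg's rough figure."""
    return baseline_gdp * value_per_iq_point * iq_point_gain

# Hypothetical economy of $1 trillion and a 3-point average IQ gain:
print(gdp_gain(1e12, 3))  # $30 billion per year
```

On these assumed numbers, even a modest population-wide gain would dwarf the cost of, say, removing lead from drinking water—which is the shape of the argument Sandberg is making.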
A neurotech love potion
Spectrum: In a recent study using prairie voles, rodents that form lifelong monogamous bonds, researchers stimulated certain brain regions and could cause two voles to form that bond, even though they weren’t allowed to mate. Can you imagine similar emotional meddling in the human brain?
Sandberg: There are several subsystems of love in the brain. The first involves sex, mating with someone. If you enhance that, that’s just nice from a hedonistic and pleasure standpoint. The second subsystem is attachment: falling in love, selecting your partner. The research on prairie voles involved that second subsystem. The stimulation was a bit like a love potion—it could make them fall in love with each other. That is quite intriguing. Imagine drinking a love potion, and the only reason you’re in love with this person is that you drank the love potion. That seems morally problematic—we generally think love should mean something, and that people should be together because they’re compatible.
But I think people would notice if someone tried to discreetly insert electrodes into their brain. This might be a reason we don’t want to make neurotech too microscopic. Then we might drink the wrong drink, and…
Spectrum: Then all the fairy tales come true.
How moral enhancement could be used in the justice system
Spectrum: Would it be feasible to use neurotech for moral enhancements in the context of law enforcement and prisoner rehabilitation?
Sandberg: I have given some thought to enhancement and punishment. Today’s punishment relies on operant conditioning, where you punish the person for doing something wrong. A philosopher would say, that’s not respecting the thinking being, if you just inflict pain when they do something wrong. If you want to rehabilitate someone, it’s more important that they understand why what they did is wrong.
The real reason people become criminals is often that they don’t have opportunities and don’t have the skills they need to succeed in society. They might need to understand that you don’t need to solve problems with violence. So I can imagine using cognitive enhancement in rehabilitation.
But the more interesting case would be to use enhancement to make them understand what they did. Sociopaths might not feel remorse for what they did. Could you make them understand? That would be a pretty tough punishment—if they suddenly understood why it was bad, and had to live with that guilt forever. So there’s a case for not curing the sociopath.
Who would benefit most from neural enhancement?
Spectrum: Do you worry that neurotech brain enhancements will only be available to the wealthy, and will increase the disparities between the haves and have-nots?
Sandberg: I’m not too worried about it. If the enhancement is in the form of a device or pill, those things typically come down in price exponentially. We don’t have to worry so much about them being too expensive for the mass market. It’s more of a concern if there is a lot of service required—if you have to go to a special place and get your brain massaged, or you have to take a few weeks off work for training, the prices for those services won’t come down because they’re based on salaries.
The real question is, how much benefit do you get from being enhanced? You have to consider positional benefits versus absolute benefits. For example, being tall is positionally good for men, tall men tend to get ahead in work and have better life outcomes. But if everyone becomes taller, no one is taller. You only get the benefit if you’re taller than everyone else. Many people who are against enhancement use this argument: Enhancement leads to this crazy race and we’re all worse off.
Spectrum: So even if a cognition-enhancing device became available, you don’t think everyone should get one?
Sandberg: Intellectual enhancement would be good for the lower half of the bell curve, for people who are generally hindered by their lack of intelligence, and who make stupid mistakes that make their lives worse.
People with good life outcomes tend to be smart but not super geniuses. Giving these people more intelligence might allow them to solve problems that less intelligent people can’t solve, but that might not be an advantage unless you care about solving deep problems.
Super geniuses tend to toil away at something very specialized. Everyone benefits from their work, and having more of those people would be a very good thing. Or if we could make them even smarter, they would come up with more interesting solutions to the hard problems facing our society.
Improving on evolution
Spectrum: You’ve written about people’s concerns that human enhancements are “going against nature.” But you think there are ways to identify human enhancements that are feasible and a good idea, right?
Sandberg: When I’m suggesting an enhancement, the skeptical listener says, “If that’s such a good idea, why hasn’t nature already done it?” It’s true that evolution has optimized our species, so it’s like messing with something that a master engineer has built. But nature optimizes for different things than we care about as modern humans. Nature cares about us having a lot of grandchildren, but a good human life might not involve having any kids at all. So we differ in our value functions from evolution.
There are also situations where the trade-off might have changed. Our brains use about 20 percent of the energy from our metabolism, they’re really expensive organs. So having a larger brain would require us to eat more, and birth would be trickier too. So there are reasons why evolution can’t give us bigger brains. But today we have C-sections and we have far too much food anyway. So the equation has changed, and if there are enhancements that would make our brains bigger, that could be better.
One thing to consider: We might not know why something is around, but still suspect it’s dangerous to mess with it. Take sleep: Spending one-third of your life unconscious, at risk of attack from predators, that’s weird, that’s a huge cost. But we find it in all animals, and something that looks like it in insects. If you force animals to stay awake, they’re not healthy and they die after a couple of weeks. So an enhancement that removes sleep might be very risky. We should start with the suspicion that it will be very hard to improve on the existing sleep system.
Spectrum: Could we view brain enhancements as just another human adaptation to environment?
Sandberg: In a way. This environment in some ways is a dream come true for our caveman ancestors. Right now it’s raining outside but I’m watching from this nice dry office, which is brightly lit. But if Nick rushes into my office and says, “Where’s that book chapter?” my blood pressure goes up. The fight-or-flight response is adaptive for a bear attack, but not for coming up with a creative excuse for why this book chapter isn’t ready. Ironically, we have created an environment that we’re not well adapted to.
Over generations, evolution will make humans more and more safe in traffic. After a million years of an unchanged culture, we would be better drivers. But we don’t want to wait a million years, so we make better cars. And my brain didn’t evolve to help me look at symbols on a screen. So I can take a drug or use technology to enhance my focus and attention, which will help me survive in this environment.
Elon Musk's and Facebook's big plans in brain tech
Spectrum: Elon Musk has started a mysterious neurotech company called Neuralink to develop a new type of implant called neural lace. Musk has said that we need to improve the human brain so we can keep up with artificial intelligence, which is improving so rapidly. Do you think brain implants can keep us from being subjugated by superintelligent AI?
Sandberg: I remember when he made one of his early proclamations—there was a bit of face-palming around the office because it sounded so stupid. It’s hard to make the argument that we can enhance ourselves to stay one step ahead of machine intelligence. It can run its programs very fast, with lots of copies working at once. The human biological brain is going to be a bit of a bottleneck. It might still be a great idea to make ourselves smarter and better able to handle things, so we should probably try. But it’s not going to help us keep ahead of the machines.
Another argument is that if we merge with machines, then we’ll be on the winning side. Philosophers have an idea of extended minds, which means we don’t live just in our heads, but also in the phones and calendars that hold part of our cognitive structure. In that sense, I have already merged with the machine. If AI takes over the world and turns it all into paperclips, I would be on the winning side—but it wouldn’t feel like winning.
Spectrum: Facebook has also announced that it’s developing a brain-computer interface, but theirs will be a non-invasive system that reads out “intentional speech” in the brain and types it out as text. Does that technology sound more promising to you?
Sandberg: That depends on the mechanism used to read out from the brain. If you need to sit down with your thinking cap, it might not be too bad. Other mechanisms might make it very easy to send off a message that’s very insulting, although maybe that message came from just one part of my brain, and that wasn’t my intention as a whole person. Also, as soon as that technology exists, you can imagine law enforcement wanting to try it. We’ll see—getting from the lab to a commercial product is a grueling process. First they have to get it to work.
An abridged version of this post appears in the January 2018 print issue as “Should We Upgrade Our Brains?”