It's been clear since brain-computer interfaces were first developed that customizing these devices would require learning on the part of both the machine and the human. New research in the Proceedings of the National Academy of Sciences gives evidence that humans quickly adapt to BCIs.
A team of neurologists and computer scientists at the University of Washington recruited epilepsy patients awaiting surgery and recorded their brain activity with electrocorticography (electrodes placed on the surface of the brain) before and after they manipulated a simple BCI. The full article is linked alongside the press release.
First of all, here's what they did. They recorded under three conditions: when patients imagined moving their hand, when they actually moved it, and when they moved a computer cursor through a BCI. The activity during the imagined task mapped roughly onto the recordings from the actual movement, but was weaker. When the patients were hooked up to the BCI, the pattern was again similar, but the signal was much stronger than in either of the other recordings.
The press release pitched this as evidence that BCIs are a "workout" for the brain. I don't completely buy this. The brain isn't a muscle, and more activity doesn't necessarily mean it's operating at a higher level. What it does indicate (to me), and what I find far more interesting, is that people can quickly change their brain activity to accommodate BCIs. It also shows how important visual feedback is to people manipulating these devices. Experiments like this seem like a good way to maximize the feedback a user gets and to test different ways of delivering it.
It's also strong evidence that the brain activity produced when we imagine a movement or task can effectively drive BCIs. Every group developing BCIs right now is doing it slightly differently, and so far there is no clear consensus on which brain signals should be used.
This is the first paper I've seen that focused fully on what brain activity looks like when it's manipulating a BCI. The output of the setup was a cursor moving on a screen. The experiment is a good indication that BCIs have become well enough understood that we can use them in experiments as tools to once again study the brain itself.
That being said, there are also some really interesting things to be learned from this article about the brain in general and the difference between imagining and actually making a movement. Here are a couple of points that might get you to read it, and some questions you can respond to if you do.
1. During both tasks, high-frequency signals increase while low-frequency signals decrease. Does this mean that part of attending to a task is muting some of the competing activity?
2. Of these two, it is the decreasing low-frequency signal that maps similarly in both imagery and movement. Does this mean you could further localize an area that controls movement imagery in the high-frequency signals?
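For readers curious how that high-frequency/low-frequency contrast is actually measured, the standard approach is to compare band power in the recorded signal. Here's a minimal sketch using Welch's method on a synthetic trace; the sampling rate, band edges, and the fake signal itself are all illustrative assumptions of mine, not parameters from the paper:

```python
# Hedged illustration: quantifying band power in an ECoG-like signal.
# None of these numbers come from the study; they are stand-ins.
import numpy as np
from scipy.signal import welch

FS = 1000  # assumed sampling rate in Hz

def band_power(trace, fs, low, high):
    """Average power spectral density within [low, high] Hz, via Welch's method."""
    freqs, psd = welch(trace, fs=fs, nperseg=fs)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

# Synthetic two-second trace: a strong low-frequency rhythm plus a
# weaker high-frequency component and a little noise.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / FS)
trace = (2.0 * np.sin(2 * np.pi * 10 * t)      # ~10 Hz low-frequency rhythm
         + 0.3 * np.sin(2 * np.pi * 90 * t)    # ~90 Hz high-frequency component
         + 0.1 * rng.standard_normal(t.size))  # background noise

low_power = band_power(trace, FS, 8, 12)     # assumed low-frequency band
high_power = band_power(trace, FS, 76, 100)  # assumed high-frequency band
```

Comparing how these two numbers shift between the imagined, actual, and BCI conditions is the kind of analysis behind the questions above.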