The name of Paul Marasco’s lab says it all: the Laboratory for Bionic Integration. Here, technology and human biology come together. And the lab’s latest project shows the potential of this approach: Marasco’s team used a vibrating gadget and a neat perceptual illusion to give amputees a natural feeling of their plastic-and-metal arms moving through space.
Button batteries are the lifeblood of modern portable electronics, but they can also be death traps for small children. A startup called Landsdowne Labs is developing a new kind of pressure-sensitive coating to make them safer.
The company’s “battery armor” is made of silicone laced through with metal microparticles. In the low-pressure environment of a child’s esophagus, it renders inert the squat cylindrical batteries found inside remote controls, flashlights, musical greeting cards, and toys of all kinds.
But because of a phenomenon known as quantum tunneling, the material, when squeezed between the contact points of a battery compartment, still allows current to flow. “It’s essentially a waterproof insulating coating that, when you apply force to it, converts into a conductor,” explains Jeff Karp, a bioengineer at Brigham and Women’s Hospital in Boston.
Karp, along with MIT’s Bob Langer, developed the technology and cofounded the company, which opened its offices today in Fairfield, CT.
The Landsdowne glaze is thus designed to turn button batteries—which are currently chemical burn hazards waiting to happen—into household items about as dangerous as the coins spilling over from a change jar.
A toddler who gets a penny or quarter trapped in the esophagus can generally wait a few hours for the coin to dislodge itself, pass into the gastrointestinal tract, and out the other end. In some cases a doctor may need to remove the obstruction, but there’s usually no rush to get to an ER, because the coin is not blocking the airway and the child can still breathe.
Not so with button batteries. If one of these battery cells stops going and going—as the old Energizer tagline goes—and gets stuck in the food pipe, it will begin to conduct an electrical current through the surrounding tissue, generating hydroxide ions that raise the pH. Within a few hours, if not removed, it can burn a hole through the esophagus, causing permanent damage and in some cases death.
According to the National Poison Data System, battery ingestion accounts for more than 3,000 hospital emergency department visits annually in the United States. A handful of children die each year from swallowed batteries and dozens more suffer serious injuries—a problem that seems to be on the rise owing to the shift in the industry away from older zinc and alkaline technologies and toward higher-voltage and larger-diameter lithium cells.
Gary Smith directs the Center for Injury Research and Policy at Nationwide Children’s Hospital in Columbus, OH, and has studied the issue of battery-related ER visits, concluding there is a “need for increased prevention efforts.”
Among those efforts: a national Button Battery Task Force; educational campaigns to raise awareness about the risks of battery ingestion; child-resistant packaging and warning labels; and the use of tiny screws to lock battery compartments found in toys for young children.
Collectively, these initiatives are aimed at reducing the incidence of button battery injuries in children. But with more and more portable electronics in our lives, there are more and more small batteries in our homes for children to inadvertently stick in their mouths and swallow.
Public service announcements simply aren’t enough, says Smith. “In the injury prevention field, the most effective solutions are those that design the problem out of existence.”
Landsdowne’s battery coating may do just that. When tested in pigs, standard 1.4-volt hearing aid batteries started to cause tissue peeling and cell death inside the porcine esophagus after just two hours of contact, whereas similar exposure to the coated batteries caused no such injuries.
Bob Altabet, a former head of business management at Duracell who now does market research on the battery industry, thinks the big battery manufacturers will be receptive to the technology. “Some of it obviously depends on the cost versus benefit, but in principle they would all be inclined to do things that promote safety,” he says—adding that “the moment one of them does it, they’ll all do it, because they’ll be worried about the liability consequences.”
Melissa Fensterstock, Landsdowne’s CEO, agrees. “There is pressure and there is interest in implementing a solution that can save lives,” she says. But, she notes, among the company’s investors in its initial funding round of close to $3 million, none were battery makers.
In discussions Fensterstock had with battery companies about adding the coating technology to their products, she was told that “if it’s a penny or less [to manufacture] they’d be willing to entertain it,” she says. That’s well within Fensterstock’s projections—and with billions of button batteries produced globally each year, even penny royalties can add up to huge profits for Landsdowne, she notes.
In addition to the battery coating, the company is also developing another invention from the Karp and Langer labs: a quick-release medical tape that can be removed without damaging delicate skin. The first application will likely be in the NICU, where premature babies often need to have respirators and other medical devices secured to their paper-thin skin. After that could come pain-free adhesive tape for a broader consumer market: “The next logical application for this technology is in the wearable space,” Karp says.
The deep neural networks that power today’s artificial intelligence systems work in mysterious ways.
They’re black boxes: A question goes in (“Is this a photo of a cat?” “What’s the best next move in this game of Go?” “Should this self-driving car accelerate at this yellow light?”), and an answer comes out the other side. We may not know exactly how a black box AI system works, but we know that it does work.
But a new study that mapped a neural network to the components within a simple yeast cell allowed researchers to watch the AI system at work. And it gave them insights into cell biology in the process. The resulting tech could help in the quest for new cancer drugs and personalized treatments.
First, let’s cover the basics of the neural networks used in today’s machine learning systems.
Computer scientists provide the framework for a neural network by setting up layers, each of which contains thousands of “neurons” that perform tiny computational tasks. The trainers feed in a dataset (millions of cat and dog photos, millions of Go moves, millions of driver actions and outcomes), and the system connects the neurons in the layers to make structured sequences of computations. The system runs the data through the neural network, then checks to see how well it performed its task (how accurately it distinguished cats from dogs, etc.). Finally it rearranges the connection patterns between the neurons and runs through the dataset again, checking to see if the new patterns produce a better result. When the neural network is able to perform its task with great accuracy, its trainers consider it a success.
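The loop just described can be sketched in a few lines of NumPy. This is a toy example, not any particular production system: a tiny two-layer network repeatedly runs through a made-up dataset, checks its predictions, and adjusts the connections between its neurons.

```python
import numpy as np

# Toy dataset: 200 points in the unit square, labeled 1 if x + y > 1.
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

# Two layers of "neurons" with randomly initialized connections.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(5000):              # repeated passes over the dataset
    h = np.tanh(X @ W1 + b1)           # hidden layer's computations
    p = sigmoid(h @ W2 + b2).ravel()   # network's predicted probability
    # Check performance, then rearrange the connections (gradient descent).
    g = (p - y)[:, None] / len(X)      # error signal at the output
    gW2 = h.T @ g;  gb2 = g.sum(axis=0)
    gh = (g @ W2.T) * (1 - h**2)       # error propagated back to hidden layer
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# After training, the connection patterns have settled into ones that work.
pred = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel() > 0.5
accuracy = (pred == (y > 0.5)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The "rearranging" step here is gradient descent: each pass nudges every connection weight in the direction that reduces the network's error on the training data.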
Although they’re called neural networks, these systems are only very roughly inspired by human neural systems, explains Trey Ideker, a professor of bioengineering and medicine at UC San Diego.
“Look at AlphaGo [the program that beat the Go grandmaster]. The inner workings of the system are a complete jumble; it looks nothing like the human brain,” Ideker says. “They’ve evolved a completely new thing that just happens to make good predictions.”
Ideker, who led the new research on the AI for cell biology, set out to do something different. He wanted to use a neural network not just to spit out answers, but to show researchers how it reached those conclusions. And by mapping a neural network to the components of a yeast cell, his team could learn about the way life works. “We’re interested in a particular structure that was optimized not by computer scientists, but by evolution,” he tells IEEE Spectrum.
This project was doable because brewer’s yeast, a single-cell organism, has been studied since the 1850s as a basic biological system. “It was convenient because we had a lot of knowledge about cell biology that could be brought to the table,” Ideker says. “We actually know an enormous amount about the structure of a yeast cell.”
So his team mapped the layers of a neural network to the components of a yeast cell, starting with the most microscopic elements (the nucleotides that make up its DNA), moving upward to larger structures such as ribosomes (which take instructions from the DNA and make proteins), and finally to organelles like the mitochondrion and nucleus (which run the cell’s operations). Overall, their neural network, which they call DCell, makes use of 2,526 subsystems from the yeast cell.
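A rough illustration of what such a “visible” hierarchy looks like: each named node computes its state from the nodes below it, so the effect of a DNA-level change can be traced upward through the cell’s components. The subsystem names and the simple averaging rule below are invented for the sketch; DCell’s 2,526 subsystems each have learned, far richer functions.

```python
# Hypothetical hierarchy (names invented for illustration): genes at the
# bottom, organelle-level subsystems above, the whole cell at the top.
hierarchy = {
    "cell":              ["nucleus", "mitochondrion"],
    "nucleus":           ["ribosome_assembly"],
    "mitochondrion":     ["atp_synthesis"],
    "ribosome_assembly": ["gene_A", "gene_B"],
    "atp_synthesis":     ["gene_B", "gene_C"],
}

def activity(node, genes):
    """State of a subsystem: 1.0 = fully functional, 0.0 = dead."""
    if node in genes:                       # leaf: the gene's own state
        return genes[node]
    kids = [activity(c, genes) for c in hierarchy[node]]
    return sum(kids) / len(kids)            # stand-in for a learned function

healthy = {"gene_A": 1.0, "gene_B": 1.0, "gene_C": 1.0}
mutant  = {"gene_A": 1.0, "gene_B": 0.0, "gene_C": 1.0}  # knock out gene_B

print(activity("cell", healthy))  # -> 1.0
print(activity("cell", mutant))   # -> 0.5
```

Because every intermediate node corresponds to a real, named cell component, a researcher can inspect exactly which subsystems were degraded by the mutation, rather than staring into a black box.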
DCell allows researchers to change a cell’s DNA (its genetic code) and see how those changes ripple upward to alter its biological processes and, ultimately, cell growth and reproduction. Its training data set consisted of several million examples of genetic mutations in real yeast cells, paired with information about the results of those mutations.
The researchers found that DCell could use its simulated yeast to accurately predict cell growth. And since it’s a “visible” neural network, the researchers could see the cellular mechanisms that were altered when they messed around with the DNA.
This transparency means that DCell could potentially be used for in silico studies of cells, obviating the need for expensive and time-consuming lab experiments. If the researchers can figure out how to model not just a simple yeast cell but also complex human cells, the effects could be dramatic. “If you could construct a whole working model of a human cell and run simulations on it,” says Ideker, “that would utterly revolutionize precision medicine and drug development.”
Cancer is the most obvious disease to study, because each cancer patient’s tumor cells contain a unique mix of mutations. “You could boot up the model with the patient’s genome and mutations, and it would tell you how quickly those cells will grow, and how aggressive that cancer is,” Ideker says.
What’s more, pharma companies searching for new cancer drugs use cell growth as the metric of success or failure. They look at a multitude of molecules that turn different genes on or off, asking for each: Does this potential drug cause the tumor cell to stop multiplying? With billions of dollars going to R&D for cancer drugs, an in silico shortcut has clear appeal.
Upgrading from yeast to human cells won’t be an easy task. Researchers need to gather enough information about human patients to form a training data set for a neural network—they’ll need millions of records that include both patients’ genetic profiles and their health outcomes. But that data will accumulate fairly quickly, Ideker predicts. “There’s a ton of attention going into sequencing patient genomes,” he says.
The trickier part is gathering the knowledge of how a human cancer cell works, so the neural network can be mapped to its component parts. Ideker is part of a consortium called the Cancer Cell Map Initiative that aims to help with this challenge. Cataloging a cancer cell’s biological processes is tough because the mutations don’t only switch cellular functions on and off, they can also dial them up or down, and can act in concert in complicated ways.
Still, Ideker is hopeful that he can employ a machine learning technique called transfer learning to get from a neural network that models yeast cells to one that models human cells. “Once you’ve built a system that recognizes cats, you don’t need to retrain the whole neural network to recognize squirrels,” he says.
For years, scores of engineers have been trying to develop a more unobtrusive, convenient device for blood pressure monitoring. Now, researchers at Michigan State University and the University of Maryland appear to have succeeded.
In a paper published today in Science Translational Medicine, the researchers described a prototype blood pressure sensor that can be incorporated into a smartphone, and requires only the press of a fingertip.
The convenient device could encourage people to check their blood pressure more often, allowing them to catch hypertension—persistently high blood pressure—sooner, says Ramakrishna Mukkamala, a biomedical engineer at Michigan State, in East Lansing, who led the study.
As the cost of DNA sequencing continues to drop, academics and biotech companies have been waiting for more individuals to sequence and share their full genomes. But so far, that isn’t happening.
Personal genomics companies, such as 23andMe and Ancestry, perform consumer genotyping, a relatively inexpensive process that identifies single DNA letters at regular intervals across the genome. While such genotyping has become popular, academics, medical researchers, and pharmaceutical companies want something different. They seek whole genome sequences—every single one of the roughly 6.4 billion letters in the human genome—to do research, develop drugs, and more. But they’re not getting them: Consumers have been loath to pay upwards of US $1,000 for full genome sequencing and even more wary of sharing that detailed, private data.
Nebula Genomics, a new startup co-founded by Harvard biologist and sequencing pioneer George Church, says it can solve those problems using blockchain, the decentralized technology that enables cryptocurrencies like bitcoin. In a 28-page white paper published quietly in February, the company’s founders describe their aims: To use blockchain to reduce the costs of personal genome sequencing, cut out the middlemen to make it easy for individuals to share full genome sequences with companies and academics, and to allay privacy concerns.
In the image above, there's a picture of a cat on the left. On the right, can you tell whether it's a picture of the same cat, or a picture of a similar looking dog? The difference between the two pictures is that the one on the right has been tweaked a bit by an algorithm to make it difficult for a type of computer model called a convolutional neural network (CNN) to tell what it really is. In this case, the CNN thinks it's looking at a dog rather than a cat, but what's remarkable is that most people think the same thing.
This is an example of what's called an adversarial image: an image specifically designed to fool neural networks into making an incorrect determination about what they're looking at. Researchers at Google Brain decided to try to figure out whether the same techniques that fool artificial neural networks can also fool the biological neural networks inside of our heads, by developing adversarial images capable of making both computers and humans think that they're looking at something they aren't.
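One standard recipe for building adversarial images is the fast gradient sign method (FGSM): nudge every pixel a tiny amount in the direction that most hurts the classifier. The sketch below shows the idea with a made-up linear "cat vs. dog" model standing in for a real CNN; it is an illustration of the general technique, not the Google Brain team's method.

```python
import numpy as np

# Made-up classifier: a score above 0 means "cat", below 0 means "dog".
rng = np.random.default_rng(1)
w = rng.normal(0, 1, 100)          # 100 "pixel" weights (hypothetical model)
x = w / np.linalg.norm(w) * 0.5    # an image the model confidently calls "cat"

def label(img):
    return "cat" if img @ w > 0 else "dog"

# FGSM step: for a linear model the gradient of the score with respect to
# the input is just w, so we shift each pixel by a small eps against it.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(label(x), "->", label(x_adv))  # -> cat -> dog
```

Even though each pixel moves only a little, the tiny changes all push the score the same way, which is why they add up to a flipped decision.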
It’s still relatively rare for artificial intelligence to deliver a crushing victory over human physicians in a head-to-head test of medical expertise. But a deep neural network approach managed to beat 42 dermatology experts in diagnosing a common nail fungus that affects about 35 million Americans each year.
Hand sanitizer just isn’t cutting it this winter. Much of the US remains in the throes of its worst flu season this decade, according to federal officials. One out of every 13 doctor visits during the second week of February was for fever, cough and other flu-like symptoms, matching the peak levels during the 2009 swine flu pandemic, the US Centers for Disease Control and Prevention (CDC) reported this month.
We wondered if there was any new technology out there that might help. It turns out some engineers are on it, with new software and sanitizing gadgets. In the hope that it might inspire further ingenuity or provide a resource for consumers, here’s our short list of the latest trends in flu fighting tech.
DNA data storage just got bigger and better. Scientists have reported the first random-access storage system from which they can recover individual data files, error free, from over 200 megabytes of digital information encoded into DNA.
Random access is key for a practical DNA-based memory, but until now, researchers have been able to achieve it with only up to 0.15 megabytes of data.
Since submitting their research, published in Nature Biotechnology, the team from Microsoft Research and the University of Washington has already improved on what they reported. Their storage system now offers random access across 400 megabytes of data encoded in DNA with no bit errors, says Microsoft Research’s Karin Strauss, who led the new work with Luis Ceze from the University of Washington.
Microsoft and other tech companies are seriously considering the possibility of archiving data in DNA. Current data storage technologies are not keeping up with the breakneck pace at which we generate digital content, Strauss says. Synthetic DNA is an attractive storage medium because it can, in theory, store 10 million times as much data as magnetic tape in the same volume, and it survives for thousands of years. Technology Review reports that Microsoft Research aims to have an operational DNA-based storage system working inside a data center toward the end of this decade.
DNA data storage involves translating the binary 0s and 1s of digital data into sequences of the four bases A, C, G, and T that make up DNA. The encoded sequences are synthesized and stored in vials. A DNA sequencing machine then decodes the data by recovering the sequences from DNA molecules. But it has been hard to access specific data files. Most research efforts until now have sequenced and decoded the entire bulk of the information stored in a vial. “It is not economical to sequence all the data you have stored every time you want to read a portion of it,” Strauss says.
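The translation step can be sketched with the textbook two-bits-per-base mapping, where each pair of binary digits becomes one of the four bases. Real systems like the one described here layer error-correcting codes on top and avoid sequences that are hard to synthesize (such as long runs of one base), so this is the bare idea only.

```python
# Textbook mapping: two bits per DNA base (real systems add error correction).
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
FROM_BASE = {v: k for k, v in TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn binary data into a DNA sequence, 4 bases per byte."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(seq: str) -> bytes:
    """Recover the original bytes from a DNA sequence (the sequencing step)."""
    bits = "".join(FROM_BASE[base] for base in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand)                 # -> CGGACGGC
print(decode(strand))         # -> b'hi'
```

At two bits per base, a 200-megabyte archive becomes roughly 800 million bases, which is why the data is split across many short synthesized strands rather than one long molecule.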
To make a random-access system, Strauss, Ceze, and their colleagues devised clever coding algorithms and turned to the polymerase chain reaction, a well-known lab technique for making thousands of copies of selected DNA strands, a process called amplification.
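Conceptually, random access works by tagging every strand belonging to a file with a short primer sequence unique to that file; amplifying the vial with one file’s primers copies only that file’s strands, so only they need to be sequenced. The sketch below is a stand-in for the chemistry (the primer tags and payloads are invented), showing just the selection logic.

```python
# Toy model of a vial: (primer_tag, payload) pairs; tags are made up.
vial = [
    ("TACGGTAC", "file1-chunk-a"),
    ("TACGGTAC", "file1-chunk-b"),
    ("GGATCCAA", "file2-chunk-a"),
]

def pcr_select(pool, primer):
    """Stand-in for PCR amplification: keep only strands tagged with primer."""
    return [payload for tag, payload in pool if tag == primer]

print(pcr_select(vial, "TACGGTAC"))  # -> ['file1-chunk-a', 'file1-chunk-b']
```

In the real system the selected strands are then sequenced and decoded, so the cost of a read scales with the size of the requested file rather than with everything stored in the vial.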
True to form, artificial intelligence continues to equal and even surpass doctors in the prediction and diagnosis of condition after condition. Most of this work, however, has occurred in carefully controlled laboratory experiments, with clean databases and images acquired and reviewed by experts.
Now, companies are making a concerted push to bring AI into real healthcare settings, where things are messier and far less controlled.