Nanoclast

The memtransistor symbol is overlaid on an artistic rendering of a hypothetical circuit layout in the shape of a brain.

'Memtransistor' Forms Foundational Circuit Element for Neuromorphic Computing

Computers that operate more like the human brain than conventional computers do—a field sometimes referred to as neuromorphic computing—have promised a new era of powerful computing.

While this all seems promising, one of the big shortcomings of neuromorphic computing has been that it doesn’t mimic the brain in a very important way. In the brain, every neuron connects to roughly a thousand synapses—the junctions across which electrical signals pass between neurons. This poses a problem because a transistor has only a single input and a single output, hardly an accommodating architecture for that kind of fan-out.

Now researchers at Northwestern University, led by Mark Hersam, have developed a new device that combines memristors—two-terminal non-volatile memory devices based on resistance switching—with transistors to create what Hersam and his colleagues have dubbed a “memtransistor” that performs both memory storage and information processing.

Illustration of waves propagating away from a point-like source. Left: Regular wave propagation. Right: Wave propagation on a hyperbolic metasurface.

Mesmerizing Metasurface Manipulates Light

Researchers from CIC nanoGUNE in Spain in collaboration with the Donostia International Physics Center (DIPC) along with Kansas State University have developed a real-world version of what theorists had dubbed a “hyperbolic metasurface.”

In these strange materials, light propagates with wavefronts completely reshaped compared with those on typical surfaces—even other metasurfaces. The researchers believe that this new metasurface will enable greater control over light, so that it can be used to create ever-smaller devices.

Metasurfaces enable the shortening of wavelengths of light below the diffraction limit, making it possible to design chip-sized devices that can manipulate light for information processing as well as shrink the sizes of devices based on traditional optics.

For the first time, a magnetic porphyrin molecule has been directly connected to an electronic circuit.

Graphene Nanoribbons Reach Out to the Molecular World

A collaboration among Spanish research institutes—led by the nanoGUNE Cooperative Research Center (CIC)—has made a significant breakthrough in so-called molecular electronics by devising a way to connect magnetic porphyrin molecules to graphene nanoribbons. These connections may be another example of how graphene could enable the potential of molecular electronics.

Porphyrin is a hemoglobin-like molecule that helps make photosynthesis possible in plants and transports oxygen in our blood. But recently, researchers have been experimenting with so-called magnetic porphyrins and discovered that they can form the basis of spintronic devices.

Spintronics involves manipulating the spin of electrons, and in this way differs from conventional electronics, which manipulates their movement. It is this spin that is responsible for magnetism: When a majority of the electrons in a material have their spins pointing in the same direction, the material is magnetized. If you can flip all the spins up or down and read that direction back, you have the foundation of the “0” and “1” of digital logic.

Spintronic devices based on the porphyrin molecule exploit the magnetic atom—typically iron, which has spin-polarized states—at the center of each molecule. There are a number of ways to exploit the spin of these magnetic atoms to polarize the transported current. If magnetic molecules with a larger spin are used—so-called single-molecule magnets—a “1” or “0” state could be stabilized by a magnetic field and read out by currents.

The Spanish researchers have taken a unique approach to setting this up. They’ve created direct connections to the molecules with atomically precise graphene wires, which covalently bond to specific sites of the molecules.

“This allows the injection of electronic currents into the molecule,” says Nacho Pascual, Ikerbasque Professor and leader of the Nanoimaging Group at nanoGUNE. “We further show that even after the connection, the molecule maintains its magnetic property.”

Pascual adds that the Spanish collaborators have demonstrated that small variations in the way the graphene nanoribbons are attached to a molecule can alter its magnetic properties. Further, a molecule’s spin can be manipulated via the injected currents.

“We tested the magnetization by performing tunneling spectroscopy,” says Pascual. “We saw that the iron ion maintained its spin and its preferred direction after connection to the graphene nanoribbons, but in a few cases where the bonding was different it completely vanished. So the way of contacting is crucial.”

Pascual sees this work as bringing spintronics into molecular electronics, and has unofficially dubbed it “Molecular Spintronics.” 

In future research, Pascual and his colleagues aim to use single-molecule magnets and to improve the magnetic function in transport experiments by injecting current through the ribbons. “This will be closer to the real usage of these devices,” he added.

Microscopy image of a nanostructured photodetector, which looks like the letter C with a dark black center.

Silicon Nanostructures Bend Light to Make Faster Photodiodes

There are some things silicon doesn’t do well. It neither absorbs nor emits light efficiently. So silicon photonics systems, which connect racks of servers in data centers via high-data-rate optical fiber links, depend on photodiode receivers made of germanium or other materials to turn optical signals into electronic ones. Saif Islam, professor of electrical and computer engineering at the University of California, Davis, and colleagues have come up with a way for silicon photodiodes to do the job, potentially driving down the cost of optical computer-to-computer communications.

Diagram showing annihilation or creation of a single magnetic skyrmion

Swirly Skyrmions Could Be the Future of Data Storage

About five years ago, researchers at the University of Hamburg demonstrated that tiny, swirling magnetic spin patterns on thin films—known as skyrmions—could be used to store and erase data on magnetic media.

At that time, these spinning magnetic swirls—proposed more than 50 years ago by British physicist Tony Skyrme, from whom the name derives—had suddenly become a potentially game-changing magnetic data storage system. And what a change it represented: skyrmions are 10 times smaller than the magnetic regions used on traditional hard drives.

Now a team of researchers from the CNRS/Thales joint lab in France, with European funding under the MAGicSky program, has taken a critical step toward the commercial realization of this technology by electrically detecting a single small skyrmion at room temperature for the first time.

“We believe this is an important advance because it demonstrates one of the unavoidable functions for any type of future concept of devices: electrical detection,” said Vincent Cros, a researcher at CNRS and co-author of the paper published in Nature Nanotechnology.

While the electrical signal of skyrmion lattices or ensembles of skyrmions has been measured before, mostly at low temperatures, this is the first time an electrical signal has been measured for a single skyrmion, and at room temperature.

As one might imagine, the electrical signals associated with these 100-nanometer skyrmions remain relatively small. So small, according to Cros, that the researchers had to be sure the measured electrical signal was actually associated with the presence of a skyrmion. “That is exactly what we demonstrate here by a concomitant electrical measurement and magnetic imaging on the very same devices,” said Cros.

While magnetic imaging was necessary in this research to ensure that they were measuring a skyrmion, in future memory devices the only practical reading procedure will be electrical measurement, not imaging the skyrmion’s magnetic configuration.

Nonetheless, the ability to make electrical measurements of the sample while imaging it magnetically at the same time using magnetic force microscopy is extremely significant, according to Cros, and had never been done before.

As for the actual device, this goes back to work Cros and his colleagues did in 2013, which suggested that the best memory device for exploiting skyrmions would be what’s known as “racetrack memory.”

Nearly a decade ago, Stuart Parkin and his colleagues at IBM Almaden Research demonstrated a three-bit version of so-called “racetrack memory,” which is a solid-state non-volatile memory that promises much higher storage density than conventional solid-state memory devices.

When Parkin first envisioned racetrack memory, it was based on magnetic features known as domain walls, which essentially separate a material into regions of different magnetization. Electric currents could push those domain walls around the track, and a sensor could detect the changes, yielding the “0” and “1” of digital memory.

What Cros and his colleagues suggested five years ago was that skyrmions could replace the domain walls: they could move along the track, and their presence or absence could be detected electrically, making for a digital memory device.

For a basic skyrmion racetrack memory, the researchers designed electrical contacts at both ends of the track. To detect the skyrmion’s electrical signal, they also added lateral contacts. The signal is detected simply by measuring the associated voltage with a commercial voltmeter.

It all sounds pretty straightforward; however, controlling the position and density of the skyrmions remained a challenge. The main obstacle was creating the skyrmions in a material that, prior to their formation, had been in a uniformly magnetized state. The traditional method for producing skyrmions relies on a magnetic field.

“In the present work, we have employed a new approach in which we inject short current pulses into the materials, which allows us to create isolated skyrmions located in a strip (or track) designed by electron-beam lithography,” explained Cros.

The result is that Cros and his colleagues can now adjust the total number of nucleated skyrmions by tuning different parameters, such as the current pulse width or the intensity of the external magnetic field.

While all of this will certainly go down as a significant step toward using skyrmions in memory devices, Cros concedes that commercialization is still a ways down the road.

“We are not yet at the stage where skyrmion devices can be used and implemented as a real new electronic device,” said Cros. “The standard and reasonable time scale between fundamental discoveries and consumer electronics is often between 10 and 15 years.”

To realize this 15-year timeline, Cros believes more effort is needed to further decrease the skyrmion size (targeting diameters below 10 nanometers), increase the skyrmion speed, better understand and control the interaction of skyrmions with material grains (typically of similar size), and increase the electrical signal.

Aledia's gallium nitride nanowire LEDs on 200-mm silicon wafers.

Cash Comes in for Nanowire Display Startups

While much of the near-term innovation in future TVs will come from the processing horsepower behind the screen, farther out you can expect a big change in the pixels themselves: micro-LEDs. These displays’ pixels would be made of miniaturized gallium nitride LEDs, which are so efficient that the displays would consume half or even one-third of the energy used by OLED or LCD displays while being considerably brighter than both.

Samsung, seemingly at great cost, assembled a huge microLED display for CES that it called “The Wall,” but the technology is likely to make its mark much sooner in small displays for augmented reality and smartwatches. Apple, for example, acquired micro-LED display startup LuxVue in 2014, by which point the startup reportedly had raised $43 million. MicroLED displays still haven’t appeared in the Apple Watch, though.

In the past four months, venture capital groups have poured cash into two microLED startups whose particular take on the technology could speed its adoption. Both rely on growing nanometer-wide wires that each comprise an LED. In August, Glo—founded by Lund University nanowire expert Lars Samuelson and based in Sweden and Silicon Valley—got SEK 241 million (US $15 million), with Google leading the investment. And in January, Aledia, a spinoff of CEA in Grenoble, France, took in €30 million (US $37 million), adding Intel Capital to its investors.

“This is going to be a generational shift in technology,” says Aledia’s CEO, Giorgio Anania. The main advantage of using gallium nitride LEDs as pixels is efficiency. Today’s technologies, LCDs and OLED displays, are only around 5 to 7 percent efficient. But the efficiency of gallium nitride LEDs for lighting is closer to 70 percent. Efficiency degrades as you make the LEDs tinier and tinier, Anania points out, but even a 15-percent-efficient display “would be a revolution.”
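As a rough sanity check on those figures—the arithmetic here is ours, not the article's, using the 5-to-7-percent and 15-percent numbers quoted above—a 15-percent-efficient display needs well under half the power of today's panels for the same light output:

```python
# Electrical power needed to produce a fixed amount of light, given a
# display's wall-plug efficiency. Efficiency figures are from the article;
# the 1-watt light output is an arbitrary reference point.

def panel_power(light_watts, efficiency):
    """Electrical power (W) required to emit `light_watts` of light."""
    return light_watts / efficiency

light_out = 1.0                                # watts of light, reference
oled_power = panel_power(light_out, 0.06)      # ~6%-efficient OLED/LCD
gan_power = panel_power(light_out, 0.15)       # 15%-efficient micro-LED

print(f"OLED/LCD draw:  {oled_power:.1f} W")   # 16.7 W
print(f"micro-LED draw: {gan_power:.1f} W")    # 6.7 W
print(f"savings factor: {oled_power / gan_power:.1f}x")  # 2.5x
```

That factor of 2.5 is consistent with the article's claim that micro-LED displays could consume half to one-third the energy of OLED or LCD panels.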

But gallium nitride is expensive, costing multiples of what silicon does, so you have to limit how much of the material is used. Even for LEDs destined for lighting applications, the gallium nitride is grown as a thin layer atop a wafer of sapphire or, in some cases, silicon. Sapphire is used because its crystal lattice matches that of gallium nitride pretty well, so the gallium nitride grown atop it has few defects. In larger LEDs, defects can sap power; in the tiny ones needed for displays, they can kill the device entirely.

In an effort to make LEDs cheaper by using a more plentiful starting wafer that comes in larger sizes, LED makers have worked hard on ways to grow gallium nitride on silicon. Silicon isn’t a natural fit, so there are bound to be more defects. Part of Aledia’s allure is that those defects don’t matter much to its nanowire LEDs.

The company grows fields of gallium nitride nanowire LEDs on 200-millimeter silicon wafers. Each nanowire has an inner core and an outer shell of gallium nitride sandwiching a series of what are called quantum wells—very thin layers of material that confine charges, enhancing the recombination of electrons and holes to produce light. Doping the structure with specific types and concentrations of atoms makes LEDs that shine red, green, or blue.

In gallium nitride–on-silicon systems, defects can occur because the two materials expand at different rates when heated. This stresses the gallium nitride, creating dislocations in its crystal structure. But nanowires have such a small footprint that the resulting stresses across any one of them are pretty small. Even if a defect does occur, there are potentially hundreds of nanowires in each pixel, so one dud doesn’t make a difference.

Still, duds are a big problem for a display that’s supposed to be made up of thousands of individual LEDs. Such displays would be made by placing each tiny LED onto the screen substrate. If one LED doesn’t work, the whole screen is a waste. Samsung’s Wall demo is a 4K TV, meaning it had, at minimum, a preposterous 8.3 million perfectly operating, perfectly placed LEDs.
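The 8.3 million figure is just the 4K pixel count, and it shows why duds are so punishing: the whole display works only if every element does. A quick illustration—the per-LED yield below is a hypothetical number of ours, not from the article:

```python
# A 4K panel is 3840 x 2160 pixels, the source of the "8.3 million" figure.
pixels = 3840 * 2160
print(pixels)  # 8294400, i.e. ~8.3 million

# Hypothetical illustration: even if each LED works with probability
# 0.999999 (one dud per million), a display that needs every single one
# of ~8.3 million elements to work is almost never flawless.
per_led_yield = 0.999999
display_yield = per_led_yield ** pixels
print(f"{display_yield:.2%}")  # ~0.02% of displays would have zero duds
```

This exponential collapse in yield is exactly the problem that Aledia's hundreds-of-nanowires-per-pixel redundancy is meant to sidestep.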

For large displays, Aledia’s advantage is that there will be no duds. But for small displays, such as smartwatches or the microdisplays that will enable future AR and VR systems—and someday even contact-lens systems—Aledia can take further advantage of its silicon base. It can build the whole display out of a single nanowire-studded silicon chip.

Such monolithic displays can have the silicon portion fully processed into the circuits needed to drive the pixels, and then the gallium nitride LEDs can be grown right on top, says Anania. The company’s first target is monolithic displays for smartwatches and other small form factors. It’s closest to producing such a display using only blue pixels, whose light would then be converted by phosphorescent chemicals to produce green and red.

But “Silicon Valley wanted native RGB,” he says. So the company is working on making pixels containing nanowires that have different chemical doping profiles to produce all three colors. “We’re still working through the tech challenges,” says Anania. “A display is a complicated subsystem.”

Aledia’s might not be the only path to monolithic displays, of course. Calls to Glo were not returned by press time; however, Google’s investment speaks to its progress. And indeed, developers of 2D (non-nanowire) microLED displays—both those that have been swallowed by giants like Apple and Facebook and startups such as Ostendo—are surely in the race as well. Significantly, Plymouth, England–based Plessey Semiconductor recently pledged to be first to market with a microLED display, in the first half of 2018, using its own gallium-nitride-on-silicon technology.

DNA origami nanostructure shapes.

Novel Lithography Technique Combines Speed With Accuracy

What happens when you combine DNA origami techniques with conventional lithography? You get a novel lithography technique dubbed DNA-assisted lithography (DALI) that has the resolution of electron beam lithography with the speed of conventional lithography.

In research described in the journal Science Advances, an international team of scientists from Finland, Denmark, and the United States has combined the programmable, accurate shapes made possible by DNA origami with conventional lithography to fabricate structures with sub-10-nanometer resolution and overall dimensions of tens of nanometers.

The resulting method offers a unique example of combining bottom-up approaches (the self-assembled DNA structures) with top-down techniques (conventional lithography), according to Jussi Toppari, a senior lecturer at the University of Jyväskylä in Finland and co-author of the research. “It will extend the possibilities of both standard lithography as well as DNA origami techniques,” said Toppari.

Illustrations showing the basic operation of NIST’s artificial synapse, which could connect processors and store memories in future neuromorphic computers operating like the human brain.

Superconducting Synapse Could Let Neuromorphic Chips Beat Brain’s Energy Efficiency

Scientists at the U.S. National Institute of Standards and Technology, in Boulder, Colo., have developed a superconducting device that acts like a hyperefficient version of a human synapse.

Neural synapses are the connections between neurons, and changes in the strength of those connections are how neural networks learn. The NIST team has come up with a superconducting synapse made with nanometer-scale magnetic components that is so energy efficient, it appears to beat human synapses by a factor of 100 or more.

“The NIST synapse has lower energy needs than the human synapse, and we don’t know of any other artificial synapse that uses less energy,” NIST physicist Mike Schneider said in a press release.

The heart of this new synapse is a device called a magnetic Josephson junction. An ordinary Josephson junction is basically a “weak link between superconductors,” explains Schneider. Up to a certain amperage, current will flow through such a junction with no voltage needed, tunneling across the weak spot—say, a thin sliver of non-superconducting material. However, if you push more current through until you exceed a “critical current,” the junction produces voltage spikes at an extremely high rate—100 gigahertz or more.
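One way to see where a figure like 100 gigahertz comes from is the standard AC Josephson relation, f = V/Φ₀, where Φ₀ = h/2e is the magnetic flux quantum. This is textbook superconductor physics rather than anything spelled out in the article, and the 0.21-millivolt bias below is an illustrative value of ours:

```python
# AC Josephson effect: a junction biased above its critical current
# sustains a voltage V and oscillates at frequency f = V / Phi_0,
# where Phi_0 = h / (2e) is the magnetic flux quantum.
h = 6.62607015e-34     # Planck constant, J*s (exact, SI definition)
e = 1.602176634e-19    # elementary charge, C (exact, SI definition)
phi_0 = h / (2 * e)    # ~2.068e-15 Wb

def josephson_freq(voltage):
    """Josephson oscillation frequency (Hz) for a given junction voltage (V)."""
    return voltage / phi_0

# A junction sustaining ~0.21 mV oscillates at roughly 100 GHz.
print(f"{josephson_freq(0.21e-3) / 1e9:.0f} GHz")
```

So spike rates of 100 gigahertz or more correspond to junction voltages of only a couple hundred microvolts, which is part of why these devices are so energy efficient.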

Illustration of graphene, with shining light.

Here's How Graphene Makes Photodetectors 100,000 Times More Responsive Than Silicon

Two years ago, we covered research out of the University of Manchester that demonstrated that graphene-based membranes could serve as a filter for cleaning up nuclear waste at nuclear power plants.

While it’s not clear that this particular application of graphene membranes ever made much headway in nuclear waste cleanup, the researchers did discover an interesting phenomenon about these membranes in the ensuing two years: protons can travel through graphene.

Based on that knowledge, Andre Geim’s team at the University of Manchester began to investigate whether light could be used to enhance proton transport through graphene by adding other light-sensitive materials, such as titanium dioxide (TiO2). It turns out that graphene did the job quite effectively on its own.

“We were not expecting that graphene on its own—without the addition of these light-sensitive ingredients—would show any response,” said Marcelo Lozada-Hidalgo of the University of Manchester, co-author of this research and of the work from two years ago. “We were very surprised by our results.”

Test tubes and other objects forming a pattern of 0s and 1s.

Test Tube Hard Drives Compute with Chemicals

A group of scientists and engineers at Brown University is planning to use chemicals in a droplet of fluid to store huge amounts of data and, eventually, get them to do complex calculations instantly. They’ve just received US $4.1 million from the Defense Advanced Research Projects Agency to get started, and plan to borrow robots and automation from the pharmaceutical industry to speed their progress.

“We’re hoping that at the end of this we’ll have a hard drive in a test tube,” says Jacob Rosenstein, assistant professor of electrical engineering, who is co-leading the project with theoretical chemist Brenda Rubenstein.

There’s been a big push recently to store data as molecules of DNA, but the Brown chemical computing project will do things differently, potentially ending up with greater data density and quicker readouts.



IEEE Spectrum’s nanotechnology blog, featuring news and analysis about the development, applications, and future of science and technology at the nanoscale.

Dexter Johnson
Madrid, Spain