Most of the autonomous vehicles that you’re likely to encounter in the near future are either Level 2 or Level 4. Level 2, which you’ll find in a Tesla on the highway, means that the car drives itself in specific situations but expects you to be paying attention the entire time. Level 4, which you might see in some experimental “fully autonomous” vehicles, means the car can drive itself in specific areas when conditions are good; like a taxi, you sit in the back while it does all the driving, no matter what happens.
There’s a reason that automotive companies have mostly skipped Level 3 autonomy: It puts a human in the loop sometimes, which is way worse than having a human in the loop either all of the time or not at all. To help us help our cars make safe, prompt transitions in and out of intermediate autonomous modes, researchers from Stanford University are experimenting with a robotic steering wheel that can physically transform, giving you a “cute little nudge” to help you pay attention when necessary.
According to the World Health Organization, more than 1.25 million people around the world die from road accidents each year. Consequently, the United Nations has set a target of halving this number by 2020. A new technology being readied for its debut could be a step forward in achieving that ambitious goal: greatly improved automotive video cameras meant to replace mirrors on vehicles.
In its annual R&D Open House on 14 February, Mitsubishi Electric described the development of what it believes is the industry’s highest-performance rendition of mirrorless car technology. According to the company, today’s conventional camera-based systems featuring motion detection technology can detect objects up to about 30 meters away and identify them with a low accuracy of 14 percent. By comparison, Mitsubishi’s new mirrorless technology extends the recognition distance to 100 meters with an 81 percent accuracy.
“Motion detection can’t see objects if they are a long distance away,” says Kazuo Sugimoto, senior manager of Mitsubishi Electric’s Image Analytics and Processing Technology Group at the company’s Information Technology R&D Center in Kamakura, 55 kilometers south of Tokyo. “So we have developed an AI-based object-recognition technology that can instantly detect objects up to about 100 meters away.”
To achieve this, the Mitsubishi system applies two processing stages in sequence. First, a computational visual-cognition model mimics how humans focus on relevant regions, extracting object information from the background even when the objects are distant from the viewer.
The extracted object data is then fed to Mitsubishi’s compact deep-learning AI technology, dubbed Maisart, which has been trained to classify objects into distinct categories: trucks, cars, and other objects such as lane markings. The detected results are then superimposed onto the video that appears on a monitor for the driver.
Currently, this superimposing results in objects being displayed with colored rectangles surrounding them. For instance, a blue rectangle designates an approaching truck, a yellow rectangle an oncoming car. “But this can be done in a number of ways,” says Sugimoto. “We are now testing out various ideas to find the best method for drivers.”
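Read as a pipeline, the stages described above (salient-region extraction, then classification, then a colored overlay) can be sketched in a few lines. Everything here is hypothetical: the function names, the stub box coordinates, and the toy width rule are illustrative stand-ins, since Mitsubishi has not published its algorithms.

```python
# Hypothetical sketch of the two-stage pipeline: (1) a saliency step
# proposes regions, (2) a compact classifier labels them, (3) labels are
# rendered as colored rectangles over the video frame. All names and
# values are illustrative, not Mitsubishi's.

CLASS_COLORS = {"truck": "blue", "car": "yellow", "other": "white"}

def extract_salient_regions(frame):
    """Stand-in for the visual-cognition model: return candidate
    bounding boxes (x, y, w, h) that stand out from the background."""
    # A real system would run a computational saliency model here.
    return [(40, 60, 32, 20), (120, 58, 18, 12)]

def classify_region(frame, box):
    """Stand-in for the compact deep-learning classifier (Maisart)."""
    x, y, w, h = box
    # Toy rule: wider regions are trucks, the rest cars.
    return "truck" if w > 24 else "car"

def annotate(frame):
    """Return (box, label, color) overlay instructions for one frame."""
    overlays = []
    for box in extract_salient_regions(frame):
        label = classify_region(frame, box)
        overlays.append((box, label, CLASS_COLORS[label]))
    return overlays

overlays = annotate(frame=None)
# → [((40, 60, 32, 20), 'truck', 'blue'), ((120, 58, 18, 12), 'car', 'yellow')]
```

In a production system the overlay step would draw directly onto the video buffer; the list-of-instructions form here just makes the data flow between the two stages explicit.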
He emphasizes that the modeling employs relatively simple algorithms so that even when combined with the processing of the compact AI system, detection takes place in real-time. And because drivers get advance warning of approaching vehicles in real time, they can make better decisions on when to change lanes, which should help reduce accidents.
Sugimoto notes that Mitsubishi still has work to do in improving the system so that it works better in bad weather conditions, during night driving, and on winding roads. “We also believe we can increase the recognition accuracy further by interpolating time-series data into the process,” he adds.
The Japanese government is eager to promote Japanese autonomous driving technology and wants to see driverless cars on the roads in time for the Tokyo Olympics in 2020. Consequently, Japan became one of the first countries to make mirrorless cars legal when it updated its laws in July 2016. Europe soon followed its lead. According to Sugimoto, the first commercial mirrorless cars are expected to appear on roads in Japan next year.
By driving smarter, autonomous cars have the potential to move people around and between cities with far greater efficiency. Estimates of their energy dividends, however, have largely ignored autonomous driving’s energy inputs, such as the electricity consumed by brawny on-board computers.
First-of-its-kind modeling published today by University of Michigan and Ford Motor researchers shows that autonomy’s energy price tag is substantial: high enough to turn some autonomous cars into net energy losers.
“We knew there was going to be a tradeoff in terms of the energy and greenhouse gas emissions associated with the equipment and the benefits gained from operational efficiency. I was surprised that it was so significant,” says Greg Keoleian, senior author of the paper, published today in the journal Environmental Science & Technology, and director of the University of Michigan Center for Sustainable Systems.
Keoleian’s team modeled both conventional and battery-electric versions of Ford's Focus sedan carrying sensing and computing packages that enable them to operate without human oversight under select conditions. Three subsystems were studied: small and medium-sized equipment packages akin to those carried by Tesla's Model S and Ford's autonomous vehicle test platform, respectively, and the far larger package on Waymo's Pacifica minivan test bed [photo above].
For the small and medium-sized equipment packages, going autonomous required 2.8 to 4.0 percent more onboard power. This went primarily to running the computers and sensors, and secondarily to hauling the extra 17 to 22 kilograms of mass the equipment added.
However, autonomy’s energy bill ate up only part of the overall energy reduction expected from the autonomous vehicles’ ability to drive smarter, such as platooning through intersections and on highways to cut congestion in cities and aerodynamic drag on the open road. As a result, the modeled Ford sedans still delivered a 6 to 9 percent net energy reduction over their life cycle with autonomy added, and promised a comparable reduction in greenhouse gas emissions.
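The accounting behind that net figure is simple subtraction: the gross saving from smarter driving minus autonomy’s overhead. A toy version, where the 13 percent gross saving is an assumed, illustrative number chosen so that a roughly 4 percent overhead yields a 9 percent net reduction consistent with the ranges reported by the study:

```python
# Toy version of the study's life-cycle accounting. Only the ~4 percent
# overhead and the ~9 percent net saving echo figures quoted in the
# article; the 13 percent gross smart-driving saving is assumed.

def net_energy_change_pct(smart_driving_saving_pct, autonomy_overhead_pct):
    """Net life-cycle energy saving: efficiency gains from smarter
    driving minus the overhead of sensors, computing, and added mass.
    Positive means autonomy saves energy overall."""
    return smart_driving_saving_pct - autonomy_overhead_pct

net = net_energy_change_pct(13.0, 4.0)  # → 9.0 percent net reduction
```

The same subtraction flips sign for the large Waymo-style package, where drag and power draw outweigh the driving gains.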
EV and gas models offered comparable results. Adding equipment was less burdensome for the EVs, which provided extra power for the processors and sensors more efficiently than a gas vehicle. But autonomy delivered a slightly larger net energy reduction in the gas vehicles, whose relatively inefficient drivetrains should benefit more from smart driving.
In contrast, adding the large Waymo equipment package yielded a comparatively dark picture for the modeled EVs and gasoline-fueled sedans. The larger equipment increased net energy consumption of the Ford sedans by 5 percent, thanks mostly to the aerodynamic drag induced by its rooftop sensors.
Keoleian says this modeling result likely overstates real impacts from future autonomous vehicles, which he expects will manage to streamline even substantial sensor arrays. What concerns him more is the likelihood that all of the modeled packages understate the power consumption of future autonomous driving subsystems.
For instance, Keoleian says future autonomous vehicles may employ street maps of far higher resolution than those used today to ensure the safety of pedestrians, cyclists and other drivers. In fact, real-time updating of high-definition maps by autonomous cars is one of the applications pushing the development of next-generation 5G wireless data networks.
Higher-bandwidth data transmission via today’s 4G networks could boost power consumption by onboard computers by one-third or more, according to Keoleian and his coauthors. It is premature, they write in today’s study, to judge the power consumption associated with 5G.
Another concern for Keoleian is the indirect effects of introducing autonomous vehicles. By making driving more convenient, for example, smart cars could encourage longer commutes. "There could be a rebound effect. They could induce travel, adding to congestion and fuel use,” says Keoleian.
Such indirect effects of smart cars could either slash energy consumption from driving by 60 percent, or increase it by 200 percent, according to a 2016 study by the U.S. National Renewable Energy Laboratory. Guiding the technology’s development to avoid an energy demand explosion, says Keoleian, will require a lot more study.
Nissan, like every other car manufacturer that doesn't want to be rendered mostly obsolete within the next few decades, has been gradually developing autonomous technology for its vehicles. They've been going about it very sensibly, introducing discrete modules like highway assist and parking assist, and they've managed to get the parking bit working well enough to take it beyond cars. One such attempt at an even more challenging and important self-parking application: slipper arrangements.
Every January, the California Department of Motor Vehicles (DMV) releases data from companies that operated highly automated vehicles on the state’s public roads the previous year. By law, each company must report how many times a safety driver took control from an autonomous vehicle, either because the system had failed or because the human was worried it had.
Companies get to decide how to record these so-called disengagements. In 2017, for instance, relative newcomer Nvidia logged every single time a human touched the steering wheel of its test vehicle, even at the planned end of a test. Waymo, on the other hand, ran complex computer simulations after each disengagement, and reported to the DMV only those in which it believed the driver was correct to take charge, rather than being overly cautious. GM chose not to report at least one instance where an autonomous car was about to block an intersection.
It seems like we've gotten to the point with self-driving vehicles where it's no longer enough to "just" be developing a car that could at some point be used as an autonomous rideshare vehicle. That space has gotten super crowded over the last few years and although it may not seem like it, the giant piles of money that self-driving vehicle (SDV) startups require are not infinite. So, what we're seeing instead is more specialization into niches where specific sorts of SDVs can fulfill specific business cases.
The most popular (and realistic in the near term) niche is almost certainly delivery, because delivering stuff is what vehicles do when they're not delivering people. This niche is being tackled at all scales, from Tesla's autonomous semi trucks to sidewalk delivery robots from Starship and Piaggio Fast Forward. And now, somewhere in the middle, there's Nuro.
Today, Nuro is announcing not only the fact that it exists, but also that it's got one of those aforementioned giant piles of money ($92 million in Series A funding) along with a fully autonomous self-driving vehicle "designed to transform local commerce" by bringing things you want from local businesses directly to your home.
You’ve probably heard of Mcity, the fake city built by the University of Michigan to test self-driving cars in Ann Arbor. GoMentum Station in the Bay Area has also been in the news, with Apple and Otto looking for a secure location to put highly automated vehicles through their paces.
But there is a facility in rural California where companies have quietly tested autonomous vehicles for decades without anyone noticing. Crows Landing Air Facility is a 1,500-acre former air base near Modesto with two vast concrete runways, surrounded by farmland.
For the past decade, the easiest way to spot a self-driving car was to look for the distinctive spinning bucket mounted to its roof. The classic lidar design pioneered by Velodyne spins 64 lasers through 360 degrees, producing a three-dimensional view of the car’s surroundings from the reflected laser beams.
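Each reflected pulse comes back as a range measurement plus the emitting laser’s known azimuth and elevation, and the sensor converts that spherical reading into a 3D point. A minimal sketch of the conversion, using standard spherical-to-Cartesian math (the function name and axis convention are illustrative, not any vendor’s API):

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (range plus beam angles) into Cartesian
    coordinates in the sensor frame. A 64-laser spinning unit emits
    beams at 64 fixed elevations and sweeps them through 360 degrees
    of azimuth, yielding one such point per reflected pulse."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left
    z = range_m * math.sin(el)                 # up
    return (x, y, z)

# A pulse returning from 10 m straight ahead at horizon level maps to
# roughly (10, 0, 0) in the sensor frame.
point = lidar_return_to_xyz(10.0, 0.0, 0.0)
```

Accumulating these points over one full rotation is what produces the familiar 360-degree point cloud of the car’s surroundings.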
That complicated and bulky set-up has traditionally also been expensive. Velodyne’s US $75,000 lidar famously cost several times the sticker price of the Toyota Priuses that formed the nucleus of Google’s original self-driving car fleet.
Those days are long gone. Velodyne now sells its cheapest 16-laser lidar for $4,000, and a host of startups are nipping at its heels with solid-state lidars that could soon be as cheap as $100 each. Many of the new lidars work in fundamentally different ways from Velodyne’s bucket, a shift that brings new capabilities and new challenges.
That’s how the self-driving car from GM’s Cruise will be fitted when it hits the streets in 2019, the company has just asserted in a statement. Just where the car is supposed to start driving and under what conditions was left unclear, but it’s safe to assume it’ll be somewhere with friendly road-safety regulators. Nevada, for instance.
This would be big news even if we assume that the company’s claims are half hype. In the tech business, where vaporware is a fact of life, you can learn something about the truth just by looking at how far people are willing to stretch it.
No company has yet demonstrated any game-changing technical advance in self-driving power, and the publicly available evidence suggests that things are moving incrementally. Of course, Cruise could be sitting on some big idea, but then why is it that the crossbred car industry—with its musical-chairs hiring practices—hasn’t gotten wind of it?
Phantom Auto engineer Ben Shukman watches the passing cars carefully before pulling out of the MGM Grand parking lot and onto busy Tropicana Avenue in Las Vegas. Shukman skillfully merges into fast-flowing traffic as we chat about how the uncharacteristic rain here this week has led to an uptick in accidents over the last few days.
But Shukman is not sitting next to me in the driver’s seat of Phantom’s Lincoln MKZ, and he hasn’t felt a drop of rain in weeks. Shukman is remotely controlling the car from Mountain View, Calif., more than 500 miles away.
In the first such demonstration on public roads, Phantom Auto hopes to convince skeptical car makers and wary regulators that the best backup for today’s experimental autonomous vehicles (AVs) is nothing more or less than an old-fashioned human driver.