Spy satellites and their commercial cousins orbit Earth like a swarm of space paparazzi, capturing tens of terabytes of images every day. The deluge of satellite imagery leaves U.S. intelligence agencies with the world’s biggest case of FOMO—“fear of missing out”—because human analysts can sift through only so many images to spot a new nuclear enrichment facility or missiles being trucked to different locations. That’s why U.S. intelligence officials have sponsored an artificial-intelligence challenge to automatically identify objects of interest in satellite images.
Since July, competitors have trained machine-learning algorithms on one of the world’s largest publicly available data sets of satellite imagery—containing 1 million labeled objects, such as buildings and facilities. The data is provided by the U.S. Intelligence Advanced Research Projects Activity (IARPA). The 10 finalists will see their AI algorithms scored against a hidden data set of satellite imagery when the challenge closes at the end of December.
The agency’s goal in sponsoring the Functional Map of the World Challenge aligns with statements made by Robert Cardillo, director of the U.S. National Geospatial-Intelligence Agency, who has pushed for AI solutions that can automate 75 percent of the workload currently performed by humans analyzing satellite images.
“It seems to me like these agencies want to generate maps automatically,” says Mark Pritt, a research scientist at Lockheed Martin, “without having a person look at a satellite image and say, ‘Oh, there’s a smokestack there, let me mark it on the map.’” Today’s maps are generated manually.
Pritt and his colleagues at Lockheed make up one of many teams from academia, government labs, and the private sector that are competing for a total of US $100,000 in prize money. They and other contestants are eager to deploy deep-learning algorithms that can recognize specific patterns and identify objects of interest in Earth imagery. Such images are typically gathered through remote-sensing technologies aboard satellites, aircraft, and drones.
Satellite images present a far greater sorting challenge to deep-learning algorithms than do online photos of human faces, landmarks, or objects. Satellite images are shot from many different angles, so objects such as buildings may appear sideways or even upside down. And cloud cover can change how images of the same area appear from one hour to the next.
Satellite images also have much greater variety in resolution. That complicates the matter for deep-learning algorithms, which typically work best with fixed image sizes. Human engineers face trade-offs when deciding whether to resize the entire image and lose some detail in the lower resolution, or crop the image and focus on just one part. Furthermore, many satellites can capture Earth images beyond the visible-light spectrum, in the infrared band or at other wavelengths.
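The resize-versus-crop trade-off can be made concrete with a small sketch. The snippet below is purely illustrative, using NumPy on a simulated single-band tile: the image dimensions, the `net_input` size of 224 pixels, and the naive nearest-neighbor downsampling (standing in for proper interpolation) are all assumptions, not details from the challenge itself.

```python
import numpy as np

def downsample(img, size):
    """Shrink the whole scene to a fixed size by sampling every
    k-th pixel (naive nearest-neighbor); fine detail is lost."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    return img[np.ix_(ys, xs)]

def center_crop(img, size):
    """Keep full resolution, but only a central window of the scene;
    everything outside the window is discarded."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

# A simulated 1024 x 1024 single-band satellite tile.
tile = np.random.rand(1024, 1024)
net_input = 224  # hypothetical fixed input size for the network

resized = downsample(tile, net_input)   # whole scene, coarser detail
cropped = center_crop(tile, net_input)  # fine detail, partial scene
```

Both outputs fit the network's fixed input size, but `resized` trades spatial detail for full coverage while `cropped` does the opposite, which is exactly the choice engineers face when preparing imagery of varying resolution.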
Individuals or teams experienced in working with satellite imagery may have advantages over other deep-learning researchers in the IARPA challenge. But everyone still faces big obstacles in making deep learning work under the imperfect conditions of real-world satellite imagery. And experts agree that deep-learning algorithms are not ready to do the entire job on their own, even if they achieve 80 or 90 percent accuracy. “I think the state of the technology right now enables a combination of man and machine to actually get to the answer,” says Mike Warren, CTO and cofounder of Descartes Labs.
A spin-off of the U.S. Energy Department’s Los Alamos National Laboratory, Descartes Labs already uses deep learning to automatically analyze satellite images for commercial purposes, such as forecasting the U.S. corn and soybean harvests. These applications represent an “80 percent solution for 10 percent of the effort,” Warren says.
Companies have developed many of the most interesting uses for deep learning and satellite imagery, says Grant Scott, a data scientist at the University of Missouri who leads another team participating in the IARPA challenge. By comparison, U.S. intelligence agencies are much quieter about their capabilities and plans. But the IARPA challenge makes it clear that these agencies wish to build better deep-learning tools for satellite-imagery analysis.
“There are currently programs in place within pockets of the U.S. intelligence community, but there is always room for improvement in both speed and approach,” says Hakjae Kim, program manager for IARPA.
Scott and his University of Missouri colleagues have already begun to show the power of combining publicly available commercial satellite imagery and open-source intelligence. In a paper published in October in the Journal of Applied Remote Sensing, they describe how deep-learning algorithms could accurately identify known locations of surface-to-air missile sites in China in an area of nearly 90,000 square kilometers.
Their best algorithm produced results that were verified by humans as 98 percent accurate. The algorithm took just 42 minutes to deliver results that matched the accuracy of human analysts, whereas a traditional visual search by humans required an average of 60 hours.
Such results bode well for the IARPA challenge goal and could help establish deep learning as a necessary tool. Both governments and companies continue to launch swarms of imaging satellites to join the existing constellations peering down at Earth. The U.S. commercial satellite operator DigitalGlobe—which provided the imagery for the IARPA challenge—already captures more than 70 terabytes of raw imagery each day. Sooner, rather than later, human analysts will need all the AI help they can get.
A version of this article appears in the December 2017 print magazine as “Wanted: AI That Can Spy.”