How do you get a robot to recognize a surprise?
That's a question artificial intelligence researchers are mulling, especially as A.I. begins to change space research.
A new article in the journal Science Robotics offers an overview of how A.I. has been used to make discoveries on space missions. The article, co-authored by Steve Chien and Kiri Wagstaff of NASA's Jet Propulsion Laboratory in Pasadena, California, suggests that autonomy will be a key technology for the future exploration of our solar system, where robotic spacecraft will often be out of communication with their human controllers.
In a sense, space scientists are doing field research virtually, with the help of robotic spacecraft.
"The goal is for A.I. to be more like a smart assistant collaborating with the scientist and less like programming assembly code," said Chien, a senior research scientist on autonomous space systems. "It allows scientists to focus on the 'thinking' things -- analyzing and interpreting data -- while robotic explorers search out features of interest."
Science is driven by noticing the unexpected, which is easier for a trained human who knows when something is surprising. For robots, this means having a sense of what's "normal" and using machine learning techniques to detect statistical anomalies.
"We don't want to miss something just because we didn't know to look for it," said Wagstaff, a principal data scientist with JPL's machine learning group. "We want the spacecraft to know what we expect to see and recognize when it observes something different."
Spotting unusual features is one use of A.I. But there's an even more complex use that will be essential for studying ocean worlds, like Jupiter's moon Europa.
"If you know a lot in advance, you can build a model of normality -- of what the robot should expect to see," Wagstaff said. "But for new environments, we want to let the spacecraft build a model of normality based on its own observations. That way, it can recognize surprises we haven't anticipated."
Imagine, for example, A.I. spotting plumes erupting from ocean worlds. These eruptions can happen spontaneously and vary greatly in how long they last. A.I. could enable a passing spacecraft to reprioritize its operations and study these phenomena "on the fly," Chien said.
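To make the idea of "building a model of normality" concrete, here is a minimal, purely illustrative Python sketch -- not actual flight software from any JPL mission. The spacecraft keeps running statistics on a measured quantity (here, a single hypothetical brightness value) and flags any observation that falls far outside what it has learned to expect.

```python
import math

class NoveltyDetector:
    """Minimal online model of 'normality': running mean/variance of a scalar
    measurement, flagging far-out observations as surprises.
    Hypothetical sketch -- not actual spacecraft software."""

    def __init__(self, threshold_sigma=4.0, warmup=30):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0              # sum of squared deviations (Welford's method)
        self.threshold = threshold_sigma
        self.warmup = warmup       # observations needed before flagging anything

    def observe(self, x):
        """Update the model with a new measurement; return True if it looks anomalous."""
        is_surprise = False
        if self.n >= self.warmup:
            sigma = math.sqrt(self.m2 / (self.n - 1))
            if sigma > 0 and abs(x - self.mean) / sigma > self.threshold:
                is_surprise = True
        # Fold the observation into the running statistics either way.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return is_surprise
```

Onboard systems work with images and spectra rather than single numbers and use far more robust statistics, but the principle is the same: the model of "normal" is learned from the spacecraft's own data stream, so a sudden plume would register as a statistical surprise worth a closer look.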
JPL has led the development of several key examples of A.I. for space exploration. A program called WATCH enabled NASA's Opportunity rover to image dust devils swirling across the Martian surface. That program later evolved into AEGIS, which helps the Curiosity rover's ChemCam instrument pick new laser targets that meet its science team's parameters without waiting for interaction with scientists on Earth. AEGIS can also fine-tune the pointing of the ChemCam laser.
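AEGIS's actual target-selection algorithms are more sophisticated than can be shown here, but the basic idea of choosing targets that "meet the science team's parameters" can be sketched as filtering and ranking candidate features against criteria uplinked from the ground. Every field name and threshold in the sketch below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Target:
    """A candidate feature detected in a rover image (hypothetical fields)."""
    x: float            # image coordinates
    y: float
    size_cm: float
    brightness: float

def pick_target(candidates, min_size=2.0, max_size=20.0, preferred_brightness=0.6):
    """Choose the candidate that best matches criteria set by the science team.
    Illustrative only -- AEGIS uses far more sophisticated detection and ranking."""
    eligible = [t for t in candidates if min_size <= t.size_cm <= max_size]
    if not eligible:
        return None  # nothing meets the parameters; wait for the next image
    # Rank eligible candidates by how closely they match the preferred brightness.
    return min(eligible, key=lambda t: abs(t.brightness - preferred_brightness))
```

The point is simply that the selection rules come from the science team, while the selection itself happens on board.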
Closer to home, A.I. software called the Autonomous Sciencecraft Experiment studied volcanoes, floods and fires while on board Earth Observing-1, a satellite managed by NASA's Goddard Space Flight Center in Greenbelt, Maryland. EO-1's Hyperion instrument also used A.I. to identify sulfur deposits on the surface of glaciers -- a task that could be important for places like Europa, where sulfur deposits would be of interest as potential biosignatures.
A.I. also allows a spacecraft to prioritize the data it collects, balancing science goals against other needs like limited power and data storage. Autonomous management of systems like these is being prototyped for NASA's Mars 2020 rover (which will also use AEGIS to pick laser targets).
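One way to picture this kind of prioritization is as a budgeted selection problem: each data product gets a science-value score, and the spacecraft keeps the most valuable products that fit within its remaining storage or downlink allocation. The following toy Python sketch makes that assumption explicit; real onboard planners are far more elaborate.

```python
def select_data_products(products, storage_budget_mb):
    """Greedy prioritization of data products under a storage budget.

    `products` is a list of (name, size_mb, science_value) tuples; keep the
    highest value-per-megabyte items that fit. A toy illustration, not the
    planning software used on any actual mission.
    """
    ranked = sorted(products, key=lambda p: p[2] / p[1], reverse=True)
    kept, used = [], 0.0
    for name, size_mb, value in ranked:
        if used + size_mb <= storage_budget_mb:
            kept.append(name)
            used += size_mb
    return kept

# Example: a surprising plume image outranks a routine context image.
observations = [
    ("plume_image_hires", 120.0, 95.0),
    ("routine_context_image", 80.0, 20.0),
    ("spectrometer_scan", 40.0, 60.0),
]
print(select_data_products(observations, storage_budget_mb=200.0))
```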
While autonomy offers exciting new advantages to science teams, both Chien and Wagstaff stressed that A.I. has a long way to go.
"For the foreseeable future, there's a strong role for high-level human direction," Wagstaff said. "But A.I. is an observational tool that allows us to study science that we couldn't get otherwise."