In case you haven’t noticed, we’re very, very pro-more-robots around here. As journalists, we don’t really think through the consequences of always wanting more robots, which (if left unchecked) can lead to an unfortunate case of having too many robots. This becomes particularly problematic when you have so many robots that they spend all of their time trying not to run into each other, and none of their time doing anything productive.
At Georgia Tech, Li Wang and professors Aaron D. Ames and Magnus Egerstedt have been developing ways to allow arbitrarily large teams of mobile robots to move around each other without colliding, and also without getting in each other’s way. This is very important for people like me, who have 37 Roombas at home, but also for anyone imagining a future where roads are packed with autonomous cars.
The fundamental issue here is robot paranoia. When robots move around, they typically maintain a sensor-based “panic zone” for safety, and if anything enters that space, they panic and stop moving. If you have only two robots moving around, they can keep clear of one another, but as the number of robots increases, the odds that two “panic zones” will intersect also increase, to the point where they overlap and you just end up with a completely paralyzing global robot freakout. Or as the Georgia Tech researchers put it (in a much fancier way), “as the number of robots and the complexity of the task increases, it becomes increasingly difficult to design one single controller that simultaneously achieves multiple objectives, e.g., forming shapes, collision avoidance, and connectivity maintenance.”
Traditionally, robots use several different control systems to complete tasks. The primary controller is focused on getting the robot to do something, like “go over there.” The secondary controller, or safety controller, makes sure that while the primary controller is doing its thing, the robot doesn’t run into stuff. Most of the time, the safety controller is passive, but it can override the primary controller if it thinks there’s danger of a collision. Problems start to happen when the safety controller ends up overriding the primary controller almost all of the time, which means that the robot is so busy “being safe” that it can’t complete its primary goal.
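To make that override pattern concrete, here’s a minimal Python sketch of the naive two-controller setup described above. The function names, the “panic radius,” and the hard freeze-when-anything-gets-close rule are my own illustration of the general idea, not anyone’s actual robot code:

```python
import numpy as np

def primary_controller(pos, goal, gain=1.0):
    """Primary objective: head straight toward the goal ("go over there")."""
    return gain * (goal - pos)

def safety_controller(cmd, pos, neighbors, panic_radius=1.0):
    """Naive safety override: freeze if anything enters the panic zone."""
    for other in neighbors:
        if np.linalg.norm(other - pos) < panic_radius:
            return np.zeros_like(cmd)  # panic: stop moving entirely
    return cmd  # passive: pass the primary command through untouched

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
neighbors = [np.array([0.5, 0.1])]  # another robot uncomfortably close

cmd = safety_controller(primary_controller(pos, goal), pos, neighbors)
# the neighbor is inside the panic zone, so the commanded velocity is zero
```

With enough neighbors, some robot is almost always inside someone’s panic zone, and the whole swarm grinds to a halt, which is exactly the failure mode the Georgia Tech work targets.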
To solve this problem, the Georgia Tech team developed a safety controller for mobile robots that’s designed to be minimally invasive to the primary controller, meaning that “the avoidance behavior only takes place when collisions or losses of connectivity are truly imminent.” (To test their algorithms on an actual robotic swarm, they used the Khepera III, a small mobile robot developed by Swiss company K-Team.) Ideally, this is how all safety controllers would work all of the time, but it’s a tough problem when you also need your controllers to complete “multiple non-negotiable objectives with provable guarantees” while also “provably guarantee[ing] collision avoidance and connectivity.” “Provably” and “guarantee” are great words to have in a controller that you’re using, but I imagine really frustrating ones to include in a controller that you’re designing.
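One common way to build a minimally invasive safety filter is with a barrier-function-style constraint: instead of vetoing the primary command outright, the filter nudges it by the smallest amount needed to stay safe. Here’s a simplified single-obstacle sketch of that idea in Python (my own simplification with made-up parameter names, not the paper’s multi-robot formulation):

```python
import numpy as np

def barrier_filter(u_des, pos, obs, safe_dist=1.0, gamma=1.0):
    """Minimally invasive safety filter (barrier-function style).

    Solves  min ||u - u_des||^2  subject to  a . u >= b,  where the
    constraint keeps the safety margin h(x) = ||pos - obs||^2 - safe_dist^2
    from shrinking faster than rate gamma allows.
    """
    h = np.dot(pos - obs, pos - obs) - safe_dist**2
    a = 2.0 * (pos - obs)          # gradient of h with respect to position
    b = -gamma * h                 # safety requires a . u >= -gamma * h
    slack = a @ u_des - b
    if slack >= 0:
        return u_des               # already safe: leave the primary command alone
    return u_des - slack * a / (a @ a)  # smallest correction onto the constraint

# Heading straight at an obstacle: the filter only trims the unsafe component.
u_safe = barrier_filter(np.array([1.0, 0.0]),   # desired velocity, toward obstacle
                        np.array([0.0, 0.0]),   # robot position
                        np.array([2.0, 0.0]))   # obstacle position
```

Far from the obstacle the constraint is slack and the primary command passes through unchanged; only as a collision becomes imminent does the filter start slowing the component of motion toward the obstacle.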
Frustrating or not, Georgia Tech has made it work, and here’s video proof:
The “one robot that refuses to play along” sounds like code for “unpredictably wayward human” to me.
This technique works on quadrotors, as well:
The researchers suggest that techniques like these are going to become more and more important as we pack more and more autonomous cars onto our roads and more and more delivery drones into the sky. Focusing on safety is certainly important, but if robots aren’t able to reliably complete their objectives while being safe, they won’t be very useful, no matter how many of them we have.