Future robots could deceive not just other robots but human adversaries — thanks to devious behavior common in nature.
Nefarious tricksters, squirrels gather acorns, store them in specific locations, and routinely return to patrol the hidden caches. If a rival squirrel sidles up for a raid on the caches, the nut owner will visit empty acorn sites to deceive the would-be acorn thief.
Could robots be just as clever? Ronald Arkin thinks so.
The Georgia Tech School of Interactive Computing professor took this crafty squirrel strategy and applied it to robots. His detailed research was published in this month’s IEEE Intelligent Systems, funded by the Office of Naval Research.
Arkin’s robot succeeded in luring a “predator” robot to a fake location, delaying the exposure and seizure of the resources it was tasked to protect.
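The squirrel-inspired strategy can be sketched in a few lines. This is a minimal illustration, not Arkin's actual implementation: the function names, the decoy-site list, and the `deception_bias` parameter are all assumptions made up for the example. The idea it captures is simply that a guarding robot patrols its real caches when unobserved, but shifts most of its visits to empty decoy sites once an intruder is detected.

```python
import random

# Hypothetical sketch of a squirrel-style deceptive patrol (illustrative
# names and parameters, not the published system). Under threat, the guard
# biases its patrol toward empty decoy sites to mislead an observer.

def choose_patrol_site(real_caches, decoy_sites, intruder_present,
                       deception_bias=0.8, rng=random):
    """Pick the next site to visit on patrol."""
    if intruder_present and rng.random() < deception_bias:
        return rng.choice(decoy_sites)   # lure the observer to a fake cache
    return rng.choice(real_caches)       # normal maintenance visit

real = ["cache_A", "cache_B"]
decoys = ["decoy_1", "decoy_2", "decoy_3"]

# With an intruder watching, most visits go to decoys,
# delaying discovery of the real caches.
visits = [choose_patrol_site(real, decoys, intruder_present=True)
          for _ in range(1000)]
decoy_share = sum(v.startswith("decoy") for v in visits) / len(visits)
print(f"share of visits to decoys under threat: {decoy_share:.2f}")
```

An observer tracking the robot's movements would conclude the resources sit at the decoy sites, which is exactly the delay the experiment demonstrated.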
“[Imagine] robots guarding ammunition or supplies on the battlefield,” Arkin said. “If an enemy were present, the robot could change its patrolling strategies to deceive humans or another intelligent machine, buying time until reinforcements are able to arrive.”
Arkin and student Justin Davis also created a simulation and demo based on birds that might bluff their way to safety.
If under threat of an attack, Arabian babbler birds will sometimes opt to launch a mob counterattack, banding up with other birds to harass the predator until it finally surrenders and departs.
The team researched whether a robot equipped with the babbler's bluffing behavior would be more likely to survive. Their research showed this deception technique was the best approach — provided there were enough robots to create a group large enough to harass a predator into departing.
When this minimum threshold was met, the gain outweighed the risk of the robot deception being exposed.
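The threshold decision described above can be modeled as a simple expected-value check. All the numbers here — the minimum mob size, the payoffs, and the exposure risk — are illustrative assumptions for the sketch, not values from the study; the sketch only shows the shape of the trade-off: bluff when the group is large enough to be credible and the expected gain exceeds the expected cost of being found out.

```python
# Hedged sketch of the group-bluff decision (illustrative numbers only).
# A threatened robot joins a mobbing bluff only when enough peers are
# available for the harassment to plausibly drive the predator off.

MIN_MOB_SIZE = 4          # assumed minimum group needed to harass effectively

def should_bluff(available_robots, gain_if_predator_leaves=10.0,
                 loss_if_exposed=6.0, exposure_risk=0.3):
    """Bluff only when the mob is credible and expected gain beats expected loss."""
    if available_robots < MIN_MOB_SIZE:
        return False                      # too few robots: bluff not credible
    expected_gain = (1 - exposure_risk) * gain_if_predator_leaves
    expected_loss = exposure_risk * loss_if_exposed
    return expected_gain > expected_loss

print(should_bluff(2))   # lone pair: stays quiet -> False
print(should_bluff(5))   # full mob: worth the bluff -> True
```

Below the threshold the function always declines to bluff, matching the finding that the tactic only pays off once the group is large enough.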
“In military operations, a robot that is threatened might feign the ability to combat adversaries without actually being able to effectively protect itself,” Arkin said. “Being honest about the robot’s abilities risks capture or destruction. Deception, if used at the right time in the right way, could possibly eliminate or minimize the threat.”
Deception in War
Military robots employing deception would hardly be the first subterfuge in warfare. One readily familiar form is concealment: camouflage designed to mislead an adversary by making a soldier or vehicle more difficult to identify.
Equipping robots with the ability to deceive each other — not to mention humans — is rife with ethical issues, however.
“When these research ideas and results leak outside the military domain, significant ethical concerns can arise,” said Arkin, who has long been a proponent of thoughtful analysis on the ethics related to robotics. “We strongly encourage further discussion regarding the pursuit and application of research on deception for robots and intelligent machines.”
In 2010, Georgia Tech researchers made another key breakthrough in robot deception: Also funded by the Office of Naval Research, that project laid the groundwork for robots independently deciding whether to deceive another intelligent machine or human.
Essentially, their earlier work laid the foundation for robots to evade and hide from the enemy, or to create a false trail to prevent their capture by an enemy combatant, whether robot or human.