
This year we took small, important steps toward the Singularity
From Engadget - December 19, 2017

In the last year, we have seen Google form the DeepMind Ethics & Society to investigate the implications of its AI in society, and we have witnessed the rise of intelligent sex dolls. We have had to take a deep look at whether the warbots we are developing will actually comply with our commands and whether tomorrow's robo-surgeons will honor the Hippocratic Oath. That's not to say such restrictions can't be hard-coded into an AI operating system, just that additional nuance is needed, especially as 2018 will see AI reach deeper into our everyday lives.

Asimov's famous three laws of robotics are "a wonderful literary vehicle but not a pragmatic way to design robotic systems," said Dr. Ron Arkin, Regents' professor and director of the Mobile Robot Laboratory at the Georgia Institute of Technology. Envisioned in 1942, when the state of robotics was rudimentary at best, the laws are too rigid for use in 2017.

During his work with the Army Research Office, Arkin's team strove to develop an ethical robot architecture -- a software system that guided robots' behavior on the battlefield. "In this case, we looked at how a robotic software system can remain within the prescribed limits extracted from international humanitarian law," Arkin said.

"We do this in very narrow confines," Arkin continued. "We make no claims these kinds of systems are substitutes for human moral reasoning in a broader sense, but rather we can give the same guidelines -- in a different format, obviously -- that you would give for a human warfighter when instructed how to engage with the enemy, to a robotic system."

Specifically, the context of these instructions is dictated by us. "A human being is given the constraints, and restraints, if you will, for the robotic system to adhere to," he said. It's not simply a matter of what to shoot at, Arkin explained, but whether to shoot at all. "There are certain prohibitions that must be satisfied," Arkin said, so that "if it finds itself near cultural property which should not be destroyed, or if that individual or target is near civilian property like a mosque or a school, it should not initiate in those circumstances."

This "boundary morality," as Arkin puts it, likely wo not be enough for robots and drones to replace human warfighters, and certainly not next year. But in certain scenarios, such as clearing buildings or counter-sniper operations, where collateral damage is common, "put a robot in that situation and give it suitable guidance to perhaps do better, ultimately, than a given warfighter could," Arkin concluded.

In these narrowly defined operations, it is possible to have a three-laws-like sense of ethics in an AI operating system. "The constraints are hard-coded," Arkin explained, "just like the Geneva Conventions say what is acceptable and what is not acceptable."
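To make the idea concrete, here is a minimal sketch of what such a hard-coded check might look like in software. Nothing here comes from Arkin's actual architecture; the situation fields, the prohibition list and the engagement_permitted function are all illustrative assumptions.

```python
# Hypothetical sketch only: hard-coded prohibitions are checked before
# any engagement is allowed, in the spirit of Arkin's "boundary morality."
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Situation:
    near_cultural_property: bool
    near_civilian_structure: bool   # e.g. a mosque or a school
    target_identified_as_combatant: bool


# Each prohibition is a predicate; if any is true, engagement is forbidden.
PROHIBITIONS: List[Callable[[Situation], bool]] = [
    lambda s: s.near_cultural_property,
    lambda s: s.near_civilian_structure,
    lambda s: not s.target_identified_as_combatant,
]


def engagement_permitted(situation: Situation) -> bool:
    """Allow an action only when no hard-coded prohibition is violated."""
    return not any(rule(situation) for rule in PROHIBITIONS)


# A target near a school is refused, regardless of mission value.
print(engagement_permitted(Situation(False, True, True)))  # False
```

The point of the sketch is that the rules sit outside whatever decision-making the robot does: they are fixed constraints, not learned preferences.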

Machine-learning techniques may empower future AI systems to play an expanded role on the battlefield, though they are themselves not without risk. "There are some cases of machine-learning which I believe should not be used in the battlefield," Arkin said. "One is the in-the-field target designation where the system figures out who and what it should engage with under different circumstances." This level of independence is not one we are currently ethically or technologically equipped to handle; such decisions should instead be vetted by a human in the loop, "even at the potential expense of the mission. The rules of engagement do not change during the action."

"I believe that if we are going to be foolish enough to continue killing each other in warfare that we must find ways to better protect noncombatants. And I believe that this is one possible way to do that," Arkin concluded.

2017 saw a rise in interactions between robots and humans in the supermarket -- looking at you, Amazon Go -- but in the coming year, care must still be taken to avoid potential conflict. "These robots, as they actuate in the physical space, they will encounter more human bodies," said Manuela Veloso, professor at Carnegie Mellon's School of Computer Science and head of CMU SCS's Machine Learning Department. "It's similar to autonomous cars and how they will interact with people: robots will eventually need to make ethical decisions." We are already seeing robots encroach on production lines and fulfillment centers, and this sense of caution will be especially necessary when it comes to deciding who to run over.

And, unlike military applications, civil society has many more subtle nuances guiding social mores, making machine-learning techniques a more realistic option. "Machine learning has a much higher probability of handling the complexity of the spectrum of things that may be encountered," Veloso said, though "it probably will be a complement of both."
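As a rough illustration of that complement, the sketch below pairs a stand-in for a learned ranking model with a small set of hand-written rules that keep a veto. The function names, the rule about a person in the robot's path and the scoring scheme are assumptions made for the example, not anything Veloso described.

```python
# Hypothetical sketch: a learned model proposes actions, hand-coded
# constraints retain a veto, and a safe default is used if all are refused.
from typing import Dict, List


def learned_ranking(context: Dict) -> List[str]:
    """Stand-in for a trained policy that ranks candidate actions."""
    return sorted(context["candidate_actions"],
                  key=lambda a: context["scores"].get(a, 0.0),
                  reverse=True)


def violates_hard_rule(action: str, context: Dict) -> bool:
    """Hand-written constraint, e.g. never advance toward a detected person."""
    return action == "advance" and context.get("person_in_path", False)


def choose_action(context: Dict) -> str:
    for action in learned_ranking(context):
        if not violates_hard_rule(action, context):
            return action
    return "stop"  # safe default when every ranked action is vetoed


context = {"candidate_actions": ["advance", "reroute", "stop"],
           "scores": {"advance": 0.9, "reroute": 0.6, "stop": 0.1},
           "person_in_path": True}
print(choose_action(context))  # "reroute"
```

The learned component handles the messy variety of everyday situations, while the fixed rules guarantee a floor of acceptable behavior.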

