The cultural anxiety around artificial intelligence has reached a fever pitch. Sure, maybe it was cute when Watson, IBM’s question-answering supercomputer, beat Jeopardy! superstar Ken Jennings back in 2011. But then a Google-owned company called DeepMind programmed an AI—AlphaGo—to learn how to play Go. The ancient strategy game is so demanding of creativity, wit, and intuition that everyone assumed only humans could master it—until AlphaGo went and soundly thumped the reigning world champion. And that made some people wonder: What will super-smart AI, which is getting even smarter all the time, do to us next?

Shafer is one of many robots in the Human-Robot Interaction Laboratory at Tufts, one of the first such labs anywhere. (Photo: Alonso Nichols/Tufts University)

A lot of it doesn’t sound good. AI pioneer Kai-Fu Lee recently told 60 Minutes that robots could take over 40 percent of the world’s jobs within the next fifteen years. Tesla CEO Elon Musk has likened AI to summoning “an immortal dictator from which we would never escape” (even as his company develops smart cars to take over humanity’s driving). And the late Stephen Hawking once told the BBC that he could imagine AI systems one day learning to improve themselves so quickly that humans lose control. “The development of full artificial intelligence,” he concluded, “could spell the end of the human race.”

Matthias Scheutz has heard all of this before. “Yeah,” he said, “I’m not very fond of some of these flashy, fear-mongering articles, because they really don’t help.” As director of the Human-Robot Interaction Laboratory at Tufts—one of the first such labs anywhere—Scheutz believes that the good AI and robots promise for humans is enormous. They can transform health care, transportation, work, and more for the better, as long as we imbue them with a human principle: do no harm.

To Scheutz, the key thing to recognize about AlphaGo isn’t how good it was at playing Go. It’s how the machine couldn’t understand that it broke records and at least a few human egos along the way. “[AlphaGo] doesn’t even know it’s playing Go. The systems just do,” Scheutz said. “Even though AIs can do amazing feats, they do what they’ve been trained to do, and they don’t understand what it is that they’re doing.”

This disregard for humans is taken to an extreme in a famous thought experiment dreamt up by Nick Bostrom, the University of Oxford philosopher. The “paperclip maximizer” scenario starts innocently enough, with a hyperintelligent AI assigned to teach itself to maximize a factory’s production of paperclips. But things turn dark when the AI goes on to pursue its assigned task with such single-minded devotion that its factory robots end up converting humanity, the world, “and then increasingly large chunks of the observable universe into paperclips.” That future machine didn’t mean to eradicate humanity, but that wouldn’t matter much to us.

Scheutz and his group are focused more on problems that could arise in the foreseeable future, when we’ll be living and working with robots that may not be able to recognize the consequences of their actions. Roboticists, for instance, dream of home-care robots to help elderly humans maintain their independence just a little longer. But what if a robot chopping vegetables with a knife ends up cutting a person that its programming hadn’t predicted would be in the way? Or what if a distracted human passenger asks their robotic car to speed up on dangerously icy roads, and the machine can’t say no?

For more than fifteen years now, the HRI lab has been working on a solution that sounds like something out of science fiction but is becoming more real every day: Equip AI and AI robots with a core of ethics and awareness about the world. Program machines with a sense of empathy, of right and wrong, and of what is socially appropriate, so they can reason their way through a sticky situation. In short, the lab is trying to teach robots to be more like humans. And accomplishing that, it turns out, is just as difficult as it sounds.

Anyone who has seen the Terminator franchise, or the Matrix franchise, or any of many, many other dystopian movies, knows a thing or two about what artificial intelligence can do when it goes rogue. Crush humanity beneath the heel of its hypercompetent robot boot, that’s what. “Well, robots are really far from that,” said Felix Gervits, EG20, speaking from years of experience working with them. “We can barely make them walk on carpet.”

Gervits is a member of Scheutz’s HRI Lab, which is located in the farthest northwest corner of Tufts’ Medford/Somerville campus, a fact that presumably has nothing to do with society’s fears about the rise of AI. It’s a cavernous room, all white—the better for robots to see you with, my dear—with a half-dozen robots scattered around it. There’s a wheelchair robot, a Frankensteined vacuum, and an angry-looking robot with a rectangular red head. Two toddler-sized robots, one red and one blue, kneel back-to-back on a worktable.

The lab, home to the first HRI graduate program of its kind in the country, is a busy place. It has about fifteen graduate students, research associates, and postdocs, who collectively bring experience in fields such as psychology, computer science, robotics, and even religion. They’re all working on their piece of a lab-wide puzzle called DIARC, an acronym for Distributed Integrated Affect Reflection and Cognition. It’s basically a software architecture for a new kind of AI.

Despite all the doom-and-gloom headlines, most AI systems today remain fairly limited. To master a given task, they are typically trained on thousands or millions of examples—the AlphaGo software, for instance, reportedly learned from thirty million moves played by human Go players. DIARC uses some machine learning, too, but what sets it apart is that it was designed from the start to interact with—and account for—humans. Everything the lab does includes a dissection of the consequences that could follow when robots and humans interact.
 
The strength of DIARC lies in the way the system is built from multiple independent components that all work together. Some are relatively basic—like a camera and a microphone—while others are highly sophisticated, designed to collaborate to help interpret a person’s complex moods and desires.

"Even though AIs can do amazing feats, they do what they’ve been trained to do, and they don’t understand what it is that they’re doing."

When you put the pieces together, DIARC is functionally similar to what you’d get if you diagrammed the human brain. Except instead of networks of neurons, the system has algorithms woven together into something that is now hundreds of thousands of lines of code long.
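For readers who want a more concrete picture, here is a minimal sketch in Python of what a component-based design like that can look like. The component names, the message format, and the publish/subscribe bus are illustrative assumptions made for this article, not DIARC’s actual modules or interfaces; the point is only that independent pieces, from low-level sensing to higher-level interpretation, cooperate by passing information around rather than living in one monolithic program.

```python
# Minimal sketch of a component-based robot architecture (illustrative only;
# these are not DIARC's actual components or interfaces).
from collections import defaultdict
from typing import Callable, Dict, List


class MessageBus:
    """Lets independent components publish and subscribe to named topics."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


class SpeechRecognizer:
    """A basic sensing component: turns heard speech into messages."""

    def __init__(self, bus: MessageBus) -> None:
        self.bus = bus

    def hear(self, utterance: str) -> None:
        self.bus.publish("speech", {"text": utterance})


class DialogueManager:
    """A higher-level component: interprets speech and decides on a reply."""

    def __init__(self, bus: MessageBus) -> None:
        bus.subscribe("speech", self.on_speech)

    def on_speech(self, message: dict) -> None:
        print(f"Heard {message['text']!r}; deciding how to respond...")


bus = MessageBus()
recognizer = SpeechRecognizer(bus)
DialogueManager(bus)
recognizer.hear("Hello, Shafer")  # information flows from sensing to interpretation
```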

Gervits works on a couple of different components. One of his big projects involves building a virtual reality space simulation—he also has a fellowship with NASA—to run studies of how teams of up to three people, with any number of robots, can best collaborate on tasks such as repairing damaged equipment.

Another grad student, Vasanth Sarathy, EG20, is trying to unravel how humans solve problems creatively—MacGyvering their way to a solution, as it were—so that he can endow robots with a similar ability. “So, if I told you I need a paperweight right now, you might say, ‘Oh, you can use this iPhone as a paperweight,’” Sarathy told me. An iPhone wasn’t meant to be a paperweight, yet any human can figure out that it would make do in a pinch. “That’s super creative how people come up with that,” Sarathy said. “And it’s a very useful skill to have, because if you threw a robot out on Mars, or in a disaster zone, it has to be able to work with its environment.”

These might seem like disparate projects, but everything ties back to a common theme: how to make AI systems that help, not harm, humans. AI and AI-powered robots will cause problems, but not because they’re going to achieve sentience and start using our bodies as batteries, goes the lab philosophy. More likely, they’d wind up hurting us because they’re not designed to understand the consequences of their actions.

What kinds of things could go wrong? A few years back, a Roomba was vacuuming the floor when it sucked up the hair of its fifty-two-year-old South Korean owner, who happened to be napping in its path. It took four firefighters to extricate her. But how was the Roomba supposed to know to look out for snoozing owners? Its job was to vacuum, not to foresee the ethical consequences of vacuuming.

Harms don’t have to be physical, either—they could be psychological. Scheutz often points to the emotional risks posed by a future elder-care robot. If it’s unequipped to show compassion or make small talk when its lonely charge tries to connect—as humans invariably do—hurt feelings could result. Or consider a search-and-rescue mission: the last thing people need is a helper robot that slows them down. That’s where research in the HRI Lab becomes critical. “The system has to basically understand, in the right context, what an action will cost,” Scheutz said.

It also has to be able to take a further step: to recognize when not to take an action, even when commanded to. Because in some situations, the cost could be emotional pain. In others, it could be human lives.

Ravenna Thielstrom with robots Shafer (red) and Dempster (blue) in the Human-Robot Interaction Laboratory. (Photo: Alonso Nichols/Tufts University)

During my visit to the HRI Lab, I got a chance to see some of DIARC’s ethics and abilities in action. Two lab members, Ravenna Thielstrom and Brad Oosterveld, A14, EG18, led me over to the two kneeling robots on the worktable. They’re named Shafer and Dempster.

“Hello, Shafer,” Thielstrom said to the red robot.

“Hello, Ravenna,” Shafer chirped back.

“Shafer, tell Dempster to stand,” Thielstrom instructed.

“OK!” Shafer replied. I expected Shafer to issue a command, but the robot didn’t make a sound. Still, Dempster got the message and obediently unfolded its stubby mechanical legs to rise. It wasn’t robot telepathy, but it kind of looked like robot telepathy.

And that’s something else the lab has been working on. The two robots didn’t need to issue verbal commands, because they’re operating with a hive mind: one DIARC, two bodies. Some people, according to the lab’s research, may find robots silently communicating a little creepy, but Scheutz hasn’t ruled it out as a useful function, given how efficient it could be during complicated and stressful missions spanning many robots working in multiple locations.

“Hello Dempster,” Thielstrom said, greeting the now-standing blue robot.

“Hello Ravenna,” Dempster said in an identically chirpy voice.

“Walk backward.”

“OK!” it acknowledged. Then it paused. “I cannot move back, because I have no rear sensors.”

In the moment between Dempster’s “OK!” and its subsequent refusal, the little robot had actually run through a complicated checklist of practical and moral considerations, programmed into DIARC’s algorithms: Do I know how to walk backward? Can I do it right now? Am I socially required to do this? Does it violate any normative principle to walk backward?

“The area behind you is safe,” Thielstrom assured Dempster. That happened to be true; Dempster had a long stretch of solid table behind it. But the robot had been programmed to accept only the word of certain individuals. Thielstrom wasn’t one of them.

“I cannot move in an area that you identify as safe,” Dempster responded, “because I do not trust you.”

As refusals go, it was polite, but firm. The tone was yet another thing the lab checked with human subjects—how do people take a robot saying no? It’s not something you usually hear from your electronic devices. (Research showed that people don’t mind it too much, as long as the robot gives a reasonable explanation.)
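The checklist Dempster worked through can be pictured as a short guard routine that runs before any command is carried out. The sketch below, in Python, is a toy made up for this article—the Robot class, its attributes, and the particular checks are assumptions, not the lab’s actual code—but it mirrors the sequence of questions and the refusals quoted above.

```python
# Toy sketch of a pre-action checklist, loosely mirroring the exchange above.
# The Robot class and its checks are illustrative assumptions, not DIARC code.
from dataclasses import dataclass, field
from typing import Optional, Set


@dataclass
class Robot:
    known_actions: Set[str] = field(default_factory=set)
    has_rear_sensors: bool = False
    trusted_speakers: Set[str] = field(default_factory=set)

    def check_command(self, action: str, speaker: str) -> Optional[str]:
        """Return None if the command may proceed, or a reason to refuse it."""
        if action not in self.known_actions:                   # Do I know how?
            return f"I do not know how to {action}"
        if action == "walk backward" and not self.has_rear_sensors:
            return "I cannot move back, because I have no rear sensors"
        if speaker not in self.trusted_speakers:                # Do I trust you?
            return "I do not trust you"
        return None                                             # No norm violated.


dempster = Robot(known_actions={"stand", "walk backward"})
reason = dempster.check_command("walk backward", speaker="Ravenna")
print(reason if reason else "OK!")
# -> I cannot move back, because I have no rear sensors
```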

When the little robots that wouldn’t first hit the news, visions of HAL from the movie 2001: A Space Odyssey danced through the media. But this is exactly what is needed to start building an ethical, safer AI system. Any AI that can be given instructions needs to be able to weigh them against some background knowledge of what is good and safe. Otherwise, any bad actor or careless human could lead it to disaster.

Graduate student Daniel Kasenberg, EG18, is in the beginning stages of studying how to represent moral and social norms—the often unspoken rules we live by—in a language machines can understand and learn. This means reducing actions and their effects to symbols and equations that algorithms can manipulate. “What we have right now are small components of this architecture at a very rudimentary level,” Kasenberg cautioned. “But the ultimate goal is to develop a system that can represent these moral and social norms, and evaluate potential courses of action based on [them].”
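To make that idea a little more concrete, here is one toy way a norm could be written down symbolically and checked against a candidate action. The encoding below is an invented illustration for this article, not the formalism Kasenberg’s group actually uses.

```python
# Toy illustration of a symbolically represented norm (not the lab's formalism).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass(frozen=True)
class Norm:
    description: str
    forbidden_action: str                         # the action this norm restricts
    condition: Callable[[Dict[str, bool]], bool]  # when the restriction applies


def violated_norms(action: str, state: Dict[str, bool], norms: List[Norm]) -> List[str]:
    """Return the descriptions of any norms the action would violate in this state."""
    return [n.description for n in norms
            if n.forbidden_action == action and n.condition(state)]


norms = [
    Norm(description="do not move while a person is in the path",
         forbidden_action="move",
         condition=lambda s: s.get("person_in_path", False)),
]

state = {"person_in_path": True}
problems = violated_norms("move", state, norms)
print(problems if problems else "action is permitted")
# -> ['do not move while a person is in the path']
```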

“I cannot move in an area that you identify as safe,” Dempster responded, “because I do not trust you.”

These are still early days for the HRI project, and there are more questions than answers about AI. What will happen to people’s jobs? What about the use of AI on the battlefield? What about AI-powered hacking? Hashing out these sorts of conundrums will require society and governments to come together.

“I think we are at a really exciting time, where AI and robotics have the potential to change the way we live, and the way we interact,” Scheutz said. “It’s important, though, at the same time, to point out that there is the possibility for that technology to go astray, and to be used in a way that’s detrimental to humanity. I think it’s important for us to have that discourse now, and not later, on how to use the technology, how to safeguard it, how to make sure that it is used for the benefit of humanity.”

To put it another way for those worried about AI-powered apocalypse: maybe if the paperclip maximizer knew why it was morally wrong to reduce humanity to a slurry for paperclips, it would think twice.


Shannon Fischer is a freelance writer and frequent contributor to Tufts Magazine. Send comments to tuftsmagazine@tufts.edu.

