philip lelyveld The world of entertainment technology

20 Apr 2019

AI IS SMART. CAN WE MAKE IT KIND?

“Even though AIs can do amazing feats, they do what they’ve been trained to do, and they don’t understand what it is that they’re doing.”

This disregard for humans is taken to an extreme in a famous thought experiment dreamt up by Nick Bostrom, the University of Oxford philosopher. The “paperclip maximizer” scenario starts innocently enough, with a hyperintelligent AI assigned to teach itself to maximize a factory’s production of paperclips. But things turn dark when the AI goes on to pursue its assigned task with such single-minded devotion that its factory robots end up converting humanity, the world, “and then increasingly large chunks of the observable universe into paperclips.” That future machine didn’t mean to eradicate humanity, but that wouldn’t matter much to us.

Scheutz and his group are focused more on problems that could arise in the foreseeable future, when we’ll be living and working with robots that may not be able to recognize the consequences of their actions.

For more than fifteen years now, the HRI lab has been working on a solution that sounds like something out of science fiction but is becoming more real every day: equip AI systems and robots with a core of ethics and awareness of the world. Program machines with a sense of empathy, of right and wrong, and of what is socially appropriate, so they can reason their way through a sticky situation. In short, the lab is trying to teach robots to be more like humans. And accomplishing that, it turns out, is just as difficult as it sounds.

DIARC, the robot-control architecture developed in Scheutz’s lab, uses some machine learning, too, but what makes it different is that it was designed from the start to fundamentally interact with—and account for—humans. Everything the lab does includes a dissection of the consequences that could follow when robots and humans interact.

More likely, they’d wind up hurting us because they’re not designed to understand the consequences of their actions.

Harms don’t have to be physical, either—they could be psychological. Scheutz often points to the emotional risks posed by a future elder-care robot. If it’s unequipped to show compassion or make small talk when its lonely charge tries to connect—as humans invariably do—hurt feelings could result.

The two robots didn’t need to issue verbal commands, because they were operating with a hive mind: one DIARC, two bodies. Some people, according to the lab’s research, may find silently communicating robots a little creepy, but Scheutz hasn’t ruled out silent coordination as a useful function, given how efficient it could be during complicated, stressful missions spanning many robots in multiple locations.
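
A minimal sketch of what that “one mind, two bodies” arrangement might look like, using hypothetical SharedController and RobotBody classes rather than DIARC’s actual interfaces: a single controller holds the whole plan and hands tasks directly to each body, so no robot-to-robot verbal commands are needed.

```python
# Hypothetical sketch (not DIARC's actual API): one shared controller
# coordinating two robot bodies, so tasks are assigned internally rather
# than through spoken commands between robots.
from dataclasses import dataclass, field


@dataclass
class RobotBody:
    """A single embodiment that simply executes whatever it is told."""
    name: str
    log: list = field(default_factory=list)

    def execute(self, task: str) -> None:
        self.log.append(task)
        print(f"{self.name}: executing '{task}'")


class SharedController:
    """One 'mind' that plans for every body it controls (the 'hive mind')."""

    def __init__(self, bodies: list[RobotBody]):
        self.bodies = bodies

    def assign(self, tasks: list[str]) -> None:
        # Round-robin assignment: no body needs to ask another what to do,
        # because the same controller already knows the whole plan.
        for i, task in enumerate(tasks):
            self.bodies[i % len(self.bodies)].execute(task)


if __name__ == "__main__":
    controller = SharedController([RobotBody("robot_a"), RobotBody("robot_b")])
    controller.assign(["open the door", "fetch the toolkit", "report status"])
```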

Graduate student Daniel Kasenberg, EG18, is in the beginning stages of studying how to represent moral and social norms—the often unspoken rules we live by—in a language machines can understand and learn. This means reducing actions and effects into algorithmic equations and symbols. “What we have right now are small components of this architecture at a very rudimentary level,” Kasenberg cautioned. “But the ultimate goal is to develop a system that can represent these moral and social norms, and evaluate potential courses of action based on [them].”
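
As a rough illustration of the kind of representation Kasenberg describes (a toy sketch, not the lab’s actual formalism, with made-up norms, weights, and an elder-care scenario), norms can be written as symbolic predicates over an action’s predicted effects and then used to score candidate courses of action:

```python
# Toy illustration of the general idea, not the lab's actual formalism:
# norms are symbolic rules with weights, and candidate actions are scored
# by which norms their predicted effects satisfy or violate.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Norm:
    description: str
    holds: Callable[[dict], bool]  # predicate over an action's predicted effects
    weight: float                  # how strongly satisfying/violating it counts


# Hypothetical norms for an elder-care scenario.
NORMS = [
    Norm("do not leave the person unattended",
         lambda e: not e.get("leaves_alone", False), 5.0),
    Norm("respond when spoken to",
         lambda e: e.get("responds", False), 3.0),
    Norm("finish the assigned chore",
         lambda e: e.get("chore_done", False), 1.0),
]


def score(effects: dict) -> float:
    """Add the weight of each satisfied norm, subtract each violated one."""
    return sum(n.weight if n.holds(effects) else -n.weight for n in NORMS)


# Two candidate courses of action, described by their predicted effects.
candidates = {
    "keep vacuuming": {"chore_done": True, "responds": False, "leaves_alone": False},
    "pause and chat": {"chore_done": False, "responds": True, "leaves_alone": False},
}

best = max(candidates, key=lambda name: score(candidates[name]))
print(f"preferred action: {best}")  # the norm weights favor pausing to respond
```

In this toy scoring scheme, the weighted norm for responding to a lonely person outranks finishing the chore, echoing the elder-care concern above; a real system would also have to handle conflicting norms, uncertainty about an action’s effects, and norms learned from people rather than hand-coded.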

See the full story here: https://tuftsmagazine.com/issues/magazine/2019/spring/ai-smart-can-we-make-it-kind?utm_source=email&utm_medium=university&utm_campaign=news_alumni_04202019_1164_(MAG)(AS)
