philip lelyveld The world of entertainment technology

20 Feb 2019

AI safety research needs social scientists to ensure AI succeeds when humans are involved. That’s the crux of the argument advanced in a new paper, “AI Safety Needs Social Scientists,” published by researchers at OpenAI, a San Francisco-based nonprofit backed by tech luminaries Reid Hoffman and Peter Thiel.

“Most AI safety researchers are focused on machine learning, which we do not believe is sufficient background to carry out these experiments,” the paper’s authors wrote. “To fill the gap, we need social scientists with experience in human cognition, behavior, and ethics, and in the careful design of rigorous experiments.”

They believe that “close collaborations” between these scientists and machine learning researchers are essential to improving “AI alignment” — the task of ensuring AI systems reliably perform as intended. And they suggest these collaborations take the form of experiments involving people playing the role of AI agents.

In one scenario illustrated in the paper, a “debate” approach to AI alignment, two human debaters argue whatever question they like while a judge observes. All three participants establish best practices, such as giving one party ample time to make its case before the other responds; the lessons from these human-only rounds are then applied to an AI debate in which two machines parry rhetorical blows.
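To make the setup concrete, here is a minimal sketch of such a human-surrogate debate experiment in Python. The paper does not prescribe an implementation; the structure, the debater names, and the fixed turn count below are all illustrative assumptions.

```python
# A minimal sketch of a human-surrogate "debate" experiment: two people argue
# a question in alternating turns while a third person judges. Everything here
# (names, turn count, console I/O) is a hypothetical stand-in, not the paper's
# actual protocol.

from dataclasses import dataclass, field


@dataclass
class DebateRound:
    question: str
    transcript: list[str] = field(default_factory=list)

    def add_argument(self, debater: str, argument: str) -> None:
        # Record each argument in order so the judge sees the full exchange.
        self.transcript.append(f"{debater}: {argument}")


def run_debate(question: str, n_turns: int = 3) -> str:
    """Run a debate between two human stand-ins and return the judge's verdict."""
    round_ = DebateRound(question)
    for turn in range(n_turns):
        # Each side gets an uninterrupted turn before the other responds,
        # one of the "best practices" the participants might converge on.
        for debater in ("Alice", "Bob"):
            argument = input(f"[Turn {turn + 1}] {debater}'s argument: ")
            round_.add_argument(debater, argument)
    print("\n".join(round_.transcript))
    return input("Judge's verdict (Alice or Bob): ")


if __name__ == "__main__":
    winner = run_debate("Should the judge trust claim X?")
    print(f"Winner: {winner}")
```

In the paper’s framing, transcripts and verdicts from rounds like these would inform how an AI-vs-AI debate should later be structured and judged.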

Toward that end, OpenAI researchers recently organized a workshop at Stanford University’s Center for Advanced Study in the Behavioral Sciences (CASBS), and OpenAI says it plans to hire social scientists to work on the problem full time.

See the full story here: https://venturebeat.com/2019/02/19/openai-social-science-not-just-computer-science-is-critical-for-ai/
