Google has created a set of seven principles for its artificial-intelligence researchers to live by—and they prohibit weapons technology.
An AI code of ethics: Google’s new guidelines prohibit it from helping develop autonomous weapons in the future, but leave enough wiggle room for it to benefit from lucrative defense deals. Google’s CEO, Sundar Pichai, announced the new code in a blog post yesterday.
The principles state that Google’s AI should:

- Benefit society
- Avoid algorithmic bias
- Respect privacy
- Be tested for safety
- Be accountable to the public
- Maintain scientific rigor
- Be made available to others in accordance with the same principles
Some background: The announcement comes in the wake of employee protest over the Department of Defense’s use of Google’s AI to improve the accuracy of drone strikes, among other things.
See the seven principles in detail here: https://blog.google/topics/ai/ai-principles/

See the story here: https://www.technologyreview.com/s/611379/dont-be-ai-vil-google-says-its-algorithms-will-do-no-harm/