philip lelyveld The world of entertainment technology


When AI hurts people, who is held responsible?

“Where society decides that AI is too beneficial to set aside, we will likely need a new regulatory paradigm to compensate the victims of AI’s use, and it should be one divorced from the need to find fault. This could be strict liability, it could be broad insurance, or it could be ex ante regulation,” the paper reads.

Secrecy in the AI industry is a major hurdle when it comes to accountability. Negligence law typically evolves over time to reflect common definitions of what constitutes reasonable behavior on the part of, for example, a doctor or driver accused of negligence. But corporate secrecy is likely to keep common occurrences that result in injury hidden from the public. As with Big Tobacco, some of that information may come into public view through whistleblowers, but a lack of transparency leaves people exposed in the interim. And AI’s rapid development threatens to overwhelm the pace of changes to negligence or tort law, exacerbating the situation.

“As a result of the secrecy, we know little of what individual companies have learned about the errors and vulnerabilities in their products. Under these circumstances, it is impossible for the public to come to any conclusions about what kinds of failures are reasonable or not,” the paper states.

See the full story here:
