Philip Lelyveld | The world of entertainment technology

5 May 2023

AI exemplifies the ‘free rider’ problem – here’s why that points to regulation

... As a philosopher who studies technology ethics, I’ve noticed that AI research exemplifies the “free rider problem.” I’d argue that this should guide how societies respond to its risks – and that good intentions won’t be enough.

Riding for free

Free riding is a common consequence of what philosophers call “collective action problems.” These are situations in which, as a group, everyone would benefit from a particular action, but as individuals, each member would benefit from not doing it. ...
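The incentive structure described above can be made concrete with a toy two-player public goods game (an illustrative sketch of the standard game-theory model, not something from the article): each player either contributes one unit or not, contributions are multiplied by 1.5 and split evenly, and free riding dominates individually even though mutual contribution is better for everyone.

```python
# Toy public goods game illustrating the free-rider incentive.
# Assumptions (not from the article): 2 players, unit contributions,
# a multiplier of 1.5, and an even split of the pot.

def payoff(my_contrib, other_contrib, multiplier=1.5):
    """Return one player's net payoff: their share of the pot minus their cost."""
    pot = (my_contrib + other_contrib) * multiplier
    return pot / 2 - my_contrib

# If the other player contributes, I do better by free riding:
assert payoff(0, 1) > payoff(1, 1)   # 0.75 > 0.5
# If the other player free rides, I still do better by free riding:
assert payoff(0, 0) > payoff(1, 0)   # 0.0 > -0.25
# Yet mutual contribution beats mutual free riding for both players:
assert payoff(1, 1) > payoff(0, 0)   # 0.5 > 0.0
```

Whatever the other player does, each individual's best move is not to contribute, so the group ends up worse off than if everyone had contributed, which is exactly the collective action problem the article describes.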

Ripe for regulation

Decades of social science research on collective action problems have shown that where trust and goodwill are insufficient to avoid free riders, regulation is often the only alternative. Reliance on voluntary compliance is precisely what creates free-rider scenarios – and government action is at times the way to nip them in the bud.

Further, such regulations must be enforceable. After all, would-be subway riders might be unlikely to pay the fare unless there were a threat of punishment. ...

Effective regulation and enforcement of AI would require global collective action and cooperation, just as with climate change. In the U.S., strict enforcement would require federal oversight of research and the ability to impose hefty fines or shut down noncompliant AI experiments to ensure responsible development – whether that be through regulatory oversight boards, whistleblower protections or, in extreme cases, laboratory or research lockdowns and criminal charges.

Without enforcement, though, there will be free riders – and free riders mean the AI threat won’t abate anytime soon.

See the full story here: https://theconversation.com/ai-exemplifies-the-free-rider-problem-heres-why-that-points-to-regulation-203489
