How to Start an AI Panic
... In this moment of AI hype and uncertainty, Harris and Raskin are breaking the glass and pulling the alarm. It's not the first time they've set off sirens. Tech designers turned media-savvy communicators, they cofounded the Center for Humane Technology to inform the world that social media was a threat to society. The ultimate expression of their concerns came in their involvement in a popular Netflix documentary-cum-horror-film called The Social Dilemma. While the film is nuance-free and somewhat hysterical, I agree with many of its complaints about social media's attention capture, incentives to divide us, and weaponization of private data. These were presented through interviews, statistics, and charts. But the doc torpedoed its own credibility by cross-cutting to a hyped-up fictional narrative straight out of Reefer Madness, showing how a (made-up) wholesome heartland family is brought to ruin by Facebook posts: one kid radicalized and jailed, another depressed.
This one-sidedness also characterizes the Center's new campaign called, guess what, the AI Dilemma. (The Center is coy about whether another Netflix doc is in the works.) As with the previous dilemma, many of the points Harris and Raskin make are valid, such as our current inability to fully understand how bots like ChatGPT produce their output. ...
Instead, they warn of a world where the use of AI in a zillion different ways will cause chaos by allowing automated misinformation, throwing people out of work, and giving vast power to virtually anyone who wants to abuse it. The sin of the companies developing AI pell-mell is that they’re recklessly disseminating this mighty force. ...
But there’s another side to that coin—one where AI is humanity’s partner in improving life. This experiment also shows how AI might help us crack the elusive mystery of the brain’s operations, or communicate with people with severe paralysis. ...
What’s most frustrating about this big AI moment is that the most dangerous thing is also the most exciting thing. Setting reasonable guardrails sounds like a great idea, but doing that will be cosmically difficult, particularly when one side is going DEFCON and the other is going public, in the stock market sense. ...
It's good business to disseminate innovations to the public, whose lives will be improved and might even become more fun. But when the technologies are released with zero concern for their negative impact, those products are going to create misery. Holding researchers and companies accountable for such harms is a challenge that society has failed to meet. ...
See the full story here: https://www.wired.com/story/plaintext-how-to-start-an-ai-panic/