... However, in 2025, AI will still pose a massive risk: not from artificial superintelligence, but from human misuse. ...
Other misuses are intentional. In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms. ...
In 2025, it will get even harder to distinguish what’s real from what’s made up. The fidelity of AI-generated audio, text, and images is remarkable, and video will be next. This could lead to the “liar’s dividend”: those in positions of power repudiating evidence of their misbehavior by claiming that it is fake. In 2023, responding to allegations that the CEO had exaggerated the safety of Tesla Autopilot in remarks linked to a fatal crash, Tesla argued that a 2016 video of Elon Musk could have been a deepfake. ...
Meanwhile, companies are exploiting public confusion to sell fundamentally dubious products by labeling them “AI.” This can go badly wrong when such tools are used to classify people and make consequential decisions about them. ...
Mitigating these risks is a mammoth task for companies, governments, and society. It will be hard enough without getting distracted by sci-fi worries.
See the full story here: https://www.wired.com/story/human-misuse-will-make-artificial-intelligence-more-dangerous/