How GenAI makes foreign influence campaigns on social media even worse
...
The same campaign can post a message with one account and then have other accounts that its organizers also control “like” and “unlike” it hundreds of times in a short time span. Once the campaign achieves its objective, all these messages can be deleted to evade detection. Using these tricks, foreign governments and their agents can manipulate the social media algorithms that decide what users see in their feeds based on what appears to be trending and engaging. ...
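To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of an engagement-based ranking signal that counts interaction events. The scoring rule, weights, and names are invented for illustration; real platform ranking systems are far more complex and not public. The point is only that repeated like/unlike cycles can register as activity even when the net like count barely changes.

```python
# Hypothetical sketch: coordinated like/unlike cycles inflating a toy
# engagement score. Weights and the scoring rule are illustrative only.

from dataclasses import dataclass


@dataclass
class Post:
    author: str
    likes: int = 0
    interaction_events: int = 0  # every like OR unlike counts as activity

    def like(self) -> None:
        self.likes += 1
        self.interaction_events += 1

    def unlike(self) -> None:
        self.likes = max(0, self.likes - 1)
        self.interaction_events += 1


def trending_score(post: Post) -> float:
    """Toy score that rewards raw activity volume, not just net likes."""
    return 0.3 * post.likes + 0.7 * post.interaction_events


organic = Post(author="regular_user")
for _ in range(20):            # 20 genuine likes
    organic.like()

coordinated = Post(author="campaign_account")
for _ in range(300):           # 300 like/unlike cycles by controlled accounts
    coordinated.like()
    coordinated.unlike()

print(trending_score(organic))      # modest score from real engagement
print(trending_score(coordinated))  # inflated score despite ~0 net likes
```

Deleting the coordinated posts afterwards, as the article notes, removes the evidence while the ranking boost has already done its work.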
In addition to posting machine-generated content, harmful comments, and stolen images, these bots engaged with each other and with humans through replies and retweets. ...
In a recent paper, we introduced a social media model called SimSoM that simulates how information spreads through the social network. The model has the key ingredients of platforms such as Instagram, X, Threads, Bluesky, and Mastodon: an empirical follower network, a feed algorithm, sharing and resharing mechanisms, and metrics for content quality, appeal, and engagement. ...
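The paper's actual SimSoM implementation is not reproduced here, but the following is a minimal sketch of that kind of simulation, assuming a simplified version of the ingredients listed above: a follower network, bounded per-user feeds, messages with independent quality and appeal values, and resharing that is biased toward appeal. Every name, parameter, and update rule is an illustrative assumption.

```python
# Minimal sketch of a SimSoM-style information-spread simulation.
# All parameters and rules are illustrative assumptions, not the paper's code.

import random
from collections import deque

random.seed(42)

N_USERS = 200
FEED_SIZE = 10        # each user sees only the most recent messages
RESHARE_BIAS = 2.0    # how strongly appeal drives resharing
N_STEPS = 2000

# Follower network: followers[u] = users who follow u (toy random graph)
followers = {u: set(random.sample(range(N_USERS), k=5)) for u in range(N_USERS)}

# Each user's feed is a bounded queue of (quality, appeal) messages
feeds = {u: deque(maxlen=FEED_SIZE) for u in range(N_USERS)}


def new_message():
    """Quality and appeal drawn independently: appealing but low-quality
    content is possible, which is what manipulation campaigns exploit."""
    return (random.random(), random.random())  # (quality, appeal)


def push_to_followers(author, msg):
    for f in followers[author]:
        feeds[f].appendleft(msg)


shared = []
for _ in range(N_STEPS):
    user = random.randrange(N_USERS)
    if feeds[user] and random.random() < 0.5:
        # Reshare a feed message with probability proportional to its appeal
        weights = [appeal ** RESHARE_BIAS + 1e-9 for _, appeal in feeds[user]]
        msg = random.choices(list(feeds[user]), weights=weights)[0]
    else:
        msg = new_message()
    push_to_followers(user, msg)
    shared.append(msg)

avg_quality = sum(q for q, _ in shared) / len(shared)
avg_appeal = sum(a for _, a in shared) / len(shared)
print(f"avg quality: {avg_quality:.3f}  avg appeal: {avg_appeal:.3f}")
```

In this toy run, the average appeal of circulating messages rises above its 0.5 baseline while average quality does not, because resharing selects on appeal alone; how quality, engagement, and coordinated accounts interact under more realistic settings is exactly the kind of question the paper's model is built to study.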
These insights suggest that social media platforms should engage in more, not less, content moderation to identify and hinder manipulation campaigns, thereby increasing their users’ resilience to them. ...
Regulation should therefore target AI content dissemination via social media platforms rather than AI content generation. For instance, before a large number of people can be exposed to some content, a platform could require the creator to demonstrate the content’s accuracy or provenance. ...
See the full story here: https://www.fastcompany.com/91205647/generative-ai-foreign-influence-campaigns-social-media-research