
24 Oct 2023

This new data poisoning tool lets artists fight back against generative AI

What’s happening: A new tool lets artists make invisible changes to the pixels in their art before they upload it online, so that if the work is scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

Why it matters: The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without their creators’ permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless.

How it works: Nightshade exploits a security vulnerability in generative AI models, one arising from the fact that they are trained on vast amounts of data: in this case, images that have been hoovered from the internet. Poisoned data samples can manipulate models into learning, for example, that images of hats are cakes and images of handbags are toasters. There is currently almost no way to defend against this sort of attack.
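The core idea is easier to see with a toy example. The Python sketch below is a minimal, hypothetical illustration of bounded-perturbation poisoning, not Nightshade’s actual algorithm: the `poison_image` helper, the `EPSILON` bound, and the random stand-in images are all assumptions for illustration. A real attack would optimize the perturbation against a model’s own feature extractor rather than blending raw pixels.

```python
import numpy as np

EPSILON = 4 / 255  # max per-pixel change; small enough to be invisible (assumed bound)

def poison_image(image: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Nudge `image` toward `target`, clipping the perturbation so that no
    pixel moves more than EPSILON. In a real attack, `target` would be a
    direction computed against the model's feature extractor, not raw pixels."""
    perturbation = np.clip(target - image, -EPSILON, EPSILON)
    return np.clip(image + perturbation, 0.0, 1.0)

# Stand-ins for real photos, as float arrays in [0, 1].
hat = np.random.rand(64, 64, 3)   # the image the artist is protecting
cake = np.random.rand(64, 64, 3)  # an image of the attacker's target concept

poisoned = poison_image(hat, cake)

# The pixels barely change, so a human still sees a hat...
print(np.abs(poisoned - hat).max())  # <= EPSILON, roughly 0.0157

# ...but at training-set scale, pairs like (poisoned, "hat") whose features
# lean toward "cake" can push a model toward the wrong association.
```

Scattered across enough scraped images, perturbations of this kind are what corrupt a model’s concept associations in the hat-to-cake fashion described above.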

See the full story here: https://www.technologyreview.com/2023/10/23/1082189/data-poisoning-artists-fight-generative-ai/
