
14 Dec 2023

What Grok’s recent OpenAI snafu teaches us about LLM model collapse

... “It really shows that these models are not going to be reliable in the long run if they learn from post-LLM age data—without being able to tell what data has been machine-generated, the quality of the outputs will continue to decline,” says Catherine Flick, a professor of ethics and games technology at Staffordshire University.

The reason for that decline is the recursive nature of the LLM loop—and exactly what could have caused the snafu with Grok. ...
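That recursive dynamic is easy to demonstrate in miniature. The sketch below is my own toy illustration, not code from the article or from Grok: the "model" is simply a Gaussian whose mean and standard deviation are re-estimated, each generation, from samples drawn out of the previous generation's fit. Even this trivial loop drifts and narrows over time.

```python
# Toy sketch of recursive training on model output (model collapse).
# Assumption: the "model" is a Gaussian refit to its own samples each round.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data from the true distribution N(0, 1).
data = rng.normal(loc=0.0, scale=1.0, size=1000)

for generation in range(1, 11):
    # "Train" the next model: estimate mean and std from the current data.
    mu, sigma = data.mean(), data.std()
    # Publish synthetic output, which becomes the next training set.
    data = rng.normal(loc=mu, scale=sigma, size=1000)
    print(f"gen {generation:2d}: mean={mu:+.3f} std={sigma:.3f}")

# The std tends to shrink and the mean tends to wander: each generation
# is a slightly less faithful copy of the one before it.
```

Nothing about the loop corrects its own errors; estimation noise from each generation is baked into the training data for the next, which is the decline Flick describes.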

Winterbourne’s issues with Grok are just the tip of the iceberg. Researchers at Stanford University and the University of California, Berkeley, have shown visually what model collapse can do by feeding generative AI image creators their own AI-generated output: as the models began to break, the resulting distortions and warping turned perfectly normal human faces into grotesque caricatures. The fun “make it more” meme that has circulated on social media, where users ask AI image generators to make their output more extreme, also highlights what can happen when AI begins to train itself on AI-generated output. ...
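The image-warping result has a simple discrete analogue. The following sketch is likewise my own hypothetical illustration, not the Stanford/Berkeley experiment: a categorical distribution over ten "features" is repeatedly sampled and refit by raw frequency counts, and the rare features vanish first and never come back.

```python
# Toy sketch: retraining on your own samples erases rare outcomes first,
# roughly the way unusual facial features drop out of recursively trained
# image models. Assumption: refitting is plain frequency counting.
import numpy as np

rng = np.random.default_rng(1)

# True distribution over 10 "features"; some common, some rare.
probs = np.array([0.30, 0.20, 0.15, 0.10, 0.08,
                  0.07, 0.05, 0.03, 0.015, 0.005])

for generation in range(1, 9):
    # Sample a finite training set from the current model...
    samples = rng.choice(len(probs), size=500, p=probs)
    # ...and refit by counting frequencies, with no smoothing: an
    # overconfident model never regenerates what it never saw.
    counts = np.bincount(samples, minlength=len(probs))
    probs = counts / counts.sum()
    print(f"gen {generation}: surviving features = {np.count_nonzero(probs)}")

# Once a rare feature draws zero samples, its probability is zero forever:
# diversity only ever decreases, and the output converges on a few
# increasingly distorted modes.
```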

“Each generation of a particular model will be that much less reliable as a source of true facts about the world because each will be trained with an ever less reliable data set,” says Mike Katell, ethics fellow at the Alan Turing Institute. ...

See the full story here: https://www.fastcompany.com/90998360/grok-openai-model-collapse
