OpenAI is developing an AI tool that can identify images created by artificial intelligence, specifically those made in whole or in part by its DALL-E 3 image generator. Calling it a “provenance classifier,” company CTO Mira Murati began publicly discussing the detection tool last week but said not to expect it in general release anytime soon, despite her claim that it is “almost 99 percent reliable.” That is still not good enough for OpenAI, which knows much is at stake when public perception of an artist’s work can be swayed by a label from a notoriously capricious AI filter. ...
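OpenAI has not disclosed how the classifier works, but at its core a provenance detector of this kind is a binary image classifier that outputs a confidence that a picture was machine-generated. The sketch below illustrates the general shape of that workflow only; the untrained ResNet-18 stand-in, the ImageNet-style preprocessing, the `provenance_score` helper, the `example.png` file, and the 0.99 flagging threshold are all illustrative assumptions, not OpenAI's actual model or API.

```python
# Minimal sketch of a binary "provenance" scorer. Everything here
# is a hypothetical stand-in; OpenAI has not published its model.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing; the real classifier's
# input pipeline is unknown, so this is an assumption.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# A two-way ResNet head stands in for the detector:
# class 0 = "not AI-generated", class 1 = "AI-generated".
# Untrained here; a real detector would load fine-tuned weights.
model = models.resnet18(weights=None, num_classes=2)
model.eval()

def provenance_score(path: str) -> float:
    """Return the model's confidence that the image is AI-generated."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return torch.softmax(logits, dim=1)[0, 1].item()

if __name__ == "__main__":
    score = provenance_score("example.png")  # hypothetical input file
    # A "99 percent reliable" claim implies a high decision threshold
    # before an image gets flagged as AI-generated.
    print(f"AI-generated confidence: {score:.3f}")
    print("Flagged as AI-generated" if score > 0.99 else "Not flagged")
```

A real deployment would also have to handle the edge cases discussed below, such as heavily edited or composited DALL-E 3 output, which a simple per-image confidence score like this cannot distinguish.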
TechCrunch writes that in addition to achieving something closer to 100 percent accuracy across the board, OpenAI is also concerned about “the philosophical question of what, exactly, constitutes an AI-generated image.”
Artwork wholly generated by DALL-E 3 is an obvious example, “but what about an image from DALL-E 3 that’s gone through several rounds of edits, has been combined with other images and then was run through a few post-processing filters?” muses TechCrunch. ...
The usefulness of an OpenAI image detector optimized for DALL-E 3 may be limited if it cannot spot pictures generated by competing technologies like Midjourney, Stable Diffusion, and Firefly, Digital Trends points out, adding that “anything that can highlight fake images could have a positive impact.” ...
See the full story here: https://www.etcentric.org/openai-developing-provenance-classifier-for-genai-images/