OpenAI’s attempts to watermark AI text hit limits
PhilNote: This sounds like the DRM debates at the CPTWG (Copy Protection Technical Working Group) all over again.
"We want it to be much harder to take [an AI system's] output and pass it off as if it came from a human," Aaronson said in his remarks. "This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda -- you know, spamming every blog with seemingly on-topic comments supporting Russia’s invasion of Ukraine without even a building full of trolls in Moscow. Or impersonating someone’s writing style in order to incriminate them."
... At their cores, the systems are constantly generating a mathematical function called a probability distribution to decide the next token (e.g., word) to output, taking into account all previously outputted tokens. ...
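As a toy illustration of the sampling step described above (the prefix, vocabulary, and probabilities here are invented for the example, not taken from any real model):

```python
import random

# Hypothetical next-token distribution a model might produce
# after the prefix "The cat sat on the"
next_token_probs = {"mat": 0.55, "sofa": 0.25, "roof": 0.15, "moon": 0.05}

def sample_next_token(dist: dict, rng: random.Random) -> str:
    """Draw one token at random, weighted by its probability."""
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
token = sample_next_token(next_token_probs, rng)
```

A real system repeats this loop, appending each sampled token to the context and computing a fresh distribution; the watermarking idea described next intervenes exactly at this sampling step.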
OpenAI's watermarking tool acts like a "wrapper" over existing text-generating systems, Aaronson said during the lecture, leveraging a cryptographic function running at the server level to "pseudorandomly" select the next token. In theory, text generated by the system would still look random to you or me, but anyone possessing the "key" to the cryptographic function would be able to uncover a watermark. ...
Unaffiliated academics and industry experts, however, shared mixed opinions. They note that the tool is server-side, meaning it wouldn't necessarily work with all text-generating systems. And they argue that it'd be trivial for adversaries to work around. ...
Even if OpenAI were to share the watermarking tool with other text-generating system providers, like Cohere and AI21Labs, this wouldn't prevent others from choosing not to use it. ...
See the full story here: https://news.yahoo.com/openais-attempts-watermark-ai-text-131511322.html
Also see https://www.deseret.com/2022/12/10/23501933/does-ai-mean-the-death-of-the-college-essay