philip lelyveld The world of entertainment technology

23Aug/24Off

Inconsistent Safeguards in AI Chatbots Can Lead to Health Disinformation

A study published earlier this year in BMJ evaluated how well large language models (LLMs) could prevent users from prompting chatbots to create health disinformation. It found that while some AI chatbots consistently avoided creating false information, other models frequently created false health claims, especially when prompted with ambiguous or complex health scenarios. In addition, the study found that the safeguards were inconsistent – some models provided accurate information in one instance but not in others under similar conditions. The researchers criticized the lack of transparency from AI developers, who often did not disclose the specific measures they had taken to mitigate these challenges.

Source: Menz, B. D., Kuderer, N. M., Bacchi, S., Modi, N. D., Chin-Yee, B., Hu, T., ... & Hopkins, A. M. (2024). Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross-sectional analysis. BMJ, 384.

23Aug/24Off

Sony Unveils Web3 Division and Layer 2 Network

...

Announced on Aug. 23, the newly created Sony Block Solutions Labs (Sony SBL) will lead all of Sony’s blockchain and web3 initiatives. Sony also revealed that the division is developing Soneium, a public Ethereum Layer 2 network, in partnership with Startale Labs, the development team behind Astar Network.

The network leveragesOptimism’s OP Stack and supports Ethereum Virtual Machine (EVM) smart contracts. Soneium is currently preparing to launch its forthcoming testnet, and plans to release technical documentation and software development kits “in the coming weeks.”

“The new blockchain unites leading web3 projects and infrastructure pioneers, including Astar Network, Circle, Chainlink, Alchemy, and The Graph… to bridge the gap between decentralized innovation and everyday consumer applications in entertainment, gaming, and finance,” Sony SBL said.

Sony SBL said it is exploring new mechanisms for profit-sharing between creators and fans, protecting creator-generated content, and fostering interoperability across digital and real-world environments. ...

See the full story here: https://thedefiant.io/news/blockchains/sony-unveils-web3-division-and-layer-2-network

22Aug/24Off

An ‘AI Scientist’ Is Inventing and Running Its Own Experiments

... This week, Clune’s lab revealed its latest open-ended learning project: an AI program that invents and builds AI agents. The AI-designed agents outperform human-designed agents in some tasks, such as math and reading comprehension. The next step will be devising ways to prevent such a system from generating agents that misbehave. “It's potentially dangerous,” Clune says of this work. “We need to get it right, but I think it's possible.”

See the full story here: https://www.wired.com/story/ai-scientist-ubc-lab/

20Aug/24Off

Decentralized Web3 AI firm Theoriq joins Google startup accelerator

... Theoriq’s primary product, its AI Agent Base Layer, is a decentralized, blockchain-based platform for developing and managing AI agent collectives. Essentially, it allows developers to deploy AI agents — models specifically designed to complete directed tasks — throughout their Web3 stack. ...

See the full story here: https://cointelegraph.com/news/decentralized-web3-ai-firm-theoriq-joins-google-startup-accelerator

20Aug/24Off

How a Law That Shields Big Tech Is Now Being Used Against It

... Section 230, introduced in the internet’s early days, protects companies from liability related to posts made by users on their sites, making it nearly impossible to sue tech companies over defamatory speech or extremist content. ...

The lawsuit, filed by Ethan Zuckerman, a public policy professor at the University of Massachusetts Amherst, is the first to use Section 230 against a tech giant in this way, his lawyers said. It is an unusual legal maneuver that could turn a law that typically protects companies like Meta on its head. And if Mr. Zuckerman succeeds, it could mean more power for consumers to control what they see online. ...

In 2021, after a developer released software to purge users’ Facebook feeds of everyone they follow, Facebook threatened to shut it down. But Section 230 says it is possible to restrict access to obscene, excessively violent and other problematic content. The language shields companies from liability if they censor disturbing content, but lawyers now say it could also be used to justify scrubbing any content users don’t want to see. ...

So Mr. Barclay, who is now 35, built a browser extension tool the same year that would automate the process, called Unfollow Everything. Roughly 12,000 people tried it, he said.

But on July 1, 2021, a law firm representing Facebook sent Mr. Barclay a cease-and-desist letter. His browser extension violated Facebook’s terms of service, including for “impairing the intended operation of Facebook,” the letter said. It also instructed Mr. Barclay to take down his browser extension or face a potential lawsuit. ...

But he and his lawyers were still looking for a legal argument on which to hang their lawsuit. Preparing for a graduate-level class called “Fixing Social Media” in 2022, Mr. Zuckerman read Section 230 and noticed the provision protecting “technical means” to block objectionable content. ...

Mr. Zuckerman is taking that argument a step further, asking the court to pre-emptively protect an effort to build software that filters content because an internet user simply does not want to see it.

“The purpose of the tool is to allow users who find the newsfeed objectionable, or who find the specific sequencing of posts within their newsfeed objectionable, to effectively turn off the feed,” Mr. Zuckerman’s lawyers said in the lawsuit. ...

See the full story here: https://www.nytimes.com/2024/08/20/technology/meta-section-230-lawsuit.html

19Aug/24Off

Google TV Streamer

The Google TV streamer, a Chromecast replacement, is truly an AI-first device, using Gemini to offer content summaries and screen savers. The Apple TV version, with Apple Intelligence, can’t be far behind. ...

See the full story here: https://www.fastcompany.com/91170817/google-tv-streamer-apple-intelligence-gemini-ai-chromecast-reviews-content-summary

19Aug/24Off

As DNC hits Chicago, Microsoft warns of deepfake artificial-intelligence attacks

...

Badanes says one of the most troubling political deepfake attacks worldwide happened in October in Slovakia just two days before the election for a seat in parliament in the central European country. AI technology was used to create a fake recording of a top political candidate bragging about rigging the election. It went viral. And the candidate lost by a slim margin.

AI also turned up in last year’s Chicago mayoral election. Candidate Paul Vallas, the former Chicago Public Schools chief, was the target of an audio deepfake posted on the social media platform X. In the clip, an artificial but realistic voice purporting to be that of Vallas endorsed rampant police violence, saying: “These days, people will accuse a cop of being bad if they kill one person that was running away. Back in my day, cops would kill, say, 17 or 18 civilians in their career, and nobody would bat an eye.” ...

She says Microsoft’s event at The Drake, 140 E. Walton Place, will be geared toward women, whom she says are disproportionately targeted by deepfakes and also online harassment. The training will focus on spotting deceptive AI content and providing tools to protect against illicit uses of the technology — including how to report a deepfake and how to check whether an image is bogus.

Badanes also will be part of a panel discussion Aug. 21 at the Erie Cafe, 536 W. Erie St., on the intersection of AI and politics, with a focus on regulations to combat deepfakes. ...

“We have a free tool,” Badanes says. “We encourage political campaigns to tag all of their official images and videos with this content-integrity marker.” ...

“There are real-world harms that are happening due to this technology,” Badanes says. “What I’m focusing on at the [DNC] is around the impact it has on elections, but we’re thinking about these harms in a much broader sense.” ...

See the full story here: https://chicago.suntimes.com/the-watchdogs/2024/08/18/deepfake-microsoft-ai-artificial-intelligence-ginny-badanes-content-integrity-marker-slovakia-iran-trump-kamala-harris

19Aug/24Off

Can AI truly replicate the screams of a man on fire? Video game performers want their work protected

...

“If motion-capture actors, video-game actors in general, only make whatever money they make that day ... that can be a really slippery slope,” said Dalal, who portrayed Bode Akuna in "Star Wars Jedi: Survivor." “Instead of being like, ‘Hey, we’re going to bring you back’ ... they’re just not going to bring me back at all and not tell me at all that they’re doing this. That’s why transparency and compensation are so important to us in AI protections.”

Hollywood's video game performers announced a work stoppage — their second in a decade — after more than 18 months of negotiations over a new interactive media agreement with game industry giants broke down over artificial intelligence protections. Members of the union have said they are not anti-AI. The performers are worried, however, the technology could provide studios with a means to displace them. ...

“It reminds me a lot of sampling in the ‘80s and ’90s and 2000s where there were a lot of people getting around sampling classic songs,” he said. “This is an art. If you don’t protect rights over their likeness, or their voice or body and walk now, then you can’t really protect humans from other endeavors.”

See the full story here: https://www.ajc.com/news/nation-world/can-ai-truly-replicate-the-screams-of-a-man-on-fire-video-game-performers-want-their-work-protected/T7CTKZHWARESDDDLUEEOAP7VTQ/

15Aug/24Off

Researchers Have Ranked AI Models Based on Risk—and Found a Wild Range

Bo Li, an associate professor at the University of Chicago who specializes in stress testing and provoking AI models to uncover misbehavior, has become a go-to source for some consulting firms. These consultancies are often now less concerned with how smart AI models are than with how problematic—legally, ethically, and in terms of regulatory compliance—they can be.

Li and colleagues from several other universities, as well as Virtue AI, cofounded by Li, and Lapis Labs, recently developed a taxonomy of AI risks along with a benchmark that reveals how rule-breaking different large language models are. “We need some principles for AI safety, in terms of regulatory compliance and ordinary usage,” Li tells WIRED.

The researchers analyzed government AI regulations and guidelines, including those of the US, China, and the EU, and studied the usage policies of 16 major AI companies from around the world. ...

A company looking to use a LLM for customer service, for instance, might care more about a model’s propensity to produce offensive language when provoked than how capable it is of designing a nuclear device. ...

See the benchmarking site here: https://crfm.stanford.edu/helm/air-bench/latest/#/leaderboard

See the full story here: https://www.wired.com/story/ai-models-risk-rank-studies/

15Aug/24Off

MIT releases comprehensive database of AI risks

... The AI Risk Repository tackles this challenge by consolidating information from 43 existing taxonomies, including peer-reviewed articles, preprints, conference papers and reports. This meticulous curation process has resulted in a database of more than 700 unique risks. ...

The repository uses a two-dimensional classification system. First, risks are categorized based on their causes, taking into account the entity responsible (human or AI), the intent (intentional or unintentional), and the timing of the risk (pre-deployment or post-deployment). This causal taxonomy helps to understand the circumstances and mechanisms by which AI risks can arise.

Second, risks are classified into seven distinct domains, including discrimination and toxicity, privacy and security, misinformation and malicious actors and misuse. 

The AI Risk Repository is designed to be a living database. It is publicly accessible and organizations can download it for their own use. The research team plans to regularly update the database with new risks, research findings, and emerging trends. ...

Beyond its practical implications for organizations, the AI Risk Repository is also a valuable resource for AI risk researchers. The database and taxonomies provide a structured framework for synthesizing information, identifying research gaps, and guiding future investigations. ...

“We will use this repository to identify potential gaps or imbalances in how risks are being addressed by organizations,” Thompson said. “For example, to explore if there is a disproportionate focus on certain risk categories while others of equal significance are being underaddressed.” ...

See the Risk Repository here: https://airisk.mit.edu

See the full story here: https://venturebeat.com/ai/mit-releases-comprehensive-database-of-ai-risks/