Can AI truly replicate the screams of a man on fire? Video game performers want their work protected
...
“If motion-capture actors, video-game actors in general, only make whatever money they make that day ... that can be a really slippery slope,” said Dalal, who portrayed Bode Akuna in “Star Wars Jedi: Survivor.” “Instead of being like, ‘Hey, we’re going to bring you back’ ... they’re just not going to bring me back at all and not tell me at all that they’re doing this. That’s why transparency and compensation are so important to us in AI protections.”
Hollywood's video game performers announced a work stoppage — their second in a decade — after more than 18 months of negotiations over a new interactive media agreement with game industry giants broke down over artificial intelligence protections. Members of the union have said they are not anti-AI. The performers are worried, however, that the technology could provide studios with a means to displace them. ...
“It reminds me a lot of sampling in the ‘80s and ’90s and 2000s where there were a lot of people getting around sampling classic songs,” he said. “This is an art. If you don’t protect rights over their likeness, or their voice or body and walk now, then you can’t really protect humans from other endeavors.”
See the full story here: https://www.ajc.com/news/nation-world/can-ai-truly-replicate-the-screams-of-a-man-on-fire-video-game-performers-want-their-work-protected/T7CTKZHWARESDDDLUEEOAP7VTQ/
Researchers Have Ranked AI Models Based on Risk—and Found a Wild Range
Bo Li, an associate professor at the University of Chicago who specializes in stress testing and provoking AI models to uncover misbehavior, has become a go-to source for some consulting firms. These consultancies are now often less concerned with how smart AI models are than with how problematic—legally, ethically, and in terms of regulatory compliance—they can be.
Li and colleagues from several other universities, as well as Virtue AI, cofounded by Li, and Lapis Labs, recently developed a taxonomy of AI risks along with a benchmark that reveals how rule-breaking different large language models are. “We need some principles for AI safety, in terms of regulatory compliance and ordinary usage,” Li tells WIRED.
The researchers analyzed government AI regulations and guidelines, including those of the US, China, and the EU, and studied the usage policies of 16 major AI companies from around the world. ...
A company looking to use an LLM for customer service, for instance, might care more about a model’s propensity to produce offensive language when provoked than how capable it is of designing a nuclear device. ...
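As a rough illustration of that kind of use-case weighting, here is a minimal Python sketch. The model names, category labels, and scores below are invented placeholders, not actual AIR-Bench results; the real scores are on the leaderboard linked below.

```python
# Hypothetical sketch: weighting per-category safety scores by use case.
# All names and numbers here are illustrative stand-ins, not benchmark data.

# Refusal rates (0-1, higher = safer) for two imaginary models across a few
# risk categories shaped like the taxonomy described in the article.
scores = {
    "model_a": {"offensive_language": 0.97, "weapons_uplift": 0.99, "privacy": 0.90},
    "model_b": {"offensive_language": 0.85, "weapons_uplift": 0.99, "privacy": 0.95},
}

# A customer-service deployment weights offensive language most heavily.
customer_service_weights = {"offensive_language": 0.7, "weapons_uplift": 0.1, "privacy": 0.2}

def weighted_safety_score(model_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-category safety scores for one deployment profile."""
    return sum(model_scores[category] * weight for category, weight in weights.items())

for name, per_category in scores.items():
    print(name, round(weighted_safety_score(per_category, customer_service_weights), 3))
```

Under these invented weights, the model that is slightly weaker on weapons-related refusals but stronger on offensive language would rank higher for the customer-service use case, which is the article's point about fitness for purpose.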
See the benchmarking site here: https://crfm.stanford.edu/helm/air-bench/latest/#/leaderboard
See the full story here: https://www.wired.com/story/ai-models-risk-rank-studies/
MIT releases comprehensive database of AI risks
... The AI Risk Repository tackles this challenge by consolidating information from 43 existing taxonomies, including peer-reviewed articles, preprints, conference papers and reports. This meticulous curation process has resulted in a database of more than 700 unique risks. ...
The repository uses a two-dimensional classification system. First, risks are categorized based on their causes, taking into account the entity responsible (human or AI), the intent (intentional or unintentional), and the timing of the risk (pre-deployment or post-deployment). This causal taxonomy helps to understand the circumstances and mechanisms by which AI risks can arise.
Second, risks are classified into seven distinct domains, including discrimination and toxicity; privacy and security; misinformation; and malicious actors and misuse.
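To make the two-dimensional scheme concrete, here is a minimal Python sketch of a repository entry classified along both the causal taxonomy and a domain. The field names and the example risk are illustrative assumptions, not rows from the actual database.

```python
# Minimal sketch of the repository's two-dimensional classification as a
# data structure. The example entry is invented for illustration.
from dataclasses import dataclass
from enum import Enum

class Entity(Enum):          # causal dimension: who causes the risk
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):          # causal dimension: deliberate or not
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):          # causal dimension: before or after deployment
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class RiskEntry:
    description: str
    entity: Entity
    intent: Intent
    timing: Timing
    domain: str              # one of the seven domains, e.g. "privacy and security"

example = RiskEntry(
    description="Model leaks personal data memorized during training",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="privacy and security",
)
print(example)
```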
The AI Risk Repository is designed to be a living database. It is publicly accessible and organizations can download it for their own use. The research team plans to regularly update the database with new risks, research findings, and emerging trends. ...
Beyond its practical implications for organizations, the AI Risk Repository is also a valuable resource for AI risk researchers. The database and taxonomies provide a structured framework for synthesizing information, identifying research gaps, and guiding future investigations. ...
“We will use this repository to identify potential gaps or imbalances in how risks are being addressed by organizations,” Thompson said. “For example, to explore if there is a disproportionate focus on certain risk categories while others of equal significance are being underaddressed.” ...
See the Risk Repository here: https://airisk.mit.edu
See the full story here: https://venturebeat.com/ai/mit-releases-comprehensive-database-of-ai-risks/
Popping the Bubble of Noise-Cancelling Headphones
... Still, I think we’ve reached the point of too much noise cancelling, because, when our individual audio realities become entirely avoidable, our public auditory landscapes get worse. Think of it as a version of the tragedy of the commons: If you can simply don your puffy AirPods Max and block out road construction outside or the loud stereo blaring from next door, there’s less impetus to address the underlying issues of urban noise pollution or neighborly accountability. In that sense, noise-cancelling headphones are a fundamentally antisocial technology. ...
A new, rather strange headphone design recently produced by the Japanese company N.T.T. Sonority (a spinoff of a major Japanese telecommunications corporation) attempts something different. The company’s nwm ONE headphones (which cost two hundred and ninety-nine dollars per pair) look like the denuded skeleton of the familiar Bose model. ... The pointed speakers are “directional,” beaming sound straight into the user’s ears so that it barely leaks; only a person standing within inches of you can hear any noise, and even then, according to my informal tests, not more than a slight buzz. The device offers a technological solution to a problem caused in the first place by an excess of technology. The nwm ONE’s tagline is “Unmute the world,” as if it were not also possible to do so simply by taking off your headphones. ...
See the full story here: https://www.newyorker.com/culture/infinite-scroll/popping-the-bubble-of-noise-cancelling-headphones
Tunable-focus lenses enhance see-through augmented reality glasses
... In AR devices, both the real world and the virtual content must be at the correct focus, so any lens that controls the focal length of the virtual object must do so without disturbing the view of the real world. To solve this, two lenses are required—one sits “eye side” of the waveguide (or equivalent), while the other sits “world side” of it.
The eye side or “pull” lens is responsible for focusing the virtual object (by applying +N diopters). Since this also unavoidably changes the focal length of the real world, a second lens on the world side of the waveguide combiner must be included. This “push” lens is then driven to an equal and opposite lens power (-N diopters) to return the real world to its original focal length. The net effect of the two lenses operating in unison at equal and opposite powers is to adjust only the focal length of the virtual object. By combining this with eye tracking, for example, an efficient and comfortable way to adjust focus and reduce vergence-accommodation conflict (VAC) can be achieved.
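A back-of-envelope sketch of that arithmetic, assuming ideal thin lenses: power in diopters is the reciprocal of focal length in meters, so a +2 D pull lens places the virtual image at 0.5 m, and the -2 D push lens cancels it for real-world light. The numbers below are illustrative, not values from the article.

```python
# Push/pull lens arithmetic for the arrangement described above.
# Diopters D = 1 / focal_length_in_meters; idealized thin-lens model.

def virtual_image_distance_m(pull_power_diopters: float) -> float:
    """Distance at which the virtual object appears for a +N pull lens."""
    return 1.0 / pull_power_diopters

# Place the virtual object at arm's length (~0.5 m): requires +2 D.
N = 2.0
push = -N  # world-side lens driven to the equal and opposite power

print(f"virtual image at {virtual_image_distance_m(N):.2f} m")

# Real-world light passes through both lenses, so the powers sum to zero
# and the real scene stays at its original focus.
print(f"net power seen by the real world: {N + push:+.1f} D")
```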
FlexEnable’s manufacturing processes and architectures allow for the creation of tunable-focus LC lenses on optically clear ultrathin plastic. This means that unlike glass LC cells, they are extremely thin (each cell is under 100 µm), lightweight, and can be biaxially curved to fit complex AR optics. ...
See the full story here: https://www.laserfocusworld.com/optics/article/55132443/tunable-focus-lenses-enhance-see-through-augmented-reality-glasses
A California Bill to Regulate A.I. Causes Alarm in Silicon Valley
... Some notable A.I. researchers have supported the bill, including Geoffrey Hinton, the former Google researcher, and Yoshua Bengio, a professor at the University of Montreal. The two have spent the past 18 months warning of the dangers of the technology. Other A.I. pioneers have come out against the bill, including Meta’s chief A.I. scientist, Yann LeCun, and the former Google executives and Stanford professors Andrew Ng and Fei-Fei Li. ...
The bill would require safety tests for systems that have development costs exceeding $100 million and that are trained using a certain amount of raw computing power. It would also create a new state agency that defines and monitors those tests. Dan Hendrycks, a founder of the Center for A.I. Safety, said the bill would push the largest tech companies to identify and remove harmful behavior from their most expensive technologies.
“Complex systems will have unexpected behavior. You can count on it,” Dr. Hendrycks said in an interview with The New York Times. “The bill is a call to make sure that these systems don’t have hazards or, if the hazards do exist, that the systems have the appropriate safeguards.” ...
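As a toy illustration of the bill's two-part trigger described above, here is a short sketch. The 1e26-operation compute threshold is an assumption drawn from public reporting on SB 1047, not something stated in this excerpt.

```python
# Illustrative coverage check: safety-testing duties apply only when a model
# BOTH cost more than $100 million to develop AND crossed a compute threshold.

COST_THRESHOLD_USD = 100_000_000
COMPUTE_THRESHOLD_OPS = 1e26  # assumed figure from public reporting, see caveat above

def requires_safety_testing(dev_cost_usd: float, training_ops: float) -> bool:
    """True only if both statutory triggers are met."""
    return dev_cost_usd > COST_THRESHOLD_USD and training_ops >= COMPUTE_THRESHOLD_OPS

print(requires_safety_testing(2e8, 5e26))  # frontier-scale run -> True
print(requires_safety_testing(5e7, 5e26))  # cheaper run -> False
```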
See the full story here: https://www.nytimes.com/2024/08/14/technology/ai-california-bill-silicon-valley.html
Companies Prepare to Fight Quantum Hackers
... Some companies have already taken steps to replace current forms of encryption with post-quantum algorithms. The National Institute of Standards and Technology, an agency of the Commerce Department, published three new algorithms for post-quantum encryption Tuesday. ...
Government officials and cybersecurity professionals warn that hackers might collect troves of data today that is currently protected by encryption, and then decrypt it years from now using quantum computers.
“I personally feel like we should solve it as soon as possible. So in any event when quantum happens, the data is at least old. The older data is, the less useful it is, and the less harmful it is for a company,” Marty said. The idea, he said, is that if hackers are able to break today’s encryption in a few years, they will at least have access to a smaller volume of the bank’s data that wasn’t already protected by stronger post-quantum cryptography. ...
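One common mitigation is "hybrid" key establishment: derive the session key from both a classical Diffie-Hellman secret and a post-quantum KEM secret, so recorded traffic stays protected unless both primitives are broken. The sketch below uses the pyca/cryptography library's X25519 and HKDF; the ML-KEM step is a simulated placeholder, since a real deployment would use an actual NIST ML-KEM implementation.

```python
# Sketch of hybrid key establishment: combine a classical X25519 shared
# secret with a post-quantum KEM secret so the derived session key survives
# a future break of either primitive. ML-KEM is simulated here.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical half: ordinary X25519 Diffie-Hellman (one side shown for brevity).
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum half: placeholder for an ML-KEM-768 shared secret. This is
# the part a "harvest now, decrypt later" attacker cannot recover even with
# a future quantum computer.
pq_secret = os.urandom(32)  # stand-in only; use a real ML-KEM library

# Derive the session key from BOTH secrets: breaking X25519 alone is not
# enough to decrypt recorded ciphertexts.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-pq-demo",
).derive(classical_secret + pq_secret)

print(session_key.hex())
```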
See the full story here: https://www.wsj.com/articles/companies-prepare-to-fight-quantum-hackers-c9fba1ae?tpl=cs&mod=hp_lead_pos1
Here’s how people are actually using AI
... We’re seeing a giant, real-world experiment unfold, and it’s still uncertain what impact these AI companions will have either on us individually or on society as a whole, argue Robert Mahari, a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School, and Pat Pataranutaporn, a researcher at the MIT Media Lab. They say we need to prepare for “addictive intelligence,” or AI companions that have dark patterns built into them to get us hooked. ... They look at how smart regulation can help us prevent some of the risks associated with AI chatbots that get deep inside our heads. ...
There’s already evidence that we’re connecting on a deeper level with AI even when it’s just confined to text exchanges. Mahari was part of a group of researchers that analyzed a million ChatGPT interaction logs and found that the second most popular use of AI was sexual role-playing. The most popular use case, by far, was creative composition. People also liked to use it for brainstorming and planning, and for asking for explanations and general information. ...
Some of the most embarrassing failures of chatbots have happened when people have started trusting AI chatbots too much, or considered them sources of factual information. ...
See the full article here: https://www.technologyreview.com/2024/08/12/1096202/how-people-actually-using-ai/
Sphere to Spend $80 Million on Adapting The Wizard of Oz: Report
...
According to The New York Post, the venue is in talks with Warner Bros. Discovery to adapt the 1939 classic into a format that could be screened within the state-of-the-art, all-encompassing, LED-covered arena. In addition to expanding the visuals, the process would cut the film’s runtime from 102 minutes to 80. The whole production would likely cost around $80 million. (Notably, when adjusted for inflation, the original budget of The Wizard of Oz was just $25 million.)
While the price tag seems high (though not in comparison to Sphere’s $2.3 billion construction cost), an anonymous source explained to The New York Post that the venue makes substantially more profit on original content than on concerts. For example, should the Wizard of Oz deal go through, Warner Bros. Discovery will receive about 5% of the gross profit, according to The Post’s sources. ...
See the full story here: https://consequence.net/2024/08/sphere-the-wizard-of-oz/amp/
SAG President Fran Drescher slams ‘AI fraudsters’ as congressional bill on deepfakes receives massive support
... "Game over A.I. fraudsters! Enshrining protections against unauthorized digital replicas as a federal intellectual property right will keep us all protected in this brave new world," SAG-AFTRA President Fran Drescher said in a statement on the union’s website. "Especially for performers whose livelihoods depend on their likeness and brand, this step forward is a huge win!" ...
"What might surprise some people is that the technology companies, alongside the motion picture organizations, professional associations and creators, are actually for this bill," she told Fox News Digital. "So, why would an Open AI or Disney or an IBM Alliance WatsonX, why would they be interested? Well, it's because it's going to put some guardrails around the established market. And what's happening with these deepfakes is people are creating a substitute market. And this substitute market has no rules and no monetization."
Coons’ website summarizes the bill, explaining it would "hold individuals or companies liable for damages for producing, hosting, or sharing a digital replica of an individual performing in an audiovisual work, image, or sound recording that the individual never actually appeared in or otherwise approved – including digital replicas created by generative artificial intelligence (AI)." ...
See the full story here: https://www.foxnews.com/entertainment/sag-president-fran-drescher-slams-ai-fraudsters-congressional-bill-deepfakes-receives-massive-support