philip lelyveld The world of entertainment technology

15 Aug 2024

MIT releases comprehensive database of AI risks

... The AI Risk Repository tackles this challenge by consolidating information from 43 existing taxonomies, including peer-reviewed articles, preprints, conference papers and reports. This meticulous curation process has resulted in a database of more than 700 unique risks. ...

The repository uses a two-dimensional classification system. First, risks are categorized based on their causes, taking into account the entity responsible (human or AI), the intent (intentional or unintentional), and the timing of the risk (pre-deployment or post-deployment). This causal taxonomy helps to understand the circumstances and mechanisms by which AI risks can arise.

Second, risks are classified into seven distinct domains, including discrimination and toxicity, privacy and security, misinformation, and malicious actors and misuse.
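The two-dimensional scheme described above can be pictured as a small record type: one axis for the causal taxonomy (entity, intent, timing) and one for the domain. The sketch below is purely illustrative — the field names, enum values, and example entry are assumptions, not the repository's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical names; the AI Risk Repository's real schema may differ.
class Entity(Enum):
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class RiskEntry:
    description: str
    entity: Entity   # causal taxonomy: who caused it
    intent: Intent   # causal taxonomy: on purpose or not
    timing: Timing   # causal taxonomy: before or after deployment
    domain: str      # one of the seven domains

# An invented example entry for illustration only.
risk = RiskEntry(
    description="Model reveals personal data from its training set",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="privacy and security",
)
```

Structuring entries this way is what lets the team run the gap analysis Thompson describes: group by `domain` or by any causal field and count.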

The AI Risk Repository is designed to be a living database. It is publicly accessible and organizations can download it for their own use. The research team plans to regularly update the database with new risks, research findings, and emerging trends. ...

Beyond its practical implications for organizations, the AI Risk Repository is also a valuable resource for AI risk researchers. The database and taxonomies provide a structured framework for synthesizing information, identifying research gaps, and guiding future investigations. ...

“We will use this repository to identify potential gaps or imbalances in how risks are being addressed by organizations,” Thompson said. “For example, to explore if there is a disproportionate focus on certain risk categories while others of equal significance are being underaddressed.” ...

See the Risk Repository here: https://airisk.mit.edu

See the full story here: https://venturebeat.com/ai/mit-releases-comprehensive-database-of-ai-risks/

15 Aug 2024

Popping the Bubble of Noise-Cancelling Headphones

... Still, I think we’ve reached the point of too much noise cancelling, because, when our individual audio realities become entirely avoidable, our public auditory landscapes get worse. Think of it as a version of the tragedy of the commons: If you can simply don your puffy AirPods Max and block out road construction outside or the loud stereo blaring from next door, there’s less impetus to address the underlying issues of urban noise pollution or neighborly accountability. In that sense, noise-cancelling headphones are a fundamentally antisocial technology. ...

A new, rather strange headphone design recently produced by the Japanese company N.T.T. Sonority (a spinoff of a major Japanese telecommunications corporation) attempts something different. The company’s nwm ONE headphones (which cost two hundred and ninety-nine dollars per pair) look like the denuded skeleton of the familiar Bose model. ... The pointed speakers are “directional,” beaming sound straight into the user’s ears so that it barely leaks; only a person standing within inches of you can hear any noise, and even then, according to my informal tests, not more than a slight buzz. The device offers a technological solution to a problem caused in the first place by an excess of technology. The nwm ONE’s tagline is “Unmute the world,” as if it were not also possible to do so simply by taking off your headphones. ...

See the full story here: https://www.newyorker.com/culture/infinite-scroll/popping-the-bubble-of-noise-cancelling-headphones

15 Aug 2024

Tunable-focus lenses enhance see-through augmented reality glasses

... In AR devices, both the real world and the virtual object must be at the correct focus, so any lens that controls the focal length of the virtual object must do so without disturbing the view of the real world. To solve this, two lenses are required — one sits on the "eye side" of the waveguide (or equivalent), while the other sits on the "world side" of it (see Fig. 4).

The eye side or “pull” lens is responsible for focusing the virtual object (by applying +N diopters). Since this also unavoidably changes the focal length of the real world, a second lens on the world side of the waveguide combiner must be included. This “push” lens is then driven to an equal and opposite lens power (-N diopters) to return the real world to its original focal length. The net effect of these two lenses operating in unison at equal and opposite lens powers is to adjust only the focal length of the virtual object. By combining this with eye tracking, for example, an efficient and comfortable way to adjust focus and reduce vergence-accommodation conflict (VAC) can be achieved.
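The push/pull bookkeeping above reduces to thin-lens power addition: light from the virtual image, injected by the waveguide between the two lenses, sees only the +N pull lens, while real-world light passes through both and sees +N − N = 0. A minimal sketch, with an illustrative value of N (not a FlexEnable specification):

```python
def net_power(powers):
    """Thin lenses in contact: optical powers (in diopters) simply add."""
    return sum(powers)

N = 2.0          # example pull-lens setting, in diopters

pull = +N        # eye-side lens: shifts the virtual object's focus
push = -N        # world-side lens: cancels the pull lens for real-world light

# Virtual-image light enters between the lenses, so it passes only
# through the pull lens; real-world light passes through both.
virtual_path = net_power([pull])        # +2.0 D: virtual focus is adjusted
real_path = net_power([push, pull])     # 0.0 D: real world is unchanged

print(virtual_path, real_path)
```

Driving both lenses to equal and opposite powers is what keeps `real_path` pinned at zero no matter what value of N the eye tracker requests.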

FlexEnable’s manufacturing processes and architectures allow for the creation of tunable-focus LC lenses on optically clear ultrathin plastic. This means that unlike glass LC cells, they are extremely thin (each cell is under 100 µm), lightweight, and can be biaxially curved to fit complex AR optics. ...

See the full story here: https://www.laserfocusworld.com/optics/article/55132443/tunable-focus-lenses-enhance-see-through-augmented-reality-glasses

14 Aug 2024

A California Bill to Regulate A.I. Causes Alarm in Silicon Valley

... Some notable A.I. researchers have supported the bill, including Geoff Hinton, the former Google researcher, and Yoshua Bengio, a professor at the University of Montreal. The two have spent the past 18 months warning of the dangers of the technology. Other A.I. pioneers have come out against the bill, including Meta’s chief A.I. scientist, Yann LeCun, and the former Google executives and Stanford professors Andrew Ng and Fei-Fei Li. ...

The bill would require safety tests for systems that have development costs exceeding $100 million and that are trained using a certain amount of raw computing power. It would also create a new state agency that defines and monitors those tests. Dan Hendrycks, a founder of the Center for A.I. Safety, said the bill would push the largest tech companies to identify and remove harmful behavior from their most expensive technologies.

“Complex systems will have unexpected behavior. You can count on it,” Dr. Hendrycks said in an interview with The New York Times. “The bill is a call to make sure that these systems don’t have hazards or, if the hazards do exist, that the systems have the appropriate safeguards.” ...

See the full story here: https://www.nytimes.com/2024/08/14/technology/ai-california-bill-silicon-valley.html

14 Aug 2024

Companies Prepare to Fight Quantum Hackers

... Some companies have already taken steps to replace current forms of encryption with post-quantum algorithms. The National Institute of Standards and Technology, an agency of the Commerce Department, published three new algorithms for post-quantum encryption Tuesday. ...

Government officials and cybersecurity professionals warn that hackers might collect troves of encrypted data today and then decrypt it years from now using quantum computers.

“I personally feel like we should solve it as soon as possible. So in any event when quantum happens, the data is at least old. The older data is, the less useful it is, and the less harmful it is for a company,” Marty said. The idea, he said, is that if hackers are able to break today’s encryption in a few years, they will at least have access to a smaller volume of the bank’s data that wasn’t already protected by stronger post-quantum cryptography.  ...

See the full story here: https://www.wsj.com/articles/companies-prepare-to-fight-quantum-hackers-c9fba1ae?tpl=cs&mod=hp_lead_pos1

13 Aug 2024

Here’s how people are actually using AI

... We’re seeing a giant, real-world experiment unfold, and it’s still uncertain what impact these AI companions will have either on us individually or on society as a whole, argue Robert Mahari, a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School, and Pat Pataranutaporn, a researcher at the MIT Media Lab. They say we need to prepare for “addictive intelligence”, or AI companions that have dark patterns built into them to get us hooked. ... They look at how smart regulation can help us prevent some of the risks associated with AI chatbots that get deep inside our heads. ...

There’s already evidence that we’re connecting on a deeper level with AI even when it’s just confined to text exchanges. Mahari was part of a group of researchers that analyzed a million ChatGPT interaction logs and found that the second most popular use of AI was sexual role-playing. Aside from that, the overwhelmingly most popular use case for the chatbot was creative composition. People also liked to use it for brainstorming and planning, asking for explanations and general information about stuff.  ...

Some of the most embarrassing failures of chatbots have happened when people have started trusting AI chatbots too much, or considered them sources of factual information.  ...

See the full article here: https://www.technologyreview.com/2024/08/12/1096202/how-people-actually-using-ai/

10 Aug 2024

Sphere to Spend $80 Million on Adapting The Wizard of Oz: Report

...

According to The New York Post, the venue is in talks with Warner Bros. Discovery to adapt the 1939 classic into a format that could be screened within the state-of-the-art, all-encompassing, LED-covered arena. In addition to expanding the visuals, the process would cut the film’s runtime from 102 minutes to 80. The whole production would likely cost around $80 million. (Notably, when adjusted for inflation, the original budget of The Wizard of Oz was just $25 million.)

While the price tag seems high (though, not in comparison to Sphere’s $2.3 billion construction cost), an anonymous source explained to The New York Post that the venue makes substantially more profit on original content than concerts. For example, should the Wizard of Oz deal go through, Warner Bros. Discovery will receive about 5% of the gross profit, according to The Post’s sources. ...

See the full story here: https://consequence.net/2024/08/sphere-the-wizard-of-oz/amp/

10 Aug 2024

SAG President Fran Drescher slams ‘AI fraudsters’ as congressional bill on deepfakes receives massive support

... "Game over A.I. fraudsters! Enshrining protections against unauthorized digital replicas as a federal intellectual property right will keep us all protected in this brave new world," SAG-AFTRA President Fran Drescher said in a statement on the union’s website. "Especially for performers whose livelihoods depend on their likeness and brand, this step forward is a huge win!" ...

"What might surprise some people is that the technology companies, alongside the motion picture organizations, professional associations and creators, are actually for this bill," she told Fox News Digital. "So, why would an Open AI or Disney or an IBM Alliance WatsonX, why would they be interested? Well, it's because it's going to put some guardrails around the established market. And what's happening with these deepfakes is people are creating a substitute market. And this substitute market has no rules and no monetization."

Coons’ website summarizes the bill, explaining it would "hold individuals or companies liable for damages for producing, hosting, or sharing a digital replica of an individual performing in an audiovisual work, image, or sound recording that the individual never actually appeared in or otherwise approved – including digital replicas created by generative artificial intelligence (AI)." ...

See the full story here: https://www.foxnews.com/entertainment/sag-president-fran-drescher-slams-ai-fraudsters-congressional-bill-deepfakes-receives-massive-support

9 Aug 2024

The New A.I. Deal: Buy Everything but the Company

In 2022, Noam Shazeer and Daniel De Freitas left their jobs developing artificial intelligence at Google. They said the tech giant moved too slowly. So they created Character.AI, a chatbot start-up, and raised nearly $200 million.

Last week, Mr. Shazeer and Mr. De Freitas announced that they were returning to Google. They had struck a deal to rejoin its A.I. research arm, along with roughly 20 percent of Character.AI’s employees, and provide their start-up’s technology, they said.

But even though Google was getting all that, it was not buying Character.AI.

Instead, Google agreed to pay $3 billion to license the technology, two people with knowledge of the deal said. About $2.5 billion of that sum will then be used to buy out Character.AI’s shareholders, including Mr. Shazeer, who owns 30 percent to 40 percent of the company and stands to net $750 million to $1 billion, the people said. What remains of Character.AI will continue operating without its founders and investors.

The deal was one of several unusual transactions that have recently emerged in Silicon Valley. While big tech companies typically buy start-ups outright, they have turned to a more complicated deal structure for young A.I. companies. It involves licensing the technology and hiring the top employees — effectively swallowing the start-up and its main assets — without becoming the owner of the firm.

These transactions are being driven by the big tech companies’ desire to sidestep regulatory scrutiny while trying to get ahead in  ...

See the full story here: https://www.nytimes.com/2024/08/08/technology/ai-start-ups-google-microsoft-amazon.html

9 Aug 2024

6G: the catalyst for artificial general intelligence

6G might integrate 5G and AI to merge physical, cyber and sapience spaces, transforming network interactions and enhancing AI-driven decision-making and automation. The semantic approach to communication will train AI while selectively informing on goal achievement, moving towards artificial general intelligence and presenting new challenges and opportunities. ...

See the full story here: https://www.nature.com/articles/s44287-024-00090-1