philip lelyveld The world of entertainment technology

25Mar/24Off

Ex-Google Gemini guy heads Samsung’s AGI venture, eyes OpenAI and Meta collaborations

... Samsung is reportedly reassessing various aspects of chip architecture to significantly reduce the power consumption required to run LLMs. The company's targets for enhancement include memory design, lightweight model optimization, high-speed interconnects, and advanced packaging.

Additionally, Samsung plans to introduce new chip designs from the AGI Computing Lab through rapid iterations. These designs aim to provide sufficient performance for continuously growing models while helping to save power consumption and costs. ...

See the full story here: https://www.digitimes.com/news/a20240325PD210/agi-google-ai-device-solutions-division-east-asia-ic-design-distribution-it+ce-meta-openai-samsung-server-ipc-cloud-computing-iot-software-big-data-south-korea.html

25Mar/24Off

OpenAI Promoting AI Text-to-Video Model Sora to Entertainment Industry

OpenAI is reportedly working to promote the integration of its unreleased artificial intelligence (AI) text-to-video model, Sora, into film production.

The company is scheduling meetings with Hollywood studios, media executives and talent agencies in Los Angeles to foster partnerships, Bloomberg reported Friday (March 22).

The meetings are part of a broader outreach initiative by OpenAI, according to the report. In February, the startup’s chief operating officer, Brad Lightcap, held introductory talks in Hollywood to demonstrate Sora’s capabilities. ...

See the full story here: https://www.pymnts.com/artificial-intelligence-2/2024/openai-promoting-ai-text-to-video-model-sora-to-entertainment-industry/

21Mar/24Off

Here’s Proof You Can Train an AI Model Without Slurping Copyrighted Content

... A group of researchers backed by the French government have released what is thought to be the largest AI training dataset composed entirely of text that is in the public domain. And the nonprofit Fairly Trained announced that it has awarded its first certification for a large language model built without copyright infringement, showing that technology like that behind ChatGPT can be built in a different way to the AI industry’s contentious norm. ...

Today, Fairly Trained announced it has certified its first large language model. It’s called KL3M and was developed by Chicago-based legal tech consultancy startup 273 Ventures, using a curated training dataset of legal, financial, and regulatory documents.

The company’s cofounder, Jillian Bommarito, says the decision to train KL3M in this way stemmed from the company’s “risk-averse” clients like law firms. ...

Although the dataset is tiny (around 350 billion tokens, or units of data) compared to those compiled by OpenAI and others that have scraped the internet en masse, Bommarito says the KL3M model performed far better than expected, something she attributes to how carefully the data had been vetted beforehand. ...

On Wednesday, researchers released what they claim is the largest available AI dataset for language models composed purely of public domain content. Common Corpus, as it is called, is a collection of text roughly the same size as the data used to train OpenAI’s GPT-3 text generation model and has been posted to the open source AI platform Hugging Face. ...

The Authors Guild, along with actors and radio artists labor union SAG-AFTRA and a few additional professional groups, was recently named an official supporter of Fairly Trained. ...

See the full story here: https://www.wired.com/story/proof-you-can-train-ai-without-slurping-copyrighted-content/

21Mar/24Off

What it means for nations to have “AI sovereignty”

... These language models are trained in English, but there’s 13 Indian scripts, and within that there’s probably a couple of hundred languages or language variants. So the cultural context for these languages is different. We do think it deserves an effort to have cultural context and nuances, like in India: You don’t speak Hindi and you don’t speak English, you mix the two, what’s sometimes called Hinglish. So those kinds of things have to be taken into account. Then you go to the other level. Will India rely on something that the technology could be banned, like a U.S. model? ...

Jamali: “Diversity in the kinds of algorithms.” What kind of diversity are we talking about?

Khosla: If you take the human brain, sometimes we do pattern matching, and there’s all kinds of emergent behavior that emerge from that. And [large language models] are going to keep going. ... But it’s possible there’s other approaches, what’s called sometimes neurosymbolic computing. ...

See the full story here: https://www.marketplace.org/shows/marketplace-tech/what-it-means-for-nations-to-have-ai-sovereignty/

21Mar/24Off

UN General Assembly to address AI’s potential risks, rewards

The UN General Assembly will turn its attention to artificial intelligence on Thursday, weighing a resolution that lays out the potentially transformational technology's pros and cons while calling for the establishment of international standards.

The text, co-sponsored by dozens of countries, emphasizes the necessity of guidelines "to promote safe, secure and trustworthy artificial intelligence systems," while excluding military AI from its purview.

On the whole, the resolution focuses more on the technology's positive potential ...

The draft resolution, which is the first on the issue, was brought forth by the United States and will be submitted for approval by the assembly on Thursday. ...

See the full story here: https://www.france24.com/en/live-news/20240321-un-general-assembly-to-address-ai-s-potential-risks-rewards

20Mar/24Off

Indiana University – 24-foot LED immersive soundstage for virtual productions to anchor new extended reality labIndiana University –

A new extended reality lab at Indiana University Bloomington will feature “The Wall,” a 24-foot LED immersive soundstage that will enable faculty and graduate students to use virtual and augmented reality in research and creative projects across disciplines, including public art, science, health and education. ...

With its interdisciplinary nature, the KIX Lab’s potential applications are vast. For example, filmmakers can use the facility as an adaptable film set; public health researchers can create spatial experiences to study mental illnesses such as addiction; and optometrists can develop assistive devices and rehabilitation for people with visual impairments. The lab can serve as a venue for live performances using immersive visual projections and for screenings of experimental films.

But the KIX Lab’s impact will reach beyond campus. Collaborating with local communities, faculty will use the lab to create interactive public art, science, health and education projects. And the facility will create connections with industry, with the soundstage available to rent for local commercial productions. ...

See the full story here: https://news.iu.edu/live/news/35427-24-foot-led-immersive-soundstage-for-virtual

19Mar/24Off

A Tale of Two SXSWs: An AI Divide So Wide You Could Drive a Film Industry Through It

... But discuss it we did at the SXSW Conferences, where AI was the topic dominating hundreds of panels, workshops, talks, and meet-up-sessions spread over 24 different tracks. The overwhelming message coming out the film track was AI is a tool — a powerful one, poised to upend the entertainment industry, but with the potential (if properly implemented) to enhance human creativity. ...

Meanwhile, the community at the SXSW Film Festival had a healthy skepticism of the conference’s tech innovation-speak. As one colleague, and a long time SXSW attendee, said: “Every year they tell us this new thing will change everything and ‘democratize filmmaking.’”  ...

Over the last three decades, VFX artists have been forced to reinvent their workflow multiple times to incorporate new innovations, making them far more open to the potential of AI. The most recent seismic evolution was the introduction of real-time game engines that further break down the wall between visual effects and the other filmmaking crafts, a trend that will only accelerate in the coming years. ...

Over the last three decades, VFX artists have been forced to reinvent their workflow multiple times to incorporate new innovations, making them far more open to the potential of AI. The most recent seismic evolution was the introduction of real-time game engines that further break down the wall between visual effects and the other filmmaking crafts, a trend that will only accelerate in the coming years. ...

There’s plenty to be skeptical about with AI, along with good reason to mistrust how Hollywood and corporations will use it to create “content,” but that’s also why we must educate ourselves rather than keep our collective heads in the sand. If we don’t understand how technology is being adapted as a filmmaking tool, we leave ourselves helpless to shape the conversation around what we collectively hold dear: the creation of motion pictures through eyes and heart of imperfect human beings. ...

See the full story here: https://www.indiewire.com/features/commentary/sxsw-2024-hollywood-ai-1234964964/

19Mar/24Off

MrBeast strikes Amazon deal for biggest competition series in TV history

... Known for outrageous stunts such as burying himself alive or re-creating the show “Squid Game” as a reality TV-style competition, Donaldson also has a reputation for combining his internet shows with a charitable aspect, such as rescuing 1,000 abandoned dogs or building 100 wells in Africa. Until now, however, his entertainment has not extended beyond the internet. ...

“Beast Games” will consist of 1,000 contestants competing for a $5 million cash prize, the largest single prize that’s ever been offered on television or streaming. Donaldson will host and executive produce the show, which will be available in 240 countries and territories. ...

See the full story here: https://www.washingtonpost.com/technology/2024/03/18/mrbeast-mgm-amazon-game-show-creator/

19Mar/24Off

The most innovative augmented and virtual reality companies of 2024

...

6. JIGSPACE

For helping people show, not tell ...

8. AMAZEVR

For making virtual reality concerts the real deal

9. BILT

For demystifying home, auto, and bike repair with interactive walk-throughs

10. JOURNEE

For bridging the gap between augmented reality and the web

See the full story here: https://www.fastcompany.com/91033867/augmented-reality-virtual-reality-most-innovative-companies-2024

18Mar/24Off

Why Are Large AI Models Being Red Teamed? 

Intelligent systems demand more than just repurposed cybersecurity tools

In February, OpenAI announced the arrival of Sora, a stunning “text-to-video” tool. Simply enter a prompt, and Sora generates a realistic video within seconds. But it wasn’t immediately available to the public. Some of the delay is because OpenAI reportedly has a set of experts called a red team who, the company has said, will probe the model to understand its capacity for deepfake videos, misinformation, bias, and hateful content.

Red teaming, while having proved useful for cybersecurity applications, is a military tool that was never intended for widespread adoption by the private sector. 

“Done well, red teaming can identify and help address vulnerabilities in AI,” says Brian Chen, director of policy from the New York–based think tank Data & Society. “What it does not do is address the structural gap in regulating the technology in the public interest.” ...

 The purpose of red-teaming exercises is to play the role of the adversary (the red team) and find hidden vulnerabilities in the defenses of the blue team (the defenders) who then think creatively about how to fix the gaps. ...

Zenko also reveals a glaring mismatch between red teaming and the pace of AI advancement. The whole point, he says, is to identify existing vulnerabilities and then fix them. “If the system being tested isn’t sufficiently static,” he says, “then we’re just chasing the past.” ...

Dan Hendrycks, executive and research director of the San Francisco–based Center for AI Safety, says red teaming shouldn’t be treated as a turnkey solution either. “The technique is certainly useful,” he says. “But it represents only one line of defense against the potential risks of AI, and a broader ecosystem of policies and methods is essential.”

NIST’s new AI Safety Institute now has an opportunity to change the way red teaming is used in AI. The Institute’sconsortium of more than 200 organizations has already reportedly begun developing standards for AI red teaming. Tech developers have also begun exploring best practices on their own. For example, Anthropic, GoogleMicrosoft, and OpenAI have established the Frontier Model Forum (FMF) to develop standards for AI safety and share best practices across the industry.

See the full story here: https://spectrum.ieee.org/red-team-ai-llms