philip lelyveld The world of entertainment technology

19Sep/24

Complementary Roles of Human and AI Endorsers in Advertising

This paper concludes that both human and AI-generated endorsers in advertising campaigns offer unique advantages and can be effective depending on the context. Human endorsers tend to foster emotional connections, authenticity, and trust, while AI-generated endorsers excel in personalization and tailoring content to consumer preferences. Rather than viewing them as competitors, the paper suggests that these two approaches should be seen as complementary. Future research is encouraged to explore the best ways to combine human and AI endorsers for enhanced advertising outcomes, considering different demographic segments and product types.

See the full paper here: https://journal.uhamka.ac.id/index.php/agregat/article/view/12491

18Sep/24

US to convene global AI safety summit in November

... Commerce Secretary Gina Raimondo and Secretary of State Antony Blinken will host on Nov. 20-21 the first meeting of the International Network of AI Safety Institutes in San Francisco to "advance global cooperation toward the safe, secure, and trustworthy development of artificial intelligence."

The network members include Australia, Canada, the European Union, France, Japan, Kenya, South Korea, Singapore, Britain, and the United States. ...

The San Francisco meeting will include technical experts from each member’s AI safety institute, or equivalent government-backed scientific office, to discuss priority work areas, and advance global collaboration and knowledge sharing on AI safety. ...

See the full story here: https://www.reuters.com/technology/artificial-intelligence/us-convene-global-ai-safety-summit-november-2024-09-18/

18Sep/24

Artificial intelligence laws in the US states are feeling the weight of corporate lobbying

... So far, there is limited evidence that states are following the EU’s lead when drafting their own AI legislation. There is strong evidence of lobbying of state legislators by the tech industry, which does not seem keen on adopting the EU’s rules, instead pressing for less stringent legislation that minimizes compliance costs but which, ultimately, is less protective of individuals. Two enacted bills in Colorado and Utah and two draft bills in Oklahoma and Connecticut, among others, illustrate this. ...

A major difference between the state bills and the AI Act is their scope. The AI Act takes a sweeping approach aimed at protecting fundamental rights and establishes a risk-based system, where some uses of AI, such as the ‘social scoring’ of people based on factors such as their family ties or education, are prohibited. ...

In contrast, the state bills are narrower. The Colorado legislation directly drew on the Connecticut bill, and both include a risk-based framework, but of a more limited scope than the AI Act. ...

Another explanation is the hesitancy embodied by Governor Lamont. In the absence of unified federal laws, states fear that strong legislation would cause a local tech exodus to states with weaker regulations, a risk less pronounced in data-protection legislation. ...

For these reasons, lobbying groups claim to prefer national, unified AI regulation over state-by-state fragmentation, a line that has been parroted by big tech companies in public. But in private, some advocate for light-touch, voluntary rules all round, showing their dislike of both state and national AI legislation. ...

See the full story here: https://www.nature.com/articles/d41586-024-02988-0

18Sep/24

Here’s what I made of Snap’s new augmented-reality Spectacles

... These fifth-generation Spectacles can display visual information and applications directly on their see-through lenses, making objects appear as if they are in the real world. The interface is powered by the company’s new operating system, Snap OS.  ...

In my demo, I was able to stack Lego pieces on a table, smack an AR golf ball into a hole across the room (at least a triple bogey), paint flowers and vines across the ceilings and walls using my hands, and ask questions about the objects I was looking at and receive answers from Snap’s virtual AI chatbot. There was even a little purple virtual doglike creature from Niantic, a Peridot, that followed me around the room and outside onto a balcony. 

But look up from the table and you see a normal room. The golf ball is on the floor, not a virtual golf course. The Peridot perches on a real balcony railing. Crucially, this means you can maintain contact—including eye contact—with the people around you in the room. ...

To accomplish all this, Snap packed a lot of tech into the frames. There are two processors embedded inside, so all the compute happens in the glasses themselves. Cooling chambers in the sides did an effective job of dissipating heat in my demo. Four cameras capture the world around you, as well as the movement of your hands for gesture tracking. The images are displayed via micro-projectors, similar to those found in pico projectors, that do a nice job of presenting those three-dimensional images right in front of your eyes without requiring a lot of initial setup. ...

Snap isn’t selling the glasses directly to consumers but requires you to agree to at least one year of paying $99 per month for a Spectacles Developer Program account that gives you access to them. ...

Having said that, it all worked together impressively well. The three-dimensional objects maintained a sense of permanence in the spaces where you placed them—meaning you can move around and they stay put. The AI assistant correctly identified everything I asked it to. There were some glitches here and there ...

See the full story here: https://www.technologyreview.com/2024/09/17/1104025/snap-spectacles-ar-glasses/

18Sep/24

Lionsgate signs deal to train AI model on its movies and shows

...

Today, Lionsgate — the studio behind films like the John Wick and Hunger Games franchises — announced that it is partnering with Runway to create a new customized video generation model intended to help “filmmakers, directors and other creative talent augment their work.”

In a statement about the deal, Lionsgate vice chair Michael Burns described it as a path toward creating “capital-efficient content creation opportunities” for the studio, which sees the technology as “a great tool for augmenting, enhancing and supplementing our current operations.” Burns also insisted that “several of our filmmakers are already excited about its potential applications to their pre-production and post-production process.” ...

See the full story here: https://www.theverge.com/2024/9/18/24248115/lionsgate-runway-ai-deal

18Sep/24

California’s 5 AI-related bills

Do California's legislative efforts to regulate AI reflect a growing concern for digital ethics, personal rights, and democratic integrity, or are they legislative overreach that will stifle innovation? There are five key bills ready to be signed into law that address issues ranging from the use of digital replicas in contracts and posthumous rights of deceased personalities to the transparency and safety of AI platforms. I've read them. You should, too. The links and short descriptions are below.

AB 2602: Contracts: digital replicas - This bill mandates that contracts for personal or professional services involving digital replicas must clearly specify the intended uses of the replica. It also requires that individuals involved have access to legal counsel or labor union representation during contract negotiations in order to protect performers' rights in the digital age.

SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act - This bill establishes safety regulations for "covered AI models," defined by computational power and training costs. Developers of these models must implement safety measures, conduct regular audits, and report significant incidents to the California Department of Technology.

AB 2013: Artificial intelligence: transparency - This bill requires businesses that use generative AI systems to disclose any use of copyrighted materials in the training data. It also mandates that clear information about the AI system's capabilities and limitations be provided to users.

AB 1836: Deceased personalities: digital replicas - This bill prohibits the use of digital replicas of deceased personalities in audiovisual works without prior consent from their estate. It extends protections to ensure that digital replicas are not used posthumously without authorization, addressing concerns of exploitation.

AB 2655: Defending Democracy from Deepfake Deception Act of 2024 - This bill requires clear disclosure of AI-generated content in political advertisements and campaign materials. It also prohibits the distribution of deceptive audio or visual media of candidates that could mislead voters, aiming to protect electoral integrity.

See the full story here: https://shellypalmer.com/2024/09/california-vs-ai-a-battle-for-democracy-or-a-war-on-progress/

18Sep/24

CONSORTIUM SECURES £1.04M FUNDING TO EXPLORE AI-DRIVEN STORYTELLING FOR TV & FILM INDUSTRIES 

... A key element of the consortium’s work is to develop business models that enable creators to be remunerated for their ideas, and specifically to support creators who, through lack of access to funds or to the industry, are disadvantaged in competing with better-funded organisations. Charismatic will also be used by established, larger producers to prototype and previsualise their productions more easily and efficiently. ...

See the Charismatic demo here: https://vimeo.com/1007038791?&login=true#

16Sep/24

The best way to regulate AI might be not to specifically regulate AI. This is why

... But, as a specialist in competition and consumer protection, I have formed the view that calls for new AI-specific regulations are largely misguided. ...

Here’s my thinking: most of the potential uses of AI are already covered by existing rules and regulations designed to do things such as protect consumers, protect privacy and outlaw discrimination. 

These laws are far from perfect, but where they are not perfect the best approach is to fix or extend them rather than introduce special extra rules for AI. ...

The best approach is to make existing rules work

One of Australia’s great advantages is the strength and expertise of its regulators, among them the Competition and Consumer Commission, the Communications and Media Authority, the Australian Information Commissioner, the Australian Securities and Investments Commission, and the Australian Energy Regulator.

Their job ought to be to show where AI is covered by the existing rules, to evaluate the ways in which AI might fall foul of those rules, and to run test cases that make the applicability of the rules clear. ...

Last mover advantage

Finally, there’s a lot to be said for becoming an international “regulation taker”. Other jurisdictions such as the European Union are leading the way in designing AI-specific regulations.

Product developers worldwide, including those in Australia, will need to meet those new rules if they want to access the EU and those other big markets. ...

See the full story here: https://theconversation.com/the-best-way-to-regulate-ai-might-be-not-to-specifically-regulate-ai-this-is-why-238788

16Sep/24

OpenAI’s mission to develop AI that ‘benefits all of humanity’ is at risk as investors flood the company with cash

  • OpenAI may be planning a corporate restructuring within the year.
  • Once a nonprofit that "benefits all of humanity," OpenAI shifted to a "capped-profit" model in 2019.
  • OpenAI CEO Sam Altman is now considering lifting that cap, prioritizing investors over humanity.

See the full story here: https://finance.yahoo.com/news/openais-mission-develop-ai-benefits-192336227.html

15Sep/24

MEDIA CTOS REFLECT ON THE DISRUPTION CURVE AND AI

Phil Wiser, EVP and Global CTO, Paramount  ...

“In less than 10 years, most of music consumption was shifted to the new model,” said Wiser.

Print with its ad-based model had been “stickier”, he said, while in the case of video, the legacy model had been protected by the fact that it generated good revenue streams tied to physical infrastructure.

“We are just at a point where the erosion of that has really started,” he said, predicting an acceleration in the decline of the traditional pay TV model, from now on. ...

To support the economics of the business, broadcast and streaming need to be converged, said Wiser. ...

One part of the transformation was to partner aggressively with big scale players like AWS, which invested alongside Paramount in its cloud infrastructure. Paramount alone could not have invested the “hundreds of millions of dollars required”, said Wiser, adding that this lesson now applies even more to investment in AI. ...

Investment in AI

Nevertheless, investment in AI shows signs of paying off. With AI, he said, there is “no need to overinvest in data science or data teams”, even if it was important to have data capability.

“My data science team is probably going to shoot me [for that comment],” he said.

Wiser said media companies spend most time creating content and then marketing it. Production takes longer and it is hard to effect change in this area, he said. Now with AI, it is possible to create some change. However, it is unlikely to replace large parts of the creative process. ...

Girish Bajaj, VP, Prime Video and Amazon MGM Studios Technology ...

Amazon has been using AI for over 20 years in video and is now extending that to tap into the potential of Generative AI. “We are definitely leaning into it. The important part is everyone is trying to find the chatbot that is going to work.”

Generative AI is now being used for personalisation.

However, using AI for practical use cases meant reskilling and retooling the entire Prime Video team. He said the technology could be used for customer facing features but also on the back end to help process media files and create cover art, for example. ...

See the full story here: https://www.ibc.org/features/media-ctos-reflect-on-the-disruption-curve-and-ai/12015.article