philip lelyveld The world of entertainment technology

16Oct/25

Hollywood turns to K Street as AI threatens their livelihoods

...

The latest sign of the seriousness came when famed talent agency Creative Artists Agency retained Washington lobbying shop Brownstein Hyatt Farber Schreck.

CAA, which represents some of the biggest stars in music, movies, TV, sports, social media and fashion, in addition to the world’s most iconic brands, tapped the heavyweight firm in mid-September to lobby on artificial intelligence and other issues related to the entertainment industry, according to disclosures this month.

...

There’s “no question that the risks and rewards created by AI are prompting more engagement in D.C. than I’ve seen in over a decade,” ...

Trump’s pressure campaign on major media companies is another cause for Hollywood concern.  ...

This month, OpenAI unveiled a new AI video tool that generates content using copyrighted characters from film and TV, eliciting swift blowback from studios and talent managers.

Charles Rivkin, the chief executive of the Motion Picture Association, which represents leading studios in Washington, demanded that OpenAI “take immediate and decisive action to address this issue.” ...

In a statement, the agency asked whether OpenAI believes it can “just steal” artists’ work, “disregarding global copyright principles and blatantly dismissing creators’ rights, as well as the many people and companies who fund the production, creation, and publication of these humans’ work.” ...

See the full story here: https://www.politico.com/news/2025/10/15/hollywood-lobbying-firms-ai-threat-00608521

16Oct/25

Nano Banana AI Image Tool Is Added to Search, NotebookLM

...

“The buzz about Google’s Nano Banana hasn’t stopped since its release,” per Tom’s Guide, conceding that “the popularity of Sora 2,” which OpenAI released September 30, may have stolen some of its thunder. (And Microsoft’s MAI-Image-1, now in pre-release, is teed up to get in on that action.)

Nano Banana is “currently rolling out in English for users in the U.S. and India (with broader language support planned),” says Tom’s, calling it “yet another step toward bridging generative AI with everyday visual tools.”

See the full story here: https://www.etcentric.org/nano-banana-ai-image-tool-is-added-to-search-notebooklm/

12Oct/25

I’ve Seen How AI ‘Thinks.’ I Wish Everyone Could.

... In my experience, AI works best when I can touch the guts of the model I’m using, when I can feel around for the training data, visualize the math that gives it its structure and tweak the code that generates its outputs.

Most users of large language models don’t get this opportunity, since AI companies don’t make it easy—or even possible....

Plotting words in a “vector space” makes it possible for an LLM to detect the connections among them: Distance is an easily computable property in a vector space, and closeness encapsulates relationships. ...
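The claim that distance is easy to compute, and that closeness stands in for relatedness, can be sketched with toy word vectors. The vectors below are invented for illustration; real models learn embeddings with hundreds or thousands of dimensions from vast text corpora.

```python
import math

# Toy 2-D word vectors, made up for illustration only.
vectors = {
    "king":   (0.9, 0.8),
    "queen":  (0.85, 0.75),
    "banana": (0.1, 0.9),
}

def distance(a, b):
    """Euclidean distance between two word vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Closeness encodes relatedness: "king" sits nearer to "queen"
# than to "banana" in this space.
assert distance(vectors["king"], vectors["queen"]) < \
       distance(vectors["king"], vectors["banana"])
```

The same arithmetic scales to any number of dimensions, which is why proximity queries over vector spaces are a workable stand-in for "which words are related."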

A (very simplified) language model would “read” these lines by running through them again and again, each time “hiding” one word from itself and trying to guess how it should fill in the blank. After each pass, the model would assess how far off its guess was from the correct word, tweak its calculations and try again. ...
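The fill-in-the-blank loop described above can be caricatured in a few lines. This sketch substitutes simple word-pair counting for the gradient-based "tweaking" a real model does, and the three-line corpus is hypothetical, but the guess-the-hidden-word structure is the same.

```python
from collections import Counter

# Hypothetical mini-corpus; real models train on billions of words.
lines = ["roses are red", "violets are blue", "sugar is sweet"]

# "Training": count how often each word follows each preceding word.
# (A stand-in for the repeated guess-check-tweak passes described above.)
follows = Counter()
for line in lines:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        follows[(prev, nxt)] += 1

def guess_blank(prev_word):
    """Guess a hidden word from what most often follows prev_word."""
    candidates = [(count, nxt) for (p, nxt), count in follows.items()
                  if p == prev_word]
    return max(candidates)[1] if candidates else None

# Hide the last word of "roses are ___" and ask the model to fill it in.
print(guess_blank("are"))
```

Inspecting `follows` directly is the kind of "touching the guts" the author describes: every guess the toy model makes can be traced back to a visible count.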

When we find that, to a model, an angry fruit is really a vegetable, ... . We learn, in short, how the model thinks. ...

For every dollar that AI companies pump into computing power to make their models bigger, why not spend a penny on explaining how those models work? Open up the training data. Make programming “playgrounds” where model parameters can be tweaked. Let us traverse vector space hand-in-hand with our machines.

See the full story here: https://www.wsj.com/tech/ai/ive-seen-how-ai-thinks-i-wish-everyone-could-41c81370

12Oct/25

Numerous Billionaires Preparing for End of Society

... “We’re definitely going to build a bunker before we release AGI,” Sutskever reportedly once told the OpenAI team, according to the journalist Karen Hao’s book Empire of AI. ...
See the full story here: https://futurism.com/artificial-intelligence/billionaires-preparing-for-end

12Oct/25

Prince Harry and Meghan Markle ask families to join fight against predatory social media policies

Prince Harry and Meghan Markle urged parents to stand against social media companies that they said prey upon children with exploitative algorithms, as the “explosion of unregulated artificial intelligence” adds to their concerns that the technology’s benefits are inseparable from its dangers.

To underscore that point, the Duke and Duchess of Sussex cited research from advocacy group ParentsTogether that found researchers posing as children experienced harmful interactions every five minutes they spent with an artificial intelligence chatbot.

“This wasn’t content created by a third party. These were the companies’ own chatbots working to advance their own depraved internal policies," said Prince Harry at Spring Studios in Manhattan Thursday night as he and Markle were named Humanitarians of the Year by the nonprofit Project Healthy Minds. "But here’s what gives us hope: these families aren’t facing this alone.” ...

See the full story here: https://abcnews.go.com/Entertainment/wireStory/prince-harry-meghan-markle-families-join-fight-predatory-126387673

3Oct/25

Harvard Business School Uses AI To Evaluate Students’ Work, Dean Says

...

Evaluating homework is just one of the many areas where Datar said HBS is experimenting with AI in the classroom. He pointed to initiatives like Foundry, an AI platform that allows entrepreneurs outside Harvard to access HBS entrepreneurial management content and use it to bolster their ventures.

“By doing so, we create this very wonderful, connected community, matching people, ideas and resources,” he said. “I think as we continue to build out Foundry, it’s going to be a huge impact on the world.” ...

HBS administrators who spoke at the talk said that the school is also using the technology to condense student feedback submitted to the Christensen Center for Teaching and Learning into “actionable insights” for professors.

...

“Nothing replaces our classroom experience here — the chalkboards, the sound, the faculty experience,” Negri said. “Learners learn in a lot of different ways, and sometimes at different times of night. So at least one of the ways that AI complements teaching and learning here is being always available 24/7.”

...

See the full story here: https://www.thecrimson.com/article/2025/10/2/hbs-dean-ai-use/

3Oct/25

Kiss reality goodbye: AI-generated social media has arrived

...

"You can create insanely real looking videos, with your friends saying things that they would never say," said Solomon Messing, an associate professor at New York University in the Center for Social Media and Politics. "I think we might be in the era where seeing is not believing."

Deepfake TikTok

The Sora 2 app looks and feels remarkably like other vertical video social media apps like TikTok. It comes with a few different settings; it's possible to choose videos by mood, for example. Users are allowed control over how their face is used "end-to-end" in AI-generated videos, according to OpenAI. That means users can allow their faces to be used by everyone, a small circle of friends, or only themselves. What's more, they are allowed to remove videos showing their likeness at any time.

Sora also comes with ways to identify its content as AI-generated. Videos downloaded from the app contain moving watermarks bearing the Sora logo, and the files have embedded metadata that identifies them as AI-made, according to the company.

...

But NPR's brief time using the app found that the guardrails appeared to be somewhat loose around Sora. ... a quick review of content shows that Sora is being used to generate an enormous volume of videos depicting trademarked brands and copyrighted material. ...

OpenAI told NPR that it was aware of the use of copyrighted material in Sora but felt it was giving its users more freedom by allowing it. ...

See the full story here: https://www.npr.org/2025/10/03/nx-s1-5560200/openai-sora-social-media

1Oct/25

Does The 400 Year History Of AI Predict Its Future?

...

So, how helpful is the 400-year history of artificial intelligence (AI) in predicting what its future will be? Sadly, not very. AI's power and reach have expanded more in the past 4 years than in its previous 400—and it's beginning to behave in ways unintended by its programmers. AI has become less an "artificial" form of human intelligence than a new form of "alien" intelligence, rapidly evolving, far beyond our understanding and only partly under our control. The recent giant leap in computer prowess may actually be a tipping point in human history, making our human past an untrustworthy prologue to AI's uncertain future. ...

Join me along the fascinating and intricate 400-year history of artificial intelligence:

...

1966: Weizenbaum created ELIZA—the first chatbot (and first chatbot therapist). ELIZA was far too primitive to pass the Turing Test, but still powerful enough at seducing user interest to convince Weizenbaum that chatbots could quickly evolve into a threat to human society. He immediately renounced all work on artificial intelligence and instead spent the next 42 years of his life warning about its dangers.

...

2015: Sam Altman and Elon Musk created OpenAI as a nonprofit with the noble mission of protecting humanity from the potential risks of rapidly emerging artificial intelligence.

...

2023: Geoffrey Hinton, father of neural networks, left his research leadership position at Google so that he could warn the public about the existential danger posed by AI (and the reckless competition among the companies racing to develop it).

...

We also may have very little control over the direction of our future. Governments have irresponsibly refused to regulate artificial intelligence and greedy Big AI companies have recklessly refused to regulate themselves.

There are 3 radically different predictions of how the future will unfold:

...

There does not seem to be any limit to AI's potential power, to corporate greed, to inventor grandiosity, to government irresponsibility, or to human folly. AI is getting smarter and smarter while humans seem to be getting dumber and dumber.

See the full story here: https://www.psychiatrictimes.com/view/does-the-400-year-history-of-ai-predict-its-future

1Oct/25

Palantir Technologies Faces a New Threat: This Artificial Intelligence (AI) Company Just Launched a New Business Unit That Focuses on National Security

  • Salesforce has recently launched a new business unit, Missionforce, which will focus on national security.
  • The company's CEO believes Palantir's software is overpriced.
  • Palantir's growth rate has been accelerating over the past year, but a slowdown could be inevitable.

See the full story here: https://finance.yahoo.com/news/palantir-technologies-faces-threat-artificial-081500320.html

1Oct/25

AI startup Character.AI removes Disney characters from its chatbot platform after legal letter

... Chatbots on the Character.AI platform impersonated well-known Disney characters such as Elsa, Moana, Peter Parker and Darth Vader and generated replies that simulated the “essence, goodwill, and look and feel of each character” and also incorporated their backstories, according to a letter dated Sept. 18 from a law firm representing Disney.

“These actions mislead and confuse consumers, including vulnerable young people, to believe that they are interacting with Disney’s characters, and to falsely believe that Disney has licensed these characters to, and endorsed their use by, Character.ai,” the letter said. “In fact, Character.ai is freeriding off the goodwill of Disney’s famous marks and brands, and blatantly infringing Disney’s copyrights.” ...

See the full story here: https://www.latimes.com/entertainment-arts/business/story/2025-09-30/ai-startup-character-ai-removes-disney-characters-from-its-chatbot-platform-after-legal-letter