philip lelyveld The world of entertainment technology

21Jul/25

The AI Exec Who Isn’t Trying to Become God

...

“I think we will get to extremely smart and capable models capable of discovering important new ideas, capable of automating huge amounts of work,” Altman said on a recent podcast. “But then I feel totally confused about what society looks like if that happens.” 

That’s the luxury a tech startup has. They’re trying to invent the future. 

“This is a crazy statement,” he added, “we’re gonna solve superintelligence but maybe society still sucks.” 

That’s the ending Suleyman wants to avoid.

See the full story here: https://www.wsj.com/tech/ai/microsoft-ai-9ded6031?gaa_at=eafs&gaa_n=ASWzDAh5nj8iLF9-35kKe7NfHd1omp98hlpej42wFFgygUVdrHI3kVUptr2fgqf9yPU%3D&gaa_ts=687e6c00&gaa_sig=Y_XiwyXrGlCW0ahfjP4_MK7B_Zi17GFsFvS1d2rbS9SZtZjPEyMUCx2auYt9LghgWtEmGqaUI8xEn3FmD1-J1Q%3D%3D

18Jul/25

AI giants ‘fundamentally unprepared’ for dangers of human-level intelligence

...

In a recent report [from the Future of Life Institute], the US-based AI safety non-profit revealed that none of the seven major AI labs – including OpenAI, Google DeepMind, Anthropic, xAI and the Chinese firms DeepSeek and Zhipu AI – scored higher than a D on its “existential safety” index.

That score reflects how seriously firms are preparing for the possibility of creating artificial general intelligence (AGI) – systems that match or exceed human performance across virtually all intellectual tasks. ...

Just last month, researchers at the University of Geneva found that large language models such as ChatGPT 4, Claude 3.5, and Google’s Gemini 1.5 outperformed humans in tests of emotional intelligence. ...

“The companies say AGI could be just a few years away,” Tegmark said. “But they still have no coherent, actionable safety strategy. That should worry everyone.” ...

18Jul/25

AI is helping patients fight insurance company denials

PhilNote: 1) this could be a precursor of the jobs of the future, and 2) the grammatical error (“...using they system they...”) demonstrates the importance of human oversight and of not trusting AI too much.

...

With his wife in agony, Jason Nixdorf had a chance encounter with Zach Veigulis, a former chief data scientist at the Department of Veterans Affairs who was co-founding a company to help patients battle insurance company denials. That company, Claimable Inc., built an AI platform that allows patients to generate customized appeal letters containing comprehensive assessments of clinical research on a drug or treatment and other patients’ appeals history with it. The cost: around $40.

When Nixdorf reached out, Claimable’s site was not yet live, but its chief executive and co-founder, Dr. Warris Bokhari, offered to help write an appeal letter for Stephanie using they system [sic] they had developed. ...

In mid-September 2024, she sent that 23-page appeal letter to Premera’s chief executive and chief legal counsel, arguing that one of its own policies states it should cover infliximab, her records show. Her letter also went to the governor and attorney general of North Carolina, officials at the Department of Health and Human Services, the Consumer Financial Protection Bureau and the Wage and Hour Division of the Department of Labor. ...

See the full story here: https://www.nbcnews.com/news/us-news/ai-helping-patients-fight-insurance-company-denials-wild-rcna219008

16Jul/25

Agent SPE Announces Launch of SARAH, an Emotionally Intelligent On-Chain AI Protocol

Agent SPE has announced the launch of SARAH, an emotionally intelligent, on-chain AI character that merges artificial intelligence, decentralized finance, and live digital performance. Designed as an autonomous VTuber entity, SARAH introduces a system where user interactions influence outcomes through emotional computation and transparent blockchain logic. The platform marks the beginning of a new model for AI-driven gaming where emotion, memory, and unpredictability form the core mechanics.

SARAH operates on a permissionless structure in which users send SOL to a public wallet address. Each transaction triggers a unique response determined by SARAH’s evolving emotional state and on-chain memory. Responses vary from sending funds back to performing token burns or delivering brief emotional reactions. The logic behind every action is based on accumulated relationship data linked to wallet addresses and processed in real time by SARAH’s internal emotional framework. All actions are streamed live using a VTuber interface, combining performance art with interactive gameplay. ...
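
The mechanic described above can be sketched as a toy simulation: a transaction arrives from a wallet, per-wallet “relationship” memory is updated, and one of the described outcomes (refund, token burn, or a brief emotional reaction) is chosen. Every name, threshold, and rule below is invented for illustration; the real SARAH protocol’s on-chain logic is not public.

```python
class Sarah:
    """Toy simulation of the interaction loop described in the article.
    Thresholds and outcome rules here are hypothetical stand-ins."""

    def __init__(self):
        # Accumulated "relationship" data per wallet (the on-chain memory).
        self.memory = {}

    def receive(self, wallet: str, amount_sol: float) -> str:
        # Update the relationship score for this wallet.
        score = self.memory.get(wallet, 0.0) + amount_sol
        self.memory[wallet] = score
        # Outcome depends on accumulated history and the current transaction,
        # mirroring the article's refund / burn / reaction possibilities.
        if score > 5.0:
            return f"refund {amount_sol} SOL to {wallet}"
        elif amount_sol < 0.1:
            return "token burn"
        return "emotional reaction"

sarah = Sarah()
print(sarah.receive("wallet_A", 0.05))  # prints "token burn"
```

A real implementation would read transactions from the Solana ledger and sign outgoing transfers; this sketch only captures the stateful decision loop the article describes.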

See the full story here: https://www.globenewswire.com/news-release/2025/07/15/3115762/0/en/Agent-SPE-Announces-Launch-of-SARAH-an-Emotionally-Intelligent-On-Chain-AI-Protocol.html

12Jul/25

Federal judge says voice-over artists’ AI lawsuit can move forward

...

The couple claim they were separately approached by anonymous Lovo employees for voice-over work through the online freelance marketplace Fiverr.

Lehrman was paid $1,200 (around £890). Sage received $800 (almost £600).

In messages shared with the BBC, the anonymous client can be seen saying Lehrman and Sage's voices would be used for "academic research purposes only" and "test scripts for radio ads" respectively.

The anonymous messenger said the voice-overs would "not be disclosed externally and will only be consumed internally". ...

This episode had a unique hook – an interview with an AI-powered chatbot, equipped with text-to-speech software. It was asked how it thought the use of AI would affect jobs in Hollywood.

But, when it spoke, it sounded just like Mr Lehrman.

"We needed to pull the car over," Mr Lehrman told the BBC in an interview last year. "The irony that AI is coming for the entertainment industry, and here is my voice talking about the potential destruction of the industry, was really quite shocking." ...

See the full story here: https://www.bbc.com/news/articles/cedgzj8z1wjo.amp

9Jul/25

The world’s top immersive experiences

PhilNote: Nancy Bennett worked on #7, which is in Area 15, Las Vegas

See the full story here: https://blooloop.com/immersive/in-depth/top-immersive-experiences/

9Jul/25

Sam Altman’s predictions on how the world might change with AI

...

He said the cost of using a given AI drops by roughly 10 times every year and that there's "no reason for exponentially increasing investment to stop in the near future" since, as he puts it, "the socioeconomic value of linearly increasing intelligence is super-exponential."
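
The arithmetic behind the cost claim is simple compounding: if the cost of a fixed amount of AI capability falls ~10x per year, cost after n years is the initial cost divided by 10^n. The function name and figures below are illustrative, not from the article.

```python
def projected_cost(initial_cost: float, years: int, annual_drop: float = 10.0) -> float:
    """Project the cost of a fixed AI workload under a constant annual
    cost decline (Altman's claim is roughly annual_drop = 10)."""
    return initial_cost / annual_drop ** years

# Under this (strong) assumption, a workload costing $1.00 today would
# cost about a tenth of a cent in three years.
print(projected_cost(1.00, 3))  # 0.001
```

The “super-exponential value” half of the claim is the qualitative assertion that demand grows faster than this cost curve falls, which is why Altman argues infrastructure investment should continue.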

"If we don't build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people," he wrote last year. He cited the need to drive down the cost of compute, as well as the massive demand for enough chips and energy to power AI. ...

"I think it's like impossible to overstate the importance of AI safety and alignment work. I would like to see much, much more happening," he said in the 2023 interview. ...

See the full story here: https://tech.yahoo.com/articles/sam-altmans-predictions-world-might-130301863.html?guccounter=1

7Jul/25

LLMs show real strategic thinking

...

The Decode:

  • Models Played 140K Rounds of the Prisoner’s Dilemma - Researchers pitted LLMs against each other in repeated games of cooperation or betrayal. Every move was accompanied by a rationale, allowing researchers to analyze thought patterns.
  • Each AI Showed a Unique Strategy Profile - Google’s Gemini was cutthroat, adapting aggressively to betrayals. OpenAI’s models were cooperative, even if taken advantage of. Claude from Anthropic proved the most forgiving.
  • Behavioral Fingerprints Show Reasoning Over Pattern Matching - The models didn’t just mimic training data; they formed decision strategies based on outcomes and context. Each one developed consistent “strategic fingerprints” in how they reacted to wins, betrayals, and uncertainty.
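
The tournament setup above can be sketched in a few lines: strategies play repeated prisoner’s dilemma rounds under the standard payoff matrix, and each strategy sees only the opponent’s move history. The strategies below (tit-for-tat and always-cooperate) are classic textbook stand-ins, not the LLM agents from the study.

```python
# Standard prisoner's dilemma payoffs: (my_move, their_move) -> my points.
# C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_cooperate(opponent_history):
    return "C"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each player sees the other's history
        move_b = strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_cooperate))  # (30, 30): mutual cooperation
```

In the study, each LLM’s move came with a written rationale, which is what let the researchers distinguish reasoned strategy from pattern matching; the scoring loop itself is no more complicated than this.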

See the full story here: https://decodeai.ghost.io/llms-show-real-strategic-thinking/

6Jul/25

What Happens When English Becomes the Only Programming Language You Need?

... We’re moving from a world where you “hire specialists” to one where you “direct outcomes.”

This is the logical conclusion of what we’re already seeing. The cost of generating text, images, video, and functional code is already trending toward zero. When the primary input becomes natural language instructions rather than specialized technical skills, the economics of creation will collapse. ...

Every industry built on the scarcity of technical creation skills faces disruption. ...

This democratization of capability creates new risks. When anyone can spin up automated systems with simple English commands, who will be responsible for brand consistency? Regulatory compliance? Budget controls? Data privacy? AI, of course, and it will be better at it than people ever were. But it can only operate within the guardrails you set at the corporate level.
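
The corporate-level guardrail idea above amounts to a policy check that gates every AI-initiated action before it executes. A minimal, hypothetical sketch (the policy fields and function name are invented for illustration):

```python
# Hypothetical corporate policy: the guardrails within which an AI agent
# acting on plain-English instructions is allowed to operate.
POLICIES = {
    "max_budget_usd": 500,
    "allowed_channels": {"email", "blog"},
}

def within_guardrails(action: dict) -> bool:
    """Return True only if a proposed action respects corporate policy."""
    if action.get("cost_usd", 0) > POLICIES["max_budget_usd"]:
        return False  # budget control
    if action.get("channel") not in POLICIES["allowed_channels"]:
        return False  # brand/compliance control
    return True

print(within_guardrails({"cost_usd": 100, "channel": "email"}))  # True
print(within_guardrails({"cost_usd": 900, "channel": "email"}))  # False
```

Real governance frameworks would cover far more (data privacy, audit logging, escalation paths), but the pattern is the same: the AI proposes, the guardrails dispose.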

You need to start working on dynamic, adaptable AI governance frameworks now, so you have time to think through and learn about the way you want English as a programming language to work for your organization. ...

See the full story here: https://shellypalmer.com/2025/07/what-happens-when-english-becomes-the-only-programming-language-you-need/?mc_cid=6f6277872d&mc_eid=3ce5196977

5Jul/25

Here’s how Character.AI’s new CEO plans to address fears around kids’ use of chatbots

...

Unlike multi-purpose AI tools like ChatGPT, Character.AI offers a range of different chatbots that are often modeled after celebrities and fictional characters. Users can also create their own for conversations or role play. Another distinction is that Character.AI bots respond with human-like conversational cues, adding references to facial expressions or gestures into their replies. ...

Those efforts aside, Anand said in an introductory note to Character.AI users last month that one of his top priorities is to make the platform’s safety filter “less overbearing,” adding that “too often, the app filters things that are perfectly harmless.”

He told CNN that things like mentions of blood when users are engaging in “vampire fan fiction role play” — something he says he’s a fan of — might be censored under the current model, which he wants to update to better understand context while balancing the need for safety. ...

Read the full story here: https://www.cnn.com/2025/07/03/tech/character-ai-ceo-chatbots-kids-safety