philip lelyveld The world of entertainment technology

13Nov/25

Meta’s Top AI Scientist Is Quitting as Zuckerberg’s Spending Spree Sputters

...

LeCun is famously something of an LLM skeptic, believing that the architecture is incapable of ever achieving human-level cognition, the kind that would amount to so-called artificial general intelligence. He has even advised up-and-coming programmers not to pursue LLMs at all, and instead to work “on next-gen AI systems that lift the limitations of LLMs.” That makes him an outlier in the industry, since one of the driving promises fueling the boom is that the tech provides a direct line to creating AGI, if it isn’t already on the verge of doing so. With his focus on more esoteric forms of AI and his distaste for AI boosterism, LeCun was always an odd figure to be working at a titan like Meta. ...

Meta’s new superintelligence group, separate from LeCun’s research division FAIR, aims to create a “superintelligent” AI using LLM technology. LeCun, by contrast, is adamant about building “world” models, which are designed to understand the three-dimensional world by training on a variety of physical data rather than on language alone. LeCun says those advances could take decades, but Zuckerberg is clearly obsessed with market dominance in the immediate term. ...

Following the report of LeCun’s planned departure, Meta’s stock dipped by nearly another 3 percent.

See the full story here: https://futurism.com/artificial-intelligence/meta-top-ai-scientist-quitting

11Nov/25

Sony Debuts Benchmark for Measuring Computer Vision Bias

Sony AI has introduced the Fair Human-Centric Image Benchmark (FHIBE, pronounced “Fee-bee”), a new global benchmark for fairness evaluation in computer vision models. FHIBE addresses the industry challenge of identifying biased and ethically compromised training data for AI, aiming to trigger “industry-wide improvements for responsible and ethical protocols throughout the entire life span of data — from sourcing and management to utilization — including fair compensation for participants and clear consent mechanisms,” Sony AI says.  ...

Xiang notes that facial recognition systems on mobile phones in China have mistakenly let family members unlock each other’s phones and make payments, a mistake that could result from a lack of images of Asian people in model training data or undetected model bias.

The Register points out that there are other fairness benchmarks, including Meta’s FACET (FAirness in Computer Vision EvaluaTion).

See the full story here: https://www.etcentric.org/sony-debuts-benchmark-for-measuring-computer-vision-bias/

8Nov/25

Google is hiring an economist to understand how advanced AI could affect our wallets

  • Google DeepMind is looking to hire an economist to explore how advanced AI may impact the economy.
  • The economist will research the long-term effects of AI on "scarcity, wealth, and distribution."
  • CEO Demis Hassabis has called for an institute of experts to govern artificial general intelligence.

...

Demis Hassabis has spoken a lot about AI's impact on large economic systems. The Google DeepMind CEO said in August that reaching AGI could usher in an era of "radical abundance," but has warned that it could be harmful to society if not handled correctly.

"One of the big things economists should be thinking about is, what does that do to money, the capitalist system, even the notion of companies?" he said at Davos in January, speaking about AGI. "I think probably all that changes." ...

See the full story here: https://www.businessinsider.com/google-deepmind-hiring-ai-economist-money-agi-abundance-scarcity-2025-11

7Nov/25

We need accountability in human–AI agent relationships

Abstract: We argue that accountability mechanisms are needed in human–AI agent relationships to ensure alignment with user and societal interests. We propose a framework according to which AI agents’ engagement is conditional on appropriate user behaviour. The framework incorporates design strategies such as distancing, disengaging, and discouraging.

See the full story here: https://www.nature.com/articles/s44387-025-00041-7

7Nov/25

AI Readiness Project opens doors to state governments

The project's organizer called it a way for states to “move from curiosity to capability” and gain “a trusted place to learn, experiment and lead.”

The project expands the CCF’s previous AI work, led through its state chief AI officer community of practice, with $500,000 in new funding from Rockefeller intended to “professionalize” the practice, as one executive with the philanthropy put it. The project was born from a growing sense that many state and local governments, after having spent several years, or sometimes longer, drafting AI policies and taking inventory of datasets, are prepared to step up their efforts and begin testing AI tools at a larger scale and for a wider array of government functions.

Technologies classed as “AI” have been under development and in use in government agencies for decades, but it was the commercial release of OpenAI’s ChatGPT in November 2022 that spurred interest in large language models and widely renewed interest in exploring what additional tasks software might automate. Cass Madison, CCF’s executive director, said the renewed interest among state and local government leaders has led to a “wide variation in capacity, maturity, and risk tolerance” among states when it comes to AI. ...

See the full story here: https://statescoop.com/ai-readiness-project-ccf-state-local-government/

2Nov/25

China’s AI Dinosaur Robots Blend Education and Prehistoric Fun

China's robotics firms, like EX Robots, are creating AI-powered dinosaur robots that interact via voice, simulate prehistoric behaviors, and teach paleontology as edu-tainment. This innovation, fueled by government support, targets a billion-dollar market by 2030, competing globally while raising ethical and scalability concerns. These bots could transform interactive learning into adventurous experiences. ...

These robots aren’t mere toys; they’re equipped with advanced artificial intelligence that allows them to respond to voice commands, simulate behaviors from the prehistoric era, and even teach basic paleontology facts through interactive sessions. ...

Futurism highlights how firms are “pumping out” these robots to capture market share, competing with Western players like Boston Dynamics, whose own robotic feats have grabbed headlines but lag in consumer-facing applications. ...

This aligns with China’s national AI strategy, as outlined in reports from the South China Morning Post, which emphasizes robotics as a driver of future economic power through sophisticated supply chains. ...

Looking ahead, China’s foray into AI robot dinosaurs could influence international standards in robotics safety and AI ethics. ...

See the full story here: https://www.webpronews.com/chinas-ai-dinosaur-robots-blend-education-and-prehistoric-fun/

2Nov/25

The Man Who Invented AGI

... That same year [1997], Gubrud submitted and presented a paper at the Fifth Foresight Conference on Molecular Nanotechnology, called “Nanotechnology and International Security.” He argued that breakthrough technologies will redefine international conflicts, making them potentially more catastrophic than nuclear war. He urged nations to “give up the warrior tradition.” The new sciences he discussed included nanotechnology, of course, but also advanced AI—which he referred to as, yep, “artificial general intelligence.” ...

“My concern was the arms race. The whole point of writing that paper was to warn about that.” Gubrud hasn’t been prolific in producing work after that—his career has been peripatetic, and he now spends a lot of time caring for his mother—but he has authored a number of papers arguing for a ban on autonomous killer robots and the like.

Gubrud can’t ignore the dissonance between his status and that of the lords of AGI. “It’s taking over the world, worth literally trillions of dollars,” he says. “And I am a 66-year-old with a worthless PhD and no name and no money and no job.” ...

See the full story here: https://www.wired.com/story/the-man-who-invented-agi/

2Nov/25

A.I. Is Making Death Threats Way More Realistic

...

But threatening images are rapidly becoming easier to make, and more persuasive. One YouTube page had more than 40 realistic videos — most likely made using A.I., according to experts who reviewed the channel — each showing a woman being shot. (YouTube, after The New York Times contacted it, said it had terminated the channel for “multiple violations” of its guidelines.) A deepfake video of a student carrying a gun sent a high school into lockdown this spring. In July, a lawyer in Minneapolis said xAI’s Grok chatbot had provided an anonymous social media user with detailed instructions on breaking into his house, sexually assaulting him and disposing of his body.

Until recently, artificial intelligence could replicate real people only if they had a huge online presence, such as film stars with throngs of publicly accessible photos. Now, a single profile image will suffice, said Dr. Farid, who co-founded GetReal Security, a service that identifies malicious digital content. ...

The Times tested Sora and produced videos that appeared to show a gunman in a bloody classroom and a hooded man stalking a young girl. Grok also readily added a bloody gunshot wound to a photo of a real person. ...

Experts in A.I. safety, however, said companies had not done nearly enough. Alice Marwick, director of research at Data & Society, a nonprofit organization, described most guardrails as “more like a lazy traffic cop than a firm barrier — you can get a model to ignore them and work around them.” ...

Some of the harassers also claimed to have used Grok not just to create the images but to research how to find the women at home and at local cafes.

Fed up, Ms. Roper decided to post some examples. Soon after, according to screenshots, X told her that she was in breach of its safety policies against gratuitous gore and temporarily locked her account. ...

A.I. is also making other kinds of threats more convincing. For example: swatting, the practice of placing false emergency calls with the aim of inciting a large response from the police and emergency personnel.  ...

“How does law enforcement respond to something that’s not real?” Mr. Asmus asked. “I don’t think we’ve really gotten ahead of it yet.”

See the full story here: https://www.nytimes.com/2025/10/31/business/media/artificial-intelligence-death-threats.html

31Oct/25

Sora 2: The Rise of AI Characters

...

Sora 2 feels like a preview of what user-generated content (UGC) will look like in the agentic era: synthetic characters, algorithmic storytelling, and social loops built on remixing AI-generated content. Some will call this "AI slop," but I remember when everyone thought UGC was just "cats using toilets." Managing a universe of synthetic creators and synthetic creations may be the new-new thing.

...

See the full story here: https://shellypalmer.com/2025/10/sora-2-the-rise-of-ai-characters/

29Oct/25

Expert panel will determine AGI arrival in new Microsoft-OpenAI agreement

On Monday, Microsoft and OpenAI announced a revised partnership agreement that introduces an independent expert panel to verify when OpenAI achieves so-called artificial general intelligence (AGI), a determination that will trigger major shifts in how the companies share technology and revenue. The deal values Microsoft’s stake in OpenAI at approximately $135 billion and extends the exclusive partnership through 2032 while giving both companies more freedom to pursue AGI independently. ...

Under a previous arrangement, OpenAI alone would determine when it achieved AGI, which is a nebulous concept that is difficult to define. The revised deal requires an independent expert panel to verify that claim, a change that adds oversight to a determination with billions of dollars at stake. When the panel confirms that AGI has been reached, Microsoft’s intellectual property rights to OpenAI’s research methods will expire, and the revenue-sharing arrangement between the companies will end, though payments will continue over a longer period. ...

See the full story here: https://arstechnica.com/information-technology/2025/10/expert-panel-will-determine-agi-arrival-in-new-microsoft-openai-agreement/