philip lelyveld The world of entertainment technology

11 Mar 2026

The Top 100 Gen AI Consumer Apps — 6th Edition

Three distinct ecosystems are forming across Western, Chinese, and Russian platforms, while AI is increasingly embedded across browsers, developer tools, and productivity software, suggesting that traditional rankings may soon underestimate where real AI usage actually happens. 

Source: https://a16z.com/100-gen-ai-apps-6/

11 Mar 2026

Amazon Puts Health AI on Everything

... Amazon says the assistant works in two modes: answering general health questions without accessing your medical records, or offering personalized guidance that connects to your actual health data through the national Health Information Exchange.

Amazon claims HIPAA compliance, encryption, and “strict access controls” without specifying who has access or how the encryption actually works. ...

Amazon connects its AI to actual medical records, prescription systems, and a network of doctors who can write prescriptions.

The fossilized business models of traditional healthcare providers will practically force consumers to adopt AI healthcare. It will be interesting to watch the adoption curve.

See the full story here: https://shellypalmer.com/2026/03/amazon-puts-health-ai-on-everything/

10 Mar 2026

Yann LeCun’s $1 Billion Bet Against ChatGPT

Yann LeCun just raised $1.03 billion for AMI Labs to build “world models” instead of large language models. ...

No product in three months, no revenue in six months, no $10 million ARR in twelve months. That timeline puts AMI Labs completely at odds with the venture capital model that has funded almost every other AI startup.

World models sound like a plausible future for AI. Instead of predicting the next word in a sentence, a world model would predict how the world actually works. Said differently, a world model would “understand” causality instead of just correlation. ...
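To make that distinction concrete, here is a toy sketch. It is entirely illustrative — none of these functions come from AMI Labs or any real system: the first mimics a next-token predictor that completes text from surface correlations, while the second mimics a world model that predicts the consequence of an action on a physical state.

```python
# Toy contrast: next-token prediction vs. a world model.
# Both functions are hypothetical illustrations, not real AI systems.

def next_token_model(tokens):
    """LLM-style predictor: given a word sequence, guess the next word
    from a surface statistic (here, a hard-coded correlation)."""
    return "mat" if tokens[-1] == "the" else "?"

def world_model(state, action):
    """World-model-style predictor: given a physical state and an action,
    predict the resulting state via a causal rule (here, toy gravity)."""
    height, held = state  # state = (height in meters, is the object held?)
    if action == "release":
        return (0, False)  # an unsupported object falls to the ground
    return (height, held)  # otherwise nothing changes

# The first completes text; the second predicts consequences.
print(next_token_model(["the", "cat", "sat", "on", "the"]))  # -> mat
print(world_model((1.5, True), "release"))                   # -> (0, False)
```

The point of the contrast: the text model only knows that “the” is often followed by “mat” in its data, while the world model encodes a rule about what an action causes, regardless of how often it appears in text.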

See the full story here: https://shellypalmer.com/2026/03/yann-lecuns-1-billion-bet-against-chatgpt/

9 Mar 2026

‘AI Has An Important Role In National Security’: OpenAI Robotics Chief Caitlin Kalinowski Resigns Over Pentagon AI Deal

...

“AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorisation are lines that deserved more deliberation than they got,” Kalinowski said in a post on X, adding that the decision to resign was difficult.

OpenAI confirmed Kalinowski’s exit in an emailed statement, saying it believes the Defense Department agreement “creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons.”

“We recognise that people have strong views about these issues and we will continue to engage in discussion with employees, government, civil society and communities around the world,” the company said. ...

See the full story here: https://www.freepressjournal.in/tech/ai-has-an-important-role-in-national-security-openai-robotics-chief-caitlin-kalinowski-resigns-over-pentagon-ai-deal

7 Mar 2026

US draws up strict AI guidelines amid Anthropic clash, FT reports

A draft of the guidelines reviewed by the FT says AI groups seeking business with the government must grant the U.S. an irrevocable license to use their systems for all legal purposes.

...

The GSA draft mandates that contractors “must not intentionally encode partisan or ideological judgments into the AI systems data outputs,” the FT reported.

It requires companies to disclose whether their models have been “modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory framework,” the newspaper said.

See the full story here: https://www.reuters.com/business/media-telecom/us-draws-up-strict-new-ai-guidelines-amid-anthropic-clash-ft-reports-2026-03-07/

4 Mar 2026

OpenAI Employees Don’t Get to Choose Wars

...

There is a disconnect between OpenAI’s safety-first branding and its defense-revenue reality: employees who joined to “benefit humanity” are now coming to understand that they are building tools for battlefield intelligence. Altman can add constitutional language and exclude domestic surveillance, but the core product still enables lethal autonomous weapons. ...

Altman has positioned OpenAI as a utility provider, not a moral arbiter, saying he’d “rather go to jail” than follow an unconstitutional order. ...

See the full editorial here: https://shellypalmer.com/2026/03/openai-employees-dont-get-to-choose-wars/

4 Mar 2026

‘The world is looking to you for clarity’, UN chief tells AI experts

“AI is advancing at lightning speed... no country, no company, and no field of research can see the full picture alone,” he said, adding that “the world urgently needs a shared, global understanding of artificial intelligence; grounded not in ideology, but in science.”

See the full story here: https://news.un.org/en/story/2026/03/1167074

3 Mar 2026

OpenAI CEO Sam Altman defends decision to strike Pentagon deal after Anthropic blacklisting, admits ‘optics don’t look good’

... Some of these critics have even started a campaign to persuade ChatGPT users to stop using that AI model and switch to Anthropic’s Claude chatbot. There was some evidence the campaign was having an effect, too: Claude surged past ChatGPT to become the most downloaded free app in Apple’s App Store. The sidewalk outside OpenAI’s offices in San Francisco was also covered with chalk graffiti attacking its decision to cut a deal with the Pentagon, while graffiti outside Anthropic’s offices largely praised its decision to refuse a contract that did not include prohibitions on the use of its AI models for mass surveillance and autonomous weapons. ...

“I very deeply believe in the democratic process, and that our elected leaders have the power, and that we all have to uphold the Constitution. I am terrified of a world where AI companies act like they have more power than the government,” Altman said on X. “I would also be terrified of a world where our government decided mass domestic surveillance was okay.”

See the full story here: https://fortune.com/2026/03/02/openai-ceo-sam-altman-defends-decision-to-strike-pentagon-deal-amid-backlash-against-the-chatgpt-maker-following-anthropic-blacklisting/

3 Mar 2026

I checked out one of the biggest anti-AI protests yet

...

This is all familiar stuff. Researchers have long called out the harms, both real and hypothetical, caused by generative AI—especially models such as OpenAI’s ChatGPT and Google DeepMind’s Gemini. What’s changed is that those concerns are now being taken up by protest movements that can rally significant crowds of people to take to the streets and shout about them.  ...

Miller is a PhD student at Oxford University, where he studies mechanistic interpretability, a new field of research that involves trying to understand exactly what goes on inside LLMs when they carry out a task. His work has led him to believe that the technology may forever be beyond our control and that this could have catastrophic consequences. ...

It doesn’t have to be a rogue superintelligence, he said. You just need someone to put AI in charge of nuclear weapons. “The more silly decisions that humanity makes, the less powerful the AI has to be before things go bad,” he said.

After a week in which the US government tried to force Anthropic to let it use its LLM Claude for any “legal” military purposes, such fears seem a little less far-fetched. ...

See the full story here: https://www.technologyreview.com/2026/03/02/1133814/i-checked-out-londons-biggest-ever-anti-ai-protest/

2 Mar 2026

OpenAI has shown it cannot be trusted. Canada needs nationalized, public AI

...

When tech billionaires and corporations steer AI development, the resultant AI reflects their interests rather than those of the general public or ordinary consumers. Only after the meeting with the B.C. government did OpenAI alert law enforcement. Had it not been for the Wall Street Journal’s reporting, the public would not have known about this at all.

Moreover, OpenAI for Countries is explicitly described by the company as an initiative “in co-ordination with the U.S. government.” ...

Switzerland has shown this to be possible. With funding from the federal government, a consortium of academic institutions – ETH Zurich, EPFL, and the Swiss National Supercomputing Centre – released the world’s most powerful fully realized public AI model, Apertus, last September. Apertus leveraged renewable hydropower and existing Swiss scientific computing infrastructure. It also used no illegally pirated copyrighted material or poorly paid labour extracted from the Global South during training. The model’s performance lags roughly a year or two behind the major corporate offerings, but that is more than adequate for the vast majority of applications. And it’s free for anyone to use and build on. ...

The significance of Apertus is more than technical. It demonstrates an alternative ownership structure for AI technology, one that allocates both decision-making authority and value to national public institutions rather than foreign corporations. ...

See the full story here: https://www.theglobeandmail.com/business/commentary/article-openai-tumbler-ridge-chatgpt/