philip lelyveld The world of entertainment technology

30Aug/24

Oprah Winfrey Lines Up AI Heavy Hitters for ABC Special

Winfrey will host an ABC special titled AI and the Future of Us: An Oprah Winfrey Special on Sept. 12, in which she’ll talk to some leading figures in the artificial-intelligence space, including Microsoft founder Bill Gates and OpenAI head Sam Altman, as well as FBI Director Christopher Wray and a couple of skeptical voices. ...

In addition to Gates, Altman and Wray, the special will feature interviews with technologist and widely followed YouTube tech reviewer Marques Brownlee; Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology who warn of the risks of AI growing too powerful, too fast; and author Marilynne Robinson, who “reflects on AI’s threat to human values and the ways in which humans might resist the convenience of AI,” per ABC’s description.

The goal of the hour, according to ABC, is to “provide a serious, entertaining and meaningful base for every viewer to understand AI, and empowers everyone to be a part of one of the most important global conversations of the 21st century.” ...

See the full story here: https://www.hollywoodreporter.com/tv/tv-news/oprah-winfrey-abc-special-ai-1235987248/

30Aug/24

Post-apocalyptic education

What comes after the Homework Apocalypse

... As of eight months ago, a representative survey in the US found that 82% of undergraduates and 72% of K-12 students had used AI for school. ...

[Cheating on homework] is not a new problem. One of the first uses of any new technology has always been to get help with homework. A study of thousands of students at Rutgers found that when they did their homework in 2008, it improved test grades for 86% of them (see, homework really does help!), but homework only helped 45% of students in 2017. Why? The rise of the Internet. By 2017, a majority of students were copying internet answers, rather than doing the work themselves. ...

The Homework Apocalypse has already happened and may even have happened before generative AI! Why are more people not seeing this as an emergency? I think it has to do with two illusions. ...

Detection Illusion: teachers believe they can still easily detect AI use, and therefore can prevent it from being used in schoolwork. ...

While teachers grapple with the Detection Illusion, students face their own misconception: Illusory Knowledge. They don’t actually realize that getting help with homework is undermining their learning. ... As the authors of the study at Rutgers wrote: “There is no reason to believe that the students are aware that their homework strategy lowers their exam score... they make the commonsense inference that any study strategy that raises their homework quiz score raises their exam score as well.” ...

We know that almost three-quarters of teachers are already using AI for work, but we have just started to learn the most effective ways for teachers to use AI. ...

See the full article here: https://www.oneusefulthing.org/p/post-apocalyptic-education

28Aug/24

Sony Drops New Over-the-Counter Hearing Aid

... Sony says there's advanced sound technology in the C20, with a feature that prioritizes speech clarity while maintaining awareness of surrounding noises. 

The new C20 hearing aids will retail for $1,000. With the new model on the market, the C10 will have a suggested retail of $800; the E10 hearing aids, which resemble earbuds and -- like the new C20 -- are rechargeable, will be priced at $1,100. They'll be available through Sony and at Best Buy, Amazon, Walmart, CVS, HearUSA and through select hearing care providers. ...

See the full story here: https://www.cnet.com/health/medical/sony-drops-new-over-the-counter-hearing-aid/

28Aug/24

Sony Pictures and Music to Help Launch Incubator for Web3 Push

The company wants to "catalyze ecosystem growth and accelerate adoption by leveraging its vast global reach and technological expertise across entertainment, gaming, and consumer electronics."

Sony Group is looking to its Sony Pictures and Sony Music units and a new global incubator to help it with a push into the blockchain and Web3 space.

“Web3” is a term that describes a vision for a decentralized internet that is built on blockchain technology and communally controlled by its users.

Last week, Sony Block Solutions Labs, a joint venture between the Japanese conglomerate’s former Sony Network Communications Labs unit and Startale Labs that was established for the express purpose of “building new network infrastructure using blockchain technology,” said that it has developed the Soneium blockchain as the infrastructure network on which the company wants “to accelerate Web3 innovation.” ...

On Wednesday morning Tokyo time, early L.A. evening time, Sony said it was launching the “Soneium Minato” public testnet and an “ambitious” developer incubation program dubbed “Soneium Spark” to “catalyze ecosystem growth and accelerate adoption by leveraging its vast global reach and technological expertise across [the] entertainment, gaming, and consumer electronics sectors.” ...

Importantly, Sony subsidiaries, such as its film and music units, will participate in the incubation program. “We have opened our testnet as a first step to foster a fan community centered on creators that can connect diverse values through Soneium,” said Jun Watanabe, chairman of Sony Block Solutions Labs. “Let’s work together to create new value in Web3 toward a world where Web3 services are used in people’s daily life.”

Web3 apps that have attracted users include the likes of Flickplay, which allows users to unlock digital assets and create videos with them using augmented reality (AR). ...

See the full story here: https://www.hollywoodreporter.com/business/business-news/sony-pictures-music-web3-blockchain-sonieum-initiative-1235984907/

28Aug/24

Elon Musk voices support for California bill requiring safety tests on AI models

... "For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public," Musk said in a post, opens new tab on X, while calling on the state to pass the SB 1047 bill.

California state lawmakers attempted to introduce 65 bills touching on AI this legislative season, according to the state’s legislative database, including measures to ensure all algorithmic decisions are proven unbiased and to protect the intellectual property of deceased individuals from exploitation by AI companies. Many of the bills are already dead.

Earlier in the day, Microsoft (MSFT.O)-backed OpenAI voiced support for another AI bill from California, called AB 3211, that would require tech companies to label AI-generated content, which can range from harmless memes to deepfakes aimed at spreading misinformation about political candidates. ...

See the full story here: https://www.reuters.com/technology/artificial-intelligence/elon-musk-voices-support-california-bill-requiring-safety-tests-ai-models-2024-08-27/

27Aug/24

This AI Learns Continuously From New Experiences—Without Forgetting Its Past

...

Tackling a new task sometimes requires a whole new round of training and learning, which erases what came before and costs millions of dollars. For ChatGPT and other AI tools, this means they become increasingly outdated over time.

This week, Dohare and colleagues found a way to solve the problem. The key is to selectively reset some artificial neurons after a task, but without substantially changing the entire network—a bit like what happens in the brain as we sleep. ...

Called continual backpropagation, the strategy is “among the first of a large and fast-growing set of methods” to deal with the continual learning problem, wrote Drs. Clare Lyle and Razvan Pascanu at Google DeepMind, who were not involved in the study. ...

It still uses backpropagation, but with a small difference. A tiny portion of artificial neurons are wiped clean in every learning cycle. To avoid disrupting the whole network, only the artificial neurons that are used least get reset. The upgrade allowed the algorithm to tackle up to 5,000 different image recognition tasks with over 90 percent accuracy throughout. ...
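The selective-reset idea can be sketched in a few lines. The sketch below is a toy illustration, not the paper's implementation: the utility measure (a running average of each unit's activation times its outgoing weight magnitude), the layer sizes, and the reset schedule are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ContinualBackpropLayer:
    """Toy hidden layer that periodically resets its least-used units.

    'Utility' is tracked as a running average of |activation| times
    outgoing weight magnitude -- one plausible proxy for how much a
    unit contributes, chosen here for simplicity.
    """

    def __init__(self, n_in, n_hidden, n_out, reset_fraction=0.01, decay=0.99):
        self.n_in = n_in
        self.w_in = rng.normal(0, 1.0 / np.sqrt(n_in), size=(n_in, n_hidden))
        self.w_out = rng.normal(0, 1.0 / np.sqrt(n_hidden), size=(n_hidden, n_out))
        self.utility = np.zeros(n_hidden)
        self.reset_fraction = reset_fraction
        self.decay = decay

    def forward(self, x):
        h = np.maximum(0.0, x @ self.w_in)  # ReLU activations, shape (batch, n_hidden)
        contrib = np.abs(h).mean(axis=0) * np.abs(self.w_out).sum(axis=1)
        self.utility = self.decay * self.utility + (1 - self.decay) * contrib
        return h @ self.w_out

    def reset_low_utility_units(self):
        # Reinitialize only the small fraction of units with lowest utility,
        # leaving the rest of the network untouched.
        k = max(1, int(self.reset_fraction * len(self.utility)))
        idx = np.argsort(self.utility)[:k]
        self.w_in[:, idx] = rng.normal(0, 1.0 / np.sqrt(self.n_in),
                                       size=(self.n_in, k))
        self.w_out[idx, :] = 0.0  # fresh units start with no output influence
        self.utility[idx] = np.median(self.utility)
        return idx
```

Zeroing the reset units' outgoing weights means a reset never changes the network's current predictions; the fresh units only gradually earn influence through subsequent gradient updates.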

Plasticity loss could also stem from network interactions that destabilize the way the AI learns. Scientists are still only scratching the surface of the phenomenon.

Meanwhile, for practical uses, when it comes to AIs, “you want them to keep with the times,” said Dohare. ...

“These capabilities are crucial to the development of truly adaptive AI systems that can continue to train indefinitely, responding to changes in the world and learning new skills and abilities,” wrote Lyle and Pascanu.

See the full story here: https://singularityhub.com/2024/08/22/this-ai-learns-continuously-from-new-experiences-without-forgetting-its-past/

26Aug/24

Stephen Wolfram thinks we need philosophers working on big questions around AI

...

For example, when you start talking about how to put guardrails on AI, these are essentially philosophical questions. “Sometimes in the tech industry, when people talk about how we should set up this or that thing with AI, some may say, ‘Well, let’s just get AI to do the right thing.’ And that leads to, ‘Well, what is the right thing?’” And determining moral choices is a philosophical exercise.

He says he has had “horrifying discussions” with companies that are putting AI out into the world, clearly without thinking about this. “The attempted Socratic discussion about how you think about these kinds of issues, you would be shocked at the extent to which people are not thinking clearly about these issues. Now, I don’t know how to resolve these issues. That’s the challenge, but it’s a place where these kinds of philosophical questions, I think, are of current importance.” ...

“Science is an incremental field where you’re not expecting that you’re going to be confronted with a major different way of thinking about things.” ...

“And this question of ‘if the AIs run the world, how do we want them to do that? How do we think about that process? What’s the kind of modernization of political philosophy in the time of AI?’ These kinds of things, this goes right back to foundational questions that Plato talked about,” he told students. ...

See the full story here: https://techcrunch.com/2024/08/25/stephen-wolfram-thinks-we-need-philosophers-working-on-big-questions-around-ai

25Aug/24

Sony Launches Blockchain to Boost Web3 in Entertainment and Gaming

  • Sony announced its blockchain, “Soneium.”
  • Soneium will be a public blockchain for developers to build their experiences.
  • Sony will enable developers to incorporate cryptocurrency transactions for purchases of assets in games, NFTs, entertainment products, etc.

See the full story here: https://www.techopedia.com/news/sony-launches-blockchain-to-boost-web3-in-entertainment-and-gaming

25Aug/24

There’s a Humongous Problem With AI Models: They Need to Be Entirely Rebuilt Every Time They’re Updated

... In other words, if you want to teach an existing deep learning model something new, you'll likely have to retrain it from the ground up — otherwise, according to the research, the artificial neurons in their proverbial minds will sink to a value of zero. This results in a loss of "plasticity," or their ability to learn at all. ...
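The "sinking to zero" failure is easy to demonstrate with a single ReLU unit: once its pre-activation is negative for every input it sees, both its output and its gradient are zero, so ordinary training can never revive it. A minimal numpy illustration (the weight, bias, and input range are made-up values):

```python
import numpy as np

# A single ReLU unit: y = max(0, w*x + b).
# Suppose earlier training pushed the bias far negative.
w, b = 0.5, -10.0
xs = np.linspace(-1, 1, 100)  # every input the new task will present

pre = w * xs + b              # pre-activation: at most 0.5*1 - 10 = -9.5
out = np.maximum(0.0, pre)    # ReLU output

# The gradient of ReLU is 0 wherever the pre-activation is <= 0,
# so no learning signal ever reaches w or b again.
grad = (pre > 0).astype(float)

print(out.max())   # 0.0 -> the unit is "dead": zero output everywhere
print(grad.sum())  # 0.0 -> zero gradient everywhere, no way to recover
```

This per-unit deadness is the loss of "plasticity" the article describes; continual-learning methods work around it by detecting and reinitializing such units rather than retraining the whole model.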

And training advanced AI models, as the researchers point out, is a cumbersome and wildly expensive process — making this a major financial obstacle for AI companies, which burn through a ton of cash as it is. ...

This phenomenon of plasticity loss is also a major moat between current AI models and the imagined "artificial general intelligence," or a theoretical AI that would be considered generally as intelligent as humans.  ...

"A solution to continual learning is literally a billion-dollar question," Dohare told New Scientist. "A real, comprehensive solution that would allow you to continuously update a model would reduce the cost of training these models significantly."

See the full story here: https://futurism.com/the-byte/ai-models-rebuilding-problem

23Aug/24

AI researchers call for ‘personhood credentials’ as bots get smarter

... In the paper, published online last week but not yet peer-reviewed, a group of 32 researchers from OpenAI, Microsoft, Harvard and other institutions call on technologists and policymakers to develop new ways to verify humans without sacrificing people’s privacy or anonymity. They propose a system of “personhood credentials” by which people prove offline that they physically exist as humans and receive an encrypted credential that they can use to log in to a wide range of online services. ...
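The issue-then-verify flow the researchers describe can be sketched as follows. This is a toy illustration only: the function names are invented, and it uses a shared-secret MAC as a stand-in for the real cryptographic machinery (a production system would need blind signatures or zero-knowledge proofs so the issuer cannot link a credential to the holder's later logins).

```python
import hashlib
import hmac
import secrets

# The issuer's signing key. In a real system this would be an
# asymmetric key pair, so services could verify credentials
# without being able to mint them.
ISSUER_KEY = secrets.token_bytes(32)

def issue_credential():
    """Run only after an in-person check that the applicant is human.

    Returns an opaque, random credential that carries no identity
    information about the holder.
    """
    credential_id = secrets.token_hex(16)  # random, not tied to identity
    tag = hmac.new(ISSUER_KEY, credential_id.encode(),
                   hashlib.sha256).hexdigest()
    return credential_id, tag

def verify_credential(credential_id, tag):
    """An online service checks that the credential was issuer-signed,
    without learning who the holder is."""
    expected = hmac.new(ISSUER_KEY, credential_id.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The key property the paper asks for is visible even in this sketch: the service learns only "this token was issued to some verified human," never which human, which is what separates personhood credentials from ordinary identity verification.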

But the authors argue that existing systems for proving one’s humanity, such as requiring users to submit a selfie or solve a CAPTCHA puzzle, are increasingly “inadequate against sophisticated AI.” In the near future, they add, even holding a video chat with someone may not be enough to tell whether they’re the person they claim to be, another person disguising themselves with AI or “even a complete AI simulation of a real or fictitious person.” ...

The researchers propose instead that personhood credentials should allow people to interact online anonymously without their activities being tracked. ...

In an emailed statement via an OpenAI representative, Adler said the paper’s goal is to establish the value of personhood credentials in general, while highlighting the criteria and design challenges that any such system should take into account. ...

If artificial intelligence systems can convincingly impersonate humans, he mused, presumably they could also hire humans to do their bidding. ...

Chris Gilliard, an independent privacy researcher and surveillance scholar, said it’s worth asking why the onus should be on individuals to prove their humanity rather than on the AI companies to prevent their bots from impersonating humans, as some experts have suggested. ...

See the full story here: https://www.washingtonpost.com/politics/2024/08/21/human-bot-personhood-credentials-worldcoin/