Sony Pictures and Music to Help Launch Incubator for Web3 Push
The company wants to "catalyze ecosystem growth and accelerate adoption by leveraging its vast global reach and technological expertise across entertainment, gaming, and consumer electronics."
Sony Group is looking for its Sony Pictures and Sony Music units and a new global incubator to help it with a push into the blockchain and Web3 space.
“Web3” is a term that describes a vision for a decentralized internet that is built on blockchaintechnology and communally controlled by its users.
Last week, Sony Block Solutions Labs, a joint venture between the Japanese conglomerate’s former Sony Network Communications Labs unit and Startale Labs, a Sony company established for the express purpose of “building new network infrastructure using blockchain technology, said that it has developed the Soneium blockchain as the infrastructure network on which the company wants “to accelerate Web3 innovation.” ...
On Wednesday morning Tokyo time, early L.A. evening time, Sony said it was launching the “Soneium Minato” public testnet and an “ambitious” developer incubation program dubbed “Soneium Spark” to “catalyze ecosystem growth and accelerate adoption by leveraging its vast global reach and technological expertise across [the] entertainment, gaming, and consumer electronics sectors.” ...
Importantly, Sony subsidiaries, such as its film and music units, will participate in the incubation program. “We have opened our testnet as a first step to foster a fan community centered on creators that can connect diverse values through Soneium,” said Jun Watanabe, chairman of Sony Block Solution Labs. “Let’s work together to create new value in Web3 toward a world where Web3 services are used in people’s daily life.”
Among Web3 apps that have attracted users include the likes of Flickplay, which allows users to unlock digital assets and create videos with them using augmented reality (AR). ...
See the full story here: https://www.hollywoodreporter.com/business/business-news/sony-pictures-music-web3-blockchain-sonieum-initiative-1235984907/
Elon Musk voices support for California bill requiring safety tests on AI models
... "For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public," Musk said in a post, opens new tab on X, while calling on the state to pass the SB 1047 bill.
California state lawmakers attempted to introduce 65 bills touching on AI this legislative season, according to the state’s legislative database, including measures to ensure all algorithmic decisions are proven unbiased and protect the intellectual property of deceased individuals from exploitation by AI companies. Many of the bills are already dead.
Earlier in the day, Microsoft (MSFT.O), opens new tab backed OpenAI voiced support for another AI bill from California, called AB 3211, that would require tech companies to label AI-generated content, which can range from harmless memes to deepfakes aimed at spreading misinformation about political candidates. ...
See the full story here: https://www.reuters.com/technology/artificial-intelligence/elon-musk-voices-support-california-bill-requiring-safety-tests-ai-models-2024-08-27/
This AI Learns Continuously From New Experiences—Without Forgetting Its Past
...
Tackling a new task sometimes requires a whole new round of training and learning, which erases what came before and costs millions of dollars. For ChatGPT and other AI tools, this means they become increasingly outdated over time.
This week, Dohare and colleagues found a way to solve the problem. The key is to selectively reset some artificial neurons after a task, but without substantially changing the entire network—a bit like what happens in the brain as we sleep. ...
Called continual back propagation, the strategy is “among the first of a large and fast-growing set of methods” to deal with the continuous learning problem, wrote Drs. Clare Lyle and Razvan Pascanu at Google DeepMind, who were not involved in the study. ...
It still uses back propagation, but with a small difference. A tiny portion of artificial neurons are wiped clean during learning in every cycle. To prevent disrupting whole networks, only artificial neurons that are used less get reset. The upgrade allowed the algorithm to tackle up to 5,000 different image recognition tasks with over 90 percent accuracy throughout. ...
AI networks that can no longer learn could also be due to network interactions that destabilize the way the AI learns. Scientists are still only scratching the surface of the phenomenon.
Meanwhile, for practical uses, when it comes to AIs, “you want them to keep with the times,” said Dohare. ...
“These capabilities are crucial to the development of truly adaptive AI systems that can continue to train indefinitely, responding to changes in the world and learning new skills and abilities,” wrote Lyle and Pascanu.
See the full story here: https://singularityhub.com/2024/08/22/this-ai-learns-continuously-from-new-experiences-without-forgetting-its-past/
Stephen Wolfram thinks we need philosophers working on big questions around AI
...
For example, when you start talking about how to put guardrails on AI, these are essentially philosophical questions. “Sometimes in the tech industry, when people talk about how we should set up this or that thing with AI, some may say, ‘Well, let’s just get AI to do the right thing.’ And that leads to, ‘Well, what is the right thing?’” And determining moral choices is a philosophical exercise.
He says he has had “horrifying discussions” with companies that are putting AI out into the world, clearly without thinking about this. “The attempted Socratic discussion about how you think about these kinds of issues, you would be shocked at the extent to which people are not thinking clearly about these issues. Now, I don’t know how to resolve these issues. That’s the challenge, but it’s a place where these kinds of philosophical questions, I think, are of current importance.” ...
“Science is an incremental field where you’re not expecting that you’re going to be confronted with a major different way of thinking about things.” ...
“And this question of ‘if the AIs run the world, how do we want them to do that? How do we think about that process? What’s the kind of modernization of political philosophy in the time of AI?’ These kinds of things, this goes right back to foundational questions that Plato talked about,” he told students. ...
See the full story here: https://techcrunch.com/2024/08/25/stephen-wolfram-thinks-we-need-philosophers-working-on-big-questions-around-ai
Sony Launches Blockchain to Boost Web3 in Entertainment and Gaming
- Sony announced its blockchain, “Soneium.”
- Soneium will be a public blockchain for developers to build their experiences.
- Sony will enable developers to incorporate cryptocurrency transactions for purchases of assets in games, NFTs, entertainment products, etc.
See the full story here: https://www.techopedia.com/news/sony-launches-blockchain-to-boost-web3-in-entertainment-and-gaming
THERE’S A HUMONGOUS PROBLEM WITH AI MODELS: THEY NEED TO BE ENTIRELY REBUILT EVERY TIME THEY’RE UPDATED
... In other words, if you want to teach an existing deep learning model something new, you'll likely have to retrain it from the ground up — otherwise, according to the research, the artificial neurons in their proverbial minds will sink to a value of zero. This results in a loss of "plasticity," or their ability to learn at all. ...
And training advanced AI models, as the researchers point out, is a cumbersome and wildly expensive process — making this a major financial obstacle for AI companies, which burn through a ton of cash as it is. ...
This phenomenon of plasticity loss is also a major moat between current AI models and the imagined "artificial general intelligence," or a theoretical AI that would be considered generally as intelligent as humans. ...
"A solution to continual learning is literally a billion-dollar question," Dohare told New Scientist. "A real, comprehensive solution that would allow you to continuously update a model would reduce the cost of training these models significantly."
See the full story here: https://futurism.com/the-byte/ai-models-rebuilding-problem
AI researchers call for ‘personhood credentials’ as bots get smarter
... In the paper, published online last week but not yet peer-reviewed, a group of 32 researchers from OpenAI, Microsoft, Harvard and other institutions call on technologists and policymakers to develop new ways to verify humans without sacrificing people’s privacy or anonymity. They propose a system of “personhood credentials” by which people prove offline that they physically exist as humans and receive an encrypted credential that they can use to log in to a wide range of online services. ...
But the authors argue that existing systems for proving one’s humanity, such as requiring users to submit a selfie or solve a CAPTCHA puzzle, are increasingly “inadequate against sophisticated AI.” In the near future, they add, even holding a video chat with someone may not be enough to tell whether they’re the person they claim to be, another person disguising themselves with AI or “even a complete AI simulation of a real or fictitious person.” ...
The researchers propose instead that personhood credentials should allow people to interact online anonymously without their activities being tracked. ...
In an emailed statement via an OpenAI representative, Adler said the paper’s goal is to establish the value of personhood credentials in general, while highlighting the criteria and design challenges that any such system should take into account. ...
If artificial intelligence systems can convincingly impersonate humans, he mused, presumably they could also hire humans to do their bidding. ...
Chris Gilliard, an independent privacy researcher and surveillance scholar, said it’s worth asking why the onus should be on individuals to prove their humanity rather than on the AI companies to prevent their bots from impersonating humans, as some experts have suggested. ...
See the full story here: https://www.washingtonpost.com/politics/2024/08/21/human-bot-personhood-credentials-worldcoin/
Prompt hacking is an Achilles’ heel for AI
- "Prompt hacking" is becoming a concern as hackers figure out how to manipulate LLMs to retrieve restricted information
- Outsmarting an LLM in many environments can be done with little to no hacking experience
- New security measures need to be put in place and LLMs themselves will have to adapt
See the full story here; https://www.fierce-network.com/cloud/how-hackable-your-llm
Fake celebrity endorsements become latest weapon in misinformation wars
... Roughly 1 in 10 viral posts analyzed by the News Literacy Project contained fake endorsements, according to data provided exclusively to CNN. Those posts described supposed endorsements - or alternatively, public snubs - from celebrities including NFL quarterback Aaron Rodgers, actor Morgan Freeman, musician Bruce Springsteen, and political figures like former First Lady Michelle Obama. ...
Experts say the problem has been exacerbated by X's AI-powered chatbot, Grok, which has already drawn the ire of election officials for spreading false information about Harris' eligibility in the 2024 election. Last week, X began allowing users to use Grok to create AI-generated images from text prompts, unleashing a flood of fake content about Trump and Harris.
"Going forward, Grok is likely to be one of the main sources of these sorts of images because it generates high-quality images, is easily available, and was intentionally made to have a low refusal rate," Hansen said, adding that he was able to use Grok to create images of "Swifties for Trump" that closely resemble the ones Trump shared. ...
See the full story here: https://abc11.com/post/fake-celebrity-endorsements-become-latest-weapon-misinformation-wars-sowing-confusion-ahead-2024-election/15218719/
TCL Names Finalists for AI TV/Film Accelerator Program
... “This is a pivotal moment of realignment in the industry and TCL is leading the way of demystifying the use of AI tools through our production initiatives,“ TCL North America chief content officer Chris Regina said. ...
See the full story here: https://www.nexttv.com/news/tcl-names-finalists-for-ai-tvfilm-accelerator-program
Pages
- About Philip Lelyveld
- Mark and Addie Lelyveld Biographies
- Presentations and articles
- Trustworthy AI – A Market-Driven approach
- Tufts Alumni Bio