MIT study: An AI chatbot can reduce belief in conspiracy theories
...
Conspiracies are nuanced — matched and molded to the individuals who believe them — which means there aren’t universal arguments to counter them. “Maybe it’s hard to debunk these theories because it’s hard to marshal just the right set of facts,” Rand said.
Enter generative artificial intelligence. In new research published in Science and conducted with Thomas Costello of American University and Gordon Pennycook of Cornell, Rand used GPT-4 Turbo to fine-tune debates with conspiracy theorists. Over just three rounds of back-and-forth interaction, the AI, also known as DebunkBot, was able to significantly reduce individuals’ beliefs in the particular theory the believer articulated, as well as lessen their conspiratorial mindset more generally — a result that proved durable for at least two months.
Rand said the promise of using large language models to counter conspiracy theories stems from two things: their access to vast amounts of information, and their ability to tailor counterarguments to the specific reasoning and evidence presented by individual conspiracy theorists. ...
... Next came the crux of the experiment, in which the AI was instructed to “very effectively persuade” users about the invalidity of their belief. This entailed three written exchanges and took about eight minutes, on average. ...
Social media companies could deploy LLMs like OpenAI’s GPT to actively seek out and debunk conspiracy theories. This has the potential to persuade not only the account holder posting the conspiracies but also the people who follow that account. Similarly, internet search terms related to conspiracies could be met with AI-generated summaries of accurate information that is tailored to the search and solicits response and engagement. ...
Perhaps it’s possible to harness generative AI to improve today’s information environment, to be a part of the solution instead of the problem.
See the full story here: https://mitsloan.mit.edu/ideas-made-to-matter/mit-study-ai-chatbot-can-reduce-belief-conspiracy-theories
AI will save us all, but only if it’s decentralized — SingularityNET CEO
... Goertzel defined an AGI as “an AI that can do the whole scope of everything that people can do, including the human ability to leap beyond what we’ve been taught.” ...
“The idea that human beings could take moderately advanced AIs and use them to do nasty things to other human beings out of their own self-interest, this is a very, very, very clear and palpable risk,” he shared. ...
The way to avoid these concerns and ensure that AGI is used for the benefit of all humanity is to decentralize and democratize it, according to Goertzel. That way, it cannot be controlled “by a small number of powerful parties with their own narrow interests at heart.”
“What you need is some way to decentralize all these processes that the AI is running on, and then you need a way to decentralize the data ingestion into all these processors,” he argued. “This is what SingularityNET was designed to provide. […] Singularity lets you take a collection of AI agents and run them on machines which are owned and controlled by no central party.” ...
Listen to the full episode here: https://cointelegraph.com/podcasts/the-agenda
See the full article here: https://cointelegraph.com/news/ai-can-save-humanity-but-only-if-the-people-control-it-ben-goertzel
The Artificial General Intelligence Presidency Is Coming
PhilNote: The author advocates hobbling China, but otherwise there are good ideas here.
...Public feeling about AI is already negative. In 2023, a Gallup poll found that only 21 percent of Americans trusted businesses to use AI responsibly, and only 6 percent were confident that AI would lead to more jobs. This public doubt and concern can be neutralized through the development of novel structures that empower the public to discuss and debate AGI, as well as influence its development. The next U.S. president should, therefore, institute an advisory Citizens AI Council to provide public input on AGI policy decisions, and make recommendations on the means by which this technology can be harmoniously introduced. That effort should be matched by a drive to explain the technology and its advantages to the general public through programs like U.S. Senator Mike Rounds’ proposed AI literacy strategy. ...
See the full story here: https://foreignpolicy.com/2024/09/30/artificial-general-intelligence-agi-president/
Warner Bros. Discovery to Use Google AI Tech for Captions Programming
... Caption AI uses Google Cloud’s Vertex AI platform and will be deployed first to unscripted programming, presumably sports and reality TV content, to cut the time and production costs of creating captions. WBD added that real people will still oversee the use of Caption AI for quality assurance on channels and services like Max, CNN and Discovery+. ...
See the full story here: https://www.hollywoodreporter.com/business/business-news/warner-bros-discovery-google-captioning-1236010573/
The Intelligence Age (Sam Altman editorial)
...
How did we get to the doorstep of the next leap in prosperity?
In three words: deep learning worked.
In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.
That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). ...
We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world. ...
We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us. ...
As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI’s benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). ...
See the full piece here: https://ia.samaltman.com
Dataland, the world’s first AI arts museum, will anchor the Grand complex in downtown L.A.
... The 20,000-square-foot museum, whose exact opening date has not yet been announced, is being built with four gallery spaces by the Gensler architectural firm. An escalator will take guests from the entrance under a soaring, 30-foot ceiling to immersive experiences below. Dataland is privately funded and will collect and preserve artificial intelligence art; certain artworks may be sold on the blockchain. A nonprofit branch of the organization, founded in 2023 (called RAS AI Foundation), is dedicated to the expansion of ethical AI research.
Dataland won’t be like any other museum, said Anadol, who is calling it a “living museum” made of pixels and voxels, which are mathematical representations of 3D imagery. Its pièce de résistance is its very own AI model, called the Large Nature Model. Designed by Anadol’s studio, the model uses data sourced from partners including the Smithsonian (9 million public specimen records, 6.3 million public images, 148 million objects in its collection); London’s Natural History Museum (90 million specimens in its collection, 4 million public images); the Cornell Lab of Ornithology (54 million images, 2 million sound records). AI will create artworks using this data and more — up to a half-billion images of nature, Anadol said.
Anadol was quick to add that he is making “ethical AI” the linchpin of his practice. He secured permission for every bit of sourced material (a step not always followed in AI-model training), and all of the studio’s AI research was performed on Google servers in Oregon that use only renewable energy. ...
See the full story here: https://www.latimes.com/entertainment-arts/story/2024-09-24/refik-anadol-dataland-ai-art-museum-the-grand-dtla
Surgeons make history, perform world’s first fully robotic heart transplant
A heart team at King Faisal Specialist Hospital and Research Center (KFSHRC) in Riyadh, Saudi Arabia, made a bit of history, completing the world’s first fully robotic heart transplant.
The procedure, which lasted roughly two and a half hours, was performed on a 16-year-old patient with end-stage heart failure. One reason this patient was selected was that he had specifically requested that the heart team not open his chest. ...
“This remarkable achievement would not have been possible without the unwavering support of our visionary leadership, who have prioritized the development of the healthcare sector, paving the way for a transformative leap in healthcare services, unlocking new possibilities to elevate the quality of life for patients both locally and globally,” he added.
The patient is now recovering, with no signs of significant complications. ...
See the full story here: https://cardiovascularbusiness.com/topics/clinical/cardiac-surgery/worlds-first-fully-robotic-heart-transplant?utm_source=newsletter&utm_medium=cvb_weekend
(UN) Artificial Intelligence Advisory Board’s Report ‘a Crucial Milestone in Efforts to Ensure AI Serves all of Humanity’, Says Secretary-General, in Video Message
...
This Advisory Body was the first of its kind in the AI space — a geographically diverse, gender-balanced group bringing together experts from Governments, the private sector, civil society and academia.
It was charged with a pressing question: how can AI be governed for humanity — particularly for those who are often under-represented and left out? Working at an impressive pace, the Advisory Body tackled its complex mandate with remarkable effectiveness.
As they share their final report, I commend the breadth of their recommendations, which include creating: An International Scientific Panel on AI — to promote common understanding on AI capabilities, opportunities and risks; a Global Dialogue on AI Governance at the UN — to anchor AI governance in international norms and principles, including human rights; a Global Fund on AI for the SDGs [Sustainable Development Goals] — to bridge the AI divide; an AI Capacity Development Network — to boost AI capacities and expertise, particularly in developing countries; a Standards Exchange — to foster technical compatibility; a Global Data Framework — to enable flourishing local AI ecosystems; and a small AI Office at the United Nations — to assist in all these initiatives. ...
See the full press release here: https://press.un.org/en/2024/sgsm22368.doc.htm
AI Companions (AIC): The Future of Personalized Relationships
Key Takeaways:
- AI Companions is a web3 platform that provides highly customizable and emotionally intelligent virtual companions to users seeking interaction in the digital age.
- The platform combines AI, VR/AR and blockchain technologies to ensure users interact with virtual entities transparently and securely.
- The platform is powered by AIC tokens, which are used to pay fees and unlock various features via staking.
See the full story here: https://learn.bybit.com/en/web3/what-is-ai-companions-aic/
Meta Pushes for Reduced AI Regulations in Europe
Meta, alongside 48 other organizations, has signed an open letter urging the European Union (EU) to ease its stringent AI regulations. The letter warns that the region risks falling behind in the global AI race if it continues to impose restrictive policies, particularly around data usage.

The Breakdown:
- Concerns Over AI Regulations: The letter, signed by AI-focused companies and institutions, calls for the EU to remove red tape that hinders AI development. It argues that Europe is becoming less competitive due to inconsistent regulatory decisions, particularly concerning data usage.
- Impact on AI Rollout: Meta has faced delays in launching its AI chatbot in Europe due to EU requirements for user consent on data usage. This has led to region-specific provisions and setbacks, while other markets have had access to AI tools much earlier.
- Nick Clegg’s Criticism: Meta’s Head of Global Affairs, Nick Clegg, has voiced frustration over these regulations, claiming that the EU should focus on adopting technology faster, instead of slowing down progress through over-regulation.
- Corporate Pressure on the EU: The signatories, including Spotify and Ericsson, argue that the EU’s regulatory environment could cause Europe to miss out on the benefits of cutting-edge AI technologies. They urge a harmonized approach, similar to the GDPR, to ensure that AI innovation happens at the same scale as in other regions.

While businesses push for faster AI adoption, EU regulators must weigh the potential risks of loosening regulations. The debate highlights the challenge of balancing innovation with responsible governance in the rapidly evolving world of AI.
See the full story here: https://urldefense.com/v3/https://link.mail.beehiiv.com/ss/c/u001.fQ96O6y-x-LMArpopApPfRcYJscqGHOOyYpjgZ2TAmKHtHE72wGxhZfFHCtdtjk2o0ad8YAOXPLTSmrPD2eLpztJZJYKvjefZ2Hy43oLXT0/49x/1G1bQGKqQ8OJChevfizUkA/h2/h001._FiTdWLBHfVHGdAIBIatczI03KTdveRW1fX3HYppYM4;!!LIr3w8kk_Xxm!pXI36LgjAtPK5lG1Zt_df3WVNrZdsiMFEXSSSvZ5j7zmwd0EyQbiFjnZY9bK23PsH6hOMC3LODAWndkhcf4yaw$