Quantized Intelligence: How Quantum Computing and Advanced AI Are Redefining the Boundaries of Human Thought
...
In the 21st century, our economies have hinged on the notion that humans are the central drivers of innovation and value creation. But that premise is crumbling as quantum computing and AI merge. Once machines begin to improve themselves, the “human bottleneck” is removed. AI shifts from being a mere tool to becoming an active economic agent, pursuing optimization and innovation, potentially independent of human input.
Here lies a paradox: free-market forces can spur unprecedented innovation and prosperity, but the unbridled chase for efficiency could also unlock humanity’s greatest risk: our own irrelevance, the Homo Obsoletus. ...
What defines our value and role in the world if machines surpass our cognitive abilities? If AI learns to simulate even “intangible” human traits, such as creativity or empathy, what remains uniquely ours? ...
In contrast to AI systems that simulate intelligence from the outside, AHI proposes an internal revolution, hacking biology and chemistry to co-evolve our capacities consciously.
This may include the following:
- Quantum-biological interfaces that enhance cognition and perception.
- Human-machine symbiosis where digital augmentation supports but does not replace human judgment.
- Extended consciousness via neurotechnologies aimed not at control, but at deeper self-awareness and empathy.
...
AHI isn’t about ceding control to superior machines; it’s about integrating AI capabilities within our very biology. ...
Personal Evolution Labs
Institutions where individuals train their mental, emotional and physical intelligences using quantum-tech-driven feedback loops and biosensors.
Mensch-Centric Design
Cities and workspaces designed not just for efficiency, but for enhancing creativity, contemplation, and communal vitality.
Civic Intelligence Ecosystems
AHI-guided democratic systems where policymaking involves empathic simulations of long-term outcomes, co-generated by citizens.
Spiritual Simulations
Tools to explore and extend consciousness, helping us confront mortality, identity and meaning in a post-material age.
...
See the full story here: https://builtin.com/articles/quantized-intelligence
AI’s ‘Oppenheimer moment’: Why new thinking is needed on disarmament
... The dual-use nature of AI technologies – where they can be used in civilian and military settings alike – means that developers could lose touch with the realities of battlefield conditions, where their programming could cost lives, warned Arnaud Valli, Head of Public Affairs at Comand AI. ...
At Microsoft, teams are focusing on the core principles of safety, security, inclusiveness, fairness and accountability, said Michael Karimian, Director of Digital Diplomacy. ...
“AI development is outpacing our ability to manage its many risks,” said Sulyna Nur Abdullah, who is strategic planning chief and Special Advisor to the Secretary-General at the International Telecommunication Union (ITU).
“We need to address the AI governance paradox: recognizing that regulations sometimes lag behind technology makes ongoing dialogue between policy and technical experts a must in developing tools for effective governance,” Ms. Abdullah said, adding that developing countries must also get a seat at the table. ...
More than a decade ago, in 2013, renowned human rights expert Christof Heyns, in a report on Lethal Autonomous Robotics (LARs), warned that “taking humans out of the loop also risks taking humanity out of the loop”. ...
But identifying AI-guided weapons, he says, poses a whole new challenge which nuclear arms – bearing forensic signatures – do not.
“There is a practical problem in terms of how you police any sort of regulation at an international level,” the CEO said. “It's the bit nobody wants to address. ...
“AI is complicated, but the real world is even more complicated,” said Robert in den Bosch, Disarmament Ambassador and Permanent Representative of the Netherlands to the Conference on Disarmament. “For that reason, I would say that it is also important to look at AI in convergence with other technologies and in particular cyber, quantum and space.”
See the full story here: https://news.un.org/en/story/2025/04/1161921
How AI is steering the media toward a ‘close enough’ standard
... In journalism, accuracy isn’t optional—and that’s exactly where AI stumbles. Just ask Bloomberg, which has already hit turbulence with its AI-generated summaries. The outlet began publishing AI-generated bullet points for some news stories back in January this year, and it’s already had to correct more than 30 of them, according to The New York Times. ...
...if you had to issue 30-plus corrections for an intern’s work in three months, you’d probably tell that intern to start looking at a different career path. ...
But the fact that the problem is still happening, more than two years after ChatGPT debuted, pinpoints a primary tension when AI is applied to media: To create novel audience experiences at scale, you need to let the generative technology create content on the fly. But because AI often gets things wrong, you also need to check its output with “humans in the loop.” You can’t do both. ...
See the full story here: https://www.fastcompany.com/91310978/ai-steers-the-media-toward-a-close-enough-standard
First look: Universal’s Epic Universe gives Disney theme parks a run for their money
... Dark Universe is one of five lands at Epic Universe, the first major theme park to launch in the U.S. since 2001, when Disney California Adventure opened its turnstiles in Anaheim. A brand new theme park is a rarity, and with it come expectations — of new tech, next-gen ride systems, unexpected ways to experience stories and an ask for your vacation dollars. ...
Epic Universe is largely a triumph, a theme park that will instantly be the favorite of many, and a park that at long last gives Universal a destination to properly rival — in many ways best — those of Disney. Perfect? No, Epic Universe could benefit from a larger idea or two beyond re-creating cinematic and gaming worlds, but it is stunning and should forever change the modern theme park industry, which was born right here in SoCal when Disneyland opened in 1955. ...
See the full story here: https://www.latimes.com/travel/story/2025-04-09/universal-studios-epic-universe-theme-park-orlando
Disney Research Teaches Robots How to Autonomously Copy Human Behaviors in Real-Time
The AI uses a complex system to predict two things: smooth, flowing movements (like waving) and specific actions (like saying “hello”). They gave it a trial run first on a computer, like a dress rehearsal, before bringing it out to meet real people with the actual robot. And the outcome? It was a hit—the robot chatted and mingled with folks almost as smoothly as when the expert was pulling the strings.
See the full story here: https://www.techeblog.com/disney-research-autonomous-robot-copy-human-behaviors/
Escape.ai Establishes Star-Studded Strategic Advisory Board to Shape the Future of Creator-Driven Entertainment
escape.ai, the world's first Neo Cinema content distribution platform and creator marketplace, is proud to announce the formation of its powerhouse advisory board. ...
"Entertainment is at a crossroads - traditional distribution models no longer serve the next generation of creators," said John Gaeta, Founder of escape.ai. "Bringing together this caliber of industry leadership is a strategic game-changer as we scale. From blockbuster franchises to cutting-edge streaming, gaming and gen AI platforms, our advisors have shaped the future of entertainment. ...
Strategic Industry Leaders Join Advisory Board
escape.ai has secured some of the most influential names in entertainment and technology to drive its next phase of growth:
- ...
- Tony Driscoll – Former Disney, AT&T, and Warner Bros. executive, VP of Epic Games' Creator Economy, advising on platform economics and creator monetization.
See the full story here: https://finance.yahoo.com/news/escape-ai-establishes-star-studded-124900326.html
Tracing the thoughts of a large language model
PhilNote: the third one is very problematic! This is a really informative paper
...Our method sheds light on a part of what happens when Claude responds to these prompts, which is enough to see solid evidence that:
- Claude sometimes thinks in a conceptual space that is shared between languages, suggesting it has a kind of universal “language of thought.” We show this by translating simple sentences into multiple languages and tracing the overlap in how Claude processes them.
- Claude will plan what it will say many words ahead, and write to get to that destination. We show this in the realm of poetry, where it thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so.
- Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning, providing a proof of concept that our tools can be useful for flagging concerning mechanisms in models. ...
Transparency into the model’s mechanisms allows us to check whether it’s aligned with human values—and whether it’s worthy of our trust. ...
Why do language models sometimes hallucinate—that is, make up information? At a basic level, language model training incentivizes hallucination: models are always supposed to give a guess for the next word. Viewed this way, the major challenge is how to get models to not hallucinate. Models like Claude have relatively successful (though imperfect) anti-hallucination training; they will often refuse to answer a question if they don’t know the answer, rather than speculate. We wanted to understand how this works.
It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. ...
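The default-on refusal circuit described above can be pictured with a toy sketch. This is not Anthropic's actual circuit analysis or any real model internals; the function `answer` and the `known_entities` set are invented purely to illustrate the idea of a refusal pathway that stays active unless a "known entity" feature inhibits it:

```python
# Toy illustration of a default-on refusal circuit.
# Refusal is the resting state; recognizing a known entity
# inhibits it, letting the model attempt an answer instead.

def answer(question: str, known_entities: set[str]) -> str:
    refusal_active = True  # the refusal circuit is "on" by default
    # A recognized entity feature inhibits the default refusal circuit.
    if any(entity in question for entity in known_entities):
        refusal_active = False
    if refusal_active:
        return "I don't have enough information to answer that."
    return "<answer generated from known facts>"

known = {"Michael Jordan"}
print(answer("Who is Michael Jordan?", known))  # inhibition fires, model answers
print(answer("Who is Michael Batkin?", known))  # default refusal stays on
```

On this picture, a hallucination would correspond to the inhibition misfiring: the "known entity" signal activates for a name the model recognizes but knows little about, suppressing the refusal without real knowledge behind it.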
Jailbreaks
Jailbreaks are prompting strategies that aim to circumvent safety guardrails to get models to produce outputs that an AI’s developer did not intend for it to produce—and which are sometimes harmful. ...
We find that this is partially caused by a tension between grammatical coherence and safety mechanisms. Once Claude begins a sentence, many features “pressure” it to maintain grammatical and semantic coherence, and continue a sentence to its conclusion. This is even the case when it detects that it really should refuse. ...
See the full story here: https://www.anthropic.com/research/tracing-thoughts-language-model
DeepMind is holding back release of AI research to give Google an edge
Google’s artificial intelligence arm DeepMind has been holding back the release of its world-renowned research, as it seeks to retain a competitive edge in the race to dominate the burgeoning AI industry.
The group, led by Nobel Prize-winner Sir Demis Hassabis, has introduced a tougher vetting process and more bureaucracy that made it harder to publish studies about its work on AI, according to seven current and former research scientists at Google DeepMind. ...
In recent years, Hassabis has balanced the desire of Google’s leaders to commercialize its breakthroughs with his life mission of trying to build artificial general intelligence—AI systems with abilities that match or surpass those of humans.
“Anything that gets in the way of that he will remove,” said one current employee. “He tells people this is a company, not a university campus; if you want to work at a place like that, then leave.”
See the full story here: https://arstechnica.com/ai/2025/04/deepmind-is-holding-back-release-of-ai-research-to-give-google-an-edge/
Doubling down on metasurfaces
Almost a decade ago, Harvard engineers unveiled the world’s first visible-spectrum metasurfaces – ultra-thin, flat devices patterned with nanoscale structures that could precisely control the behavior of light. A powerful alternative to traditional, bulky optical components, metasurfaces today enable compact, lightweight, multifunctional applications ranging from imaging systems and augmented reality to spectroscopy and communications.
Now, researchers in the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) are doubling down, literally, on metasurface technology by creating a bilayer metasurface, made of not one, but two stacked layers of titanium dioxide nanostructures. Under a microscope, the new device looks like a dense array of stepped skyscrapers. ...
“It opens up a new way to structure light, in which we can engineer all its aspects such as wavelength, phase and polarization in an unprecedented manner…”
“Many people had investigated the theoretical possibility of a bilayer metasurface, but the real bottleneck was the fabrication,” said Alfonso Palmieri, graduate student and co-lead author of the study. With this breakthrough, Palmieri explained, one could imagine new kinds of multifunctional optical devices – for example, a system that projects one image from one side and a completely different image from the other. ...
See the full story here: https://seas.harvard.edu/news/2025/04/doubling-down-metasurfaces
Hollywood studios can’t make money from AI-powered fake movie trailers on YouTube anymore
If you've ever visited YouTube and clicked on a trailer for the next superhero film and thought it seemed too good to be true, well, you might have been right. Wishful thinking, clever editing, and a scoop of AI fakery produced clips enticing billions of clicks and earning plenty of cash through advertising. The shocking part is that a lot of that money apparently found its way to the very studios you might expect to try and shut down any such unauthorized use of their intellectual property, at least according to information uncovered recently by Deadline.
That side hustle may now be over, with YouTube removing two of the biggest homes of these AI-laced fake trailers, Screen Culture and KH Studio, from its Partner Program. That means no more ad revenue for them or the studios reportedly getting a piece of the action.
Screen Culture has made many popular trailers full of AI-generated shots for upcoming films like The Fantastic Four: First Steps and Superman. KH Studio is more famous for its imaginary casting, like Leonardo DiCaprio in the next Squid Game or Henry Cavill as the next James Bond. You would be forgiven for assuming the plotlines, characters, and visuals on display were teasing details of the films, but they were produced entirely outside any real film development. ...
YouTube is somewhat stuck as fan-made trailers have long been a popular kind of content. Using AI, though, can make a fake trailer seem good enough to trick people, even if only by accident. And YouTube doesn't want to encourage the practice by monetizing it. ...
See the full story here: https://www.techradar.com/computing/artificial-intelligence/hollywood-studios-cant-make-money-from-ai-powered-fake-movie-trailers-on-youtube-anymore