Fighting Fiction With Fact In An AI-Powered World
...
A Perfect Storm of Trust and Technology
At their core, deepfakes are hyper-realistic audio or video forgeries created using generative adversarial networks—machine learning models where two AIs compete to produce increasingly convincing synthetic content. The result is content that can fool not just the eye or the ear, but even the instincts of experienced professionals. ...
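For readers who want to see what "two AIs compete" means mechanically, the sketch below is a toy GAN training loop in PyTorch. Everything in it is an illustrative assumption (tiny networks, a one-dimensional "real" distribution); actual deepfake generators are vastly larger and operate on images, audio, or video, and nothing here reflects any specific product mentioned in this story.

```python
# Toy illustration of the adversarial setup described above: a generator learns to
# mimic a target distribution while a discriminator learns to tell real from fake.
# All sizes and data are assumptions for demonstration, not a real deepfake pipeline.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" samples drawn from a target distribution
    fake = generator(torch.randn(64, 8))     # synthetic samples generated from random noise

    # Discriminator step: push scores for real samples toward 1 and fake samples toward 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: adjust the generator so the discriminator scores its fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the "real" mean of 3.0.
with torch.no_grad():
    print(f"generated mean: {generator(torch.randn(1000, 8)).mean().item():.2f} (target 3.0)")
```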
The cybersecurity industry is scrambling to respond, but many of today’s defenses are fundamentally reactive. ...
And even when deepfakes are flagged, there's often no system in place to stop them from being acted upon. ...
Shifting from Reaction to Prevention
The path forward requires a fundamental shift: instead of simply identifying deepfakes after the fact, organizations need to prevent them from reaching users in the first place. That means integrating proactive safeguards into the communication channels themselves—blocking suspicious calls, verifying identities before interaction, and securing messaging platforms. ...
Innovating Real-Time Defense
Polyguard is embracing this proactive approach. The company launched today, billing itself as the first platform designed to block deepfake and AI-powered fraud in real time across audio, video, and messaging.
Unlike traditional detection tools, Polyguard is designed to intercept threats before they’re delivered, using identity-verified encrypted communication channels and real-time inbound/outbound number blocking. Polyguard claims it also protects against caller ID spoofing and integrates with platforms like Zoom and call center software to secure common vectors for attack. ...
The Arms Race Between Authenticity and AI
This is not just a technological battle. As synthetic media becomes more sophisticated, the very concept of truth is at stake. Institutions, regulators, and technologists must work together to define new norms for digital trust. ...
See the full story here: https://www.forbes.com/sites/tonybradley/2025/03/27/fighting-fiction-with-fact-in-an-ai-powered-world/
OpenAI’s viral Studio Ghibli moment highlights AI copyright concerns
...
According to Evan Brown, an intellectual property lawyer at the law firm Neal & McDevitt, products like GPT-4o’s native image generator operate in a legal gray area today. Style is not explicitly protected by copyright, according to Brown, meaning OpenAI does not appear to be breaking the law simply by generating images that look like Studio Ghibli movies.
However, Brown says it’s plausible that OpenAI achieved this likeness by training its model on millions of frames from Ghibli’s films. Even if that was the case, several courts are still deciding whether training AI models on copyrighted works falls under fair use protections.
“I think this raises the same question that we’ve been asking ourselves for a couple years now,” said Brown in an interview. “What are the copyright infringement implications of going out, crawling the web, and copying into these databases?”...
See the full story here: https://techcrunch.com/2025/03/26/openais-viral-studio-ghibli-moment-highlights-ai-copyright-concerns
Over 4 million Gen Zers are jobless—and experts blame colleges for ‘worthless degrees’ and a system of broken promises for the rising number of NEETs
... Too much time has been focused on promoting a four-year degree as the only reliable route, despite the payoff being more uneven and uncertain, says Bulanda. Other pathways, like the skilled trades, should be a larger share of the conversation. ...
Plus, with others struggling to land a job in a market changing by the minute thanks to artificial intelligence, it’s no wonder Gen Z finds doomscrolling at home more enjoyable than navigating an economy completely different than what their teachers promised them....
Efforts should include ramping up accessible entry points like apprenticeships and internships, especially for disengaged young people, as well as building better bridges between industries and education systems, Maleh says. ...
See the full story here: https://fortune.com/2025/03/25/gen-z-neet-not-in-education-employment-training-higher-ed-worthless-degrees-college/?utm_source=substack&utm_medium=email
Blockchain Integration Guide for Legacy Systems
Key Takeaways:
- Legacy system challenges: Old hardware, rigid databases, and outdated software limit scalability and compatibility.
- Blockchain benefits: Enhanced data protection (60% of adopters report stronger security), operational visibility (77% in supply chain), and potential cost savings (e.g., $20 billion annually in banking).
- Integration steps:
  - Analyze your system: Evaluate data structure, APIs, and performance.
  - Build connection points: Use APIs and middleware for smooth data transfer.
  - Secure the system: Implement cryptographic hashing, access controls, and audit trails (see the sketch after this entry).
  - Test and monitor: Conduct unit, integration, and security tests; track performance metrics.

See the full story here: https://www.trailyn.com/blockchain-integration-guide-for-legacy-systems-2/?ref=daily-newsletter
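As a hedged illustration of the "Secure the system" step, here is a minimal Python sketch of a hash-chained audit trail: each entry records the hash of the previous entry, so later tampering breaks the chain and is detectable on verification. The class, field names, and example events are assumptions for illustration only, not anything taken from the Trailyn guide or a production integration.

```python
# Illustrative hash-chained audit trail (standard library only). All names and
# example records below are assumptions, not taken from the guide above.
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Deterministic SHA-256 over an entry's contents (excluding its own hash)."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class AuditTrail:
    def __init__(self):
        self.entries = []

    def append(self, event: str, payload: dict) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {"ts": time.time(), "event": event, "payload": payload, "prev": prev}
        entry["hash"] = entry_hash(entry)
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash and link; altering any earlier entry fails here."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or entry_hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append("legacy_export", {"table": "orders", "rows": 1200})
trail.append("chain_anchor", {"note": "anchored batch hash to ledger"})
print(trail.verify())  # True unless an entry was altered after the fact
```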
‘Super-Turing AI’ uses less energy by mimicking the human brain
PhilNote: This is a general-audience article, but the research it reports on could be of interest.
... Dr. Suin Yi, assistant professor of electrical and computer engineering at Texas A&M's College of Engineering, is on a team of researchers that developed "Super-Turing AI," which operates more like the human brain. This new AI integrates certain processes instead of separating them and then migrating huge amounts of data like current systems do. ...
In the brain, the functions of learning and memory are not separated; they are integrated. ...
"Traditional AI models rely heavily on backpropagation—a method used to adjust neural networks during training," Yi said. "While effective, backpropagation is not biologically plausible and is computationally intensive.
"What we did in that paper is troubleshoot the biological implausibility present in prevailing machine learning algorithms," he said. "Our team explores mechanisms like Hebbian learning and spike-timing-dependent plasticity—processes that help neurons strengthen connections in a way that mimics how real brains learn."
Hebbian learning principles are often summarized as "cells that fire together, wire together." This approach aligns more closely with how neurons in the brain strengthen their connections based on activity patterns. By integrating such biologically inspired mechanisms, the team aims to develop AI systems that require less computational power without compromising performance. ...
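To make the contrast with backpropagation concrete, the sketch below shows a minimal, rate-based Hebbian update in NumPy: each weight grows in proportion to the joint activity of the two neurons it connects, using only local information and no global error signal. The network size, learning rate, and decay term are toy assumptions; this illustrates the general principle, not the Super-Turing AI hardware or the exact rules in Dr. Yi's paper.

```python
# Toy Hebbian learning: "cells that fire together, wire together."
# Illustrative only; sizes and constants are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 5, 3
weights = rng.normal(scale=0.1, size=(n_post, n_pre))
learning_rate = 0.01

for _ in range(100):
    pre = rng.random(n_pre)        # presynaptic firing rates in [0, 1)
    post = weights @ pre           # postsynaptic activity from current weights
    # Local update: strengthen each weight in proportion to joint pre/post activity.
    # Unlike backpropagation, no error signal is propagated backward through the network.
    weights += learning_rate * np.outer(post, pre)
    weights *= 0.999               # mild decay keeps weights from growing without bound

print(np.round(weights, 3))
```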
Read the full story here: https://techxplore.com/news/2025-03-super-turing-ai-energy-mimicking.html
OpenAI Is Ready for Hollywood to Accept Its Vision
... Among the few considerations holding back further deployment of AI is the specter of a court ruling that the use of copyrighted materials to train AI systems constitutes infringement. Another factor is that AI-generated works aren’t eligible for copyright protection, limiting exploitation since they’d enter the public domain. ...
On March 19, the ChatGPT maker screened 11 short films made with Sora by independent filmmakers at Brain Dead Studios, a hip movie theater in West Hollywood on Fairfax Avenue, in a bid to showcase its technology. Those movies displayed the limitations of the tools while hinting at their potential.
None of the titles incorporated extensive dialogue between characters. Narratives were sparse to nonexistent, with more than one person commenting after the screenings that some of the films were closer to commercials than short films. ...
Some VFX artists are already leaning into AI, working around certain legal constraints by training open-source systems on their own works. ...
Alton Glass, a Directors Guild of America member who attended the screening, said “workflows are going to shift” with the advent of AI. He stressed, “opportunity will come from that.” ...
A study commissioned last year by the Concept Art Association and The Animation Guild surveying 300 leaders across the entertainment industry found that three-fourths of respondents indicated that AI tools supported the elimination, reduction or consolidation of jobs at their companies. Over the next three years, it estimated that nearly 204,000 positions will be adversely affected. At the forefront of the displacement: sound engineers, voice actors, concept artists and employees in entry-level positions, according to the study. Visual effects and other postproduction work stands particularly vulnerable. ...
See the full story here: https://www.hollywoodreporter.com/business/business-news/openai-hollywood-sora-1236170402/
Why the world is looking to ditch US AI models
... the Trump administration's shocking, rapid gutting of the US government (and its push into what some prominent political scientists call “competitive authoritarianism”) also affects the operations and policies of American tech companies—many of which, of course, have users far beyond US borders. People at RightsCon said they were already seeing changes in these companies’ willingness to engage with and invest in communities that have smaller user bases—especially non-English-speaking ones.
As a result, some policymakers and business leaders—in Europe, in particular—are reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI. ...
Social media content moderation systems—which already use automation and are also experimenting with deploying large language models to flag problematic posts—are failing to detect gender-based violence in places as varied as India, South Africa, and Brazil. If platforms begin to rely even more on LLMs for content moderation, this problem will likely get worse, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. “The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content,” she tells me. “It’s so circular, and the errors just keep repeating and amplifying.” ...
Part of the problem is that the systems are trained primarily on data from the English-speaking world (and American English at that), and as a result, they perform less well with local languages and context. ...
... “smaller language models might be worthy competitors of multilingual language models in specific, low-resource languages,” says Aliya Bhatia, a visiting fellow at the Center for Democracy & Technology who researches automated content moderation. ...
“Fundamentally, the training data a model trains on is akin to the worldview it develops.” ...
See the full story here: https://www.technologyreview.com/2025/03/25/1113696/why-the-world-is-looking-to-ditch-us-ai-models/
The Reality of the US-China AI Arms Race
Alvin Graylin gives an excellent, entertaining, and data-rich presentation on the current race to develop AI, why the idea that the US must "win" it doesn't map to what is actually happening, why the US sanctions to slow China's AI development are doing exactly the opposite, why cooperation with China and others will give the world better AI, and why the greatest threats from AI are rogue individuals and groups rather than countries.
(HBS) The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise
... Even more interesting: AI broke down professional silos. R&D people with AI produced more commercial work and commercial people with AI had more technical solutions. ...
See the paper here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5188231
See the substack description here: https://www.oneusefulthing.org/p/the-cybernetic-teammate
PhilNote: The substack comments are equally interesting
Netflix’s Reed Hastings Donates $50M to Launch AI and Humanity Initiative at Bowdoin College
... Initial priorities include hiring 10 new faculty members in a range of disciplines; supporting current faculty who want to incorporate and interrogate AI in their teaching, research and other work; and conversations about the uses of AI and the changes and challenges it will bring, including workshops, symposia and support for student research. ...
“Bowdoin is ideally positioned to meet the challenges and opportunities of AI,” she said. “Our deep commitment to the liberal arts and the common good position us to think together about what we are going to value in human cognition, and what we will want our AI systems to do — or not do — going forward in service to humanity.” ...
See the full story here: https://www.hollywoodreporter.com/business/business-news/netflix-reed-hastings-donation-ai-humanity-bowdoin-college-1236171130/