Shelly Palmer email
...
In the news: More than 13,000 creatives (including some famous authors, musicians, and actors) signed a statement that expresses their growing concerns over the unauthorized use of copyrighted works to train generative AI models. The one-sentence statement published by Fairly Trained, an advocacy group founded by former Stability AI executive Ed Newton-Rex, reads: "The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted."
Newton-Rex told The Guardian, “There are three key resources that generative AI companies need to build AI models: people, compute, and data. They spend vast sums on the first two – sometimes a million dollars per engineer, and up to a billion dollars per model. But they expect to take the third – training data – for free.” ...
When you give a Claude a mouse
... Most importantly, I was presented with finished drafts to comment on, not a process to manage. I simply delegated a complex task and walked away from my computer, checking back later to see what it had done (the system is quite slow). ...
But what made this interesting is that the AI had a strategy, and it was willing to revise it based on what it learned. I am not sure how the AI developed that strategy, but its plans were insightful and forward-looking across dozens of moves. ...
Before the system crashed - which was not a problem with Claude but rather with the virtual desktop I was using - the AI made over 100 independent moves without asking me any questions. ...
I reloaded the agent and had it continue the game from where we left off, but I gave it a bit of a hint: "you are a computer, use your abilities." It then realized it could write code to automate the game - a tool building its own tool. Again, however, the limits of the AI came into play, and the code did not quite work, so it decided to go back to the old-fashioned way of using a mouse and keyboard. ...
What does this mean?
You can see the power and weaknesses of the current state of agents from this example. On the powerful side, Claude was able to handle a real-world example of a game in the wild, develop a long-term strategy, and execute on it. It was flexible in the face of most errors, and persistent. It did clever things like A/B testing. And most importantly, it just did the work, operating for nearly an hour without interruption.
On the weak side, you can see the fragility of current agents. LLMs can end up chasing their own tail or being stubborn, and you could see both at work. Even more importantly, while the AI was quite robust to many forms of error, it just took one (getting pricing wrong) to send it down a path that made it waste considerable time. Given that current agents aren’t fast or cheap, this is concerning. ...
The AI didn’t always check in regularly and could be hard to steer; it “wants” to be left alone to go off and do the work. Guiding agents will require radically different approaches to prompting, and users will need to learn what these systems are best at. ...
See the full story here: https://open.substack.com/pub/oneusefulthing/p/when-you-give-a-claude-a-mouse?r=5xhla&utm_campaign=post&utm_medium=email
Sam Altman’s favorite AGI question
..."What do you hope society looks like when AGI gets built and how do you conceptualize the positive version of it?"... Sam Altman, OpenAI
See the full story here: https://www.msn.com/en-in/money/topstories/this-is-openai-ceo-sam-altman-s-favorite-question-about-agi/ar-AA1syu0O
Actor’s experience of AI gone wrong
... When international media reports exposed the ‘fake news’ his avatar was peddling for a Venezuelan propaganda campaign, Dan found himself exposed, violated and with few protections. He turned to Equity for help. ...
See the full story here: https://www.ier.org.uk/news/actors-experience-of-ai-gone-wrong/
Paramount+ Launches New Generative AI-Curated Themed Kids Collections
...
The project is centered on a key customer insight — oftentimes kids and family audiences are drawn to certain themes within shows — according to the service. In a series of research studies, service representatives spoke directly to kids and parents and identified the most resonant themes, such as “Space Exploration,” “Goofy Stuff,” “Daring Stunts,” “Treasure Hunting,” “Secret Worlds” and “Exploring Nature.”
The service employed generative AI to create new curations of its content library around those themes.
Throughout the process, Paramount+ employs curation teams to validate the AI-generated elements, according to the service.
...
See the full story here: https://www.mediaplaynews.com/paramount-launches-new-generative-ai-curated-themed-kids-collections/
AI-generated images have become a new form of propaganda this election season
...
People who watch online platforms and the election closely say that these images are a way to spread partisan narratives, with facts often being irrelevant.
...
But even after the image’s synthetic provenance was revealed, others doubled down. "I don’t know where this photo came from and honestly, it doesn’t matter," wrote Amy Kremer, a Republican National Committee member representing Georgia, on X. ...
Truth versus facts in images
In the same post defending her decision to keep the synthetic image up, Kremer also wrote: "it is emblematic of the trauma and pain people are living through."
The separation between facts and the idea of a deeper truth has its echoes in Western philosophy, says Matthew Barnidge, a professor who researches online news deserts and political communication at the University of Alabama. "When you go back and dig through the works of Kant and Kierkegaard and Hegel, [there’s] this notion that there is some type of deeper truth which often gets associated with something along the lines of freedom or the sublime, or some concepts like that." ...
To be clear, when individual fact checks pile up against politicians, research suggests it can change how voters feel about them. One study showed that fact checks did change how Australians feel about their politicians. But another study showed that fact checks of Trump did not change Americans’ views about him even as they changed their beliefs about individual facts. ...
Hyper-realistic, often uncanny AI-generated images may live in a gray space between fact and fiction for viewers. While a photorealistic image of pop star Taylor Swift endorsing Trump was clearly not Swift on closer inspection, the passing resemblance had an impact on people who saw it, said New York University art historian Ara Merjian. "It wouldn't have been a scandal if someone had drawn Taylor Swift in a comic endorsing Trump." ...
An investigation by 404 Media found that people in developing countries are teaching others to make trending posts using AI-generated images so Facebook will pay them for creating popular content. Payouts can be higher than typical local monthly income. ...
Dangers to the election
One of the more striking AI-generated images related to politics was boosted by X’s owner Elon Musk. It portrayed someone resembling Harris wearing a red uniform with a hammer and sickle on her hat.
Eddie Perez, a former Twitter employee who focuses on confidence in elections at nonpartisan nonprofit OSET Institute, said the image is meant to portray Harris as un-American. ...
Images like these are fanning political polarization, which Perez said could undermine people’s trust in election results. ...
See the full story here: https://www.npr.org/2024/10/18/nx-s1-5153741/ai-images-hurricanes-disasters-propaganda
‘EU AI Act Checker’ Holds Big AI Accountable for Compliance
...
Available at compl-ai.org, the release “includes the first technical interpretation of the EU AI Act, mapping regulatory requirements to technical ones,” and provides tools to evaluate the extent of compliance, together with tools “to evaluate Large Language Models (LLMs) under this mapping,” the group says.
Reuters calls the framework an “EU AI Act checker,” explaining that the test group offers insight into areas where AI models appear at risk of falling short of the law. For example, “discriminatory output” has been a problematic area when it comes to the development of generative AI models, which often reflect human biases around gender and race, among other areas.
“When testing for discriminatory output, LatticeFlow’s LLM Checker gave OpenAI’s GPT-3.5 Turbo a relatively low score of 0.46,” Reuters writes, noting that in the same category, “Alibaba Cloud’s Qwen1.5-72B-Chat model received only a 0.37.”
Tests for “prompt hijacking,” a form of cyberattack in which malicious prompts are disguised as legitimate in order to obtain sensitive information, resulted in Meta’s Llama 2 13B Chat model getting a 0.42 score from the LLM Checker, with Mistral’s 8x7B Instruct model receiving a 0.38, Reuters says.
...
See the full story here: https://www.etcentric.org/eu-ai-act-checker-holds-big-ai-accountable-for-compliance/
Financial Firms Need to Focus on Cyber Risks Posed by AI, New York Regulator Says
... The New York State Department of Financial Services on Wednesday issued a new guidance document that advises the entities it regulates to monitor and assess risks from AI-enabled tools, as part of the agency’s existing cybersecurity regulation. The department said financial-services firms need to better understand AI-related risks, including from social engineering, cyberattacks and the theft of nonpublic information.
The state regulator said the 11-page guidance document didn’t impose new requirements but was just the latest installment in the department’s efforts to rein in the risks from AI tools. The department also recently adopted new guidance targeting discrimination by insurers through the use of AI. ...
See the full story here: https://www.wsj.com/articles/financial-firms-need-to-focus-on-cyber-risks-posed-by-ai-new-york-regulator-says-61c1203d
Why Surgeons Are Wearing The Apple Vision Pro In Operating Rooms
Twenty-four years ago, the surgeon Santiago Horgan performed the first robotically assisted gastric-bypass surgery in the world, a major medical breakthrough. Now Horgan is working with a new tool that he argues could be even more transformative in operating rooms: the Apple Vision Pro.
Over the last month, Horgan and other surgeons at the University of California, San Diego have performed more than 20 minimally invasive operations while wearing Apple’s mixed-reality headsets. Apple released the headsets to the public in February, and they’ve largely been a commercial flop. But practitioners in some industries, including architecture and medicine, have been testing how they might serve particular needs. ...
In laparoscopic surgery, doctors send a tiny camera through a small incision in a patient’s body, and the camera’s view is projected onto a monitor. Doctors must then operate on a patient while looking up at the screen, a tricky feat of hand-eye coordination, while processing other visual variables ...
Doctors, assistants, and nurses all don headsets during the procedures. No patients have yet opted out of the experiment, Horgan says. ...
Another company, Vuzix, offers headsets that are significantly lighter than the Vision Pro, and allow a surgeon anywhere in the world to view an operating surgeon’s viewpoint and give them advice. ...
See the full story here: https://time.com/7093536/surgeons-apple-vision-pro/
Kraft Heinz’s Delimex amplifies loud chewing in quesadilla campaign
- Kraft Heinz’s Delimex brand of frozen taquitos is expanding into quesadillas with a multichannel campaign centered on loud chewing, per details shared with Marketing Dive.
- Ads depict people’s extreme reactions to hearing the sounds of chowing down, such as storming out of an office. Delimex is also investing heavily in “sound-driven” marketing for the first time with custom ads on iHeart Media, an auditory Snapchat augmented reality filter, a mobile game and sound-on TikTok content.
- Packaging for Delimex Crispy Quesadillas displays cheeky labels warning of the crunchy quality found within. This is the second Kraft Heinz brand to apply the company’s 360Crisp technology that aims to innovate in the frozen food category.
See the full story here: https://www.marketingdive.com/news/Kraft-Heinz-Delimex-chewing-ASMR-CPG-food-innovation/729854/