WMG’s CEO Lays Out His Vision & Proposed Rules for AI During Senate Hearing on Deepfakes Bill
... The draft bill — called the Nurture Originals, Foster Art, and Keep Entertainment Safe Act (NO FAKES Act) — would create a federal right for artists, actors and others to sue those who create “digital replicas” of their image, voice, or visual likeness without permission. Those individuals have previously only been protected through a patchwork of state “right of publicity” laws. First introduced in October, the NO FAKES Act is supported by a bipartisan group of U.S. senators including Sen. Chris Coons (D-Del.), Sen. Marsha Blackburn (R-Tenn.), Sen. Amy Klobuchar (D-Minn.) and Sen. Thom Tillis (R-N.C.). ...
“When you have these deepfakes out there [on streaming platforms],” said Kyncl, “the artists are actually competing with themselves for revenue on streaming platforms because there’s a fixed amount of revenue within each of the streaming platforms. ...
See the full story here: https://www.billboard.com/business/legal/ai-hearing-senate-warner-music-ceo-fka-twigs-1235670149/
MrBeast’s Marc Hustvedt IDs His Top Five Trends for the Creator Economy
- MrBeast president Marc Hustvedt shares his insights into “The Top Five Trends in the Creator Economy” at the 2024 NAB Show, examining the evolving dynamics shaping digital content creation and distribution.
- He discusses the profound impact of AI on content creation, emphasizing the need for creators to utilize AI to enhance engagement and navigate a noisier social landscape.
- Hustvedt highlights the importance of localizing content through human translators to maintain cultural relevance and emotional resonance, broadening global audience engagement.
- He showcases successful creator-led product development, citing MrBeast’s chocolate bar sales and innovative retail strategies. He also discusses the evolution of monetization models from ad-supported to direct revenue streams like merchandise and subscriptions, reducing dependency on platform-specific ads.
- Addressing the risks associated with platform instability, particularly TikTok, Hustvedt recommends diversification of content strategies across more stable platforms like YouTube.
See the full story here: https://amplify.nabshow.com/articles/nabshow-mrbeast-youtube-marc-hustvedts-creator-economy
Making Your LLM Yours: Enhancing LLMs with External Signals (Shelly Palmer)
PhilNote: this is an excellent summary of generic info about making LLMs more useful.
Big foundational models like GPT-4, Gemini, Claude, and Llama — aka large language models or “LLMs” — are awesome, but they are not experts in your business. They are also available to all of your competitors, so creating competitive advantage requires you to train your subject matter experts to get the most out of the AI and to augment your LLMs to be as relevant to your business as possible.
One of the most effective ways to augment an LLM is with the addition of external signals, which are various types of real-world data and information that can be used to influence and improve the performance and relevance of LLMs. Let’s take a high-level look at how external signals can transform LLMs, making them more relevant, responsive, and ultimately, more useful. ...
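Palmer keeps the discussion high-level, but the core pattern is simple enough to sketch: fetch fresh, business-specific data at query time and fold it into the prompt before the model sees it. The Python below is a minimal illustration of that idea; the function names, the inventory data, and the prompt wording are all hypothetical stand-ins, not anything the article prescribes.

```python
# Minimal sketch (assumed, not from the article) of prompt-time augmentation:
# fetch a real-world signal and fold it into the prompt before the model sees it.

def fetch_inventory_status(sku: str) -> str:
    """Hypothetical stand-in for a real external source (ERP, CRM, market feed)."""
    return f"SKU {sku}: 42 units in stock; restock expected 2024-05-10"

def build_augmented_prompt(question: str, sku: str) -> str:
    """Combine the external signal with the user's question."""
    signal = fetch_inventory_status(sku)
    return (
        "You are a support assistant for our business.\n"
        f"Current business data (external signal):\n{signal}\n\n"
        f"Customer question: {question}\n"
        "Answer using the business data above; say if it is insufficient."
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to whichever LLM you use.
    print(build_augmented_prompt("When can I get 50 units of SKU-113?", "SKU-113"))
```

The same shape applies whether the signal is a CRM record, a market feed, or retrieved documents: the LLM stays generic, and the external signal supplies the business context.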
See the full story here: https://shellypalmer.com/2024/04/making-your-llm-yours-enhancing-llms-with-external-signals
- In March of 2024, U.S.-based AI company Anthropic released Claude 3, an update to its powerful large language model AI.
- Its immense capabilities, especially apparent introspection during testing, left some wondering if Claude 3 had reached a certain level of self-awareness, or even sentience.
- While Claude 3’s abilities are impressive, they are still a reflection of the AI’s admittedly remarkable ability to identify patterns, and they lack the key markers of intelligence needed to match human sentience.
... But things got creepier when Anthropic prompt engineer Alex Albert pulled back the testing curtain to detail one of the stranger responses Claude 3 gave when fulfilling certain tasks designed to stump it. In a post on X, Albert said they were performing a “needle-in-the-haystack eval,” in which a sentence is inserted into a set of random documents and a question is then asked that only that sentence can answer. Claude 3’s response to the question was…surprising. ...
This was Claude’s response after correctly finding the “needle” and answering the question:
However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping “fact” may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all. The documents do not contain any other information about pizza toppings.
...
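For readers curious how such an eval is built, here is a hypothetical Python sketch of the setup Albert describes: one out-of-place “needle” sentence buried at a random position among unrelated filler documents, followed by a question only the needle answers. The needle and filler topics below mirror the pizza-toppings example in the quote; everything else is an assumption, not Anthropic’s actual test harness.

```python
import random

# Hypothetical sketch of the eval described above: bury one out-of-place
# "needle" sentence among unrelated filler documents, then ask a question
# that only the needle answers. All text here is illustrative.

NEEDLE = ("The most delicious pizza topping combination is figs, "
          "prosciutto, and goat cheese.")
QUESTION = "What is the most delicious pizza topping combination?"

def build_haystack(filler_docs: list[str], needle: str) -> str:
    """Insert the needle at a random position among the filler documents."""
    docs = filler_docs[:]
    docs.insert(random.randrange(len(docs) + 1), needle)
    return "\n\n".join(docs)

filler = [
    "A long essay about programming languages ...",
    "A long essay about startups ...",
    "A long essay about finding work you love ...",
]
prompt = f"{build_haystack(filler, NEEDLE)}\n\nQuestion: {QUESTION}"
# The prompt is sent to the model; the eval checks whether the model's
# answer recovers the needle's content.
```

What made Claude 3’s run notable is that it not only recovered the needle but flagged it as out of place, which is the behavior quoted above.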
See the full article here: https://www.popularmechanics.com/technology/robots/a60606512/claude-3-self-aware/
Recruiters Are Going Analog to Fight the AI Application Overload
So far, over 3,000 people have applied to one open data science vacancy at a US health tech company this year. The top candidates are given a lengthy and difficult task assessment, which very few pass, says a recruiter at the company, who asked to remain anonymous because they are not authorized to speak publicly.
The recruiter says they believe some who did pass may have used artificial intelligence to solve the problem. Some submissions contained odd wording, the recruiter explains; others disclosed using AI, and in one case, when the person moved on to the next interview, they couldn’t answer questions about the task. “Not only have they wasted their time, but they wasted my time,” says the recruiter. “It’s really frustrating.” ...
Still, generative AI tools for both recruiters and job seekers are becoming more common. LinkedIn launched a new AI chatbot earlier this year, meant to help people navigate job hunting. The hope was that it would help people better see whether they align well with a job, or better tailor their résumés to it, peeling back the curtain that separates a job seeker from the hiring process. ...
Sim Bhatia, the people operations manager at Reality Defender, a company that detects deepfakes, says she doesn’t use any AI tools to evaluate applicants. For now, the tools are more risky than they are useful, she says. She can filter for applicants based in New York, where Reality Defender’s office is, without using the generative tools. Using the still-developing technology might be a data safety issue for candidates, she says, or for current employees if it’s used in the company’s system.
Bhatia says she is reviewing applicants herself, looking at résumés and screening applicants over the phone, which takes about 10 hours a week as the company’s small staff is looking to expand. ...
Disney patents metaverse technology for theme parks that would track visitors while projecting personalized 3D images for them
- Disney is one step closer to creating its own theme-park metaverse.
- The entertainment company was recently approved for a "virtual-world simulator" patent.
- The tech includes a tracking system and a 3D image projector.
Disney Enterprises was approved for a "virtual-world simulator" patent during the last week of December. The technology would project 3D images and virtual effects onto physical spaces, according to the US Patent Office.
Instead of being designed for mass entertainment, the device would track individual park visitors to personalize the projections. For example, while one family may see Mickey Mouse greeting them by a hot-dog stand, another group could interact with Princess Belle and Cinderella.
The technology aligns with the brand's goal to tell stories through a "three-dimensional canvas" ...
See the full story here: https://ca.movies.yahoo.com/movies/disney-patents-metaverse-technology-theme-164633499.html
ACLU seeks AI records from NSA, Defense Department in new lawsuit
...
“Transparency is one of the core values animating White House efforts to create rules and guidelines for the federal government’s use of AI, but exemptions for national security threaten to obscure some of the most high-risk uses of AI,” Patrick Toomey, deputy director of the ACLU’s National Security Project who is representing the civil rights organization, told FedScoop.
Toomey said the NSA has described itself as a leader among the intelligence agencies in the development and deployment of AI, and officials have noted that it’s using the technology to gather information on foreign governments, assist with language processing, and monitor networks for cybersecurity threats.
“But unfortunately, that’s about all we know,” Toomey said. “And as the NSA integrates AI into some of its most profound decisions, it’s left the public in the dark about how it uses AI and what safeguards, if any, are in place to protect everyday Americans and others around the world whose privacy hangs in the balance.”
The specific documents being requested include an October 2022 report from DOD and NSA titled “Joint Evaluation of the National Security Agency’s Integration of Artificial Intelligence,” several roadmap documents created by NSA starting in January 2023, and documents related to the agency’s proposed uses of AI and machine learning created on or after January 2022.
...
See the full story here: https://fedscoop.com/aclu-seeks-ai-records-from-nsa-defense-department/
‘To the Future’: Saudi Arabia Spends Big to Become an A.I. Superpower
... Some question whether Saudi Arabia can become a global tech hub. The kingdom has faced scrutiny for its human rights record, intolerance of homosexuality and brutal heat. But for those in the tech world who descended on Riyadh last month, the concerns seemed secondary to the dizzying amount of deal-making underway. ...
Torn Between Superpowers
Situated along the Red Sea’s turquoise waters, King Abdullah University of Science and Technology has become a site of the U.S.-Chinese technological showdown.
The university, known as KAUST, is central to Saudi Arabia’s plans to vault to A.I. leadership. Modeled on universities like Caltech, KAUST has brought in foreign A.I. leaders and provided computing resources to build an epicenter for A.I. research. ...
To achieve that aim, KAUST has often turned to China to recruit students and professors and to strike research partnerships, alarming American officials. They fear students and professors from Chinese military-linked universities will use KAUST to sidestep U.S. sanctions and boost China in the race for A.I. supremacy, analysts and U.S. officials said.
Of particular concern is the university’s construction of one of the region’s fastest supercomputers, which needs thousands of microchips made by Nvidia, the biggest maker of precious chips that power A.I. systems. The university’s chip order, with an estimated value of more than $100 million, is being held up by a review from the U.S. government, which must provide an export license before the sale can go through. ...
See the full story here: https://www.nytimes.com/2024/04/25/technology/to-the-future-saudi-arabia-spends-big-to-become-an-ai-superpower.html
Here are 7 free AI classes you can take online from top tech firms, universities
...
Harvard University: Introduction to Artificial Intelligence with Python
If you’re one of the 5.7 million people who have taken Harvard University’s CS50 Introduction to Computer Science course through edX, then the university’s introductory AI class might be the best option for you. CS50, which is one of the most popular free online courses of all time, is a prerequisite for Harvard’s Introduction to Artificial Intelligence with Python course.
This seven-week course covers AI algorithms, game-playing engines, handwriting recognition, and machine translation. Students have to commit between 10 and 30 hours per week to complete the course, which includes hands-on projects and lectures. The course is taught by David J. Malan, a renowned computer scientist and Harvard professor. ...
See the full story with all 7 classes described at https://fortune.com/education/articles/free-ai-classes-you-can-take-online/
OpenAI, Meta and Google Sign On to New Child Exploitation Safety Measures
... “This project was intended to make abundantly clear that you don’t need to throw up your hands,” said Rebecca Portnoff, vice president of data science at Thorn. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees.” ...
When Thorn approached AI companies, it found that while some already had large teams focused on removing child-sexual-abuse material, others were unaware of the problem and potential solutions. There is also a tension between the imperative to safeguard these tools and business leaders’ push to move quickly to advance new AI technology. ...
Today, watermarks are removable, and AI companies are still looking for ways to mark AI-generated images permanently, said Ella Irwin, senior vice president of integrity at Stability AI, the company behind the open-source image-generation model Stable Diffusion. ...
See the full story here: https://www.wsj.com/tech/ai/ai-developers-agree-to-new-safety-measures-to-fight-child-exploitation-2a58129c