AI music startup Suno admits to using copyrighted music, but says it’s ‘fair use’
AI music startup Suno has admitted that its AI model is trained on copyrighted music, but insists it's legally protected by the fair use doctrine. ...
"We train our models on medium- and high-quality music we can find on the open internet," said Suno CEO Mikey Schulman in a blog postaccompanying the legal filing. "Much of the open internet indeed contains copyrighted materials, and some of it is owned by major record labels."
In response, the RIAA posted a statement on X, saying "[Suno's] industrial scale infringement does not qualify as 'fair use.' There's nothing fair about stealing an artist's life's work, extracting its core value, and repackaging it to compete directly with the originals." ...
See the full story here: https://mashable.com/article/ai-music-startup-suno-admits-using-copyrighted-music-says-its-fair-use
European Artificial Intelligence Act comes into force
...
The AI Act introduces a forward-looking definition of AI, based on a product safety and risk-based approach in the EU:
- Minimal risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act due to their minimal risk to citizens' rights and safety. Companies can voluntarily adopt additional codes of conduct.
- Specific transparency risk: AI systems like chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design their systems so that synthetic audio, video, text and image content is marked in a machine-readable format and is detectable as artificially generated or manipulated (a rough sketch of such marking follows this list).
- High risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Such high-risk AI systems include for example AI systems used for recruitment, or to assess whether somebody is entitled to get a loan, or to run autonomous robots.
- Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will, such as toys using voice assistance encouraging dangerous behaviour of minors, systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used at the workplace and some systems for categorising people or real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
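How machine-readable marking of synthetic content might look in practice is left to providers and standards bodies. As a rough illustration only (the manifest fields, file layout, and function below are assumptions for this sketch, not anything prescribed by the AI Act or an existing standard), a provider could attach a provenance sidecar to each generated asset:

```python
# Minimal sketch: attach a machine-readable "AI-generated" provenance manifest
# to a generated media file. The manifest fields are illustrative assumptions,
# not the AI Act's prescribed format or an official standard such as C2PA.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def write_provenance_manifest(asset_path: str, generator: str, model_version: str) -> Path:
    """Write a JSON sidecar declaring that the asset is AI-generated."""
    asset = Path(asset_path)
    digest = hashlib.sha256(asset.read_bytes()).hexdigest()  # bind the manifest to the exact bytes
    manifest = {
        "asset": asset.name,
        "sha256": digest,
        "ai_generated": True,        # the core disclosure
        "generator": generator,      # e.g. the provider's system name
        "model_version": model_version,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = asset.parent / (asset.name + ".provenance.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar

# Example call (hypothetical file name):
# write_provenance_manifest("clip.mp4", generator="ExampleGen", model_version="1.0")
```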
...
See the full press release here: https://ec.europa.eu/commission/presscorner/detail/en/ip_24_4123
Entertainment Industry Backs Bill to Outlaw AI Deepfakes
... A bipartisan group of senators, led by Sen. Chris Coons of Delaware, on Wednesday introduced a revised version of the No Fakes Act, which would make it illegal to create an AI replica of someone without their consent.
The bill has the support of SAG-AFTRA, Disney, the Motion Picture Association — which represents six major studios — as well as the Recording Industry Association of America, the Recording Academy, and the major music labels and talent agencies. ...
Even some in the tech industry have accepted the idea of outlawing unauthorized likenesses, which they see as an abuse that gives AI a bad name. OpenAI and IBM each endorsed the revised legislation. ...
See the full story here: https://variety.com/2024/politics/news/ai-bill-outlaw-no-fakes-sag-aftra-1236091652/
Who is more polarized about AI—the tech community or the general public?
Researchers from the University of Rochester led by Jiebo Luo, a professor of computer science and the Albert Arendt Hopeman Professor of Engineering, used ChatGPT and natural language processing techniques to analyze the themes and sentiments of 33,912 comments in 388 unique subreddits in the roughly six months following the generative AI tool’s launch in November 2022. ...
“The tech community’s opinions were either strongly positive or strongly negative, more so than the non-tech community,” says Luo. “I think the polarization is due to the commenters’ personal knowledge of the issues. You see that play out among some of the tech celebrities as well, with people like Geoffrey Hinton, one of the pioneers of deep learning, being very pessimistic, and others like Sam Altman [the CEO of OpenAI] being far more optimistic.” ...
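As a rough illustration of this kind of analysis (a minimal sketch of the general approach, not the authors' actual pipeline; the sample comments, the tech/non-tech grouping, and the use of NLTK's off-the-shelf VADER sentiment model are all assumptions), one could score comments and compare how widely opinions spread in each community:

```python
# Minimal sketch: score Reddit-style comments with an off-the-shelf sentiment
# model and compare how spread out (polarized) the scores are per community.
# Requires the third-party nltk package; comments below are made-up placeholders.
import statistics
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
analyzer = SentimentIntensityAnalyzer()

comments = {
    "tech": [
        "ChatGPT will change programming forever, this is incredible.",
        "These models are dangerous and wildly overhyped.",
    ],
    "non_tech": [
        "Tried it for a recipe, it was fine I guess.",
        "Kind of fun to play with, nothing special.",
    ],
}

for group, texts in comments.items():
    scores = [analyzer.polarity_scores(t)["compound"] for t in texts]  # each in [-1, 1]
    spread = statistics.pstdev(scores)  # wider spread ~ more polarized opinions
    print(f"{group}: mean={statistics.mean(scores):+.2f}, spread={spread:.2f}")
```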
See the findings here: https://www.sciencedirect.com/science/article/abs/pii/S0736585324000625?via%3Dihub
See the full story here: https://www.rochester.edu/newscenter/artificial-intelligence-ai-reddit-technology-614602/
Istanbul Blockchain Week 2024 returns showcasing Turkey as the rising star in Web3 adoption
... According to Binance Research, cryptocurrency is rapidly establishing itself as a key alternative to traditional finance in Turkey, with 40% of the population already investing in it. ...
See the full story here: https://cointelegraph.com/press-releases/istanbul-blockchain-week-2024-returns-showcasing-turkey-as-the-rising-star-in-web3-adoption
One of America’s Hottest Entertainment Apps Is Chinese-Owned
It is a chatbot offering AI-generated conversations with Donald Trump, Taylor Swift or a customized romantic partner. It is one of America’s more popular entertainment apps. And unnoticed by many users, it is Chinese-owned. ...
Through June, Talkie ranks No. 5 among the most-downloaded free entertainment apps in the U.S., according to Sensor Tower, a market researcher. That ranking puts it behind the likes of Warner Bros. Discovery’s “Max,” Netflix and Tubi. ...
For China’s rising AI stars such as MiniMax, going abroad establishes a much-needed commercial and development pipeline at a time when China’s economy has softened, access to high-end chips is blocked and regulations make innovation difficult. ...
Users can create their own virtual characters on the app, customizing their look, life story and even the sound of their voices. “Bring your wildest imagination to life,” Talkie promises users. ...
Conversation can unfold via text message or phone call—though not video—with the AI generating potential user responses. More interactions can reap rewards, such as a digital trading card of a user’s Talkie, which can be sold to others for the app’s in-house “gems” currency. ...
The Justice Department said Friday that TikTok collected data about its users’ views on sensitive topics and censored content at the direction of its China-based parent company, making its most forceful case to date that the video-sharing app poses a national-security threat. TikTok has said it wouldn’t comply with any such requests from Beijing. ...
Talkie’s parent MiniMax counts Alibaba and Tencent among its investors. It was valued at more than $2.5 billion in the latest round of investment in March, ...
Talkie’s equivalent in China got pulled from major app stores early last year for sexually explicit content and politically sensitive material. When it relaunched in September with the new name of “Xingye,” or “star field” in Mandarin, some users said they could no longer send a text containing the word “country” or “China.” The AI lovers once would be receptive to users’ offers for a kiss. But no more. ...
More than half of Talkie’s 11 million monthly active users are in the U.S., with popularity also strong in the Philippines, the U.K. and Canada. That puts Talkie within striking distance of the leader in the AI chatbot-companion category, Character.AI, run by an Andreessen Horowitz-backed firm in Silicon Valley, which has roughly 17 million monthly users, according to Sensor Tower. ...
See the full story here: https://www.wsj.com/tech/ai/one-of-americas-hottest-entertainment-apps-is-chinese-owned-04257355
AI at the 2024 Paris Games: Transforming the Olympics Broadcast
... “In 98 days, OBS will produce more than 11,000 hours of content,” OBS head Yiannis Exarchos said in April during a presentation of the Olympic AI Agenda, underscoring the immense challenge the Games presented. “That’s more than 450 days of content, which will translate into more than half a million hours of television and digital coverage across the globe. This is how much our media partners will broadcast. Considering that approximately half of the world’s population will watch these Games, the scale and complexity are unlike anything else in broadcasting.” ...
One key innovation OBS is introducing for Paris 2024 is the Automatic Highlights Generation system. Powered by Intel’s computer vision AI platform Geti, the highlight generation system leverages machine learning models trained on vast datasets of previous Olympic footage. These models can recognize patterns and significant events, such as record-breaking performances or dramatic finishes, ensuring that fans receive the most exciting content without delay. The AI-driven system can create tailored highlights across multiple disciplines and distribute them to fans instantly, significantly improving production and editing efficiency, and enabling broadcasters to deliver more customized digital content faster than ever before. ...
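As a rough illustration of the general idea (a minimal sketch under assumed inputs, not OBS's system or Intel Geti's API; the segment length, the random scoring stand-in, and the function names are invented for illustration), automatic highlight generation can be thought of as scoring fixed-length video segments with an event model and keeping the top clips in chronological order:

```python
# Minimal sketch of automatic highlight generation: score fixed-length segments
# and keep the top-scoring ones. The scoring function is a random stand-in for
# a trained event classifier (e.g. detecting finishes or record-breaking moments).
import random
from dataclasses import dataclass

@dataclass
class Segment:
    start_s: float      # segment start time in seconds
    end_s: float
    score: float = 0.0  # model's "highlight-worthiness" score

def score_segment(segment: Segment) -> float:
    """Stand-in for a trained classifier; returns a score in [0, 1)."""
    return random.random()

def generate_highlights(duration_s: float, segment_len_s: float = 10.0, top_k: int = 5):
    segments = [
        Segment(start, min(start + segment_len_s, duration_s))
        for start in range(0, int(duration_s), int(segment_len_s))
    ]
    for seg in segments:
        seg.score = score_segment(seg)
    # Keep the highest-scoring clips, then re-sort so the reel plays chronologically.
    best = sorted(segments, key=lambda s: s.score, reverse=True)[:top_k]
    return sorted(best, key=lambda s: s.start_s)

if __name__ == "__main__":
    for clip in generate_highlights(duration_s=3600):
        print(f"{clip.start_s:>6.0f}s - {clip.end_s:>6.0f}s  score={clip.score:.2f}")
```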
Now, beyond providing production efficiencies, AI is also transforming the storytelling aspect of Olympic broadcasts. ...
For example, AI can analyze an athlete’s historical performance data, training regimes, and social media presence to generate detailed profiles. ...
See the full story here: https://amplify.nabshow.com/articles/ic-olympics-ai-broadcast
5 strategies to activate your agency and stay relevant in the age of AI
... The “Complex Five”: know your unknowns...
Known knowns: ...there is no uncertainty...
- Unknown knowns: Things we think we know but discover we don't actually understand when they manifest. ...
Known unknowns: ... These are obvious, highly likely events, but few acknowledge them. ...
Unknown unknowns: Things that we don’t know that we don’t know. ...
- Butterfly Effects: ... how small changes can have significant and unpredictable consequences (see the small numerical sketch after this list). ...
All these degrees of uncertainty share a common trait: ignorance, or absence of evidence, is not evidence of absence. ...
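As a small, generic numerical illustration of that last point (not drawn from the article), two runs of the chaotic logistic map that start almost identically drift apart within a few dozen steps:

```python
# Butterfly-effect sketch: two logistic-map trajectories whose starting points
# differ only in the seventh decimal place diverge to order-one differences.
def logistic_map(x0: float, r: float = 4.0, steps: int = 40) -> list[float]:
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.2000000)
b = logistic_map(0.2000001)  # tiny perturbation of the initial condition
for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.6f}")
```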
[PhilNote: the article gets into examples and what to do about them]
See the full story with diagram here: https://www.weforum.org/agenda/2024/07/5-strategies-agency-relevant-age-of-ai/
Video game performers will strike over AI concerns
... SAG-AFTRA performers working in games "deserve and demand the same fundamental protections as performers in film, television, streaming, and music: fair compensation and the right of informed consent for the A.I. use of their faces, voices, and bodies," said the union's National Executive Director & Chief Negotiator Duncan Crabtree-Ireland. ...
The strike includes the following studios:
- Activision Productions Inc.
- Blindlight LLC
- Disney Character Voices Inc.
- Electronic Arts Productions Inc.
- Formosa Interactive LLC
- Insomniac Games Inc.
- Llama Productions LLC
- Take 2 Productions Inc.
- VoiceWorks Productions Inc.
- WB Games Inc.
See the full story here: https://www.engadget.com/video-game-performers-will-strike-over-ai-concerns-201733660.html
AI And The Changing Character Of War – OpEd
... The United Nations Office for Disarmament Affairs (UNODA) identified in 2017 that an increasing number of states were pursuing the development and use of autonomous weapon systems, which present the risk of an ‘uncontrollable war.’ According to a 2023 study on ‘Artificial Intelligence and Urban Operations’ by the University of South Florida, “the armed forces may soon be able to exploit autonomous weapon systems to monitor, strike, and kill their opponents and even civilians at will.” The study further highlights that in October 2016, the United States Department of Defense (US DoD) conducted experiments with micro drones capable of exhibiting advanced swarm behaviour such as collective decision making, adaptive formation flying and self-healing. Asia Times reported in February 2023 that the US DoD had launched the Autonomous Multi-Domain Adaptive Swarms-of-Swarms (AMASS) project to develop autonomous drone swarms that can be launched from sea, air and land to overwhelm enemy air defences. ...
Notably, in January 2024, a group of researchers from four US universities found, while simulating a war scenario using five AI programs, including models from OpenAI and Meta, that all of the models chose nuclear attacks over peace with their adversary. The findings of this study are a wake-up call for world leaders and scientists to come together in a multilateral setting to strengthen the UN’s efforts to regulate AI in warfare. ...
See the full story here: https://www.eurasiareview.com/22072024-ai-and-the-changing-character-of-war-oped/