philip lelyveld The world of entertainment technology

18Mar/24Off

Why Are Large AI Models Being Red Teamed? 

Intelligent systems demand more than just repurposed cybersecurity tools

In February, OpenAI announced the arrival of Sora, a stunning “text-to-video” tool. Simply enter a prompt, and Sora generates a realistic video within seconds. But it wasn’t immediately available to the public. Some of the delay is because OpenAI reportedly has a set of experts called a red team who, the company has said, will probe the model to understand its capacity for deepfake videos, misinformation, bias, and hateful content.

Red teaming, while having proved useful for cybersecurity applications, is a military tool that was never intended for widespread adoption by the private sector. 

“Done well, red teaming can identify and help address vulnerabilities in AI,” says Brian Chen, director of policy from the New York–based think tank Data & Society. “What it does not do is address the structural gap in regulating the technology in the public interest.” ...

 The purpose of red-teaming exercises is to play the role of the adversary (the red team) and find hidden vulnerabilities in the defenses of the blue team (the defenders) who then think creatively about how to fix the gaps. ...

Zenko also reveals a glaring mismatch between red teaming and the pace of AI advancement. The whole point, he says, is to identify existing vulnerabilities and then fix them. “If the system being tested isn’t sufficiently static,” he says, “then we’re just chasing the past.” ...

Dan Hendrycks, executive and research director of the San Francisco–based Center for AI Safety, says red teaming shouldn’t be treated as a turnkey solution either. “The technique is certainly useful,” he says. “But it represents only one line of defense against the potential risks of AI, and a broader ecosystem of policies and methods is essential.”

NIST’s new AI Safety Institute now has an opportunity to change the way red teaming is used in AI. The Institute’sconsortium of more than 200 organizations has already reportedly begun developing standards for AI red teaming. Tech developers have also begun exploring best practices on their own. For example, Anthropic, GoogleMicrosoft, and OpenAI have established the Frontier Model Forum (FMF) to develop standards for AI safety and share best practices across the industry.

See the full story here: https://spectrum.ieee.org/red-team-ai-llms

18Mar/24Off

AI is keeping GitHub chief legal officer Shelley McKinley busy

TechCrunch chats with GitHub's legal beagle about the EU's AI Act and developer concerns around Copilot and ownership

... GitHub, which Microsoft bought for $7.5 billion in 2018, has emerged as one of the most vocal naysayers around one very specific element of the regulations: muddy wording on how the rules might create legal liability for open source software developers. ...,

For the unfamiliar, GitHub is a platform that enables collaborative software development, allowing users to host, manage, and share code “repositories” (a location where project-specific files are kept) with anyone, anywhere in the world. Companies can pay to make their repositories private for internal projects, but GitHub’s success and scale has been driven by open source software development carried out collaboratively in a public setting. ...

As well-meaning as Europe’s incoming AI regulations might be, critics argued that they would have significant unintended consequences for the open source community, which in turn could hamper the progress of AI. This argument has been central to GitHub’s lobbying efforts.

“Regulators, policymakers, lawyers… are not technologists,” McKinley said. “And one of the most important things that I’ve personally been involved with over the past year, is going out and helping to educate people on how the products work. People just need a better understanding of what’s going on, so that they can think about these issues and come to the right conclusions in terms of how to implement regulation.”

At the heart of the concerns was that the regulations would create legal liability for open source “general purpose AI systems,” which are built on models capable of handling a multitude of different tasks. If open source AI developers were to be held liable for issues arising further down-stream (i.e. at the application level), they might be less inclined to contribute — and in the process, more power and control would be bestowed upon the big tech firms developing proprietary systems. ...

But those intricacies aside, McKinley reckons that their hard lobbying work has mostly paid off, with regulators placing less focus on software “componentry” (the individual elements of a system that open-source developers are more likely to create), and more on what’s happening at the compiled application level.

“That is a direct result of the work that we’ve been doing to help educate policymakers on these topics,” McKinley said. ...

Copilot ultimately raises key questions around who authored a piece of software — if it’s merely regurgitating code written by another developer, then shouldn’t that developer get credit for it? Software Freedom Conservancy’s Bradley M. Kuhn wrote a substantial piece precisely on that matter, called: “If Software is My Copilot, Who Programmed My Software?

There’s a misconception that “open source” software is a free-for-all — that anyone can simply take code produced under an open source license and do as they please with it. But while different open source licenses have different restrictions, they all pretty much have one notable stipulation: developers reappropriating code written by someone else need to include the correct attribution. It’s difficult to do that if you don’t know who (if anyone) wrote the code that Copilot is serving you. ...

“I would say the EU AI Act is a ‘fundamental rights base,’ as you would expect in Europe,” McKinley said. “And the U.S. side is very cybersecurity, deep-fakes — that kind of lens. But in many ways, they come together to focus on what are risky scenarios — and I think taking a risk-based approach is something that we are in favour of — it’s the right way to think about it.”

See the full story here: https://techcrunch.com/2024/03/16/ai-is-keeping-github-chief-legal-officer-shelley-mckinley-busy/

18Mar/24Off

Looking for Gen Z shoppers? They’re on Facebook

... We all know that Facebook has fallen from grace. Once atop the social media hierarchy as the place to be, it’s become a relic of the past where estranged relatives post annually for your birthday.

The numbers back it up: Teen usership of Facebook dropped from 71% in 2014 to 33% in 2023, according to a Pew Research Center survey. In its place, YouTube, TikTok, Instagram, and Snapchat reign supreme.

But Facebook has one feature that keeps young users from deleting their accounts: Marketplace, which Gen Zers are using regularly to shop, per The New York Times.

  • Facebook Marketplace, which launched in 2016, has 1B+ monthly active users and ~40% of Facebook’s 3B+ users shop on Marketplace.
  • An estimated 491m users log into Facebook just to use Marketplace.
  • In 2022, the number of Marketplace users increased 3.6% YoY.

Now, Marketplace ranks just behind eBay as the second most popular site for secondhand shopping in the US.

...

See the full story here: https://thehustle.co/news/looking-for-gen-z-shoppers-they-re-on-facebook?utm_source=substack&utm_medium=email

18Mar/24Off

ChatGPT’s ancestor GPT-2 jammed into 1.25GB Excel sheet — LLM runs inside a spreadsheet that you can download from GitHub

[PhilNote: This 10 minute video, embedded in this story, gives a VERY CLEAR explanation, using spreadsheets, of how ChatGPT works: https://www.youtube.com/watch?v=FyeN5tXMnJ8&t=587s ]

Software developer and self-confessed spreadsheet addict Ishan Anand has jammed GPT-2 into Microsoft Excel. More astonishingly, it works – providing insight into how large language models (LLMs) work, and how the underlying Transformer architecture goes about its smart next-token prediction. "If you can understand a spreadsheet, then you can understand AI," boasts Anand. The 1.25GB spreadsheet has been made available on GitHub for anyone to download and play with.

Naturally, this spreadsheet implementation of GPT-2 is somewhat behind the LLMs available in 2024, but GPT-2 was state-of-the-art and grabbed plenty of headlines in 2019. It is important to remember that GPT-2 is not something to chat with, as it comes from before the 'chat' era.  ...

... it is still good for a demo and Anand claims that his "low-code introduction" is ideal as an LLM grounding for the likes of tech execs, marketers, product managers, AI policymakers, ethicists, as well as for developers and scientists who are new to AI. Anand asserts that this same Transformer architecture remains "the foundation for OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Bard/Gemini, Meta’s Llama, and many other LLMs." ...

See the full story with explanatory 10 minute video here: https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpts-ancestor-gpt-2-jammed-into-125gb-excel-sheet-llm-runs-inside-a-spreadsheet-that-you-can-download-from-github?ref=trailyn.com

15Mar/24Off

AI image-generator Midjourney blocks images of Biden and Trump as election looms

... Declaring that “this moderation stuff is kind of hard,” Holz didn’t outline exactly what policy changes were being made but described the clampdown as a temporary measure to make it harder for people to abuse the tool. The company didn’t immediately respond to a request for comment Wednesday.

Attempts by AP journalists to test Midjourney’s new policy on Wednesday by asking it to make an image of “Trump and Biden shaking hands at the beach” led to a “Banned Prompt Detected” warning. A second attempt escalated the warning to: “You have triggered an abuse alert.”

The tiny company — which has just 11 employees, according to its website — has largely kept silent in the public debate over how generative AI tools could fuel election misinformation around the world. ...

“Anybody who’s scared about fake images in 2024 is going to have a hard 2028,” Holz said Wednesday. “It will be a very different world at that point. Like, obviously you’re still going to have humans running for president in 2028, but they won’t be purely human anymore.” ...

See the full story here: https://www.pbs.org/newshour/politics/ai-image-generator-midjourney-blocks-images-of-biden-and-trump-as-election-looms?ref=platformer.news&utm_source=substack&utm_medium=email

15Mar/24Off

Snap, Amplified Intelligence seek to measure AR attention

Amplified Intelligence, a source for attention measurement, has partnered with Snapchat on an approach to assess how augmented reality (AR) Lenses captivate audiences. Conducted in partnership with global media agency OMD, the research initiative facilitated live analysis of in-the-moment human attention on Snapchat AR Lenses, producing the most accurate measurement to date. ...

See the full story here: https://advanced-television.com/2024/03/14/snap-amplified-intelligence-seek-to-measure-ar-attention/

15Mar/24Off

How Technology Is Changing the Future of the Entertainment Industry

1. Robotics will likely take the stage. ...

... In fact, the biggest name in live entertainment, Cirque du Soleil, is currently developing a new show featuring robot performers. ...

2. The use of drones will continue to grow. ...

Already, drones have found their way into many Fourth of July celebrations, with cities around the country replacing traditional fireworks with UAV (Unmanned Aerial Vehicle) displays. ...

3. The advent of new entertainment genres. ...

Who would’ve thought playing video games would ever become a spectator sport? .... Drone soccer serves as an example, with locations cropping up all around the U.S. 

4. Job security will come into question. ...

... However, most of these jobs will be in the white-collar sector, focusing on more mundane and repetitive tasks. ...

5. Losing the competitive edge will leave companies behind. ...

However, there is one caveat: merely copying what others are doing in the space will do a company no favors. Instead, monitoring feedback to ensure the technology will attract the masses to an event is crucial. ...

See the full story here: https://www.rollingstone.com/culture-council/articles/how-technology-changing-the-future-the-entertainment-industry-1234985968/

14Mar/24Off

Watch Your Step! There’s AGI Everywhere

...

Ben Goertzel, the founder of SingularityNET and the person often credited with creating the term, makes a compelling case that AGI should be decentralized, relying on open-source development as well as decentralized hosting and mechanisms for interconnect A.I. systems to learn from and teach on another. 

SingularityNET’s DeAGI Manifesto states, “There is a broad desire for AGI to be ethical and beneficial for all humanity; the most straightforward way to achieve this seems to be for AGI to ‘grow up’ in the context of serving and being guided by all humanity, or as good an approximation as can be mustered.”

Having AGI manifest in part from the aggressive activities of for-profit enterprises is dicey. As Goertzel pointed out, “You get into questions [about] who owns and controls these potentially spooky and configurable human-like robot assistants … and to what extent is their fundamental motivation to help people as opposed to sell people stuff or brainwash people into some corporate government media advertising order.” ...

“We’re in the anthropocene. We’re in an era where our actions are affecting everything in our biological environment,” Blaise Aguera Y Arcas, the Noeme article author, told me. “The Earth is finite and without the kind of solidarity where we start to think about the whole thing as our body, as it were, we’re kind of screwed.”

See the full story here: https://observer.com/2024/03/organizational-artificial-general-intelligence/

13Mar/24Off

Anti-AI sentiment gets big applause at SXSW 2024 as moviemaker dubs AI cheerleading as ‘terrifying bullsh**’

...

“So imagine what this technology will do within this current system, within this current incentive structure. This is the same system that brought us climate change, income inequality, and the general lack of gratitude and understanding of our worth and the worth of those around us,” Kwan said.

Plus, he noted, if you are feeling anxious about AI, it’s probably because, deep down, you know you’re next. “Even if the jobs aren’t going to be lost, the value of the job will go down, right? … It will slowly be compounded and normalized until we don’t even realize it,” he said. ...

“Are you trying to use it to create the world you want to live in? Are you trying to use it to increase value in your life and focus on the things that you really care about? Or are you just trying to, like, make some money for the billionaires, you know?”  Scheinert asked the audience. “And if someone tells you, there’s no side effect. It’s totally great, ‘get on board’ — I just want to go on the record and say that’s terrifying bullshit. That’s not true. And we should be talking really deeply about how to carefully, carefully deploy this stuff,” he said.

The crowd then erupted into sustained applause. ...

“Why did we write ‘Everything Everywhere All at Once’ the way we did? And the answer is, we did it to save ourselves. Every story … that we make is an act of saving ourselves and our value from a system that wants to devalue us and the people that we care about,” Kwan said.

See the full story here: https://techcrunch.com/2024/03/12/anti-ai-sentiment-gets-big-applause-at-sxsw-2024-as-storytellers-dub-ai-cheerleading-as-terrifying-bullsh

13Mar/24Off

Europe’s world-first AI rules get final approval from lawmakers. Here’s what happens next

... “The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential," Dragos Tudorache, a Romanian lawmaker who was a co-leader of the Parliament negotiations on the draft law, said before the vote. ...

The riskier an AI application, the more scrutiny it faces. The vast majority of AI systems are expected to be low risk, such as content recommendation systems or spam filters. Companies can choose to follow voluntary requirements and codes of conduct. 

High-risk uses of AI, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements like using high-quality data and providing clear information to users. 

Some AI uses are banned because they’re deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing and emotion recognition systems in school and workplaces. ...

They added provisions for so-called generative AI models, the technology underpinning AI chatbot systems that can produce unique and seemingly lifelike responses, images and more. ...

In the U.S., President Joe Biden signed a sweeping executive order on AI in October that’s expected to be backed up by legislation and global agreements. In the meantime, lawmakers in at least seven U.S. states are working on their own AI legislation. 

Chinese President Xi Jinping has proposed his Global AI Governance Initiative for fair and safe use of AI, and authorities have issued “ interim measures ” for managing generative AI, which applies to text, pictures, audio, video and other content generated for people inside China.

Other countries, from Brazil to Japan, as well as global groupings like the United Nations and Group of Seven industrialized nations, are moving to draw up AI guardrails. ...

When it comes to enforcement, each EU country will set up their own AI watchdog, where citizens can file a complaint if they think they've been the victim of a violation of the rules. Meanwhile, Brussels will create an AI Office tasked with enforcing and supervising the law for general purpose AI systems. 

Violations of the AI Act could draw fines of up to 35 million euros ($38 million), or 7% of a company’s global revenue. 

See the full story here: https://abcnews.go.com/Business/wireStory/europes-world-ai-rules-set-final-approval-108072010