philip lelyveld The world of entertainment technology

11 Dec 2023

This A.I. Subculture’s Motto: Go, Go, Go

... The battle between the e/accs and the Effective Altruists is one of many quasi-religious schisms breaking out in San Francisco’s A.I. scene these days, as insiders argue about how quickly the technology is progressing, and what should be done about it.

E/acc prefers the all-gas, no-brakes approach. Its adherents favor open-sourcing A.I. software rather than having it be controlled by big corporations, and unlike Effective Altruists, they don’t see powerful A.I. as something to be feared or guarded against. They believe that A.I.’s benefits far outweigh its harms, and that the right thing to do with such important technology is to get out of the way and let it rip. ...

Initially, I wrote the movement off as a fringe novelty — a bunch of Twitter-addicted techies with persecution complexes turning warmed-over Ayn Rand into edgy memes.

But a few months later, tech luminaries began showing up in e/acc's Twitter Spaces. Among them was Marc Andreessen, the co-founder of the venture capital firm Andreessen Horowitz, who proclaimed that he, too, believed in effective accelerationism. ...

Critics have pointed to the fact that some of e/acc’s leaders, including Mr. Verdon, seem to actually agree with the Effective Altruists that a rogue A.I. could wipe out humanity, but aren’t bothered by the idea, since superhuman A.I. could represent a logical next step in evolution.  ...

See the full story here: https://www.nytimes.com/2023/12/10/technology/ai-acceleration.html?fbclid=IwAR2sqGN1Hs1klDA-GF48Qw2_YGpqkH_tmEdO4fKE0jXRput-1xH8wGKD6mc

8 Dec 2023

The Race to Dominate A.I.

Though the companies were concerned that their A.I. chatbots were inaccurate or biased, they put those worries to the side — at least for the moment. As one Microsoft executive wrote in an internal email, “speed is even more important than ever.” It would be, he added, an “absolutely fatal error in this moment to worry about things that can be fixed later.”

A.I. has since sneaked into daily life, through chatbots and image generators, in the word processing programs you might use at work, and in the seemingly human customer service agents you chat with online to return a purchase. People have already used it to create sophisticated phishing emails, cheat on schoolwork and spread disinformation. ...

That one person could be so central to the future of A.I. — and perhaps humanity — is a symptom of the lack of meaningful oversight of the industry. ...

European regulators this week are in marathon sessions to write the world’s strictest A.I. regulations, and they will be worth watching. In the meantime, companies continue to push ahead. On Wednesday, Google demonstrated a powerful new A.I. system called Gemini Ultra, even though Google hasn’t yet completed its customary safety testing. The company promised it would be out in the world early next year. ...

See the full story here: https://www.nytimes.com/2023/12/08/briefing/ai-dominance.html

8 Dec 2023

Democrats and Republicans see role for government in development of AI

... “Government cannot govern AI if it does not understand AI,” said Daniel Ho, a professor at Stanford Law School, at the hearing. ...

President Biden’s Oct. 30 executive order on AI lays out 150 requirements, as tracked by Stanford, meaning there are major workforce demands for implementation. Beyond persistent skills gaps in cyber and IT, Ho noted that fewer than 1 percent of AI PhDs pursue a career in public service. ...

Read the full story here: https://www.federaltimes.com/federal-oversight/congress/2023/12/07/democrats-and-republicans-see-role-for-government-in-development-of-ai/

8 Dec 2023

AI And A-listers: Sundance Festival Line-up Unveiled

Kristen Stewart is among several Hollywood stars heading to next month's Sundance festival. But artificial intelligence -- the subject of, and technology behind, several new films -- could steal the show.

Among the line-up for Utah's influential indie movie fest are a "generative" music film that plays differently on each viewing, two documentaries about loved ones using AI to communicate after death, and an interactive "digital griot" that will teach audiences how to vogue.

"One of the things that was striking to see, as we were going through these films and talking about them as a team, was how AI just kept popping up," Sundance director of programming Kim Yutani told AFP.

"Whether it be in a documentary, whether it be influencing a documentary... that's going to be a really interesting part of the festival this year." ...

Among Sundance's new offerings are "Eno," which explores musician Brian Eno's career and creative process, using a "generative engine" to mesh together near-infinite different versions of a film from hundreds of possible scenes.

The technology uses prompts and keywords to find and create associations between scenes, changing or reshuffling the lineup each night, just as a touring band might do at each new gig...
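The article only sketches how the "generative engine" behind "Eno" works, but the basic idea of reshuffling a scene sequence by keyword association is easy to illustrate. Below is a purely hypothetical Python sketch, not the actual engine: the scene names and tags are invented, and the real system is far more sophisticated.

```python
import random

# Hypothetical scene library: each scene carries keyword tags,
# loosely mirroring how a generative engine might associate material.
scenes = {
    "studio_1974": {"synthesizer", "roxy", "process"},
    "ambient_origins": {"airport", "ambient", "process"},
    "oblique_strategies": {"cards", "process", "chance"},
    "bowie_berlin": {"collaboration", "berlin", "synthesizer"},
    "windows_chime": {"commission", "chance", "ambient"},
}

def sequence_film(scenes, start, length, seed):
    """Pick each next scene by keyword overlap with the current one,
    so every seed (e.g. a screening date) yields its own cut."""
    rng = random.Random(seed)
    order = [start]
    remaining = set(scenes) - {start}
    while remaining and len(order) < length:
        current_tags = scenes[order[-1]]
        candidates = sorted(remaining)
        # Weight candidates by shared keywords (+1 keeps every scene possible).
        weights = [len(scenes[s] & current_tags) + 1 for s in candidates]
        pick = rng.choices(candidates, weights=weights, k=1)[0]
        order.append(pick)
        remaining.remove(pick)
    return order

# Different seeds (screening dates) can produce different cuts,
# but the same seed always reproduces the same one.
cut_a = sequence_film(scenes, "studio_1974", 4, seed="2024-01-18")
cut_b = sequence_film(scenes, "studio_1974", 4, seed="2024-01-19")
```

The key design point, as the article describes it, is that the film is assembled from associations rather than a fixed edit, so each screening is a distinct but coherent version, much like a band's changing set list.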

See the full story here: https://www.barrons.com/articles/ai-and-a-listers-sundance-festival-line-up-unveiled-1628627f

8 Dec 2023

The race to 5G is over — now it’s time to pay the bill

... Instead, there’s one 5G use case where the big three networks are finding traction, and it comes up over and over again in their earnings reports: fixed wireless access, or FWA. If you’re keeping score at home, that’s internet that comes to your house over radio waves rather than a cable. T-Mobile and Verizon have aggressively expanded their FWA offerings over the past couple of years, and even “fiber is everything” AT&T is getting in on the action with Internet Air. ...

https://www.theverge.com/23991136/5g-network-att-verizon-tmobile-cost-competition

7 Dec 2023

Google Gemini gets us closer to the AI of our imagination, and it’s going to change everything

... What matters here is that Google is finally showing what you can do when you have the world's information and industry-leading AI development. Microsoft and OpenAI have the best-known AI, but they've never had access to the same kind of data and knowledge graph as Google. I always assumed that was an advantage, and now, it seems, Google has finally figured that out, too. ...

Watch a video here: https://www.youtube.com/watch?v=UIZAiXYceBI

See the full story here: https://www.techradar.com/computing/artificial-intelligence/google-gemini-gets-us-closer-to-the-ai-of-our-imagination-and-its-going-to-change-everything

7 Dec 2023

Experts on A.I. Agree That It Needs Regulation. That’s the Easy Part.

...

In October, the White House issued a lengthy executive order on A.I., but without an enforcement mechanism, something Mr. Benifei sees as necessary. “Obviously, it’s a delicate topic,” he said. “There is a lot of concern from the business sector, I think rightly so, that we do not overregulate before we fully understand all the challenges.” But, he said, “we cannot just rely on self-regulation.” The development and use of A.I., he added, must be “enforceable and explainable to our citizens.”

Other task force members were far more reluctant to embrace such broad regulation. Questions abound, such as who is responsible if something goes wrong — the original developer? A third-party vendor? The end user? ...

Transparency is key, all agreed, and so are partnerships between government, industry and university research. “If you are not very transparent, then academia gets left behind and no researchers will come out of academia,” said Rohit Prasad, senior vice president and head scientist at Amazon Artificial General Intelligence. ...

In addition, she said, “It’s not just about regulation. It really has to do with investment in the public sector in a deep and profound way,” noting that she has directly pleaded with Congress and President Biden to support universities in this area. Academia, she said, can serve as a trusted neutral platform in this field, but “right now we have completely starved the public sector.” ...

See the full story here: https://www.nytimes.com/2023/12/06/business/dealbook/artificial-intelligence-regulation.html

6 Dec 2023

Runway ML and Getty Images set their sights on Hollywood

Runway ML, a leading AI video creation platform, has joined forces with Getty Images, the world’s premier visual content provider, to develop innovative AI video models tailored for the entertainment and advertising industries.

The collaboration of Runway ML and Getty Images aims to develop cutting-edge AI video models that will empower Hollywood studios, advertising agencies, and media companies to produce high-quality, captivating video content at an unprecedented scale. ...

Yes, this collaboration is really exciting, but it has two main problems: it may struggle to appeal to a wide audience, and it puts art and AI in the same sentence. ...

See the full story here: https://dataconomy.com/2023/12/05/runway-ml-and-getty-images-ai-hollywood/

6 Dec 2023

Rapper Bad Bunny lashes out over viral AI copycat hits

Puerto Rican rapper and singer Bad Bunny's voice quickly went viral last month. However, the songs circulating did not belong to him. ...

Chilean artist Mauricio Bustos launched the trend with the song NostalgIA, a play on the Spanish abbreviation for AI. It was written and recorded by Bustos using artificial intelligence to mimic Bad Bunny's vocals, producing a viral track that prompted parodies and copycat versions on TikTok.

Bad Bunny told his 20 million WhatsApp followers to leave if they liked "this shitty song that is viral on TikTok ... I don't want you on tour either." ...

See the full story here: https://www.wionews.com/entertainment-news/rapper-bad-bunny-lashes-out-over-viral-ai-copycat-hits-666653

5 Dec 2023

AI Is Testing the Limits of Corporate Governance

... Can AI safety research shed any light on old corporate governance problems? And can the law and economics of corporate governance help us frame the new problems of AI safety? I identify five lessons — and one dire warning — on the corporate governance of AI that the corporate turmoil at OpenAI has made vivid.

1. Companies cannot rely on traditional corporate governance to protect the social good.

... Anthropic is organized as a public benefit corporation (PBC), with the specific mission to “responsibly develop and maintain advanced AI for the long-term benefit of humanity.” ...

2. Even creative governance structures will struggle to tame the profit motive.

This phenomenon has been in full display during the OpenAI governance war. ...

In an influential paper, economists Oliver Hart and Luigi Zingales argued that, in an unrestricted market for corporate control, a profit-driven buyer can easily hijack the social mission of a firm. They called this phenomenon “amoral drift.” ...

3. Independence and social responsibility do not necessarily converge.

An important concept in AI safety is the so-called “orthogonality thesis,” which posits that AI’s intelligence and its final goals are not necessarily correlated. We can have unintelligent machines that serve us well and super-intelligent machines that harm us. Intelligence alone is no guarantee against harmful behavior.

Corporate governance experts should borrow this helpful concept. ...

4. Corporate governance should try to solve for the alignment of profit and safety.

One crucial problem in AI safety is the so-called “alignment problem”: Superintelligent AI might have values and goals that are incompatible with human well-being. ...

The AI alignment problem is quite similar to the central problem of corporate governance. ...

Our most successful institutional designs, from liberal constitutions to capitalist institutions, do not depend on suppressing greed and ambition. Instead, they focus on harnessing these passions for the greater good. ...

5. AI companies’ boards must maintain a delicate balance in cognitive distance.

... This difference between how AI safety experts and outsiders interpret and understand the world is what some scholars have termed “cognitive distance.” ...

Was the drastic and sudden decision to fire Sam Altman, with little or no warning to major investors and no explanations to the public, the product of too little cognitive distance? ...

Corporate boards are complex social systems. The ideal decision-making dynamic in the boardroom should be one in which directors with different backgrounds, competencies, and points of view debate vigorously and intelligently, willing to contribute their insights but also to learn and change their minds when appropriate. Real-world boardrooms often fail to live up to this standard. ...

A Warning: Corporate Governance Cannot Handle Catastrophic Risk

... Many AI experts, however, believe that there is a small but non-negligible chance that AI will be catastrophic for humanity. ...

While corporate governance might help mitigate serious risks, it is not good at handling existential risk, even when corporate decision-makers have the strongest commitment to the common good. ...

Top AI experts and commentators have already invoked a Manhattan Project for AI, in which the U.S. government would mobilize thousands of scientists and private actors, fund research that would be uneconomic for business firms, and make safety an absolute priority. ...

While good corporate governance can help in the transitional phase, the government should quickly recognize its inevitable role in AI safety and step up to the historic task.

See the full article here: https://hbr.org/2023/12/ai-is-testing-the-limits-of-corporate-governance