philip lelyveld The world of entertainment technology

8 Sep 2023

A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous

...

Beyond the world of text, generative applications such as Midjourney, DALL-E, and Stable Diffusion produce unprecedentedly realistic images and videos. These models have burst into the public consciousness rapidly. Most people have begun to understand that generative AI is an unparalleled innovation, a type of machine that possesses capacities, natural language generation and artistic production, long thought to be sacrosanct domains of human ability. 

But generative AI is only the beginning. A team of Microsoft AI scientists recently released a paper arguing that GPT-4, arguably the most sophisticated LLM yet, is showing the “sparks” of artificial general intelligence (AGI): an AI that is as smart as, or smarter than, humans in every area of intelligence, rather than in just one task. They argue that, “[b]eyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting." In these multiple areas of intelligence, GPT-4 is “strikingly close to human-level performance.” In short, GPT-4 appears to presage a program that can think and reason like a human. Half of surveyed AI experts expect an AGI in the next 40 years. ...

The pressure to outpace adversaries by rapidly pushing the frontiers of a technology that we still do not fully understand or control, without commensurate efforts to make AI safe for humans, may well pose an existential risk to humanity. ...

...just the perception of falling behind an adversary contributed to a destabilizing buildup of nuclear and ballistic missile capabilities, with all its associated dangers of accidents, miscalculations, and escalation. ...

The Alignment Problem

Despite dramatic successes in AI, humans still cannot reliably predict or control the outputs and actions of AI systems. While research focused on AI capabilities has produced stunning advancements, the same cannot be said for research in the field of AI alignment, which aims to ensure AI systems can be controlled by their designers and made to act in a way that is compatible with humanity’s interests. ...

Arms Racing or Alignment Governance? A Risky Tradeoff

How does international competition come into play when discussing the technical issue of alignment? Put simply, the faster AI advances, the less time we will have to learn how to align it. ...

Likewise, the perception of an arms race may preclude the development of a global governance framework on AI. A vicious cycle may emerge where an arms race prevents international agreements, which increases paranoia and accelerates that same arms race. ...

However, the outlook is not all rosy: as the political salience of AI continues to increase, the questions of speed, regulation, and cooperation may become politicized into the larger American partisan debate over China. Regulation may be harder to push when “China hawks” begin to associate slowing AI with losing an arms race to China. Recent rhetoric in Congress has emphasized the AI arms race and downplayed the necessity of regulation. ...

See the full story here: https://hir.harvard.edu/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous/
