philip lelyveld The world of entertainment technology

5 Dec 2023

AI Is Testing the Limits of Corporate Governance

... Can AI safety research shed any light on old corporate governance problems? And can the law and economics of corporate governance help us frame the new problems of AI safety? I identify five lessons — and one dire warning — on the corporate governance of AI that the corporate turmoil at OpenAI has made vivid.

1. Companies cannot rely on traditional corporate governance to protect the social good.

... Anthropic is organized as a public benefit corporation (PBC), with the specific mission to “responsibly develop and maintain advanced AI for the long-term benefit of humanity.” ...

2. Even creative governance structures will struggle to tame the profit motive.

This phenomenon has been on full display during the OpenAI governance war. ...

In an influential paper, economists Oliver Hart and Luigi Zingales argued that, in an unrestricted market for corporate control, a profit-driven buyer can easily hijack the social mission of a firm. They called this phenomenon “amoral drift.” ...

3. Independence and social responsibility do not necessarily converge.

An important concept in AI safety is the so-called “orthogonality thesis,” which posits that an AI’s intelligence and its final goals are independent of each other. We can have unintelligent machines that serve us well and super-intelligent machines that harm us. Intelligence alone is no safeguard against harmful behavior.

Corporate governance experts should borrow this helpful concept. ...

4. Corporate governance should try to solve for the alignment of profit and safety.

One crucial problem in AI safety is the so-called “alignment problem”: Superintelligent AI might have values and goals that are incompatible with human well-being. ...

The AI alignment problem is quite similar to the central problem of corporate governance. ...

Our most successful institutional designs, from liberal constitutions to capitalist institutions, do not depend on suppressing greed and ambition. Instead, they focus on harnessing these passions for the greater good. ...

5. AI companies’ boards must maintain a delicate balance in cognitive distance.

... This difference between how AI safety experts and outsiders interpret and understand the world is what some scholars have termed “cognitive distance.” ...

Was the drastic and sudden decision to fire Sam Altman, with little or no warning to major investors and no explanations to the public, the product of too little cognitive distance? ...

Corporate boards are complex social systems. The ideal decision-making dynamic in the boardroom is one in which directors with different backgrounds, competencies, and points of view debate vigorously and intelligently, willing to contribute their insights but also to learn and change their minds when appropriate. Real-world boardrooms often fail to live up to this standard. ...

A Warning: Corporate Governance Cannot Handle Catastrophic Risk

... Many AI experts, however, believe that there is a small but non-negligible chance that AI will be catastrophic for humanity. ...

While corporate governance might help mitigate serious risks, it is not good at handling existential risk, even when corporate decision-makers have the strongest commitment to the common good. ...

Top AI experts and commentators have already invoked a Manhattan Project for AI, in which the U.S. government would mobilize thousands of scientists and private actors, fund research that would be uneconomic for business firms, and make safety an absolute priority. ...

While good corporate governance can help in the transitional phase, the government should quickly recognize its inevitable role in AI safety and step up to the historic task.

See the full article here: https://hbr.org/2023/12/ai-is-testing-the-limits-of-corporate-governance
