...
“We really view it as a starting point — to start more public discussion about how AI systems should be trained and what principles they should follow,” he says. “We’re definitely not in any way proclaiming that we know the answer.”
This is an important note, as the AI world is already schisming somewhat over perceived bias in chatbots like ChatGPT. Conservatives are trying to stoke a culture war over so-called “woke AI,” while Elon Musk, who has repeatedly bemoaned what he calls the “woke mind virus,” has said he wants to build a “maximum truth-seeking AI” called TruthGPT. Many figures in the AI world, including OpenAI CEO Sam Altman, have said they believe the solution is a multipolar world, where users can define the values held by any AI system they use. ...
Kaplan says he agrees with the idea in principle but notes there will be dangers to this approach, too. He notes that the internet already enables “echo chambers” where people “reinforce their own beliefs” and “become radicalized,” and that AI could accelerate such dynamics. But, he says, society also needs to agree on a base level of conduct — on general guidelines common to all systems. It needs a new constitution, he says, with AI in mind.
See the full story here: https://www.theverge.com/2023/5/9/23716746/ai-startup-anthropic-constitutional-ai-safety