
26 Jun 2023

Chuck Schumer Wants AI to Be Explainable. It’s Harder Than It Sounds

... Unfortunately, these tools often contradict each other. One tool might say that the loan was rejected because the person’s credit rating was too low, while another might emphasize the person’s income.
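The article doesn’t show how such contradictions arise, but they are easy to reproduce. Below is a minimal sketch, assuming a toy logistic-regression loan model built with scikit-learn; the feature names, synthetic data, and applicant are all hypothetical, not from the story. It contrasts a local attribution (coefficient times the applicant’s deviation from the feature mean, a linear analogue of SHAP) with global permutation importance: two common explanation styles that can single out different features for the same rejection.

```python
# Hypothetical sketch: two explanation methods disagreeing about one
# loan decision. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000

# Synthetic applicants: income drives more of the model's behavior
# per standard deviation than credit score does.
credit = rng.normal(650, 80, n)          # credit score
income = rng.normal(60, 10, n)           # income, in thousands
X = np.column_stack([credit, income])
logits = 0.01 * (credit - 650) + 0.15 * (income - 60)
y = (logits + rng.normal(0, 1, n) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Explanation 1: local attribution for one rejected applicant
# (coefficient * deviation from the mean, a linear analogue of SHAP).
# This applicant's credit score is far below average, so this view
# tends to blame the credit score.
applicant = np.array([580.0, 58.0])
local = model.coef_[0] * (applicant - X.mean(axis=0))
print("local attribution (credit, income):", local)

# Explanation 2: global permutation importance on the same model.
# Income matters more to overall accuracy, so this view tends to
# emphasize income instead.
perm = permutation_importance(model, X, y, n_repeats=20, random_state=0)
print("permutation importance (credit, income):", perm.importances_mean)
```

With these synthetic numbers, the local attribution points at the low credit score while permutation importance ranks income as the more influential feature overall, mirroring the kind of disagreement the article describes.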

Another challenge is that explainability can be in tension with other goals policymakers might set for AI systems, such as protecting people’s privacy and giving them a right to have information about them removed from the internet. ...

“One of the things that has happened in the last five or six years is a mushrooming of ideas of how to produce interpretations, or some clarity on what an AI system is doing,” says Suresh Venkatasubramanian, a computer scientist at Brown University who teaches a course on AI explainability and co-authored the White House Blueprint for an AI Bill of Rights. ...

But for the most complex systems, including large language models (LLMs) such as OpenAI’s ChatGPT, the explainability tools developed for simpler models break down. ...

But when a team of researchers from NYU, AI startup Cohere, and AI lab Anthropic tried this on two different LLMs developed by OpenAI and Anthropic, they found that these models tended to give answers that were in line with common stereotypes and failed to mention the influence of the social biases that resulted in those answers. ...

“These techniques do not show rigorous replication at scale,” says Sara Hooker, a director at Cohere and head of its nonprofit research lab Cohere For AI. Hooker believes that attempting to explain neural network-based systems using these tools “becomes like tea leaf reading. It is often no better than random.” ...

“I would not expect Congress to micromanage the explainability tools used. Congress should demand that these systems be explainable. How that plays out will be a matter for innovation,” says Venkatasubramanian. ...

If legislation were passed that required all AI systems to be explainable regardless of their architecture, it could spell disaster for companies developing these more complex systems. Hooker believes one of two things would happen: “it could stifle innovation, or it could become like GDPR [General Data Protection Regulation, the European data privacy law], where it’s stated but in practice it isn’t enforced.” ...

But a third scenario exists. On Wednesday, Schumer called for a concerted effort to find a technical solution, harnessing the “ingenuity of the experts and companies to come up with a fair solution that Congress can use to break open AI’s black box.” ...

See the full story here: https://time.com/6289953/schumer-ai-regulation-explainability/
