...As well as banning some uses outright (facial recognition for identification in public spaces and social “scoring,” for instance), its focus is on regulation and review, especially for AI systems deemed “high risk” — those used in education or employment decisions, say.
Any company with a software product deemed high risk will need a Conformité Européenne (CE) marking to enter the market. The product must be designed for human oversight, must avoid automation bias, and must be accurate to a degree proportionate to its use.
Some are concerned about the knock-on effects of such regulation. They argue that it could stifle European innovation as talent is lured to regions with looser restrictions — such as the US. ...
Mistakes at the very start of this new era could damage public perception irrevocably. ...
That’s why the legislation’s focus on reducing bias in AI and on setting a gold standard for building public trust is vital for the industry. ...
In 2019, Harvard Business Review reported that patients were wary of medical AI even when it was shown to outperform doctors, simply because people believe their health issues to be unique. We can’t begin to shift that perception without trust. ...