Sony AI has introduced the Fair Human-Centric Image Benchmark (FHIBE, pronounced “Fee-bee”), a new global benchmark for fairness evaluation in computer vision models. FHIBE addresses the industry challenge of identifying biased and ethically compromised training data for AI, aiming to trigger “industry-wide improvements for responsible and ethical protocols throughout the entire life span of data — from sourcing and management to utilization — including fair compensation for participants and clear consent mechanisms,” Sony AI says. ...
Alice Xiang, Sony AI's Global Head of AI Governance, notes that facial recognition systems on mobile phones in China have mistakenly allowed family members to unlock each other's phones and make payments, an error that could stem from a lack of images of Asian people in model training data or from undetected model bias.
The Register points out that other fairness benchmarks exist, including Meta's FACET (FAirness in Computer Vision EvaluaTion) benchmark.
See the full story here: https://www.etcentric.org/sony-debuts-benchmark-for-measuring-computer-vision-bias/