Last week in Nanjing, a major city in eastern China, a woman surnamed Yan was twice offered a refund by Apple for a faulty iPhone X after her colleague was able to unlock it using its facial recognition feature. Both women are ethnically Chinese.
Yan told local news that when this first happened, she called the iPhone hotline, but staff didn’t believe her. It wasn’t until she and her colleague demonstrated the problem at a local Apple store that she was offered a refund, which she used to buy a new phone, thinking that perhaps a faulty camera was to blame.
Customers line up to enter the second Apple store in Nanjing, Jiangsu Province of China. (Photo by VCG/VCG via Getty Images)
But the second phone had the same problem, suggesting that the fault lay not with the camera, as the store’s staff had suggested, but with the software itself.
This is not the first case in which facial recognition software, and the AI behind it, has had trouble recognizing non-white faces.
In 2015, Google Photos accidentally tagged a photo of two African-Americans as gorillas, while in 2009, HP computers had trouble recognizing and tracking black faces – but no problem with white faces. That same year, Nikon’s camera software was caught mislabeling an Asian face as blinking.
“This is fundamentally a data problem,” wrote Kate Crawford, a principal researcher at Microsoft and co-chair of the Obama White House’s Symposium on Society and A.I. “Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.”
Jacky Alcine, the Brooklyn-based programmer whose photo was mislabeled by Google, agreed. Of his experience, he said, “This could have been avoided with accurate and more complete classifying of black people.”
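Crawford’s point can be illustrated with a toy sketch. The snippet below is purely hypothetical – it is not any vendor’s real pipeline – but it shows, with synthetic data, how a model trained overwhelmingly on one group ends up working worse for another: a simple “face detector” learns the mean of its training embeddings, and because 95% of those come from group A, faces from group B fall outside its detection radius far more often.

```python
# Hypothetical illustration of Crawford's "data problem" -- not a real
# face recognition system. Synthetic 2-D "embeddings" stand in for faces.
import numpy as np

rng = np.random.default_rng(0)

# Training set: group A dominates (950 samples vs 50 for group B).
train_a = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(950, 2))
train_b = rng.normal(loc=(3.0, 3.0), scale=1.0, size=(50, 2))
train = np.vstack([train_a, train_b])

# The "model" is just the mean embedding of the training faces;
# anything within a fixed radius of it counts as a detected face.
model = train.mean(axis=0)
RADIUS = 2.5  # arbitrary detection threshold for this sketch

def detection_rate(faces: np.ndarray) -> float:
    """Fraction of faces within RADIUS of the learned mean."""
    dists = np.linalg.norm(faces - model, axis=1)
    return float((dists < RADIUS).mean())

# Held-out faces from each group.
test_a = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(500, 2))
test_b = rng.normal(loc=(3.0, 3.0), scale=1.0, size=(500, 2))

rate_a = detection_rate(test_a)
rate_b = detection_rate(test_b)
print(f"group A detection rate: {rate_a:.2f}")
print(f"group B detection rate: {rate_b:.2f}")
# Because the learned mean sits near group A's cluster, group B's
# faces are "detected" far less often -- same code, skewed data.
```

Nothing in the algorithm itself singles out group B; the disparity comes entirely from the imbalance in what the model was shown, which is exactly the failure mode Crawford describes.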
But the racism that is coded into AI, even if it is unintentional, has implications beyond just facial recognition.
A ProPublica investigation in 2016 found that black defendants were twice as likely as white defendants to be mistakenly flagged as likely to reoffend, while the growing trend of “predictive policing” uses algorithms to forecast crime and direct police resources accordingly.
Yet minority communities have historically been over-policed, raising the possibility that “this software risks perpetuating an already vicious cycle,” says Crawford.
Back in Nanjing, China, Yan received a second refund on her second iPhone X. From local news reports, it’s unclear whether she then purchased a third.
What was at stake this time may have been just a single consumer. But Yan’s case is an example of the continued need for the technology industry to design with diversity and inclusivity in mind.