YouTube says it’s bringing back human moderators who were “put offline” during the pandemic after the company’s AI filters failed to match their accuracy.
Back in March, YouTube said it would rely more on machine learning systems to flag and remove content that violated its policies on things like hate speech and misinformation.
But YouTube told the Financial Times this week that the greater use of AI moderation had led to a significant increase in video removals and incorrect takedowns.
Around 11 million videos were removed from YouTube between April and June, says the FT, or about double the usual rate. Around 320,000 of these takedowns were appealed, and half of the appealed videos were reinstated. Again, the FT says that’s roughly double the usual figure: a sign that the AI systems were over-zealous in their attempts to spot harmful content.
As YouTube’s chief product officer, Neal Mohan, told the FT: “One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in [a] slightly higher number of videos coming down.”
This admission of failure is notable. All major online social platforms, from Twitter to Facebook to YouTube, have been increasingly under pressure to deal with the spread of hateful and misleading content on their sites. And all have said that algorithmic and automated filters can help deal with the immense scale of their platforms.
Time and time again, though, experts in AI and moderation have voiced scepticism about these claims. Judging whether a video about, say, conspiracy theories contains subtle nods toward racist beliefs can be a challenge for a human, they say, and computers lack our ability to understand the exact cultural context and nuance of these claims. Automated systems can spot the most obvious offenders, which is undoubtedly useful, but humans are still needed for the finer judgment calls.
Even with more straightforward decisions, machines can still mess up. Back in May, for example, YouTube admitted that it was automatically deleting comments containing certain phrases critical of the Chinese Communist Party (CCP). The company later blamed an “error with our enforcement systems” for the mistakes.
But as Mohan told the FT, the machine learning systems definitely have their place, even if only to remove the most obvious offenders. “Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” he said. “And so that’s the power of machines.”