Major websites are turning to automatic systems to moderate content as they tell their staff to work from home.
YouTube, Twitter and Facebook are all relying on artificial intelligence and automated tools to find problematic material on their platforms.
The tech giants admit this may lead to some mistakes, but say they still need to remove harmful content.
The coronavirus scare has led to a surge of medical misinformation across the web.
Google, which owns YouTube, said appeals about content wrongfully removed could take longer under the new measures.
Twitter, meanwhile, promised that no accounts suspended by automated software would be permanently banned without a human review.
Computer errors
Content review operations for Facebook, Twitter and Google are spread around the globe, including in the US, India and Spain.
All those countries have said employees should work from home – but switching the content review operation to remote working is complicated.
Facebook has sent home all its content reviewers until further notice, and says it is paying them during this time.
In a blog post, Facebook said: "With fewer people available for human review we'll continue to prioritise imminent harm and increase our reliance on proactive detections in other areas."
Twitter said it would increase its use of machine learning and automation, but acknowledged these tools could "sometimes lack the context that our teams bring, and this may result in us making mistakes".
As a result, it said it would not permanently ban any accounts based solely on automated systems.
And nearly all of Google's full-time employees worldwide have been ordered to work from home due to the coronavirus pandemic.
"This means automated systems will start removing some content without human review," YouTube said in a blog post.
"As we do this, users and creators may see increased video removals, including some videos that may not violate policies.
"Our workforce precautions will also result in delayed appeal reviews."
It added that it would also be more cautious about which content gets promoted, including livestreams.
The move comes at a time when the tech giants are being asked to ramp up the removal of coronavirus misinformation from their platforms.
The UK's Digital, Culture, Media and Sport Committee has asked the government to explain why it has taken two months to set up a unit to counter the spread of disinformation about the virus.
MPs expressed concern that false narratives about coronavirus could undermine efforts to deal with the crisis.