Major websites are turning to automatic systems to moderate content as they tell their staff to work from home.
YouTube, Twitter and Facebook are all relying on artificial intelligence and automated tools to find problematic material on their platforms.
The tech giants admit this may lead to some mistakes - but say they still need to remove harmful content.
The coronavirus scare has led to a surge of medical misinformation across the web.
Google, which owns YouTube, said appeals about content wrongfully removed could take longer under the new measures.
Twitter, meanwhile, promised that no accounts suspended by automated software would be permanently banned without a human review.
Computer errors
Content review operations for Facebook, Twitter and Google are spread around the globe, including in the US, India and Spain.
All those countries have said employees should work from home – but switching the content review operation to remote working is complicated.
Facebook has sent home all its content reviewers until further notice, and says it is paying them during this time.
In a blog post, Facebook said: "With fewer people available for human review we'll continue to prioritise imminent harm and increase our reliance on proactive detections in other areas."
Twitter said it would increase the use of machine-learning and automation but acknowledged they could "sometimes lack the context that our teams bring, and this may result in us making mistakes".
As a result, it said it would not permanently ban any accounts based solely on automated systems.
And nearly all of Google's full-time employees worldwide have been ordered to work from home due to the coronavirus pandemic.
"This means automated systems will start removing some content without human review," YouTube said in a blog.
"As we do this, users and creators may see increased video removals, including some videos that may not violate policies.
"Our workforce precautions will also result in delayed appeal reviews."
It added it would also be more cautious about what content gets promoted, including livestreams.
It comes at a time when the tech giants are being asked to ramp up their removal of coronavirus misinformation on their platforms.
The UK's Digital, Culture, Media and Sport committee has asked the government to explain why it has taken two months to set up a unit to counter the spread of disinformation about the virus.
MPs expressed concern that false narratives about coronavirus could undermine efforts to deal with the crisis.