Thousands of Google employees have signed an open letter asking the internet giant to stop working on a project for the US military.
Project Maven involves using artificial intelligence to improve the precision of military drone strikes.
Employees fear Google's involvement will "irreparably damage" its brand.
"We believe that Google should not be in the business of war," says the letter, which is addressed to Google chief executive Sundar Pichai.
"Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
No military projects
The letter, which was signed by 3,100 employees - including "dozens of senior engineers", according to the New York Times - says that staff have already raised concerns with senior management internally. Google has more than 88,000 employees worldwide.
In response to concerns raised, the head of Google's cloud business, Diane Greene, assured employees that the technology would not be used to launch weapons, nor would it be used to operate or fly drones.
However, the employees who signed the letter feel that the internet giant is putting users' trust at risk, as well as ignoring its "moral and ethical responsibility".
"We cannot outsource the moral responsibility of our technologies to third parties," the letter says.
"Google's stated values make this clear: every one of our users is trusting us. Never jeopardise that. Ever.
"Building this technology to assist the US government in military surveillance - and potentially lethal outcomes - is not acceptable."
'Non-offensive purposes'
Google confirmed that it was allowing the Pentagon to use some of its image recognition technologies as part of a military project, following an investigative report by tech news site Gizmodo in March.
A Google spokesperson told the BBC: "Maven is a well-publicised Department of Defense project and Google is working on one part of it - specifically scoped to be for non-offensive purposes and using open-source object recognition software available to any Google Cloud customer.
"The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.
"Any military use of machine learning naturally raises valid concerns. We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies."
The internet giant is developing policies for the use of its artificial intelligence technologies.