Thousands of Google employees have signed an open letter asking the internet giant to stop working on a project for the US military.
Project Maven involves using artificial intelligence to improve the precision of military drone strikes.
Employees fear Google's involvement will "irreparably damage" its brand.
"We believe that Google should not be in the business of war," says the letter, which is addressed to Google chief executive Sundar Pichai.
"Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
No military projects
The letter, which was signed by 3,100 employees - including "dozens of senior engineers", according to the New York Times - says that staff have already raised concerns with senior management internally. Google has more than 88,000 employees worldwide.
In response to concerns raised, the head of Google's cloud business, Diane Greene, assured employees that the technology would not be used to launch weapons, nor would it be used to operate or fly drones.
However, the employees who signed the letter feel that the internet giant is putting users' trust at risk, as well as ignoring its "moral and ethical responsibility".
"We cannot outsource the moral responsibility of our technologies to third parties," the letter says.
"Google's stated values make this clear: every one of our users is trusting us. Never jeopardise that. Ever.
"Building this technology to assist the US government in military surveillance - and potentially lethal outcomes - is not acceptable."
'Non-offensive purposes'
Google confirmed that it was allowing the Pentagon to use some of its image recognition technologies as part of a military project, following an investigative report by tech news site Gizmodo in March.
A Google spokesperson told the BBC: "Maven is a well-publicised Department of Defense project and Google is working on one part of it - specifically scoped to be for non-offensive purposes and using open-source object recognition software available to any Google Cloud customer.
"The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.
"Any military use of machine learning naturally raises valid concerns. We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies."
The internet giant is working on developing policies for the use of its artificial intelligence technologies.