Three thousand Google employees have signed a letter protesting the internet giant’s contract with the Defense Department to develop artificial intelligence to analyze imagery collected by drones.
The employees are calling on Google CEO Sundar Pichai to cancel the project immediately and to “enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”
Since 2014, the nations that have signed the Convention on Certain Conventional Weapons (CCW) have convened biannual conferences of experts to study the issue. Academics, policymakers and activists have found widespread agreement on the importance of controlling autonomous weapons, yet failed to reach consensus on how to do it.
Technologists Speak Out
Along the way, technologists have become increasingly vocal on the issue.
Last August, 116 computer scientists and founders of AI firms called on the United Nations to ban the development and use of killer robots. The open letter, signed by Tesla’s chief executive Elon Musk, warned that an urgent ban was needed to prevent a “third revolution in warfare,” after gunpowder and nuclear arms.
The letter asserted:
“Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
“We do not have long to act,” the AI experts concluded. “Once this Pandora’s box is opened, it will be hard to close.”
The Google employees made the same point. While a top Google executive has assured them that the AI technologies under development will not “operate or fly drones” and “will not be used to launch weapons,” the employees have rejected that assurance.
“The technology is being built for the military,” they note, “and once it’s delivered it could easily be used to assist in these tasks.”
Follow the Leader
Google's involvement in Project Maven follows the lead of Eric Schmidt, the chairman of Alphabet, Google’s parent company. Since 201X, Schmidt has led the Pentagon’s Defense Innovation Board, which he says seeks “to get the military up to speed with things which are going outside the military.”
In a speech last November about the board’s work, Schmidt mentioned Project Maven twice, and said, “One of the most important points we made is that the military is not leading in AI.”
Schmidt acknowledged “a general concern in the tech community” that “the military-industrial complex [is] using their stuff to kill people incorrectly … it’s essentially related to the history of the Vietnam War and the founding of the tech industry.”
Schmidt's oddly detached comments embody the sort of abstract utilitarianism that assumes technological solutions are inherently beneficial. They exemplify the mindset Google's employees are now questioning.
Mary Wareham, a leader of the Campaign to Stop Killer Robots, told AlterNet that the campaign wrote to Google last month seeking more information about Project Maven. “We received a swift and friendly but vague response that did not address our questions,” she said.
“I hope the company will realize the public relations benefits of speaking directly to the concerns over autonomy in weapons systems and publicly support the call to ban fully autonomous weapons,” Wareham said. “These questions are only going to intensify.”
Read the full text of the Google employees' letter.