20 May 2018

Google's march to the business of war must be stopped

Lucy Suchman, Lilly Irani and Peter Asaro

A US remotely piloted aircraft in Iraq, 2015. Project Maven uses machine learning and artificial intelligence to analyse the vast amount of footage shot by US drones.

Should Google, a global company with intimate access to the lives of billions, use its technology to bolster one country's military dominance? Should it use its state-of-the-art artificial intelligence technologies, its best engineers, its cloud computing services, and the vast personal data that it collects to contribute to programs that advance the development of autonomous weapons? Should it proceed despite moral and ethical opposition by several thousand of its own employees?


Gizmodo reported this week that more than a dozen Google employees have resigned over Google's provision of AI support to a Pentagon drone program called Project Maven, which aims to improve the ability of drones to identify humans. This follows a public letter, signed by 3,100-plus Google employees, who say that Google should not be in the business of war.

We agree with and support those employees, and we are joined by more than 700 academic researchers who study digital technologies. We support their demand that Google terminate its contract with the US Department of Defense (DoD), and that the company commit not to weaponize the personal data it collects or to support the development of autonomous weapons. We also urge Google's executives to join other artificial intelligence (AI) and robotics researchers and technology executives in supporting an international treaty to prohibit autonomous weapon systems.

Google has long sought to organize and enhance the usefulness of the world’s information, and along the way it has taken responsibility for collecting our most intimate information, from our personal correspondence to our calendars, to our location data, to our private photos. Being entrusted with such personal information comes with the responsibility to protect it, and to use it carefully, in ways that respect the global makeup of those who contribute these records of their lives.

Given this grave responsibility, news of Google’s involvement in the defense department’s Project Maven alarmed many of us who study digital technologies. Maven is a US military program that applies AI to drone surveillance videos for the purpose of detecting “objects of interest”, which are flagged for human analysts. Google is providing not only AI technologies (potentially built in part on the personal data that Google collects), but also engineers and expertise to the DoD. Maven is already being used “in the Middle East” and the project is slated to expand by next summer, eventually being used on blanket surveillance footage from “a sophisticated, hi-tech series of cameras … that can view entire towns”.

Reports on Project Maven currently emphasize the role of human analysts, but the DoD’s ambitions are clear. These technologies are poised to automate the process of identifying targets, including people, and directing weapons to attack them. Defense One reports that the DoD already plans to install image analysis technologies onboard the drones themselves, including armed drones. From there, it is only a short step to autonomous drones authorized to kill without human supervision or meaningful human control. We already lack sufficient oversight and accountability for US drone operations. It’s unlikely that we would know when the US military takes Maven across the threshold from image analysis assistance to fully autonomous drone strikes.

Even without automated targeting, the US drone program has been extremely controversial, with many arguing that targeted killings violate US and international law. Targeted killings include "personality strikes" on known individuals named on "kill lists", and "signature strikes" driven by "pattern-of-life analysis", which target people solely on the basis of their appearance and behavior in surveillance imagery. As a result, not only are bystanders frequently killed in strikes, but social gatherings of civilians, such as weddings, are sometimes directly targeted. "Every independent investigation of the [drone] strikes," the New York Times reported in 2013, "has found far more civilian casualties than administration officials admit."

The fact that military funding supported the early development of computing technology does not mean that it must determine the field’s future, particularly given the current power of the tech industry. With Project Maven, Google joins hands with the arguably illegal US drone program, and advances the immoral practice of statistically and algorithmically targeted killings. Google, a global company, has aligned itself with a single nation’s military, developing a technology that could potentially put its users, and their neighbors, at grave risk.

We are at a critical moment. Two months ago, Stanford professor and Google Cloud AI director Fei-Fei Li wrote an op-ed in the New York Times titled "How to Make AI That's Good for People". We call on Google's leadership to live up to its ethical responsibilities by listening to people who challenge Google to expand its definition of "good". We call on Google to expand its definition of "people" to include those already subjected to illegal drone strikes and data surveillance.

This week, in response to a question at the I/O developer conference, Google AI chief Jeff Dean stated that he opposes using AI to build autonomous weapons. We call on Google to support ongoing international efforts at the United Nations to ban the development and use of autonomous weapons. We call on Google to respect employees’ right to refuse work they find immoral or unethical. Google’s employees asked their company to leave money on the table and stay true to its words: “Don’t be evil.” Nowhere is this more urgent than in deciding whether to build systems that decide who lives and who dies.

We therefore ask Google to:

Terminate its Project Maven contract with the DoD.
Commit not to develop military technologies, nor to allow the personal data it has collected to be used for military operations.
Pledge to neither participate in nor support the development, manufacture, trade or use of autonomous weapons; and to support efforts to ban autonomous weapons.
