With AI and Justice for All

AI-generated image of a robot judge surrounded by surveillance cameras.

Editor’s Note: This post about AI’s use by governments was originally published on the now-defunct All Turtles blog in July 2018. You can read other articles I’ve written for that site here.

Look at today’s technology headlines and odds are you’ll see something about artificial intelligence. Whether it’s a new startup or an announcement from an established company, practically everyone is working on AI. From smart assistants and connected devices to image and face recognition, algorithms, and robots, we seemingly applaud every innovation. But recent events suggest there’s a line some companies won’t cross, at least in the United States, and that line is drawn when AI is used by our government.

Employees at the companies making AI don’t pause when their products, warts and all, are used by private companies for any number of questionable purposes. But when government agencies adopt these products, things come to a halt. That’s when employees revolt and vocalize their opposition, taking a stand against seeing AI weaponized.

AI Transparency Matters

Rekognition is Amazon’s software that powers image analysis within applications. Seemingly innocuous when it launched in 2016, Rekognition has since gained several features, including real-time facial recognition and “improved” face detection. Then came revelations this spring and summer about its use by police. The Washington County (Oregon) Sheriff’s Office had been piloting Rekognition for the past year “to reduce the identification time of reported suspects.” Amazon also signed a deal with the city of Orlando.
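To make concrete what developers are actually working with, here is a minimal sketch of calling Rekognition’s face detection through AWS’s public boto3 SDK. The file name and region are placeholder assumptions, not details from any deployment described here.

```python
# Minimal sketch: detecting faces with Amazon Rekognition via boto3.
# Assumes AWS credentials are configured in the environment;
# "photo.jpg" and the region are placeholders.
import boto3

client = boto3.client("rekognition", region_name="us-west-2")

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

# DetectFaces returns a bounding box and confidence score for each face;
# Attributes=["ALL"] adds estimated attributes such as age range and emotion.
response = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])

for face in response["FaceDetails"]:
    box = face["BoundingBox"]
    print(f"Face at {box} (confidence: {face['Confidence']:.1f}%)")
```

The same SDK also exposes operations like search_faces_by_image, which matches a face against a stored collection of known faces — the kind of capability at issue when a sheriff’s office runs images against a mugshot database.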

Tensions rose in May when the American Civil Liberties Union (ACLU) of Northern California and dozens of civic activist organizations submitted an open letter (PDF) to Amazon chief executive Jeff Bezos requesting that the company cease its dealings with law enforcement. The ACLU had obtained documents that, it claimed, proved Amazon sold Rekognition to police and used nondisclosure agreements to circumvent public disclosure.

“People should be free to walk down the street without being watched by the government,” the ACLU wrote in a blog post. “By automating mass surveillance, facial recognition systems like Rekognition threaten this freedom, posing a particular threat to communities already unjustly targeted in the current political climate. Once powerful surveillance systems like these are built and deployed, the harm will be extremely difficult to undo.”

Employees at Amazon demanded the contracts be ended. Their protest snowballed into a movement, spreading to Microsoft, Google, and Salesforce, where employees demonstrated against similar ongoing projects with police departments, Immigration and Customs Enforcement (ICE), and U.S. Customs and Border Protection. “The revolt is part of a growing political awakening among some tech employees about the uses of the products they build,” wrote Nitasha Tiku in Wired.

Why Protest Now?

Tech companies have been building AI-powered tools and devices for decades, so why the fevered uproar?

It’s not as if connected devices are without flaws: Amazon’s Alexa privacy debacle, smart TVs reportedly listening to you, Google’s “racist” algorithm within its photo app, and more. In those cases, dissent among employees barely registered compared with the protests of recent weeks. Providing tools to government agencies isn’t new, but it’s one thing for companies to produce software ultimately used by consumers and another for it to end up with law enforcement, where the technology could be turned against the public. It becomes a hot-button issue when civil liberties, activism, and immigration under the Trump administration are involved.

Google and Clarifai are two companies known to be working with the Department of Defense as part of Project Maven. Google’s participation became a public debacle, and the company eventually opted not to renew its contract. Clarifai, meanwhile, avoided the spectacle of a public disagreement among its employees.

Clarifai CEO Matt Zeiler stated in a blog post that responsibility was a core part of the company’s values and that everyone on his team understood the nature of the work and had signed a nondisclosure agreement. But there was some pushback:

“Two employees decided they no longer wanted to be part of the initiative and were reassigned to other projects within Clarifai,” Zeiler wrote. “An important part of our culture is having employees who are actively engaged in the work that we do. We make sure they understand the projects they are asked to work on and regularly accommodate employee requests to switch or work on particular projects of interest.”

A lack of transparency, along with ethical and political concerns, is the likely catalyst for such dissension. The likes of Amazon, Google, Microsoft, and Salesforce are so large, with so many customers, that it’s difficult for employees to know everything that’s going on, including all the ways their technology is being used. In the aforementioned examples, transparency was key: executives failed to disclose internally what was happening with certain customers and to assuage employee fears about malicious uses of AI.

Brian Brackeen, the CEO of facial recognition software provider Kairos, believes the use of tech like facial recognition may infringe on our civil liberties and therefore shouldn’t be in the hands of the police:

“Facial recognition technologies, used in the identification of suspects, negatively affect people of color. To deny this fact would be a lie,” he opined in a TechCrunch article. “And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether.”

Avoiding China’s Dystopia

China is an extreme example of what can happen when a country incorporates AI into its national security apparatus. The government has deployed facial recognition and surveillance cameras to track its 1.4 billion citizens. The country has laid out plans to become the “world’s primary AI innovation center” by 2030, which means investing up to $63 billion to grow core AI industries and establishing standards to boost efforts by Tencent and Alibaba. Experts suggest China is already on track to dominate the AI race, surpassing investments made in the U.S.

But China’s efforts are worrisome: the government is using AI to monitor citizens, correct their behavior through public shaming, and weaponize the technology for military purposes (e.g., cyberattacks). One use case appears in a recent New York Times article: “Invasive mass-surveillance software has been set up in the west to track members of the Uighur Muslim minority and map their relations with friends and family.”

In the U.S., we may be convinced that our democratically elected government will not use technology to violate civil rights or suppress liberty. But laws are often slow to adapt to technology, so developers must always question how their software will be used. Unlike the GDPR, which protects data privacy in the European Union, no comparable law exists to govern the abuse of AI; tech companies operate on an honor system of sorts.

There are bona fide reasons for government agencies and law enforcement to use AI, such as improving crisis response in a disaster (natural or man-made) or bolstering security. And while there has historically been a solid friendship between Silicon Valley and government, AI brings a new dimension to the relationship. Tech firms are not part of the traditional military-industrial complex: their engineers build products to solve problems, while their counterparts at defense contractors build weapons to kill people. The pause by tech workers, in the face of opacity from their employers, should have been expected.

The lesson here is simple. Tech companies developing AI must remain transparent and vigilant about how their technologies are being used, not only by private companies but also by the government.

Those Amazon, Google, Microsoft, and Salesforce workers who spoke out understand what’s at stake: the risk of turning countries into surveillance states.
