Is This the End of Anonymity on the Internet?

GUEST BLOGGER
Hallie Ayres
Contributing Writer

When New York Times reporter Kashmir Hill began her investigation into Clearview AI, the company was shrouded in mystery. After months of digging, Hill has uncovered a fair amount of information about the facial recognition app, but many questions remain unanswered, leaving the future of privacy as enigmatic as ever.

Hill’s sweeping look into Clearview, its key players, and their motives was published on January 10, 2020. In the article, Hill documents the origins of the startup, which she first learned about through a leaked police memo that mentioned a radical new facial recognition software. Hill, who has been covering privacy for 10 years, was shocked to learn of Clearview’s premise: by scraping the open web for publicly posted photos, the app has amassed a database of more than 3 billion images, which it runs through facial recognition software.

Violating terms of service and crossing ethical lines

Technology of this nature and scale has long been considered taboo, even for major Silicon Valley tech companies. In 2011, Google announced that it had the means to develop a service like this but chose not to, citing the myriad ways the technology could be put to malicious and even violent use. As Hill described in an episode of “The Daily,” the New York Times’ podcast, “If this app were made publicly available, it would be the end of being anonymous in public. You would have to assume anyone can know who you are any time they’re able to take a photo of your face.”

After developing the concept for the app in 2016, Clearview AI founder Hoan Ton-That enlisted the help of engineers who built a program that automatically collects images from social media sites such as Facebook, Twitter, Instagram and Venmo, as well as from news sites, employment sites and educational sites. This sourcing of its database material is what sets Clearview apart from other facial recognition tools.
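At its core, the kind of crawling Hill describes comes down to parsing public web pages for image links that can then be downloaded and indexed. The toy sketch below, using only Python’s standard library, shows that first step; the page markup and URLs are invented for illustration, and this is of course not Clearview’s actual code:

```python
from html.parser import HTMLParser

class ImageScraper(HTMLParser):
    """Collect the src attribute of every <img> tag on a page."""
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

# A stand-in for HTML fetched from a public profile page
# (hypothetical markup; a real crawler would download it first).
page = ('<html><body>'
        '<img src="https://example.com/a.jpg">'
        '<img src="https://example.com/b.jpg" alt="photo">'
        '</body></html>')

scraper = ImageScraper()
scraper.feed(page)
print(scraper.image_urls)  # the URLs a crawler would queue for download
```

A crawler built on this idea would fetch each discovered URL, run the face through a recognition model, and store the result — which is precisely the step that sites’ terms of service prohibit.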

Many sites, Facebook and Twitter included, explicitly prohibit scraping images uploaded by their users. By harvesting these images anyway, Clearview violates those sites’ terms of service. In response to questions about this, Ton-That told Hill that the database draws only from publicly available images. If Facebook users disable the privacy setting that allows search engines to include their profile in searches, their photos will not find their way into Clearview’s database.

However, once a profile has been scraped, the images acquired by Clearview will remain permanently in the database, even if they are later deleted from the social media site.

Grassroots marketing kept them under the radar

The app, which was backed financially by venture capitalist Peter Thiel, has been used by more than 600 law enforcement agencies within the past year and has also been licensed to a handful of companies for security purposes, though Clearview refused to provide a list of its clients to the Times.

Though police departments have had access to facial recognition software for nearly 20 years now, their databases have been limited to images provided by the government, such as driver’s license photos, mug shots and juvenile booking photos. Clearview seized on this market by offering 30-day free trials to law enforcement officers across the country, banking on the assumption that the officers would urge their departments to subscribe to the service and would spread word of Clearview to their peers.

The strategy worked. Along with those 600 agencies, the F.B.I., the Department of Homeland Security and some Canadian offices are experimenting with Clearview. CBS News reports that the Chicago Police Department, one of the nation’s largest, maintains a two-year contract with Clearview that costs around $50,000, though only 30 officers have access to the app, according to a statement from the department.

As more agencies continue to use the service, Clearview’s database grows with each new image upload. Hill notes that “the company also has the ability to manipulate the results that the police see.” She recounted a particularly chilling story from her investigation: after Clearview noticed that numerous officers, at Hill’s request, had been uploading her photo, her face was removed from search results. When Hill inquired about this, Ton-That deemed it a “software bug.”

A privacy battle may be on the horizon

While a wide range of lawmakers, politicians, and computer science professionals have called for the banning of Clearview and facial recognition programs in general, police departments have praised Clearview for making it possible to solve cases that had been unresolved for years.

Ton-That told CBS News that the app has an accuracy rate of 99.6%, and Hill mentions that she spoke to a retired police chief from Indiana who solved a previously unsolvable case within 20 seconds by using Clearview. “One of the officers told me that they went back through like 30 dead-end cases that hadn’t had any hits on the government database, and he got a bunch of hits using the app… With the government databases they were previously using, they had to have a photo that was just a direct full-face photo of a suspect -- like mug shots and driver’s license photos. But with Clearview, it could be a person wearing glasses, or a hat, or part of their face was covered, or they were in profile, and officers were still getting results on these photos,” Hill remarks on the podcast.

Since the publication of Hill’s article, Twitter, Google, YouTube, Facebook, Venmo and LinkedIn have all sent cease-and-desist letters to Clearview, demanding that it delete any previously collected data and refrain from sourcing any new data or images from their respective sites. In response to these letters, Ton-That has argued for Clearview’s right to access public data, citing the First Amendment. “The way we have built our system is to only take publicly available information and index it that way,” he told CBS News.

Clearview has also faced reproach from several lawmakers and data privacy watchdog organizations, all expressing fears that Clearview has set the scene for the total loss of anonymity in public. Toward the end of January, New Jersey’s attorney general, Gurbir S. Grewal, banned state prosecutors from using the app, saying, “I’m not categorically opposed to using any of these types of tools or technologies that make it easier for us to solve crimes, and to catch child predators or other dangerous criminals. But we need to have a full understanding of what is happening here and ensure there are appropriate safeguards.”

The consequences are unknown

In analyzing the computer code behind Clearview, researchers at the Times discovered that it includes programming that could eventually allow the app to run on augmented-reality glasses, paving the way for users to identify anyone they walk past. While many law enforcement officials and Clearview’s investors anticipate that the app will eventually become widely available to the public, Ton-That remained cautious: “There’s always going to be a community of bad people who will misuse it,” he told Hill.

However, now that the taboo against using publicly sourced images in this kind of software has been broken, there is nothing to stop another company from creating its own version of the app. With no ban or any real laws in place to regulate the use of such a program, Hill left readers and listeners with a bleak thought: “In terms of holding this tool back, we’re just relying on the moral compasses of the companies that are making this technology and on the thoughtfulness of people like Hoan Ton-That.”
