Data ethics: When your public images are used for profit

NOTE: This post was originally published on February 17, 2021

The Canadian government came down hard on facial-recognition company Clearview AI earlier this month, determining that the firm’s practice of scraping billions of publicly available images and selling the resulting facial-recognition technology to law-enforcement bodies was illegal mass surveillance. 

The company used nearly 3 billion images from sources such as social-media profiles to build a facial-recognition app. It then sold the tech to over 2,400 law-enforcement organizations in the U.S., as well as at least one in Canada.

The Canadian Privacy Commissioner, which began an investigation into Clearview AI approximately a year ago, determined that the practice “violated the reasonable expectation of privacy of individuals” and demanded that the company delete its repository of Canadian images.

Canada is the first country to take such a strong position against Clearview AI’s practices. Authorities in Australia and the UK are also investigating the company, but it remains to be seen what they will recommend.

Using your images for products and experiments

Clearview AI is far from the first company to wade into an ethically nebulous area. In 2014, Facebook experimented with the News Feeds of almost 700,000 users to see if it could influence their emotions. It even co-published the findings in an academic journal, in partnership with a professor from Cornell University.

And apparently, Facebook believed it had done nothing wrong. After public backlash over the experiment, the company pointed out that its nearly 10,000-word terms of service allowed the use of data for “research and improvement.”

Clearview AI, too, seemed unfazed by the strong public response and the Canadian Privacy Commissioner’s verdict. It argued that individuals’ consent was not required because users had willingly published the images themselves, and that it was pushing back against a world in which only a handful of tech companies had access to public information.

And even if you think crime fighting is a worthy cause, providing facial recognition for law enforcement is surely just the start. Powerful technology will find other uses—and it’s likely that some of those uses will be much more harmful.

‘It’s your choice to post’

When confronted with questions over the ethics of their businesses, many of the Big Tech companies will argue that the public isn’t required to use their services. That’s a fairly reductionist argument. 

During the pandemic, how many students can opt out of Zoom to attend class? Can those preferring to isolate and socially distance afford to stop relying on Instacart or Amazon Prime for essentials? For anyone nervous about public transportation, is there really an alternative to Uber or Lyft? And when you know your best chance of getting hired is an up-to-date LinkedIn profile with a clear headshot, would you choose privacy over the ability to find a job?

Try as we might, there seems to be no escaping the outsized influence that tech companies have in our lives. 

A Wall Street Journal article notes how our reliance on technology-assisted platforms has shot to stratospheric levels since the start of the pandemic. Spending has surged online for computers, retail goods, video games, and groceries. The combined revenue for Apple, Microsoft, Amazon, Alphabet, and Facebook grew 20% at a time when brick-and-mortar retailers and legacy businesses like airlines teetered on the edge of bankruptcy.

Make it illegal

Some lucky citizens, such as those in the EU, enjoy the ability to hide some of their personal information. But the information isn’t scrubbed from the internet as some might think; the Right to be Forgotten only compels search engines like Google to delist URLs from searches if they meet certain criteria. Plus, the rule only applies within the EU. Anyone searching from outside those borders can still see those pages in the search results.

As it stands, there is little in the law to stop companies from mining publicly available personal data and using it for whatever purpose they wish. Is it ethical? Definitely not. Can it be prevented? For the most part, no.

The contrarian argument that we should simply go off the grid and live in a bunker no longer holds sway; it’s pointless and self-defeating. What we need is a more robust declaration of acceptable business practices and a clear line between our right to privacy and the interests of corporations looking to profit from our personal information and images.
