A few years ago, Ken Crum started getting uncomfortable with how much of his life seemed to be online. The long-time computer programmer was particularly concerned by what companies appeared to know about him.
The amount of personal information was mind-boggling to the 66-year-old Texan, who recently moved from Dallas to the small town of Weatherford. Data brokers were collecting his personal details. Social media was targeting ads at him. Then one day, after shopping at a local home improvement store, he got an email from the company asking how his visit was. While he can’t be absolutely certain, he’s pretty sure the company used location-tracking on his work phone to find him.
He found it all unnerving.
So Crum decided to pull himself off most social media, keeping just his LinkedIn account. He quit using Google in favor of DuckDuckGo, a search engine that promises to protect user privacy. He deleted tracking-prone “app crap” — his words — from his smartphone. And he tried to wrestle as much of his personal information back from the data brokers as possible, paying for a subscription to DeleteMe, a service that helps people remove information from databases.
“I wanted to get as much of me off the internet as possible,” Crum said. (Abine, the company that owns DeleteMe, introduced CNET to Crum.)
Crum, a charming individual who shares his opinions freely, isn’t anti-technology. He’s simply one of a growing number of Americans concerned by the loss of control over personal information that ranges from your Social Security number to your search history. Today, your digital self includes your social media accounts, biometric identifiers, usernames and passwords. Possibly most creepy: Your smartphone records the location data of your daily life as you tote it around.
The data collection doesn’t stop there. Your Yelp review of a pizza parlor and the comments you post on your local newspaper’s website become part of your digital profile, too. They’re used by marketers trying to get you to buy something, to support a policy or to vote for a candidate. There are oodles of data about you, and most of it is free for the taking.
As you’d expect, there’s no shortage of companies looking to profit from it. At last count, there were about 540 data brokers operating in the US, according to the Privacy Rights Clearinghouse, which based its estimate on numbers from data broker registries maintained by California and Vermont.
The skyrocketing amount of consumer data online has also given cybercriminals new opportunities to exploit your personal details for identity theft, online scams or other kinds of fraud. Once cybercriminals get your data, they use it to try to bust into your accounts or sell it to other cybercrooks. Get breached once and you may spend years cleaning up the mess. (Here’s how to remove your personal information from the internet.)
The pandemic has only increased the amount of personal data online because more people turned to the internet for work, school and social connections. According to Abine, the number of pieces of online personally identifiable information per individual has jumped 150% in the last two years, boosted by increases in both data broker activity and COVID-related consumer screen time.
That can make it all but impossible to distinguish your digital identity from your real-world self.
“All identity is digital identity, at this point,” says Eva Velasquez, president and CEO of the Identity Theft Resource Center (ITRC), a nonprofit group that helps victims of identity theft. Separating the two would be a mistake, she adds.
Why privacy matters
Creating massive databases of consumer profiles has gotten easier in recent years because of advances in artificial intelligence technology that allow for better cross-referencing and correcting of data, says John Gilmore, Abine’s head of research. The databases are bigger and more accurate than ever.
Though many people worry data brokers are mining their social media accounts for personal information to feed those databases, Gilmore says the vast majority of information comes from voter registration rolls, property and court records, and other conventional public sources.
Still, smaller, questionably legitimate data farmers are likely scraping social media, as well as buying stolen consumer data off the dark web, Gilmore says. Worse, cybercriminals and extremist groups have used these methods. A few years ago, members of the alt-right — a loose collection of neo-Nazis and white supremacists — attempted to create data profiles of supposed far-left activists with the intent of using the data to dox and harass them.
Those groups have a lot of data to work with these days. People have unwittingly become “data creators,” Velasquez says. The digital footprint produced by the average person extends far beyond Facebook oversharing. Keeping tabs on the data created by online shopping, online entertainment and simply surfing the internet is beyond the capabilities of most people.
That’s why the Electronic Frontier Foundation and other digital privacy advocates are arguing for limits on what types of data companies can collect, how long they can keep it and who they can share it with.
Curbing the amount of data stored would reduce the impact of data breaches.
“It seems like every week there’s a breach,” says Adam Schwartz, senior staff attorney for the EFF. “To state the obvious, if the information isn’t collected in the first place or stored, this wouldn’t be an issue.”
Consumers covered by state privacy laws also need the ability to sue companies that infringe on the rights protected by those laws without having to rely on state attorneys general to do it for them, he says. For example, Illinois’ privacy law gives consumers this right, while a similar Texas law doesn’t.
That isn’t to say Texas’ attorney general has been silent on data privacy issues. The AG’s office filed suit in February against Facebook’s parent company, Meta, over its past use of facial recognition technology, accusing it of violating the state’s privacy laws by capturing biometric data on tens of millions of Texans without properly obtaining consent.
Months earlier, Facebook had pledged to shut down its facial recognition system and delete the face scan data of more than 1 billion users. The company said the decision was spurred by societal concerns and regulatory uncertainty about facial recognition technology.
Crum, the computer programmer in Texas, says he was floored by his first DeleteMe report, which showed that more than 200 data brokers had harvested personal tidbits about him. The data included his name, address, emails and phone numbers, along with information about his shopping habits and purchase history.
“There’s nothing bizarre about my life,” he says. “But I value my privacy, and don’t want anybody selling my information to any Tom, Dick or Harry for any reason.”
The threat of data breaches
The organizations that hold our data are under constant threat from cybercriminals looking to steal it. When people’s identities are compromised, the fallout can be life shattering.
Identity theft can destroy a person’s credit, make it difficult to get housing and, in some cases, drive people to contemplate suicide, according to a report by the ITRC.
Eighty-three percent of people polled by the ITRC said they were unable to rent an apartment or find housing as a result of identity theft, while 67% said they couldn’t pay their bills as a result of their information being exploited.
Many of the crimes stemmed from the increasing digitization of data and records, Velasquez of the ITRC says. Twenty years ago, identity theft was all about dumpster diving, mail fraud and human resources records stolen by an insider. Now it’s about acquiring data through mass data breaches, phishing and scam phone calls.
“So all of those logins and passwords,” Velasquez said, “they’re part of your digital identity, too.”
The pandemic has only made things worse, she says. Stimulus payments have been stolen, and people who qualify for unemployment benefits have been unable to get them because they can’t prove who they are.
The Federal Trade Commission, which tracks fraud complaints, recorded 1.4 million cases of identity theft last year, about the same as in 2020 but double the 700,000 cases it logged in 2019.
Abine’s Gilmore says the exposure of personal activity can hurt people, too.
He pointed to the recent theft and publication of a list of donors to truckers who were protesting COVID restrictions in Canada by blocking border crossings. At least one Canadian government official lost his job after he was discovered to be a donor.
“This use of personal information to hurt others has become so easy, and you can’t be punished for it,” he says.
The debate over biometrics
Sometimes the need to protect data can clash with the desire to keep it private, particularly when you’re talking about biometrics.
A person’s face, fingerprints and even some behavioral characteristics, such as how they move their mouse across a computer screen, can be great identifiers because they’re unique and don’t change. They’re also convenient: they can’t be cracked or forgotten the way a password can, and they can’t be misplaced like a key card.
Velasquez says they’re a necessary next step in ID protection and proof, though “frameworks and guardrails” need to be built in to protect consumers. Specifically, the tech needs to be something people choose to opt into, and the biometric data should be used only with a person’s consent.
Additionally, there should be alternatives for people who need them — for example, elderly or vision-impaired people who might not be able to scan their faces to get facial ID data with a smartphone camera, she says.
Privacy advocates, however, have serious concerns about other ways that biometric data can be used. For example, tech startup Clearview AI has been targeted by investigations and lawsuits because of its creation of a facial recognition tool powered by billions of images it scraped from social media sites without consent.
The tool has been licensed to law enforcement and government agencies for use in solving crimes. Privacy advocates warn it’s been unlawfully used to identify people protesting police brutality. They worry tools like Clearview could stifle free speech by discouraging people from attending such gatherings out of fear authorities will target them.
Lawsuits filed in both federal and state courts claim the company violated Illinois’ Biometric Information Privacy Act (BIPA), which requires opt-in consent to collect someone’s faceprint.
Clearview maintains that its right to collect the data is protected by the First Amendment, but its motion to dismiss the federal case on those grounds was recently denied. Representatives for Clearview didn’t respond to a request for comment.
Schwartz of the EFF, which has filed friend-of-the-court briefs in both the federal and state cases, says the lawsuits are certain to set key precedents about what companies can and can’t do with biometric data. In the meantime, the EFF continues to push for the passage of strong privacy protection laws at the state level.
As for regular people like Crum, Schwartz says the EFF encourages “surveillance self-defense” measures such as always using strong, unique passwords, two-factor authentication and end-to-end encryption.
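For readers who want a concrete starting point on the first of those measures, here is a minimal sketch of what generating a strong, random password can look like, using only Python's standard-library secrets module. The 20-character length and the suggestion to keep the result in a password manager are illustrative choices, not specific EFF guidance.

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits and punctuation.

    Uses the secrets module, which draws on the operating system's
    cryptographically secure random source (unlike the random module,
    whose output is predictable and unsuitable for passwords).
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


if __name__ == "__main__":
    # Print one candidate password. A password manager can store it so it
    # never has to be memorized or reused across accounts.
    print(generate_password())
```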
“But at a certain point, we can’t do enough as individuals,” he says. “The surveillance is just that far reaching.” (Republished from CNET.com)
Bree Fowler is a senior writer covering cybersecurity. Before joining CNET she covered the same beat, along with other tech topics, for The Associated Press and Consumer Reports.