Is privacy dead? Do we still maintain the desire to safeguard our habits and thoughts? Or does our recent disregard for privacy show that we have really lost this primal need?
In my opinion, our startling lack of concern for safeguarding our personal lives stems more from a lack of understanding about how much of our personal data is stored and analyzed in today’s digital world than from any fundamental disregard for our privacy.
Whatever the reason, we all seem increasingly oblivious to how much of our information we allow to be shared. If I wanted to, I could easily tell you where most of my friends are right now. With a little more effort I could put together what they’ve been up to every week for the last five years. Eating sushi at that restaurant? Traveling with HIM where!? The world probably knows it all.
That type of networked world can be creepy. But at least sharing that data is still an affirmative and (usually) conscious personal decision.
Where our culture of open information becomes alarming is in how easy it is to subtly allow our lives to be monitored. It’s no longer about what we choose to share. Now it’s about what gets collected about us. Being plugged in (especially on mobile) basically requires letting countless companies monitor various facets of our life. We are stuck in a series of indiscriminate dragnets.
Technology has become sophisticated enough, and data storage cheap enough, that new paradigms of data collection are emerging to challenge historical notions of privacy. Since our old protections are no longer adequate, we are at a pivotal moment in deciding our future path.
Yet at the exact moment that we need to throw ourselves into redefining what it means to protect our privacy, technology companies and governments are both taking advantage of our apathy and lack of understanding to exploit deep lapses in legal and consumer protections.
The result is that our lives are being invaded. Sometimes with our consent, but often without it. And it’s not limited to one small area of your life like social media either. This invasion is all-encompassing.
Sign in with [X]: the Path to the New 1984
We’ve all signed into various services with our Google, Facebook, or LinkedIn account. What we rarely stop to consider is how every time we do that we are “forced” into approving “permissions” that allow these companies to obtain an even greater understanding of our lives across different services. And of course with network effects, the more we do this, the more information is collected about us AND about those people we interact with.
So our actions don't just affect us; they also affect our friends, even if those friends never consented to having their interactions recorded. All of this happens because it's easier to opt in, and the costs of doing so are well hidden. After all, when was the last time you read an electronic user agreement?
Admit it. You, like me, like all of us, just hit the 'I agree' box and move on. But digital convenience isn't free, and APIs shouldn't be built to foster exploitative data collection.
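To make the mechanics concrete, here is a minimal sketch of how a "Sign in with X" button is typically wired up under OAuth 2.0. The endpoint, client ID, and scope names below are invented for illustration; each real provider defines its own. The key point is that the scopes are bundled into a single request that the user approves all at once.

```python
# Hypothetical sketch of a "Sign in with X" authorization request.
# The endpoint and scope names are made up; real providers define their own.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.example.com/oauth2/authorize"  # hypothetical

def build_signin_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Build the OAuth2 authorization URL a 'Sign in with X' button points at."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        # Scopes are bundled into one space-separated request;
        # the consent screen is effectively all-or-nothing.
        "scope": " ".join(scopes),
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

# An app that only needs your name often asks for far more:
url = build_signin_url(
    client_id="demo-app",
    redirect_uri="https://demo-app.example/callback",
    scopes=["profile", "email", "contacts.read", "activity.read"],
)
print(url)
```

Clicking the button sends you to that URL, and the consent screen presents the whole bundle; in practice there is rarely a way to approve the profile scope while rejecting the contacts one.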
Having one account gives a single company access to most of your digital life. It can then use that information to sell things to you, monitor you, and generally influence you in hundreds of ways.[1] If you think that doesn't matter, you're wrong. No one is immune to influence, whether subtle, perceived, or subconscious; we all feel its effects.
It gets worse. All of this collected data is starting to be aggregated by increasingly sophisticated models into new scores across different facets of our lives. These scores could have important ramifications for us, yet they remain shrouded in secrecy. We have the right to see our credit scores; shouldn't we have the same right to know what else is out there?
The end result of our rapid technological advance is that it is now cheap and easy to track individuals across devices, platforms, and locations. And while these corporate practices may be legal today, they shouldn't be. They are too easy to abuse.[2]
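To see why secret scoring models matter, here is a toy sketch, with entirely invented signals and weights, of how aggregated behavioral data could be folded into a single consumer score. Nothing here reflects any real company's model; it only illustrates that the subject sees the number while the weights stay hidden.

```python
# Toy "opaque score": invented behavioral signals, invented weights.
def opaque_score(signals: dict) -> float:
    """Weighted sum of behavioral signals, shifted and clamped to 0-100."""
    weights = {  # invented for illustration; the subject never sees these
        "late_night_purchases": -0.3,
        "gym_checkins_per_week": 0.5,
        "fast_food_orders_per_week": -0.4,
        "social_connections": 0.1,
    }
    raw = sum(weights[k] * signals.get(k, 0) for k in weights)
    return max(0.0, min(100.0, 50.0 + raw))

# The subject sees only the final number, never the inputs or weights:
# 50 + (3 * 0.5) - (5 * 0.4) = 49.5
print(opaque_score({"gym_checkins_per_week": 3, "fast_food_orders_per_week": 5}))
```

The asymmetry is the point: whoever buys the score can act on it, while the person being scored has no way to audit which behaviors moved it.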
But Resistance is Hard
The amount of active resistance required just to maintain a baseline of privacy amid the proliferation of these networks is astronomical. Even if you are careful and register for each website or app individually, the separate terms of use generally contain privacy policies invasive enough to negate your care. Each policy, buried in small text within nebulous legal agreements, gives those companies incredible latitude to access (and share) parts of your life that should be firmly off-limits.
Data intrusion isn't limited to pure software companies either. Recently, for example, Wal-Mart conducted secret tests in which it inserted RFID tags into packages of lipstick so it could track its customers' movements. Meanwhile, CVS Pharmacy sold customer prescription data to drug companies so they could improve their marketing. This isn't an isolated phenomenon, and it isn't innocuous. As one former executive recently stated: "If you show us what you buy, we can tell you who you are, maybe even better than you know yourself."[3]
The ability of companies to trade the use of their software for aggressive access to most of your data is inexcusable and it shouldn’t be legal. And yet…
In the Future You Will Voluntarily Carry Your Monitoring Device
Take Jawbone for example. This health tracking device collects and offers us a plethora of data on our physical activities and physiological responses. That’s great in theory. More data can allow us to make healthier, more informed decisions.
But Jawbone, which we pay a couple hundred dollars for, takes all of our data without asking and without letting us opt out. Its "Privacy Policy" is there to "inform" us, not to give us real control over our own data, and the company is completely unapologetic about this. I can't buy a Jawbone and have my data stored locally. If I buy the product, the company will collect everything it wants about me, end of story. It is as if the privacy policy assumes you would be crazy to want to opt out of data collection or to keep any information from being shared. Such a desire, it seems to say, is a quaint, twentieth-century notion.
But let's say you use UP, Jawbone's most popular wearable product. What exactly are you letting the company access?
- Your location at all times
- All of your contacts
- Data on when you are asleep, awake, or idle
- Your physiological readings
- Your phone model/make/and unique ID
Perhaps most egregiously, by using UP you are also MANDATED to be publicly searchable in the UP Directory. Oh, and if that wasn’t enough, Jawbone will make sure to broadcast your participation by automatically notifying all other UP users who you are linked to via Facebook or email.
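As a thought experiment, the bullet list above could translate into a sync payload like the following. Every field name here is invented; this is not Jawbone's actual API, just a sketch of the breadth of data a wearable can phone home with.

```python
# Hypothetical sketch of a wearable's sync payload. All field names and
# values are invented to mirror the list above, not any vendor's real API.
import json

sync_payload = {
    "device": {"model": "UP", "unique_id": "ab12-cd34"},  # invented ID
    "location": {"lat": 40.7128, "lon": -74.0060, "ts": "2015-02-11T08:30:00Z"},
    "contacts": ["alice@example.com", "bob@example.com"],
    "activity_state": "asleep",          # asleep / awake / idle
    "physiology": {"resting_hr": 58, "steps": 0},
    "directory_listing": "public",       # no private option in this sketch
}

# Note what's absent: no field lets the user mark any of this "local only".
print(json.dumps(sync_payload, indent=2))
```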
What, you thought your personal health history belonged to you?
I think wearable fitness is an important trend. A lot of good can come of tracking and optimizing your life against data. But Jawbone is able (somehow) to legally mandate that if you use their device everything you do belongs to them.
I'm sure some people out there are saying that if I don't agree with this practice I should just avoid Jawbone. Fine. Should I also avoid every app where I can't control my own privacy settings? Maybe I won't use Uber, because it can show the world where I am at any time with its 'God View' tool? Or maybe I won't walk the streets, because CCTV cameras' automatic sensors are scanning my face and recording my every location. Where does the line get drawn?
The reality is it’s not practical to avoid all of these apps. Becoming a Luddite to maintain your privacy shouldn’t be the only option.
The real problem here lies both in the cavalier greed for our data exhibited by myriad companies and in the lack of options we're given regarding information sharing. If I am buying your product, there should be choices that allow me to safeguard all of my data within that application. Period.
Even more than that, the default settings on all apps should reflect that my right to privacy is held sacrosanct. Companies should have to work hard to get me to agree to share data. Not the other way around. This idea isn’t that far-fetched. I don’t think it is even very controversial. It just needs to be codified into law and then protected.
The Internet of Things Makes Now the Time to Enforce Privacy
If none of this gets to you, let me paint a picture of one potential not-so-distant future involving the most mundane of household appliances—your refrigerator.
Let's say in a few years you go online and buy the latest and greatest connected smart refrigerator from General Electric. This fridge does lots of great stuff, like automatically cooling each zone to the right temperature for the food it detects there. It probably tells you when your food is set to expire. Maybe it reminds you to use up foods that have been in there a while, or pulls recipes for the ingredients you have on hand.
Pretty cool, right? It could be, if that data stays my own.
But just as with the Jawbone example, buying a GE fridge shouldn’t mean that the company has the right to know about everything I put inside of it. Right?
If you disagree, then imagine your fridge telling your doctor how many beers actually go in and out every week, or the combination of foods you take out at the same time. Now all of a sudden whoever is getting that data knows what you eat, when you eat it, and how much you drink.
That’s bad. But it could get much worse.
Imagine combining that info with your social media posts to build a profile of your moods and the dietary coping mechanisms you use to deal with them. Now GE can take that profile and sell it to your health care provider. Every time you eat a Snickers, your premium goes up. Chew on that one.
That isn’t the only way this could play out. Maybe the company will stick a little advertisement in your fridge to bombard you with coupons for instant delivery of junk food when you’re feeling shitty. Or maybe your GE Fridge has a higher calling and exposes you to healthy coupons instead.
Whatever the scenario, the point is that you have less privacy, and your capacity for resistance shrinks because you're subject to ever more targeted messaging designed to hit you when you're most receptive. And all of it happens automatically. That's one creepy fucking refrigerator.
If you think today's world and the implications of not being able to opt out of monitoring are scary, just wait until you see the future. Wearable sensors (or even implants) will give companies unprecedented access to your entire measurable psyche. If we want to go full conspiracy theory, what happens when we have nanobots and AI living inside our bodies?
The internet of things, the truly connected world, is coming. Will that be a good thing?
Big Brother Wants Your Data
The right to protect our data is important.
I don't think this situation is solely the fault of data-greedy tech companies. The private sector, ever obsessed with the belief that data leads to the highest valuations and most defensible networks, exploits legal loopholes because it can.
I think there's another, deeper cultural and political layer on top of all this. Tech-enabled monitoring is being propagated by the very government we've depended on to protect us.
Maybe it's ironic that even as we face a problem best solved legislatively, our government is busy being the worst offender of all.[4]
Historically, we have been protected (at least in the US) from violations of our privacy by the principle of probable cause. It lets us be reasonably confident that the government will not seize or inspect our private property or information without evidence or suspicion of a crime. In other words, it can't just decide to spy on you.
In the post-9/11 world, however, we have allowed our government unprecedented access to our records. One of the best exposés of this appeared just over a year ago in the New Yorker. That fascinating, in-depth chronicle, titled "State of Deception," focused on how the intelligence community has expanded the data it collects and analyzes.
The evidence is chilling:
“Since 2001, the N.S.A. has run four surveillance programs that, in an effort to detect terrorist plots, have swept up the contents of the phone and Internet communications of hundreds of thousands of Americans, and collected the telephone and Internet metadata of many more Americans.”
One of the N.S.A.'s biggest proponents, Senator Feinstein, has justified this unprecedented level of intrusion by calling it a "data-collection program" rather than a "surveillance program." Are we really dumb enough to fall for that line?
The gray area the government is exploiting comes from a lack of clarity in defining "metadata." In my view, this mass data-collection program is itself an affront to our rights.
But even if we can't completely prevent monitoring in the first place, our digitally recorded activity still needs to be classified as property. Access to it should then be protected by the same laws that protect our homes.
Passing these laws is urgent. The exploitative potential of this data is huge, and there is plenty of evidence that the government already abuses it. From monitoring the pornography searches of vocal (non-violent) dissidents to subtler forms of intrusion, this type of governmental data collection should scare us all.
It's not just the NSA acquiring the ability to track and record us, either.
Our passports already contain RFID chips, and increasingly so do school IDs. In fact, in 2013 a Texas student lost her legal challenge to her school's requirement that she carry an RFID-enabled ID card.[5]
State police departments already track the locations of thousands of cell phones and can pull them up at will. No warrant required. Such actions are routine.
As new technologies come online, this becomes even more of an issue. The FAA, for example, has already authorized 20 states to remotely pilot drones in U.S. airspace. According to Bloomberg News, "The federal government has already allocated billions [of dollars] for these, and state and local governments will follow." What other technologies will come online next?
When we can be both individually tracked and systematically targeted, our ability to be repressed grows exponentially. And just because it hasn’t happened to you yet doesn’t mean it can’t or won’t.
Parts of our government even acknowledge this tension.
Joe Biden, for example, upon learning that the government was collecting telephone records, said:
“I don’t have to listen to your phone calls to know what you’re doing…If I know every single phone call you made, I’m able to determine every single person you talked to. I can get a pattern about your life that is very, very intrusive.”
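Biden's point is easy to demonstrate. The toy sketch below, using entirely invented call records, shows how a few lines of analysis over nothing but (caller, callee, hour) metadata recover someone's closest contact and late-night habits, with no call content involved at all.

```python
# Toy pattern-of-life analysis over call metadata alone.
# All numbers and records are invented for illustration.
from collections import Counter

call_records = [
    # (caller, callee, hour_of_day)
    ("you", "555-0101", 8), ("you", "555-0101", 12), ("you", "555-0101", 22),
    ("you", "555-0199", 9), ("you", "555-0123", 22), ("you", "555-0101", 23),
]

def profile(records, person):
    """Infer top contacts and late-night habits for one person from metadata."""
    mine = [(callee, hour) for caller, callee, hour in records if caller == person]
    top_contacts = Counter(c for c, _ in mine).most_common(2)
    late_night = sum(1 for _, h in mine if h >= 22)  # calls at or after 10pm
    return {"top_contacts": top_contacts, "late_night_calls": late_night}

print(profile(call_records, "you"))
# {'top_contacts': [('555-0101', 4), ('555-0199', 1)], 'late_night_calls': 3}
```

Scale the same few lines up to years of records across millions of people and "it's only metadata" stops being reassuring.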
Privacy Must be Defended
In an excellent op-ed in The Boston Review, Reed Hundt sums up the entire problem. He states:
“If law-abiding people cannot be protected against the misuse of their data by an alliance of government and information-gathering firms, then they lose their status as rights-holding individuals. They become statistically defined groups.”
But does, as Hundt asks, this “alliance of big business and big government to control big information” constitute a new and necessary form of control over society? Or does the promise of democracy still hold—are individuals still sovereign?
The birth of all-encompassing, algorithmically powered spying cannot be allowed to hide behind innocent-sounding language. If it does, the automatic collection and analysis of all of our data will become so prevalent as to move us into a modern-day version of 1984's dystopia.
So What’s Next
At the very least, the compromise between the collection and interception of digital data for national security and the individual right to privacy needs to favor due process. This shouldn't be a radical idea, and it's definitely a bipartisan issue.
The solution lies in strengthening our "old-school" eighteenth-century processes so that they are tailored to protect each individual in a new type of world. We all have the right to be protected by a judicial process that is not decided in secret courts.
My point here isn't to single out government programs (although they deserve it). It is to illustrate the huge gap between what is being collected about us, how that information can be exploited, and what people think the law is versus how it has been secretly interpreted.
Privacy isn't dead, but we are at a crossroads. New technologies, combined with an antiquated and corrupted legal system and power-hungry companies and governments, have placed our individual privacy at great risk from government overreach, unethical corporate spying, and straight-up thieves.
At this juncture we must make real choices and put real pressure on our elected officials and the companies we do business with, or face a future filled with oppression.
So where is all the activism?
Footnotes
[1] One example from Google is discussed by Julia Angwin in an excerpt from her book Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance. Prior to the 2012 presidential election, Google searchers who looked up Barack Obama saw news about the president threaded into their future searches on other topics. Searchers who looked up Mitt Romney did not see news about the Republican presidential candidate in subsequent searches. Google said the disparity was simply the result of the mathematical formula it was using to predict users' queries. Google's technologists viewed their effort as helping us figure out the answers to our needs before we know we have them. But it is worth noting that if a newspaper did the same thing (inserted Obama news into articles about toothpaste for certain readers) it would be roundly called out as biased and intrusive. Similarly, a newspaper would be called out if it placed gay ads only in the papers of subscribers it deemed to be gay, or diabetes-treatment ads in the papers of subscribers it guessed had the disease.
[2] Julia Angwin's post on Medium has a great description of this type of abuse using one source of innocuous (even voluntarily provided) data: Census data. She notes that "during World War I, census data was used to locate draft violators. During World War II, the Census Bureau provided the names and addresses of Japanese-American residents to the U.S. Secret Service. The information was used to round up Japanese residents and place them in internment camps."
[3] The data in that paragraph, including the quote, is pulled from a Stansberry Research report by Dr. Eifrig, available to subscribers of his excellent newsletter at http://secure.stansberryresearch.com/reports/the-easy-way-to-maintain-your-privacy-in-america-3/
[4] Why do I think it is best solved this way? Because to a large extent this problem is so dispersed and hidden from us as consumers and citizens that we have a hard time rallying to drive forward collective change.
[5] Again pulled from the excellent article on Medium, ‘Who is Watching You.’
Last modified: February 11, 2015
This is a great post. I'm an active member of several organizations concerned with the abuse of personal data, so I read many discussions of these issues, sometimes daily. I found yours to be a great and comprehensive overview. A couple of notes on the matter of credit scores, which are near and dear to my heart because I built a data-sharing platform for small businesses that is grounded in their ability to harness and share credit data on their terms. The FCRA has fallen behind the technology. First, consumer credit information, including scores, can be used to evaluate small-business trade credit, lending, and procurement without triggering a disclosure requirement. Second, one of my newsgroups posted this link last week, citing an example of the use of social data in credit scoring that big-data companies, including those operating openly in the credit and underwriting space, maintain is not covered by the FCRA.
http://venturebeat.com/2015/08/04/facebook-patents-technology-to-help-lenders-discriminate-against-borrowers-based-on-social-connections/
We need activists but we also need entrepreneurs building technologies that prove the value of data when under the control of those affected by it. Oh, and then we need investors who are not so enthralled with black box systems!
Thanks for a great piece.
Hey LaVonne,
First off, I’m only about 5 months late in saying thank you for your kind note. I find the interplay of regulation and innovation fascinating, and so relevant to what you talk about with respect to credit data. In a world of connection and algorithmically derived insights, where do we look for bias? When do we even know if it could be at play?
I agree with your assessment of what we need completely. Hopefully I can and will be one of those entrepreneurs.
As an aside do you have a link or would you be willing to send along your presentation on open source governance models? I just wrote a piece on thinking about religion as a public vs. private good and I want to expand on the potential of open-source and near zero marginal cost distribution as a new framework. Could be some interesting parallels to the B2B example.
Fantastic article and spot on. The idea that we are giving up our data and identities for the sake of some software is just sad. The reality is everything starts small usually with good intentions and snowballs from there. Unless there is an understanding of what is happening and a change in thinking the overall invasion of our privacy will continue unabated.
Thanks Michael, and sincere apologies that this comment was lost in the passage of time. After reading about what you’re doing with redscotch I’m definitely curious how the alpha roll-out goes. If you’re looking for users I’d be glad to take a spin and provide feedback.