No, Google Isn't Literally Stealing Your Free Will
Surveilling "The Age of Surveillance Capitalism"
The worst job I ever had was in an ice cream parlor. The store was cramped and old, with the lingering smell of an unemptied grease trap in the air. The usual unpleasantness of service work sucked, of course, but what I really remember gnawing away at my psyche was the camera.
It was always on, nestled in the corner, live-streaming video to the owner's phone, leaving us no way to know whether she was watching at any given moment — anxious every time the phone rang that we were about to receive another chewing-out for insufficient vigor whilst scrubbing the glass case. It was, in all probability, the world's only panopticon ever to involve Lemon Sorbet.
I quit as soon as the summer was over.
The last job I had in high school was as a barely paid intern for my local congressman’s campaign — they had promised us free lunch and travel costs. We got the travel costs. One day that stuck with me was the basic training we received in the backend software campaigns use to plan voter outreach. What particularly stood out was the predictive analytics software, which used a variety of bought data, along with the voter registration list, to assign every potential voter in the district a probability of voting for a specific party. Personally, I thought this was a clever bit of technical engineering using legally acquired data, but I could never quite square using it with the congressman’s avowed antagonism toward big tech and staunch consumer privacy advocacy.
These jobs weren’t anywhere near the worst thing that could happen to a person, and I wasn’t financially dependent on them, but they did suck at the time. I’m telling you about them, frankly, because I want to demonstrate that I have had some small, infinitesimally tiny even, amount of skin in the game when it comes to surveillance intersecting with capitalism and politics. I want you to know that so you’ll also know that the reason I think The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power by Shoshana Zuboff is a failure isn’t solely because I’ve been insulated from the harms it’s trying to describe.
I decided to pick up the book because I was vaguely aware of some sort of “Surveillance Capitalism” literature existing, felt I probably should learn a bit more about it, and the few bits I had encountered had been fairly interesting. The Age of Surveillance Capitalism seemed like a good place to start as the praise for it has been effusive.
The Guardian thinks that Surveillance Capitalism is the “first definitive account of the economic…condition of our age.” The Financial Times thinks it’s a “masterwork of original thinking and research.” It received positive reviews in The New York Times, The Intercept, The New York Review, The New Statesman, The LA Review of Books, and the Wall Street Journal. It is blurbed by an extraordinary number of luminaries. Zadie Smith thinks it's the most important book published this century. It also made the Guardian’s list of the top 100 books of this century — admittedly, a weird list, including a Young Adult novel that bravely asks the question “What if white people were discriminated against instead?” Perhaps The Age of Surveillance Capitalism’s most impressive achievement is that it topped Barack Obama’s list of the best books of 2019, despite the book's own scorching indictment of the Obama White House’s relationship with Google.
It is also a terrible book and all of these reviews are wrong.
The problems with Surveillance Capitalism are extensive. The book cannot decide what it wants to be, ranging over a wide assortment of topics that are only loosely connected by subpar theorizing. The overheated rhetoric and conspiratorial tone lead the author to make claims that are factually incorrect (as demonstrated at several points by her own footnotes) or wildly misleading. In places the author engages in amateur philosophizing about the nature of free will that, frankly, is embarrassingly ill-conceived. The impression one walks away with after finishing the book is that the author was very interested in telling us how bad Google is (which, perhaps, it is) and in receiving credit for coining a bunch of terms, and backfilled the book from there.
Before we get into the meat of Zuboff’s argument, we first need to address two problems that pervade the book: errors and overheated rhetoric. I noticed the former most in the early sections of the book, where Zuboff walks us through the history of capitalism writ large before turning to the modern surveillance variant. Though I think this is because it is the area I am most familiar with, not because those sections are unusual in their errors.
Portions of the explanation of capitalism's arrival are simply weird. The rise of specialized labor on factory assembly lines, we are told, came about because the ‘struggle for existence is more acute,’ rather than being the natural result of the efficiency of specialization à la Adam Smith. I suppose it must have come as a great convenience to Henry Ford that his workers' struggle for existence happened to result in significantly faster production of cars.
When Zuboff begins to criticize the “neoliberal turn” in public policy, we see outright incorrect facts and descriptions rather than just odd causal claims. She writes that “Research in the UK showed that by 2013, poverty fueled by lack of education and unemployment already excluded nearly a third of the population from routine social participation.” Having checked the study she cites (and I am all for making fun of the British economy), that is not what it says. The study does find that social participation falls with income, except within the bottom third of the income distribution, where participation is essentially flat (and the very poorest, oddly, participate at a slightly higher rate). The study makes no finding about the causes of poverty (i.e., that lack of education and unemployment are the reason), and makes no finding that the bottom third are entirely excluded from routine participation (indeed, the word “routine” does not appear in the study), merely that they all share a reduced level of participation.
This slight infelicity in summarizing a study would be entirely understandable if it were not followed by several more examples of misrepresentation. Turning her eye to the US, Zuboff argues that “By 2014 nearly half of the US population lived in functional poverty, with the highest wage in the bottom half of earners at about $34,000.” The author cites two reports for this. Both reports place the poverty rate at roughly 14% across multiple different measures of poverty; that is not nearly half.
Of course, Zuboff is hedging slightly by qualifying it as “functional” poverty. I still don’t think any interpretation makes it true: the share of Americans with incomes below double the poverty line is still well under 50%, per her own citations. A further oddity is that Zuboff brings up this fact to decry the disintegration of the social safety net. But transfer payments and in-kind benefits, notably, are not captured in wage measurements, making that $34,000 wage figure irrelevant to her claims.
An additional confusing aspect of her critique of neoliberalism is that Zuboff never quite tells us what “neoliberal” ideology is. Later in the book, to argue that neoliberal ideology drives our unwillingness to regulate Google (well, tech companies generally, but Zuboff mostly has it out for Google), she cites a study that finds that the “dominant theme of this literature was ‘the coercive nature of administrative government’ and the systematic conflation of industry regulation with ‘tyranny’ and ‘authoritarianism.’” At first glance this seems fine, but the article finds that this elevation of tyranny concerns comes at the expense of concerns about cost and efficiency. I am not a scholar of neoliberalism, but I am reasonably certain that in the neoliberal mindset, economic efficiency concerns are at least as important as libertarian concerns about the legitimacy of government.
I didn’t take the time to fact-check every citation like this. To Zuboff’s credit, there are several hundred of them, and it would have taken days. Nor is every single citation wrong; plenty that I did check were in fact correct. But there was a persistent pattern: I would read a fact, feel it didn't pass the smell test, look it up, and find that it stretched the truth somewhat or was phrased so as to imply something other than what the source directly said.
I think this malleability of truth is mostly a function of the book's heated anti-Google rhetoric. Nothing is allowed to be simply bad; it must be the worst, and a sign of impending dystopia. That leads to overstatements and perhaps unintentional twisting of the facts. Everything bad a tech company does is a conspiratorial plot rather than, say, the result of John the project manager rushing to get this week's lines of code in too quickly.
Google is not just under-regulated; instead, “no moral, legal, or social constraints” will stand in Google’s way. To which all I can say is: really? Do you, Shoshana Zuboff, professor emerita at Harvard Business School, actually believe that Google does not consider whether something is illegal before doing it? One wonders why they wasted money paying their chief legal officer a cool $22 million in stock this year if all they do is send his memos straight to the shredder. Of course, that Google ignores laws isn’t surprising to Zuboff, because she believes that “The world is vanquished now, on its knees, and brought to you by Google.” I honestly don’t know what to do with this statement; I just don’t know how to explain to you that Susan Wojcicki isn’t f*****g Charlemagne.
An example of rhetorical structure producing a true but wildly misleading claim is Zuboff’s telling of how Google’s Street View project illegally collected data from wi-fi routers. We are told that it collected a massive 600 billion bytes of personal information. In fairness, illegal data collection is absolutely worth discussing and calling out. However, what Zuboff fails to mention is that 600 billion bytes is less than the storage capacity of the laptop I am writing this on. Her writing would have you think that this data theft (I think it was theft at least; Zuboff is unclear) was a significant contributor to Google’s data and general operations. Assuming an extraordinarily conservative 100 petabytes of data acquired per day at Google (it’s almost certainly more), the total data collected by this crime would be 0.0006% of the data Google collects in a single day. This would seem worth contextualizing.
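The arithmetic here is trivial to check for yourself. A minimal sketch, where the 600 billion bytes figure comes from the book and the 100 PB/day intake is my own deliberately conservative assumption, not a sourced number:

```python
# Street View's illegally collected data vs. an assumed daily intake.
street_view_bytes = 600e9     # 600 billion bytes (~600 GB), per the book
daily_intake_bytes = 100e15   # 100 petabytes/day: an assumption, not a sourced figure

share = street_view_bytes / daily_intake_bytes
print(f"{share * 100:.4f}% of one assumed day's intake")  # prints "0.0006% ..."
```

Even if the daily intake assumption is off by an order of magnitude in either direction, the conclusion — that the stolen data is a rounding error relative to ordinary operations — survives.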
There are also just bizarre characterizations that don’t even fit Zuboff’s narrative. We are told that the Supreme Court rejecting restrictions on pornography (page 109) is evidence of a conservative-libertarian agenda. Which, what? Preventing bans on porn is definitely a libertarian idea, but banning porn is like one of the two things conservatives actually agree on. In fairness, Zuboff cites a law review article that makes the same claim, but it also doesn’t do a great job of explaining how this is a conservative opinion.
At points I even found the rhetoric deeply uncomfortable. Multiple times in the text Zuboff compares tech companies to conquistadors slaughtering natives. It takes a great deal for me to be troubled by an analogy, but this was inappropriate. The worst moment of this sort comes when Zuboff moves from a paragraph literally decrying the secret puppetmasters of capitalism to praising the pro-social nature of Henry Ford. This is uncomfortable when you consider who exactly Ford thought the puppetmasters were. Was it intentional? Certainly not, but it really ought to have been caught in editing.
Some bad citations and flamboyant prose could be forgiven if the book were deeply insightful, but, honestly, the bits of argument that are useful aren’t new, and the bits that are new are questionable. Indeed, Zuboff’s main contribution seems to be taking things we already have words for and coining new, edgier ones, seemingly in the hope of getting credit for doing so: the Internet of Things becomes “the apparatus”, predictive targeted advertising is “the reality business”, and the one that made my eyes actually roll into the back of my head was when the author decided that user data needed to be rechristened “The Shadow Text”.
All of this terminology is put in service of theorizing about “surveillance capitalism”. The basic model is this: providing computerized services like maps or search means that data about users gets captured. Machine learning applied to this data generates predictions that allow much better ad targeting than was previously possible, generating profits. Thus, companies have an incentive to capture as much data as possible. Or, to use Zuboff’s terminology: data exhaust from the apparatus builds the shadow text, which is used in service of the reality business, in an expanding cycle driven by the fundamental logics of surveillance capitalism.
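For what it's worth, the cycle is simple enough to caricature in a few lines of code. This is a toy sketch of the incentive structure as described, not a model of any real firm; every number in it is invented:

```python
# Toy feedback loop: captured data improves prediction quality, better
# predictions raise ad revenue, and revenue funds more data capture.
# All quantities are invented illustrative units.
data = 1.0      # arbitrary units of captured user data
revenues = []

for quarter in range(8):
    prediction_quality = data / (data + 1)  # saturating returns to data
    ad_revenue = 10 * prediction_quality    # revenue tracks targeting quality
    data += 0.5 * ad_revenue                # profits reinvested in data capture
    revenues.append(ad_revenue)

# The loop compounds: each quarter's revenue exceeds the last, approaching
# the ceiling set by the saturating prediction curve.
assert all(b > a for a, b in zip(revenues, revenues[1:]))
```

The only structural assumption is that predictions have diminishing returns to data, which is why the loop compounds toward a ceiling rather than exploding — but the incentive to keep capturing more data never goes away, which is Zuboff's point.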
This is fairly simple and uncontroversial as arguments go. Zuboff spends a great deal of time going over specific examples of the expansion of data collection by various companies, and this is where the book is at its best, likely because Zuboff’s reputation allowed her a great deal of access to interview data scientists.
Zuboff proceeds from this simple model of expanding data collection to trying to elucidate the harms of surveillance capitalism. This is where I expected to agree with her most, but instead I found the claims bizarre or unrelated to the specific “logic of surveillance capitalism” she sets out.
Two harms that Zuboff rails against are the use of data by authoritarian governments and the harms of social media. I think both of these are absolutely real problems in society, but they also aren’t really related to “surveillance capitalism”?
First, on the issue of the Chinese government collecting data: are we supposed to believe that absent an economic incentive for private companies to collect this data, the state wouldn’t compel them to do so anyway? It seems unreasonable to think so. The harms of information gathering by authoritarians seem better characterized as harms of the availability of big data technology altogether, rather than harms of surveillance capitalism specifically. Zuboff somewhat attempts to square the circle by pointing to cases where data on who defaulted on debts is used to enact punishments, but any real systematic attempt to tie China’s surveillance back to capitalism is lacking. This section on debts is also just odd for Zuboff’s remark that “No one is sent to a reeducation camp” because of data collected on debt repayment failures and the like. This is weird because, you know, people are in fact sent to reeducation camps because of it.
Social media and its harms seem like a more natural fit for surveillance capitalism, but in fact the connection is tenuous. Zuboff rehearses a slew of arguments about addiction and harms to teens — the discussion in this section is much more compelling and technical than anywhere else in the book, probably because it focuses on psychology, which is Zuboff’s background. But all of her claims are about the harms of participating in social media, and social media would still exist absent an incentive to collect data for predictive advertising! Facebook (let's be honest, when people say ‘social media’ is bad they mean like 75% Facebook and 25% Twitter) is an advertising delivery system as well as a system for collecting data. If the entire capability to target ads dried up overnight, Facebook would certainly be far less profitable, but it would still be just as addictive and misinformation-ridden, and would still be incentivized to have as many users as possible. Zuboff never does the analytic work to connect these harms to the surveillance capitalism model she proposes.
There are a couple of places where the harms connect more directly. Zuboff expresses concern that integrating personal data into things like cars or credit scoring could give tech companies an uncomfortable level of control over our lives (which, fair point), but somehow even this is somewhat bungled. We are given the example of companies automatically shutting down a car so that the engine will not start if payments are missed. This is extraordinarily worrying to Zuboff, and we are invited to ask “What happens to the driver? What if there is a child in the car? Or a blizzard? Or a train to catch? Or a day-care center drop-off on the way to work? A mother on life support in the hospital still miles away? A son waiting to be picked up at school?”
Now, perhaps I’m missing something here, but I feel like the answer to all of these questions is the same thing that would happen if your car were repoed the old-fashioned way? Like, I guess electronic tracking makes the repo man marginally more efficient at finding the car? So maybe you miss your train a day earlier than you would have otherwise?
The idea that personal data could be used in bad ways to make credit decisions is the concern I am most sympathetic to, but once again Zuboff makes the worst possible argument for it. There is a great deal of fearmongering in the book about personal data being used to measure creditworthiness via nontraditional measures, ones that attempt to gauge things like personality traits rather than traditional factors like education or employment history. We are then told that this is bad for poor people because rich people don’t need to participate in this new system and can opt out, while poor people, who don’t have alternative ways to get loans, are forced into it. This seems… odd?
Like, ideally everyone would have access to credit without surrendering personal data, but, acknowledging the reality that that’s not true, it seems good to expand how we measure creditworthiness if it increases poor people’s access? My own worry is that using black-box ML models can easily let massive amounts of unintentional discrimination into the decision, but this concern is absent here (and throughout the book).
The oddity of Zuboff’s focus on data collection as inherently bad, over any weighing of material gains (indeed, Zuboff never really reckons with Google making the sum total of human knowledge searchable for free), makes sense when we arrive at the final section of the book, where she trots out a frankly absurd argument that tech companies are plotting to take away our free will, using noted psychologist B.F. Skinner’s endeavors in fiction writing as a template.
There are two parts to this argument (though the structure of the book is such a mess that this is an artificial division I’m imposing post hoc): first, that tech companies using data to predict our behavior literally undermines our free will; and second, that a project to do this actually exists and is underway. Both claims are false.
First, on free will. This part is embarrassing. Zuboff is less than forthcoming with a precise account of what free will is, but she is quite specific about one necessary condition: “there is no freedom without uncertainty”, by which she means that for free will to exist, it must not be possible to perfectly predict behavior. Tech companies, we are told, are rapidly approaching the ability to perfectly predict and manipulate our behavior, and will therefore rid us of free will.
This account of free will is nonsense. Consider my friend, who we will call Sleve McDichael, who only ever eats vanilla ice cream. Zuboff seriously maintains that the fact that I know he will pick vanilla every time we go to Baskin-Robbins undermines his free will. But just because someone is predictable in their choices does not mean that those choices are unfree. Like, if I suddenly died of a heart attack, does Sleve now exercise free will when he picks his boring-ass ice cream? On this account, physicists are actively undermining our free will by learning about the fundamental particles that make up matter (and therefore our brains), and, moreover, isn’t Zuboff undermining our free will by writing a book that tries to predict people's behavior? I am unimpressed with this concern.
As to the argument that tech companies are actively trying to build what she terms an “instrumentarian” dystopia: I’m not necessarily opposed to the idea that Google and Facebook are kinda creepy and that a lot of their employees don’t seem to care about so-called minor things like rights or human wellbeing. But Zuboff argues for it via close readings of famed psychologist B.F. Skinner and of Alex Pentland, whom she casts as one of Skinner’s disciples and who now works in the wearable device industry as a data gatherer, and their proposals to use data to predict human behavior and use those predictions to improve society. As evidence of a grand attempt to rewrite society, this is perplexing. Sure, we are given examples of Pentland giving talks at Google and a few other important places, but you are ultimately left asking “why should I care?”. Alex Pentland is, as I’m sure you are all aware given his importance to our impending totalitarian nightmare, just some dude at MIT with a research lab. If we are crashing toward dystopia, I’m going to need slightly more evidence of this grand conspiracy to render humanity slaves to the algorithm. If one crank professor doth a conspiracy make, then there is currently a conspiracy to impregnate humans with aliens, for the academy to defraud the IRS, and to do whatever Larry Lessig’s deal is.
I left this section, and really the book, with the impression that because Zuboff deeply dislikes Skinner and the Skinnerites (she was a grad student while Skinner was teaching, and they frequently butted heads), she assumes that other people she dislikes are heavily influenced by them. Again, maybe they are! But any real evidence for that is never provided; instead we are told at length why Skinner is bad.
I don’t really know what to do with this book. I have a gnawing sense that surely there must be something I’m missing if it’s receiving all of this praise from people who have actually done things with their lives. Like, I hope I am not narcissistic enough to assume that “actually, Barack Obama was befuddled by the big words and bad structure and I’m the only one who can see the truth that it’s bad and has some dodgy citations” or whatever. But I really can’t square the reception it got with what I actually read. I am also uncomfortable giving a totally negative review of a book that was clearly someone’s baby for several years, but I would rather be honest than hedge my bets. And really, I do get that I’m in the minority here; after all, 89% of Google users liked this book.