[This is part 3 in a series of posts building off of James Scott’s Seeing Like a State; it references topics from Part 1 and Part 2.]
This episode begins from a somewhat looser connection sparked by Scott’s Seeing Like a State. As I mentioned in my overview of the book, Scott begins with the case of German “scientific forestry,” where, in the interest of short-term lumber profits, the state wiped out entire ecologies, with catastrophic consequences in the long term. The idea of sacrificing sustainability for short-term gain is a critique that Davies also makes about neoliberal capitalism in The Unaccountability Machine. And to be honest, it’s been on my mind amidst reports from the past couple of years1 that big-name social media has reached its peak (or is about to).
A few weeks ago, I came across an essay from last year, Maggie Appleton’s The Expanding Dark Forest and Generative AI, which references Yancey Strickler’s dark forest theory of the web2. (Aha, I thought, more forestry.) The connection is more figurative than anything, though. Appleton explains that “Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk,” making the possibility of personal and/or genuine interaction (what we were promised) less and less likely. And as Appleton’s title suggests, with the impending textpocalypse that AI brings with it, this is only likely to get worse.
This isn’t the same kind of thing that Scott talks about, of course, but Appleton’s piece does note that one way people respond to the dark forest is to “make themselves illegible and algorithmically incoherent in public venues.” I’ve preserved Appleton’s link there because it traces out to a post on Venkatesh Rao’s Ribbonfarm, where Rao discusses (you guessed it) Scott’s notion of legibility. While Raghavan and Schneier really focus on abstraction and almost seem to suggest that legibility is a subset of it, Appleton’s piece confirms my own sense that they’re coordinate; I don’t think that the rise in our ability to handle abstractions algorithmically has made legibility any less important. If anything, I think the two function symbiotically.
Platforms and Layers
I had a little trouble deciding where to go next in this post, but I think I want to take a step back for a moment, because most of the cases in Scott’s book are about the transformation of local spaces for state-optimized purposes. Part of the challenge of applying his ideas to online spaces is that those platforms weren’t really local3. At the same time, though, they were framed to us as local, places where we could perform the kinds of associative activities that we used to do in person or on the phone. [Right after I published this, I saw Drew Austin’s piece on how “the internet is not a place.”] If we think about Scott’s claim that “Formal order, to be more explicit, is always and to some considerable degree parasitic on informal processes,” a great deal of the value that originally accrued to the earliest generations of social software and later social media had a lot in common with mētis. These programs relied heavily upon early users’ dedication, commitment, and creativity to build value. None of them would have survived without it. The transition from mētis to techne, for Scott, is a consequence of state intervention; in the case of these platforms, it was the shift from those “local” types of value to corporate valuation.
Maybe I’m attributing villainy where it doesn’t belong, but I believe that shift towards valuation plays a big role in turning these platforms into the dark forest. There’s no single decision or single platform that’s necessarily at fault, but over the years, their cumulative changes, their fixation on metrics and valuation, their regulatory capture (not to mention the cover that Section 230 provided for them), and their collective will-to-monopoly all played a role in sunsetting the internet that was sold to us back in the day. Users have changed in response, though. If you go to Appleton’s essay, it features a cross-section visualization of how users have withdrawn from the dark forest by retreating to “digital gardens” (blogs, wikis, newsletters) and the “cozy web” (enclave communities built around affinity groups in group chats and on platforms like Discord and Slack).
The idea of the dark web has been around for a little more than a decade, and I’ve lost count of how many movies and shows have spun that trope. It’s usually contrasted with the “open web” (Appleton calls it the “clear web”), but that diagram makes me think that it’s more useful to consider the various layers of the internet in terms of transparency and opacity. Of course, the transparency isn’t bi-directional; the practiced opacity of our tech moguls has filled, and will continue to fill, many books’ worth of investigation. But this is another way, I think, to talk about the way that the net has put us at the world’s fingertips, as opposed to the reverse.
If you’ve got nothing to hide…
The argument that’s often brought to bear against privacy demands is a bit of reverse psychology, self-incrimination jiu jitsu. If you don’t have anything to hide, then why not give authorities access to your phone, or police access to your home, and so on? The implication is that anyone who balks at absolute transparency must only be doing so for nefarious reasons. When a character in a police procedural demands a warrant, it’s almost always portrayed as obstructionist.
I want to suggest, though, that social media’s premium on self-exposure and its relentless harvest of personal data have turned into something that I’d describe as predatory transparency4. Although Scott’s book is focused primarily on case studies of large-scale social engineering, he does turn at one point to briefly naming some of the obstacles to this activity, and the first “decisive factor” he names is an emphasis on privacy: “the existence and belief in a private sphere of activity in which the state and its agencies may not legitimately interfere5.” We erode this private sphere in all sorts of ways: through arguments on behalf of convenience, personalization, and customization; by granting access to our metadata to corporations whose security protocols amount to hoping nothing will go wrong; and by sharing so much of ourselves in those dark forest spaces, whether we intend to or not.
I’m writing this just following the recent Apple WWDC, where they announced “Apple Intelligence,” a suite of AI-powered tools that they’ll be integrating into hundreds of millions of phones and tablets, many of which will insert a gate between users and their most intimate daily activities. Let’s set aside for the moment that we’ll be paying Apple and our phone providers in order to supply OpenAI with free training for their next wave of AI companions. Many of the reactions I’ve seen have been lukewarm because of the perceived modesty of these developments. Developments like a more responsive Siri or the ability to summarize incoming and rewrite outgoing emails aren’t exactly moonshot features in their eyes. And I can imagine circumstances where these features might seem convenient, but that’s only when I ignore the massive scale of personal surveillance that this proposal involves. Cory Doctorow wrote last week about “surveillance pricing,” the increasingly common practice of using third-party data services to adjust the prices individual customers see. “Personalized pricing” claims to benefit those less fortunate, when in fact, it results in higher prices (and corporate profits) overall:
"Personalized pricing" is one of those cuddly euphemisms that should make the hair on the back of your neck stand up. A more apt name for this practice is surveillance pricing, because the "personalization" depends on the vast underground empire of nonconsensual data-harvesting, a gnarly hairball of ad-tech companies, data-brokers, and digital devices with built-in surveillance, from smart speakers to cars…
Although the rate of inflation has slowed substantially, the combination of higher prices and record profits points towards greedflation, and that spike was based largely on publicly available information (and budgetary support) about pandemic recovery. Doctorow writes about Plexure, the back-end price-fixing scheme that’s connected to McDonald’s, Ikea, 7-Eleven, White Castle, and others:
For example, Plexure boasts that it can predict what day a given customer is getting paid on and use that information to raise prices on all the goods the customer shops for on that day, on the assumption that you're willing to pay more when you've got a healthy bank balance.
As chilling as this sounds, imagine your store raising the price on a gallon of milk just before you arrive after/because your partner’s texted you to pick it up on the way home. Or your rent rising the month after you get that promotion you’ve been working for. The dark forest tactics of the web have been with us for some time—and as Appleton notes, they’re only bound to get worse—but AI is poised to take them off-screen, powered by predatory transparency and corporate surveillance. “You’ve got nothing to hide” is about to become a lot less conditional.
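To make the mechanism Doctorow describes concrete, here’s a minimal sketch of payday-keyed pricing. Everything in it is invented for illustration: the function names, the heuristic, and the markup figure are my assumptions, not anything Plexure has published.

```python
from datetime import date

# Hypothetical illustration of payday-keyed "personalized pricing."
# All names and numbers are invented; this is not any vendor's actual API.

def infer_payday(spike_days: list[int]) -> int:
    """Guess the day of the month a customer gets paid, from the days
    on which their spending spikes have been observed to cluster."""
    # Naive heuristic: the most frequently observed spike day.
    return max(set(spike_days), key=spike_days.count)

def quote_price(base_price: float, today: date, payday: int,
                surge: float = 0.10) -> float:
    """Return a 'personalized' price, marked up near the inferred payday."""
    if abs(today.day - payday) <= 1:  # payday, or the day on either side
        return round(base_price * (1 + surge), 2)
    return base_price

payday = infer_payday([1, 15, 15, 15, 30])           # spikes cluster on the 15th
print(quote_price(4.00, date(2024, 6, 15), payday))  # marked up near payday
print(quote_price(4.00, date(2024, 6, 22), payday))  # base price otherwise
```

The unsettling part isn’t the arithmetic, which is trivial; it’s the “vast underground empire of nonconsensual data-harvesting” that supplies the `spike_days` input in the first place.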
I am aware that many of these reports come from self-interested sources, but I do think that the frequency has ticked up recently. Also, I find accounts like Ed Zitron’s convincing; as he notes, “When a company starts playing weird games with how it reports user activity, something is going very, very wrong.”
I don’t want to get too nested in my main post, but Strickler’s theory draws on the dark forest theory of the universe from Liu Cixin’s Three-Body trilogy (laid out in the second book, The Dark Forest). According to this theory, we don’t see other civilizations not because they aren’t out there, but because the universe is dangerous: “Imagine a dark forest at night. It’s deathly quiet. Nothing moves. Nothing stirs. This could lead one to assume that the forest is devoid of life. But of course, it’s not. The dark forest is full of life. It’s quiet because night is when the predators come out. To survive, the animals stay silent.”
There’s a whole bunch of work that I could go back to about the spatial qualities of online platforms, stretching at least as far back as the 90s or early 00s. I’m thinking, for instance, of David Weinberger’s argument that the web is made up of places without space. Probably my favorite neologism for this was glocal, a portmanteau of global + local.
This phrase has a lot to do with Appleton’s and Strickler’s dark forest ideas, I know, but I’m not content letting the idea rest in the metaphor. I don’t know if this makes sense, but there’s only one place where it has to.
Interestingly enough, he also mentions two additional factors: the private sector, and representative institutions that could exert a resisting influence. I find this compelling, because I think neoliberal economics has refigured the relations here; it’s no accident that the “private sector” has (in my opinion) sought to weaken both the first and third of these obstacles on its own behalf. But that’d be a big, complicated argument that’s above my pay grade here.