[This is the final post in a series on Dan Davies’ book The Unaccountability Machine. See also Part 1, Part 2, and Part 3.]
I want to pick up some of the threads that I offered in my last post, and connect them to my larger project. I’d like to think that I haven’t embarrassed myself too badly over the past couple of weeks as I’ve worked through The Unaccountability Machine, but my familiarity with cybernetics is fairly minimal. I’ve tried to stay reasonably true to the ideas in Davies’ book, even as I recognize that my account of it is warped by my own interests. But this’ll most likely be my last post focusing on the book itself…
One of the key elements that I’ve neglected to mention so far is an idea that Stafford Beer inherited from another British cyberneticist named W. Ross Ashby, called the law of requisite variety:
In colloquial terms Ashby’s Law has come to be understood as a simple proposition: if a system is to be able to deal successfully with the diversity of challenges that its environment produces, then it needs to have a repertoire of responses which is (at least) as nuanced as the problems thrown up by the environment. So a viable system is one that can handle the variability of its environment. Or, as Ashby put it, only variety can absorb variety (Naughton).
As John Naughton goes on to explain, “our contemporary information ecosystem is orders of magnitude more complex than it was forty years ago.” As a consequence, “many of our organizations and social systems—ones that evolved to cope with much lower levels of variety—are no longer viable.” That viability is characterized (by Davies following Beer) as homeostasis, but “there’s no implication that a homeostatic state is optimal,” which is where it differs (I think) from the kind of equilibrium that drives contemporary economics.
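If it helps to see the law in motion, here’s a toy sketch of my own—nothing from Davies or Beer’s formalism, and the names (`run`, `num_disturbances`, `num_responses`) are invented for illustration. Each disturbance the environment can produce has exactly one response that neutralizes it, and we ask how often a regulator with a limited repertoire can keep up:

```python
import random

def run(num_disturbances: int, num_responses: int, trials: int = 10_000) -> float:
    """Return the fraction of environmental disturbances the regulator absorbs."""
    absorbed = 0
    for _ in range(trials):
        d = random.randrange(num_disturbances)  # the environment acts
        # The regulator's repertoire is responses 0..num_responses-1; it plays
        # the matching response when it has one, and guesses otherwise.
        r = d if d < num_responses else random.randrange(num_responses)
        absorbed += (r == d)
    return absorbed / trials

print(run(num_disturbances=8, num_responses=8))  # ~1.0: variety matches variety
print(run(num_disturbances=8, num_responses=4))  # ~0.5: half the disturbances get through
```

However cleverly the under-resourced regulator deploys its four responses, it can never absorb more than half of what this environment throws at it. Only variety can absorb variety.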
I don’t want to drift too far into this, because I’m not sure I have the vocabulary, but I’ve been thinking about Davies’ book in terms of Doctorow’s enshittification. For most of us (end-users), this feels like a present (and deplorable) shift in the quality[1] of the internet, but as Doctorow explains, this hasn’t really been a change on the part of the tech industry itself. “Rather, these people – leaders of tech companies and the managers and product designers they command – have found themselves in an environment where the constraints that kept them honest have melted away.” That is, they are behaving in a way that maintains a corporate equilibrium designed around profit maximization and endless growth, and the monopoly conditions that they’ve pursued have matured.
The law of requisite variety cuts two ways. Faced with an increasingly complex environment, a decision-making system has (at least) two options. One is to increase the variety in the system itself, so that it can respond to that environment. The other is to take action to reduce environmental variety, to eliminate the conditions that Doctorow cites as constraints upon those industry leaders. That is, you simplify the environment, turning it into what Farrell and Berjon decry as a “technology monoculture.” But this has lasting implications, which we’re experiencing now: “When we simplify complex systems, we destroy them, and the devastating consequences sometimes aren’t obvious until it’s too late.”
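Continuing the toy sketch from above (again, purely illustrative, reusing my made-up `run` function), the two options look like this. Note that both restore “viability” on paper, but the second does it by impoverishing the environment:

```python
# Starting position: the environment outruns the system.
print(run(num_disturbances=16, num_responses=4))   # ~0.25

# Option 1: amplify the system's own variety to match its environment.
print(run(num_disturbances=16, num_responses=16))  # ~1.0

# Option 2: attenuate the environment down to the system's repertoire.
print(run(num_disturbances=4, num_responses=4))    # ~1.0, but a monoculture
```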
[I want to add one more quick piece to this puzzle, a really smart essay I came across this week, by Anton Cebalo, “The Internet Is Like a City (But Not in the Way You'd Think).” Cebalo reads an essay from architect Christopher Alexander (whose A Pattern Language is another book you’d find on my secret syllabus) across these recent conversations about enshittification. Alexander’s point (from 1965!) is that “the fundamental problem of city planning lies in the limits of our minds and how we intuitively categorize our surroundings. These biases are unknowingly reproduced by top-down administrators and designers, often with good intentions, but make for bad outcomes.” Alexander’s resistance to the command-and-control style currently favored by the tech industry would have been familiar to Stafford Beer, who, as Davies points out, found that style “dangerously inflexible.”]
Much of this discussion, both here and in my last post, has touched lightly on the question of scale, which is part of the reason why I’ve pursued it in as much detail as I have. Synecdoche is one of the ways that we add scales and layers both to our own perceptions of the world around us and to the systems that we manage and/or interact with. It’s a vital piece of the cognitive apparatus with which we ourselves adhere to the law of requisite variety[2]. It’s part of how we adapt to environmental complexity. But it also has its limits, and Davies’ book provides some sense of the potentially catastrophic consequences when we ignore a system’s blind spots. Doctorow’s work on enshittification, I’d argue, has been tracing out some of those consequences. But I want to close my discussion of Davies’ book by returning to the question with which I began, which for me is grounded in discourse.
For me (as well as some of the writers I’ve been working with), irony represents a next step past synecdoche. If synecdoche allows us to understand the world as a puzzle whose pieces fit together perfectly to construct a larger picture of how things work, irony gives us the ability to admit that our understandings are always partial and in need of adjustment—there is no perfect big picture, if for no other reason than that there are always some people who will light some of the pieces on fire. We can only ever try to make things better, and sometimes we fail at that. Sometimes, we get more information (or we fail and learn), and need to change our minds, regardless of how certain we may have been yesterday that we had found the right perspective. Irony may seem like an odd way to describe this capacity[3], but I think it makes sense (at least in terms of how I’m defining it). But irony too can be taken to extremes, as I wrote a couple of months ago:
The double-edge of irony is that while we can use it to nuance our thoughts and build our shared contexts, and achieve better understanding, it can also be used to corrode those social connections.
If irony is the trope that identifies a(n inescapable) distance between what we say and what we mean, there will always be those who exploit it, often for personal gain but sometimes just because they can. Or because that’s what their PR staff tells them that they should do.
I turned to Davies initially because the idea of accountability struck me as one way of thinking about how we might distinguish between necessary (good) irony and corrosive (bad) irony. I’m not sure how much it’s helped me on that front—accountability is a much more variable phenomenon in Davies’ account, and he ultimately concludes that “accountability sinks” are a crucial part of any system. But Ashby’s Law, and the way that systems respond to environmental variety, made me think about irony differently.
Ashby’s Law isn’t a solution per se. But it provides a different lens through which we might understand recent trends in political discourse like radicalization, polarization, and conspiracism. In many ways, these are accountability sinks that assist us in dealing with the massive increase in environmental complexity triggered by the development of the internet. Naughton writes that “the rise of the Internet has made variety-reduction increasingly difficult,” but I’d supplement that by saying that the possibility of variety-reduction is that much more tempting to us as a result. Each of those trends I’ve mentioned above has (at least) one thing in common: they simplify people’s lives, whether it’s blaming a particular identity group for all our problems, voting for the party of your choice even if they nominate criminals, abusers, and/or clowns, or identifying a villain upon whom to pin all of the horrible things happening in the world.
Each of us has areas of our lives where, if we don’t fully grasp something, we’ll look into it further and figure it out. Most of us have to do this to one extent or another in the context of the jobs we perform. I had to spend time really figuring out my own diet and exercise habits as a result of my health problems. Heck, Anne Helen Petersen just did an entire week of Culture Study on the culture and process of housecleaning (which I’d share if it weren’t paywalled). But all of this is labor-intensive work, and we can’t spend our entire lives performing (Kahneman’s) “effortful thinking.” We’d be paralyzed by environmental complexity before we even had a chance to start. We need to be able to “automate” some pieces of our lives, by turning them into processes and systems (however quirky they may seem to others).
It’s not just “bad” ideas that function as accountability sinks for us, in other words. Another way of thinking about tradition is that it’s a way of coping with and compensating for variety. We don’t show up at our houses of worship expecting to have to invent our religious services from scratch every week. Or attend a sporting event where the rules of the game are determined on the spot. We might say the same about rituals, genres, patterns in general; we surround ourselves with constraints—and seek them out or develop them ourselves—because they help us manage the complexity of the world around us.
This gets me thinking about Jonathan Haidt’s “Anxious Generation” argument, and makes me wonder how much of it is about the degree to which our variety has been increased, and how much is about the extent to which the tech industry has been steadily disrupting and eroding the tools (and institutions) with which we managed our complexity. (To what extent is the mental health epidemic Haidt describes a “red handle” moment, in Davies’ terms?) From the perspective of my own project, I think we need the sort of discursive machinery that helps us cope with our environments, rather than the sort that allows us to ignore them (which is how I’d describe those trends above).
Writing this out, I feel like I may be onto something here. I may want to pick Haidt’s book up next. But this is probably enough for today.
[1] For a quick gloss on this essay, I recommend John Naughton’s mention of it in the Guardian. And yes, that’s the same John Naughton whose 2017 summary of Ashby’s Law I quoted. This helped me understand how and why the distinction between equilibrium and homeostasis (as operative metaphors) matters.
[2] Part of what intrigues me about this entire discussion is Davies’ (and Beer’s) insistence on “decision-making systems” as the unit of analysis, because it scales up to hedge funds and corporations and governments, but it scales down to us as well. You and I are both decision-making systems, and it’s been an interesting thought experiment for me to consider my life in terms of the decisions I make and the ways that I cope with (attenuate, amplify, adapt to) the variety in my local environment.
[3] Dannagal Young characterizes this as intellectual humility, while Naomi Klein described it (via Hannah Arendt) as a sort of “doubling.” I talked about both of them (and irony) back in March.
Just through the sub-chapter, "The Sad Demise of the Cube Turning Society" in Davies' book on the Machine. Absorbing, slightly, the problem / solution, maybe, with 'scaling up' of info and data flow that the fellow from Bell Labs came to, and what I take as the overall issue of over-complexity leading to the deeper shittification of the social order; and finding some parallels with the conversations from Iain McGilchrist on the way modern culture has fallen prey to "left hemisphere ways of being in the world... skills have been downgraded and subverted into algorithms: we are busy imitating machines" (The Master and His Emissary, p. 256), or in a recent case of mine, having to think past HP's 'help desk' algorithmic system to get to a real person so they'd send me a damn printer cartridge. And then, out of the jumble of memory, came the closing scene of a Big Film of my youth, in which a British officer, having followed a code of honor to save his men as POWs, and also fulfill a scheme of the Greater East Asia Co-Prosperity Sphere, in which everyone was just 'following orders' to build a railway through the SE Asian jungle (never mind the human cost), comes to a sudden, wide-eyed understanding with his last words, "What have I done?", and falls on the detonator for the charges planted on the bridge. I'll need to re-read Wendell Berry's "Hannah Coulter" before I pick up The Unaccountability Machine for absorption. IOW, I think you're on to something.