Occasionally, in the academic biz, you’ll get an email from a graduate student who’s interested in your work, or who’s doing a project on you. It’s a weird sort of interaction: on the one hand, it’s flattering that your work has generated interest, but while I’m sure some of my colleagues feel as though they deserve the attention, I feel a little awkward about it. I’m generally pretty modest about my scholarship, perhaps to a fault, and I’ve always struggled with the kind of self-promotion that seems to come quite easily to some of my colleagues. And then there’s the fact that one’s writing is often more present to the reader than it is to the person who wrote it.
Which is all preface to the fact that I got just such an email a couple weeks back from someone who’s looking at my work. And one of the things that she asked me was about the phrase “new media,” which you’ll find in the subtitle of my first book. To wit,
One of my major research questions would be asking how you exactly define new media and how this definition has affected your research, as media that is considered new to one person could be old news to another…
It’s an interesting question, one that contains the seeds of a refutation from the start. One of the things that academics do, sometimes excessively, is to coin terms, the idea being that if they catch on widely, the term effectively becomes free publicity—the right term can launch a thousand footnotes. And when we’re inventing these turns of phrase, we’re not really thinking about their long-term usefulness. This is how you end up referring to a movement from the turn of the 20th century as “New Criticism,” for example, or a book from the 1960s introducing “The New Rhetoric.” Or indeed, a book like mine where “new media” is used to reference technologies that were current in the mid-2000s but now seem quaint (and some of which have disappeared). And that’s already to set aside the question of for whom any of these things might be new, or what happens to the phrase when they’re not.
Although my memories of writing that book have faded (next year is the 15th anniversary of its publication?), I do remember agonizing over how to frame that book’s discussion, and I remember working on a chapter, one that ultimately didn’t make it into the final manuscript, that focused entirely on the question of what we call all that stuff (from hypertext to the web to social software and beyond). Eventually, I ended up choosing “new media” as my phrase, not necessarily because it was the best alternative, but because it was the freshest. At the time I was drafting it, the books that I wanted to be in conversation with were also using the phrase (e.g. Lev Manovich’s The Language of New Media). Had I known at the time that it would take another five years for my manuscript to arrive in print, I might have thought twice.
The fact of the matter, though, is that at a certain point, we stopped arguing over what we should call the collection of technologies and media that the internet made possible. It would make for an interesting project to pin down exactly when and why that happened, but I’m not sure that the audience for it would be especially large (Claire Lauer has an article about shifting terminology in my field that’s probably the closest thing to it). I would argue that, in the life cycle of any field, there comes a point where you have to move on from unending definitional conversations. Either you arrive at some shared language, or the conversation itself simply dries up.
But in the case of contemporary technologies, I think “new media” also became sort of passé because the terrain shifted: from discussions of what we might call new media objects (texts, websites, etc.) to discussions of methods, and the term that emerged in the early 2010s for that was “digital humanities.” And I think one of the reasons that shift happened was that the new phrase was identified so closely with national funding organizations, even if its definition seemed to encapsulate just about anything humanities researchers might do, as long as they were using a computer to do it. Digital humanities ended up being the key that unlocked a whole lot of funding for some folks (and certain schools), in a way that new media and some of its variations never did.
The other major shift that happened was the relentless commercialization of the internet, and I might also include the shift from desktop to mobile as part of this. Those of us who were around for the pre-phone internet experienced that time (and the possibilities of that time) in a much different way, I suspect. I don’t want to wax nostalgic, because the internet was a much more divided and dispersed space, in terms of access, resources, etc. Even as those issues have (slowly, incompletely) been addressed, though, the space itself has been commercialized, corporatized, and enshittified at the same time. Those of us with a longer memory watched as lots of alternatives got purchased by a handful of large companies, not to provide support, but to wipe out competition for all but a few major platforms.
Back in the late 90s and early 00s, there were a lot more tools, a lot more experimentation, and a fair number of ways to share the results. Over the years, this ecosystem was replaced by all-in-one platforms, walled gardens that limited the tools you could use, the things that you could actually do with them, and the audience with whom you might share them. The software tools that didn’t exist inside these platforms either got bought out or priced themselves out of individual users’ budgets by becoming bloated and over-complicated. I can’t say that this was purely intentional, but it certainly felt coordinated, and as someone who was pretty DIY in the early days, I miss that time.
[If there’s one person who embodies this for me, and who does something about it, it’s Robin Sloan. Read his essay where he likens an app to a home-cooked meal. That there is the difference between the early days of the web and what we have now—back then, we were learning to cook (and the meals that resulted were what I would describe as new media). The internet we have today is more like a food court by comparison.]
It’s no accident that Sloan also plays a role in my post about digital writing, because this brief, sketchy history was the backdrop against which my attitudes about that course shifted. I used to walk into the classroom on the first day and tell students that my most important learning outcome was for them to “make cool shit,” and over the years this became harder and harder. Whether it was the 30% cut that app developers had to pay Apple, or the demographic data that Facebook extracted, or Google turning search into an endless advertisement, the “make” and the “cool” parts of that outcome felt pretty scarce.
So maybe I’d say that “new media,” as I thought about it then, was less a matter of “media that are new” than it was “media that generated the new.” For a while, it felt like we were exploring new forms of textuality—it wasn’t so much about using only the “newest” tools as it was trying to create things that we hadn’t seen before. I still remember basically living in my office for several weeks, teaching myself how to use Flash, so that I could create (and eventually publish) a digitally animated “essay.” That kind of work rewired my brain in interesting ways, and I just don’t see that happening on the platforms that we have these days. While there are still new media appearing (crypto might count, e.g.), I’m not sure that “new media” has any life left as a meaningful designation…
ps. If I had to go back and do it over, I probably would have just gone with “digital rhetoric.”