A couple of times recently, over at the Book of Faces, I’ve been encouraged to share my Memories of what life was like eleven years ago. The year was 2013, and the trendy new app on the Facebook beat was something called “What Would I Say?” It was produced by several students from Princeton during a hackathon, and it was pretty simple: you could give the app permission to scrape all of your Facebook posts, and then it would generate random phrases and sentences that you could in turn post to Facebook. Fortunately, their app added the hashtag #wwis to every one of them, which made it extremely easy for me to pull up the following examples from my feed:
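For what it's worth, WWIS was widely reported to work as a simple Markov chain over your own posts: record which words tend to follow which, then take random walks through that table. A minimal sketch of the idea (my own reconstruction, not the app's actual code):

```python
import random
from collections import defaultdict

def build_chain(posts):
    """Map each word to the list of words that followed it across all posts."""
    chain = defaultdict(list)
    for post in posts:
        words = post.split()
        for i in range(len(words) - 1):
            chain[words[i]].append(words[i + 1])
    return chain

def babble(chain, start, max_words=12):
    """Walk the chain randomly to produce a WWIS-style phrase."""
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(random.choice(chain[words[-1]]))
    return " ".join(words)
```

Fed a few hundred status updates, a walk like this produces phrases that are locally plausible but globally nonsensical, which is exactly why the output "sounds like" you without meaning anything at all.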

Like I said, this was eleven years ago, back when folks were still pretty optimistic. We thought nothing of sharing heaps of personal data with a random app developer, as long as we got a little novelty in return. If you want a sense of how trendy WWIS was, here’s a rundown by Sarah Moir on This is Important, where she explained how the site works and linked to the coverage it had received (in just a few days) from Huffington Post, Slate, Business Insider, and the New Yorker. A lot of the folks I knew on FB were happy to share their data and marvel at the results. As Moir explained about herself and her friends, “we’ve all been delighted to discover something that nonsensically ‘understands’ us, by spitting our own words back at us.”
A little over a decade later, this cute little app should sound familiar to us, because it’s the same basic model that drives AI writing assistants, except that the dataset is exponentially larger and the tools for verifying the grammatical mechanics are much improved. The stakes are a little higher, too, than whether or not to put out a fake update on a social media platform (although that scale of things has not been left behind, as you’ll see below). Today’s writing assistants are “better” in certain senses. The massive scale of training data makes it unlikely (if not impossible) that anyone else will be measuring time with dismal basketballs, but that’s not because today’s tools recognize the absurdity of the idea. Rather, the phrase (which yields 0 results on Google) will never appear in the training data with enough frequency for it to find its way into your interoffice memo or college essay. These assistants are much better trained to provide the illusion of understanding, but they are still no more intelligent than the pair of dice that comes up showing 7 more often than it delivers snake eyes.
Tonewashing
Now that I’ve written about them, Apple Intelligence commercials seem to be crossing my stream with more frequency. Lately, I’ve been seeing the one with the schlub (above) who comes off as far more professional than he actually is, and the one with the office nerd who manages to lure his pudding thief to justice. “Write Smarter,” the commercials tell us as the protagonists preen smirkily to Krizz Kaliko’s “Genius.”
“Smarter” in these cases means dashing off some inappropriate junk, and then using AI’s single-button text filter to “tonewash” it into something more suitable for the occasion. This is a fairly common premise in the Grammarly commercials I’ve seen as well. If I had to guess, I imagine that tonewashing is based on a combination of stylometrics (word length, sentence length, comprehension levels, etc.) and digital humanities/natural language processing work in sentiment analysis, where machines process text and gauge its affect along various axes. (Facebook was doing this in the early teens, using sentiment analysis to rate posts, and then manipulating user feeds to prolong site engagement.) So, when Warren clicks “professional” in the commercial, his sentences probably get longer, more indirect and formal, and the tenor of the email becomes more neutral.
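To be clear, I have no inside knowledge of how Apple or Grammarly actually implement this, but the stylometric half of my guess is easy to illustrate: measure a handful of surface features, plus a crude lexicon-based sentiment score, before and after a rewrite to confirm that the "professional" knob actually moved them. A toy sketch (the lexicon is invented for illustration):

```python
import re

def style_metrics(text):
    """Crude stylometrics: the surface features a tone filter might track."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "avg_sentence_len": len(words) / len(sentences),
    }

# A toy lexicon stands in for real sentiment-analysis models.
LEXICON = {"great": 1, "love": 1, "terrible": -1, "stole": -1, "angry": -1}

def naive_sentiment(text):
    """Sum lexicon scores over the words: positive, negative, or neutral."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(LEXICON.get(w, 0) for w in words)
```

On this account, tonewashing Warren's draft would just mean regenerating text until the sentence lengths go up and the sentiment score drifts back toward zero. No understanding required.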
Of course, this isn’t really smarter writing, any more than that AI-generated slideshow made Mom a better or more loving parent. And it’s misleading with respect to rhetoric: it implies that ideas exist in our minds, waiting to be dropped onto a page or a screen, and that changing styles is as simple as trying on different clothes. That understanding of rhetoric, as purely decorative, is sloppy thinking that pokes its head up every so often[1]. What’s more, this understanding of writing (and these commercials, tbh) bears a strong similarity to trolling, where the point is to expend as little of one’s own energy as possible to provoke (over)reaction on the part of others.
The idea that Warren’s supervisor would read a surprisingly professional email from him, and consider that perhaps he’s misjudged his subpar employee, is a fantasy. I’ve seen an uptick in emails over the past few months that have struck me as uncanny, and I suspect that this is tonewashed slop invading my inbox. It’s the same feeling I have when I see animals that are anthropomorphized with CGI to speak or otherwise behave like people. I don’t think to myself that this is someone who writes really well—if anything, I’m more likely to feel the opposite.
Trustwiping
I don’t discount the possibility that these tools will evolve to the point where I’m no longer able to tell the difference. But the difference isn’t between AI-generated and human text—this brand of Turing test misses the point. When someone writes me an email, I read it in the context of what I know of that person, and of all our prior communication. I can just about always tell when a student has plagiarized, not because I’m especially talented, but because I come to learn during a semester how that person thinks and communicates. We tend to underestimate how much our writing actually sounds like us, and if we don’t have the rhetorical resources to sound professional or to soften our hard edges, asking a machine to do it for us doesn’t all of a sudden make us smarter, more approachable, or more qualified. If I don’t have much data to go on, yeah, it’s possible to fool me. But the more I get to know you, the less likely that becomes.
The reason I was thinking about this right now is a story over at 404 Media, about how YouTube is now offering creators the option of auto-generated comment section replies. You might sense from the scare quotes around “Enhance” in the title how valuable they find this feature.
“Editable AI-enhanced reply suggestions” on YouTube work similarly, but instead of short, simple replies, they offer longer, more involved answers that are “reflective of your unique style and tone.” According to Basinger’s video demoing the feature, it does appear the AI-generated replies are trained on his own comments, at times replicating previous comments he made word for word, but many of the suggested replies are strangely personal, wrong, or just plain weird.
Clint Basinger, the creator who demonstrated the feature online for his followers, explains that “The automated comments in particular come across as tone deaf, since a huge reason YouTube makes sense at all is the communication and relationship between audience and creator.” The examples in the article sound a lot like What Would I Say? sorts of replies, hampered by the same inability to develop any sense of actual communicative context.
It’s tempting to shrug stuff like this off. Who really cares if AI wrote my reply or my memo? But Basinger gets at the deeper issue behind this, I think, when he reports that “I've had dozens of people say that they now second-guess every interaction with YouTubers in the comments since it could easily be a bot, a fake response[2].” If the first wave of AI hype was an attempt to convince employers that they could just replace their employees with robots and accomplish the same work, perhaps the current approach is to make their “something for nothing” pitch to the rest of us.
What we’re losing in situations like these is actually pretty important. It’s the baseline level of trust that fuels our casual interactions with one another. If my students can’t trust that I’m the one giving them feedback on their writing, or if I can’t trust that they’ve actually done the work themselves, that violation is humiliating and frustrating, and it defeats the purpose of education in the first place. It’s a fundamentally extractive attitude that erodes the trust that we place in each other as part of our everyday interactions, and it feeds directly into that paranoid style that I wrote about earlier this week.
It’s not that I don’t see the temporary appeal of convenience that these assistants offer, but I also see the dangers of making ourselves more and more dependent upon them. These services are quite literally billion-dollar solutions in search of nickel-and-dime problems; first they will lock us in, then begin to charge us dollars to do the things that we’ve forgotten (or never learned) how to do on our own, and that’s after they’ve trained us not to trust any interaction we have. They’re happy to advertise the convenience of one-button tonewashing, but ultimately, they don’t make money until the monthly subscriptions begin.
This version of the world that Apple Intelligence, Grammarly, YouTube, and others offer for “free” sounds pretty dismal basketball to me, if I do say so (and I didn’t). More soon.
[1] This is the source of the notion of dismissing something as “mere rhetoric” (as opposed to facts, or truth, or action), which is usually a good sign that the person who says this doesn’t actually understand rhetoric. There is no “substance” without “style.”
[2] I used to enjoy reading AITA on Reddit, but it’s beginning to wear thin as nearly every post ends up with one or more comment threads accusing it of being written by AI and devolving into user debates over whether and how we can tell.
It defeats the purpose of human interaction in the second place...
Ehm, maybe it is a defeat in the first place.