We’ve all seen some variation of this comic: AI written vs AI read

A few weeks ago I asked my friend Qiming whether something he wrote was by hand or with an LLM, and his response has stuck with me:

I think if you want high-signal communication there’s no way the AI can do it for you. It just automates all the standard boring stuff.

Perhaps simple and obvious, but I’ve mulled this over a lot. I think it sits at the intersection of two things I find interesting right now:

  1. When I write, I often delete entire posts after thinking “who would want to read this?”
  2. It has become increasingly tiring to read content online that is clearly AI-generated

As an example, my IPv4 post this week was something I’d had in my head for a while, and on one hand it was really tempting to just ask Claude to write it. But what’s the point? What I wanted to share (or, in reverse: what you’d want to hear about if you’re bothering to read my post) was my story of the waitlist, my experience watching this graph in real time over the past 6 years, and so on. I’m sure Claude would have produced better graphs and stats and written more words, but you can already ask Claude for that yourself if you’re so inclined. I wanted to share my story, useful or not.

In the inverse case, LLMs are incredible at prompts like “give me an example cURL request to do XYZ with this API”. And it’s obvious when you think about it why: the signal is being compressed, not expanded. A prompt like “write a blog post about IPv4 including historical data for the past 6 years and thoughts about whether it’s because of hyperscalers, end it with a quip about the parallel to RAM shortages” only has one or two sorta-maybe signals in it that would get dragged out to 500 words. But compressing hundreds of lines of code and parameters and documentation into one line of JSON/cURL is packed full of signal. I find it helpful to view signal in this manner.

Similarly, my post about agent loops started as an LLM-generated summary of my whole convo/context riff in Claude Code. But the whole experiment was done in Claude Code, so what it summarized was the “whole point”, just condensed and reformatted into a post. (I ended up rewriting most of it and mostly just kept the code snippets, though…) This too carries more signal than auto-generating text from text.

Perhaps the funniest part here is that I love writing! I find it cathartic. I like thinking through how I want to articulate or express what’s on my mind (though I do find it consumes a bit too much of my brain as I toil over what I want to say while lying in bed or working out or doing other stuff until I’m “ready” to actually start writing). I like teaching, explaining, and sharing what I know. So it doesn’t really make sense to publish something that an LLM knows. It knows a lot! But it isn’t me. And if you’re taking the time to read my blog, presumably you want to know what I have to say.

I guess this is a bit of a meta post, a post about my posts. So to tie it off: I’m going to start tagging AI assistance on here (which I have now added to the agent loop post as an example). It probably won’t cover review or research tasks (“review my post for spelling and grammar” type uses won’t be noted, nor will “I used Claude to find 3 IP marketplaces”), but I’ll try to note any actual situations like “this was generated off a prompt of XYZ”. I intend to write everything myself, but I’m sure some things will begin with (or even possibly be mostly written by) an LLM. So I’d rather just be clear about what’s what.

And hopefully what I write is high signal, out of respect for you: the reader :)