I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing, and I realized there were major contradictions I needed to resolve. Likewise, there have been numerous times where writing about something loosely and casually revealed something that fundamentally changed how I viewed a topic and really consolidated my thinking.
However, there is a lot of writing that is basically just an old-school form of context engineering. While I would love to think that a PRD is a place to think through ideas, I think many of us have encountered situations, pre-AI, where PRDs were basically context dumps without any real planning or thought.
For these cases, I think we should just drop the premise altogether that you're writing. If you need to write a proposal for something as a matter of ritual, give it to AI. If you're documenting a feature only to remember context (and not really to explain the larger abstract principles driving it), it's better created as context for an LLM to consume.
Not long ago my engineering team was trying to enforce writing release notes so people could be aware of breaking changes, but people groaned at the idea of having to read them. The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.
I think it's going to be a while before the full impact of AI really works its way through how we work. In the meantime we'll continue to have AI-written content fed back into AI and then sent back to someone else (when this could all be a more optimized, closed loop).
> writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking
You see the same thing in teaching, perhaps even more because of the interactive element. But the dynamic in any case is the same. Ideas exist as a kind of continuous structure in our minds. When you try to distill that into something discrete you're forced to confront lingering incoherence or gaps.
> agent write release notes for your agent in the future...
I have been going back to verbose, expansive inline comments. If you put the "history" inline, it is context; if you stuff it off in some other system, it's an artifact. I can't tell you how many times I have worked in an old codebase that references a "bug number" in a long-dead tracking system.
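A hypothetical sketch of the contrast (the function, date, and firmware details are all invented for illustration): the first comment carries its own history inline and stays useful forever, while a bare `# Fix for BUG-4711` pointing at a dead tracker tells a future reader, human or agent, nothing:

```python
def parse_header(raw: bytes) -> bytes:
    # 2021-03: vendor firmware before v2.4 padded headers with 0xFF
    # instead of 0x00, so we strip both to keep downstream checksums
    # happy. Don't "simplify" this to stripping only 0x00: that
    # regression broke ingestion for legacy devices still in the field.
    return raw.rstrip(b"\x00\xff")
```

The inline version survives any migration of tooling; the tracker reference dies with the tracker.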
But how do you deal with communicating that some library you maintain has a behavior change? People already need to know to look at your code in order to read your comments.
For your context, I'm an AI hater, so understand my assumptions as such.
> The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.
Why is more AI the "obvious" best solution here? If nobody wants to read your release notes, then why write them? And if they're going to slim them down with their AI anyway, then why not leave them terse?
It sounds like you're just handwaving at a problem and saying "that's where the AI would go" when really that problem is much better solved without AI if you put a little more thought into it.
This is kind of a fundamental issue with release notes. They are broadcasting lots of information, and only a small amount of information is relevant to any particular user (at least in my experience).
If I had a technically capable human assistant, I would have them filter through release notes from a vendor and only give me the relevant information for APIs I use. Having them take care of the boring, menial task so I can focus on more important things seems like a no brainer. So it seems reasonable to me to have an AI do that for me as well.
> For these cases, I think we should just drop the premise altogether that you're writing.
Sure.
> If you need to write a proposal for something as a matter of ritual, give it AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.
No, no, no. You don't need to take that step. Whatever bullet-point list you're feeding in as the prompt is the relevant artifact you should be producing and adding to the bug, or sharing as an e-mail, or whatever.
I agree with most of this, but my one qualm is the notion that LLMs "are particularly good at generating ideas."
It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone to use LLMs to generate ideas if you're trying to create interesting or novel ideas.
I always find folks bringing up rubber ducking as a thing LLMs are good at to be misguided. IMO, what defines rubber ducking as a concept is that it is just the developer explaining what they're doing to themselves. Not to another person, and not to a thing pretending to be a person. If you have a "two way" or "conversational" debugging/designing experience, it isn't rubber ducking, it's just normal design/debugging.
The moment I bring in a conversational element, I want a being that actually has problem comprehension and creativity which an LLM by definition does not.
Sometimes I don't want creativity though, I'm just not familiar enough with the solution space and I use the LLM as a sort of gradient descent simulator to the right solution to my problem (the LLM which itself used gradient descent when trained, meta, I know). I am not looking for wholly new solutions, just one that fits the problem the best, just as one could Google that information but LLMs save even that searching time.
I feel I've had the most success with treating it like another developer. One that has specific strengths (reference/checklists/scanning) and weaknesses (big picture/creativity). But definitely bouncing actual questions that I would say to a person off it.
Maybe it’s just a semantic distinction, which, sure. I guess I’d just call it research? It’s basically the “I’m reading blogs, repos, issue trackers, api docs etc. to get a feel for the problem space” step of meaningful engineering.
But I definitely reach for a clear and concise way to describe that my brain and fingers are a firewall between the LLM and my code/workspace. I'm using it to help frame my thinking, but I'm the one making the decisions. And I'm intentionally keeping context in my brain, not the LLM, by not exposing my workspace to it.
Sometimes people just need something else to tell them their ideas are valid. Validation is a core principle of therapeutic care. Procrastination is tightly linked to fear of a negative outcome. LLMs can help with both of these. They can validate ideas in the now which can help overcome some of that anxiety.
Unfortunately they can also validate some really bad ideas.
I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.
Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?
I guess I must feel it's slightly useful overall as I still do it.
I think it's just a confusing use of the term "generating." It's thinking of the LLM as a thesaurus. You actually generate the real idea -- and formulate the problem -- it's good at enumerating potential solutions that might inspire you.
Yes, I didn't get this portion at all. I feel as though letting an LLM brainstorm ideas for you would be worse in externally framing your thoughts than letting it write/proofread for you. If you pick one idea out of the 10 presented by the LLM, you are still confining yourself to the intersection of what the LLM thinks is important and what you think is important, because then you can never "generate" a thought that the LLM hasn't presented.
All LLM output is always dry as fuck quite frankly. At all levels from ideas and concepts through to the actual copy. And that’s dotted with pure excrement.
I think the only reason it’s seen as good anywhere is there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.
If I offend anyone I will not be apologising for it.
Because AI is not a search engine. It does not return the best search result every time.
What it considers best is what occurs most often, which can be the most average answer. Unless the service is tuned for search (Perplexity, or Google itself, for example), others will not provide as complete an answer.
How well we ask can make all the difference. It's like asking a coworker: providing too little information, or too much context, can give very different responses.
Try asking the model not to provide its most common or average answer.
> When I send somebody a document that whiffs of LLM, I’m only demonstrating that the LLM produced something approximating what others want to hear. I’m not showing that I contended with the ideas.
This eloquently states the problem with sending LLM content to other people: As soon as they catch on that you're giving them LLM writing, it changes the dynamic of the relationship entirely. Now you're not asking them to review your ideas or code, you're asking them to review some output you got from an LLM.
The worst LLM offenders in the workplace are the people who take tickets, have Claude do the ticket, push the PR, and then go idle while they expect other people to review the work. I've had to have a few uncomfortable conversations where I explain to people that it's their job to review their own submissions before submitting them. It's something that should be obvious, but the magic of seeing an LLM produce code that passes tests or writing that looks like it agrees with the prompt you wrote does something to some people's brains.
For me, drawing the line as to when you will leverage AI and when you won't comes down to a quote from Kurt Vonnegut: "Practicing an art, no matter how well or badly, is a way to make your soul grow, for heaven's sake. Sing in the shower. Dance to the radio. Tell stories. Write a poem to a friend, even a lousy poem. Do it as well as you possibly can. You will get an enormous reward. You will have created something."
Art is where I choose to draw the line, for both ideation and content generation. That work report I leveraged AI to help flesh out isn't art, but my personal blog is, as is anything I must internalize (that is, thoroughly understand and remember). This is why I have the following disclaimer on my blog (and yes, the typo on this page is purposeful!): https://jasoneckert.github.io/site/about-this-site/
The title of this article is Don't Let AI Write For You, when its point seems to be closer to Don't Let AI Think For You (see "Thinking").
This distinction is important, because (1) writing is not the only way to facilitate thinking, and (2) writing is not necessarily even the best way to facilitate thinking. It's definitely not the best way (a) for everyone or (b) in every situation.
Audio can be a great way to capture ideas and thought processes. Rod Serling wrote predominantly through dictation. Mark Twain wrote most of his autobiography by dictation. Mark Duplass on The Talking Draft Method (1m): https://www.youtube.com/watch?v=UsV-3wel7k4
This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.
From there, you can (and should) leverage AI for transcripts, light transcript cleanups, grammar checks, etc.
> Audio can be a great way to capture ideas and thought processes ... This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.
Yes, this is my process:
Record yourself rambling out loud, and import the audio in NotebookLM.
Then use this system prompt in NotebookLM chat:
> Write in my style, with my voice, in first person. Answer questions in my own words, using quotes from my recordings. You can combine multiple quotes. Edit the quotes for length and clarity. Fix speech disfluencies and remove filler words. Do not put quotation marks around the quotes. Do not use an ellipsis to indicate omitted words in quotes.
Then chat with "yourself." The replies will match your style and will be source-grounded. In fact, the replies automatically get footnotes pointing to specific quotes in your raw transcripts.
This workflow may not save me time, but it helps me get started, or get unstuck. It helps me stop procrastinating and manage my emotions. I consider it assistive technology for ADHD.
I would count direct dictation (eg someone writes down what you say, and that is the final text), as writing, in the context of producing a document (book, etc) that you intend others to read.
It's not the same thing as talking to someone (or a group) about something.
I do this a lot. Start by telling the AI to just listen and only provide feedback when asked. Lay out your current line of thinking conversationally. Periodically ask the AI to summarize/organize your thoughts "so far". Tactically ask for research into a decision or topic you aren't sure about and then make a decision inline.
Then once I feel like I have addressed all the areas, I ask for a "critical" review, which usually pokes holes in something that I need to fix. Finally have the AI draft up a document (Though you have to generally tell it to be as concise and clear as possible).
Yeah, this is my problem. I can come up with ideas, but in writing my ideas never come out well. AI has helped me express my ideas better. People who write well or are successful at writing sometimes fail to understand how uncommon it is to actually be good at writing. Shit is hard.
I find LLM's particularly good at filtering and distilling a large rambling idea that I have into a well-formatted and coherent paragraph, and also removing any statements that would be perceived as overly argumentative or rude.
I write a fantasy football newsletter. I keep it really goofy. I release one per week during the NFL season and then once a month or so during the offseason. I'm 120 or so in. It goes fast. But I found my voice after a while, and creative, fun, stupid writing is really up my alley. I get others to participate too and write guest editions.
Just last week I sent out handwritten "c and f's", as the league would call it. Remarkably, readership remains high. And when else am I handwriting letters to send across the country?
I have a feeling that the same idea absolutely does apply to code. Writing code is much closer to writing prose than it may seem. And the act of writing code also makes you think as you write. Even if you're writing boilerplate. Because how else would you uncover subtle opportunities to reduce the boilerplate and introduce new, better abstractions?
I feel very much the same way about debugging: it is through the process of repeatedly being slightly wrong that I come to actually understand what is happening.
Sometimes an LLM can shortcut me through a bunch of those misunderstandings. It feels like an easy win.
But ultimately, lacking context for how I got to that point in the debugging litany always slows me down more than the time it saved me. I frequently have to go backwards to uncover some earlier insight the LLM glossed over, in order to "unlock" a later problem in the litany.
If the problem is simple enough the LLM can actually directly tell you the answer, it's great. But using it as a debugging "coach" or "assistant" is actively worse than nothing for me.
I wrote about something similar this week[0]. Beyond doing your own writing and understanding the outcomes that you want clearly, there is an increasing need for us to write our own docs/tickets as all of these are also the prompts.
Docs written by agents almost always produce mediocre results.
Outsource things that aren't valuable to you and your core mission. Do the things that are valuable to you and your core mission.
This applies at a business level (most software shops shouldn't have full-time bookkeepers on staff, for example), but applies even more in the AI age.
I use LLMs to help me code the boring stuff. I don't want to write CDK, I don't want to have to code the same boilerplate HTML and JS I've written dozens of times before - they can do that. But when I'm trying to implement something core to what I'm doing, I want to get more involved.
Same with writing. There's an old joke in the writing business that most people would rather be a published author than go through the process of writing. People who say they want to write don't actually want to do the work of writing; they just want the cocktail parties and the stroked ego of seeing their name in a bookshop or library. LLMs are making that more possible, but at a rather odd cost.
When I write, I do so because I want to think. Even when I use an LLM to rubber duck ideas off, I'm using it as a way to improve my thinking - the raw text it outputs is not the thing I want to give to others, but it might make me frame things differently or help me with grammar checks or with light editing tasks. Never the core thinking.
Even when I dabble with fiction writing: I enjoy the process of plotting, character development, dialogue development, scene ordering, and so on. Why would I want to outsource that? Why would a reader be interested in that output rather than something I was trying to convey? Art lives in the gap between what an artist is trying to say and what an audience is trying to perceive - having an LLM involved breaks that.
So yeah, coding, technical writing, non-fiction, fiction, whatever: if you're using an LLM you're giving up and saying "I don't care about this", and that might be OK if you don't care about this, but do that consciously and own it and talk about it up-front.
> Outsource things that aren't valuable to you and your core mission.
When you outsource the generation and thinking, you're also outsourcing the self-review that comes along with evaluating your own output.
In the office, that review step gets outsourced to your coworkers.
Having a coworker who ChatGPT generates slides, design docs, or PRs is terrible because you realize that their primary input is prompting Claude and then sending the output to other people to review. I could have done that myself. Reviewing their Claude or ChatGPT output so they can prompt Claude or ChatGPT to fix it is just a way to get me to do their work for them.
I use LLMs for compilation of information sometimes. I'm a teacher, and I sometimes use them to hack together a quick worksheet for my students. I see they need some practice with a certain concept, and I get the LLM to generate a LaTeX doc which I compile to PDF. I find it can be particularly useful at document creation, but it is horrible at writing anything that's in sentence form. It stinks and is not great at conveying my voice.
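As a sketch of what that workflow produces (the topic and problems here are my own invented example, not from the original comment), a prompt like "make a short fraction-addition worksheet in LaTeX" typically yields something that compiles straight to PDF with pdflatex:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\section*{Practice: Adding Fractions}
\begin{enumerate}
  \item $\dfrac{1}{2} + \dfrac{1}{3} =$
  \item $\dfrac{3}{4} + \dfrac{1}{6} =$
  \item $\dfrac{2}{5} + \dfrac{7}{10} =$
\end{enumerate}
\end{document}
```

Structured output like this plays to the LLM's strengths: the scaffolding is boilerplate, and the teacher still controls which concepts the problems target.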
I will sometimes write a lesson and have an LLM generate a quiz and give me feedback on my content, searching for mistakes or unclear content.
I have also used it to help me structure a document. I give it requirements, it makes a general outline, and I then just fill that in with my own words.
I’m still not sure how to approach my students’ uses of an LLM. I am loath to make a hard and fast rule of no LLMs because that’s ridiculous. I want to encourage appropriate use. I don’t know what is appropriate use in the context of a student.
An LLM can be a great learning tool but it also can be a crutch.
The author articulates perfectly what I think too. I’d recommend for everyone to read Writing to Learn by William Zinsser. It’s an incredible book showing that you can learn anything by writing about it.
With an LLM doing all the writing for you, you learn close to nothing.
The rational response to document overload is to mostly ignore it.
Workers and managers in organizations are being overwhelmed by large numbers of documents because it's so easy to bang something off that's 'about right' and convincing enough.
But there's still some value in writing documents. I agree with the original article - it's all about thinking. My take on it is this: it's possible to use LLMs to write decent documents so long as you treat the process as a partnership (man and machine), and conduct the process iteratively. Work on it, and yes, think.
> LLMs are useful for research and checking your work.
I have to disagree that it's good for LLMs to do the research, depending on the context.
If by "useful for research" you mean useful for tracking down sources that you, as the writer, digest and consider, then great.
If by "useful for research" you mean that it will fill in your citations for you, that's terrible. That sends a false signal to readers about the credibility of your work. It's critical that the author read and digest the things they are citing.
Nowadays my writing (and maybe all of ours) has totally devolved into "prompt-ese." Much like days of yore where we all approached Google searches with acrobatic language knowing how to specifically get something done.
Now? I am pushing so much of my writing into prompts to AI, where I know the AI will understand me even with lots of typos and run-on sentences... Is that a bad thing? A good thing? I am able to be so much more effective by sheer volume of words, and the precision and grammar are mostly irrelevant. But I am able to insert nuances and sidetracks that ARE passing vital context to AI but may be lost on people. Or at least pre-prompt-writing people.
> Nowadays my writing [] has totally devolved into "prompt-ese."
I've noticed this myself. Even in my Obsidian vault, which only I read and write in. I think it's a development into writing more imperatively, instinctually. Thinking more in instructions and commands than the speaking and writing habits I've developed organically over my life. Or just "talking to the computer" in plain English, after having to convert my thoughts to code anytime I want to make it do something.
I've been thinking about the role of "director" in media as an analogy to writing with LLMs. I'm working right now on an "essay," that I'm not sure I'll share with anyone, even family (who is my first audience). Right now, under the Authorship section, I wrote "Conceived, directed, and edited by Qaadika. Drafted by Claude", with a few sentences noting that I take responsibility for the content, and that the arguments, structure, audience, and editorial judgments are mine.
I had a unique idea and started with a single sentence prompt, and kept going from there until I realized it should be an essay. So the ideas in it are mine. The thesis is mine. I'm going back and forth with the LLM section by section. Some prompts are a sentence. Some are eight paragraphs. I can read the output and see exactly what was mine and what the LLM added. But my readers won't. They'll just see "Author: Qaadika" and presume every single word was mine. Or they'll sniff out the LLM-ness and stop reading.
I can make a film and call myself director without ever being seen in it. Is it the same if I direct the composition of words without ever writing any of the prose myself? Presuming I've written enough in prompts that it's identifiably unique from cheaper prompts and "LLM, fill in the blank".
We credit Steven Spielberg with E.T. But he didn't write the screenplay. He probably had comments on it, though. He didn't operate the camera. But he probably told the operators where to put it. He didn't act in it. But he probably told the actors where to stand and where to move and how to be. He didn't write the music. But he probably had a sense of when and where to place it in the audio. And he didn't spend every moment in the cutting room, placing every frame just so.
But his name is at the top. He must have done something, even if I can't point to anything specific. The "vibe" of the film is Spielberg, but it's also the result of hundreds of minds, most of whom aren't named until the end of the film, in credits probably never read by most viewers.
His contribution to the film was instructions. Do this, don't do that. Let's move this scene to here. This shot would be better from this angle. The musical swell should be on this shot; cut it longer to fit.
So where, exactly, is "Spielberg" in E.T.? What can we objectively credit him with, aside from the finished product: E.T. the Extra-Terrestrial: Coming June 1982?
Ok, but 3 generations ago, shorthand was a core skill that any competent professional could read and extract MORE value from than laboriously typeset prose. Something similar is probably happening now with prompt-ese and human-to-human (vs just AI) writing.
The way I approach having LLM help with writing documents like this is to have it help me clean up my writing, not write the substance of it.
I tend to do extensive research (a process that would itself involve LLMs too, sure) for a tech plan, a product spec, etc., and usually end up with a really solid idea in my head and, say, five critical key points about this tech plan or product spec that I absolutely must convey in the document.
Then I basically "brain dump" my critical key points (including everything about them: background/reasoning, why this or that way, what's counterintuitive, why each point is important, etc.) in pretty messy writing (but hitting all the important talking points) into an LLM prompt, asking it to produce the document I need (be it tech plan, product spec, whatever) based on my points.
The resulting document has all the important substance on it this way.
If you use an LLM to produce documents like this by way of a prompt like "Write a tech plan for the product feature XYZ I want to build", you're going to get a lot of fluff. No substance, plenty of mistakes, wrong assumptions, etc.
I had an interesting experience the other day. I've been struggling with some lyrics to a song I am writing. I asked Claude to review them, and it did an amazing job of finding the weak lines and the best lines, and nearly perfectly articulating to me why they were weak or strong. It was strange because the output of the analysis almost perfectly mirrored my own thoughts.
When I asked it for alternatives/edits, they were not good however.
> LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?
Given your endorsement of using LLMs for generating ideas, isn't this the inverse of your thesis? The quote's issue with LLMs is the ideas that came out of them; the prose is the tell. I don't think they'd be happy with LLM generated ideas even if they were handwritten.
I feel like this post is missing the forest for the trees. Writing is thinking alright, but fueling your writing by brainstorming with an LLM waters down the process.
I take "ideas" to mean well-scoped replies, like "list the pros and cons of this vs. that Git flow". While someone might think of N issues, the LLM might present another six, of which three or four don't make sense but one or two do. Might be worth adding these to the document.
Writing down specs for technical projects is a transformational skill.
I've had projects that seemed tedious or obvious in my head, only to realize hidden complexity when trying to put their trivialness into written words. It really is a sort of meditation on the problem.
In the most important AI assisted project I've shipped so far I wrote the spec myself first entirely. But feeding it through an LLM feedback loop felt just as transformational, it didn't only help me get an easier to parse document, but helped me understand both the problem and my own solution from multiple angles and allowed me to address gaps early on.
I'm 100% an advocate for not using LLMs for writing... but I'll tell you where I use them for just that: for ceremonies.
A large part of our work is about writing documents that no one will read, but you'll get 10 different reminders that they need to get done. These are documents that circulate and need approval from different stakeholders. Everybody stamps their name on them without ever reading them. I used to spend so much time crafting these documents. Now I use an LLM, the stakeholders are probably using an LLM to summarize it, someone is happy, and they are filed for the records.
I call these "ceremonies" because they are a requirement we have, it helps no one, we don't know why we have to do it, but no one wants to question it.
I fully agree with the sentiment of the article. I will say that I feel I've had some success in having an LLM outline a document, provided that I then go through and read/edit thoroughly. I think there's even an argument that this (a) possibly catches areas I have forgotten to write about, and (b) hooks into my critique mode, which sometimes feels more motivated than author mode (I'm slightly ashamed to say). This does come at the cost, however, of not putting myself in 'researcher' mode, where I go back through the system I'm writing about and follow the threads, reacquainting myself and judging my previous decisions.
> Letting an LLM write for you is like paying somebody to work out for you.
It's worse than this. If someone is working out for you, they still own the outcome of that effort (their physique).
With an LLM people _act_ like the outcome is their own production. The thinking, reasoning, structural capability, modeling, and presentation can all just as easily be framed _as your creation_.
That's why I think we're seeing ideation output and surface coherence go up while originality and creative thinking decline[0].
There's a lot of ways to use an LLM, the least effective is automating an entire process- yet it's the most compelling.
To your point, it's entirely a balance. I personally will record a 10-15 minute yap session on a concept I want to share and feed it to an agent to distill it into a series of observations and more compelling concepts. Then you can use this to write your piece.
the gym analogy lands. you don't hire someone to do your reps, but it's fine to hire a trainer to critique your form. that distinction matters when thinking about how to actually use these tools.
My blog is 100% written by me. You can tell because of all the typos.
I don't really understand why people will create blogs that are generated by Claude or ChatGPT. You don't have to have a blog; isn't the point of a blog that it's your writing? If I wanted an opinion from ChatGPT I could just ask ChatGPT for an opinion. The whole point of a blog, in my mind, is that there's a human who has something they want to say. Even if you have original ideas, having ChatGPT write the core article makes it feel kind of inhuman.
I'm more forgiving of stuff like Grammarly, because typos are annoying, though I've stopped using it because I found I didn't agree with a lot of its edits.
I admit that I will use Claude to bullshit ideas back and forth, basically as a more intelligent "rubber duck", but the writing is always me.
> Letting an LLM write for you is like paying somebody to work out for you.
This. This is the big distinction. If you like something and/or want to improve it, you do it yourself. If not, you pay someone else to do it. And I think that's ok.
But I guess some people either choose a wrong job or had no other option. I'm happy to not be in that group.
I think it's the opposite. People have ideas and know what they want to do. If I need to write something, I provide some bullet points and instructions, and Claude does the rest. I then review, and iterate.
so many thinkers/writers mistake writing prose for thinking. including Paul Graham. this is ABSOLUTELY not true.
You can write for yourself, through thinking, and it can be sloppy, bc you're doing it for yourself.
A homecooked meal does NOT look like a Thanksgiving meal.
Most of these writers think that all writing looks like a Thanksgiving meal; it doesn't. Homecooked meals can be simple, delicious, and not meant to cater to 20+ guests, from family to friends, each with their own weird peculiarities and food allergies.
writing for thinking should be more like home cooked meals: really disorganized, really sloppy, with none of the presentation, but with all the nutrition and comfort that comes with them.
writing is thinking for me, but writing looks like this post; something shot from the hip, and unpolished, to be consumed for myself. it'll probably be downvoted, and that's absolutely ok
Bad AI writing is bad, and obvious once you know what you're looking for. Nobody wants to read it.
Good AI writing takes time, can be valuable, and can inspire readers to send in praise about how insightful or thorough a particular article was (speaking from experience). Why do it? The same reason we all use Claude all day to write code - it is faster / you can do more of it. But in the same way that a junior engineer vibing code is a lot more likely to produce slop than a grizzled senior who is doing the same thing, you have to know what you are doing to get good results out of it.
Pushing back against AI writing in 2026 is like the people pushing back against AI coding in 2024. It's not a question of if it will happen. It's a question of how to do it well. ;)
I write far better than any LLM ... I've tried to get them to help me with writing, they always fuck it up.
The biggest problem is they don't understand the time/effort tradeoff between understanding and language, so they don't know how to pack information densely enough, or how to swim through choppy relationships with the world around them while communicating effectively.
But who knows, maybe they're more effective and I'm just an idiot.
I think a lot of people run their posts through an LLM after writing it and edit it accordingly, resulting in an output somewhere between human-made and AI-generated.
> The goal of writing is not to have written. It is to have increased your understanding, and then the understanding of those around you. When you are tasked to write something, your job is to go into the murkiness and come out of it with structure and understanding. To conquer the unknown.
Adults now have to have it explained to them, like children, that you can’t just stream info through the eyes and ears and expect to learn anything.
That’s one explanation for this apparent need; there are also more sinister ones.
Agree with the underlying point: "don't let an LLM do your thinking, or interfere with processes essential to you thinking things clearly through."
My own experience, however, is that the best models are quite good at helping you with those writing and thinking processes. Finding gaps, exposing contradictions or weaknesses in your hypotheses or specifications, and suggesting related or supporting content that you might have included if you'd thought of it, but you didn't.
While I'm a developer and engineer now, I was a professional author, editor, and publisher in a former life. Would have _killed_ for the fast, often excellent feedback and acceleration that LLMs now provide. And while sure, I often have to "no, no, no!" or delete-delete, "redraft this and do it this way," the overall process is faster and the outcomes better with AI assistance.
The most important thing is to keep overall control of the tone, flow, and arguments. Every word need not be your own, at least in most forms of commercial and practical writing. True whether your collaborators are human, mecha, or some mix.
Well said. The most important part of writing is thinking. LLMs cannot do the thinking for you.
This is why I’m bearish on all of the apps that want to do my writing for me. Expanding a stub of an idea into a low information density paragraph, and then summarizing those paragraphs on the other end. What’s the point?
Unless the idea is trivial, LLMs are probably just getting in the way.
> Letting an LLM write for you is like paying somebody to work out for you.
The problem with writing is that the feedback tends to be inconsistent. With going to the gym you can track your progress quantitatively, such as how fast or far you can run or how much weight you lift, but it's sometimes hard to know if you're improving at writing.
I have been going back to verbose, expansive inline comments. If you put the "history" inline it is context; if you stuff it off in some other system it's an artifact. I can't tell you how many times I have worked in an old codebase that references a "bug number" in a long-dead tracking system.
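As a sketch of the difference, here is history kept inline rather than behind a tracker reference (the function, vendor behavior, and bug number are all hypothetical, chosen only to illustrate the style):

```python
import datetime

def parse_timestamp(raw: str) -> datetime.datetime:
    # Inline history as context. The old style would just say "see BUG-4821",
    # a number that means nothing once the tracker is dead. Instead: some
    # vendors send a trailing "Z" while others send "+00:00"; we normalize
    # rather than special-case, because a special-case path here once
    # silently dropped the UTC offset.
    return datetime.datetime.fromisoformat(raw.replace("Z", "+00:00"))
```

The comment costs a few extra lines, but it travels with the code, so a future reader (or a future agent) gets the "why" without an external lookup.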
> The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.
Why is more AI the "obvious" best solution here? If nobody wants to read your release notes, then why write them? And if they're going to slim them down with their AI anyway, then why not leave them terse?
It sounds like you're just handwaving at a problem and saying "that's where the AI would go" when really that problem is much better solved without AI if you put a little more thought into it.
If I had a technically capable human assistant, I would have them filter through release notes from a vendor and only give me the relevant information for APIs I use. Having them take care of the boring, menial task so I can focus on more important things seems like a no brainer. So it seems reasonable to me to have an AI do that for me as well.
Sure.
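A minimal sketch of that filtering step, with plain string matching standing in for the assistant (the function and the API names are hypothetical; in practice you would hand the surviving entries to an LLM for a summary):

```python
def relevant_notes(notes: list[str], used_apis: set[str]) -> list[str]:
    # Keep only release-note entries that mention an API we actually call.
    # A human assistant (or an LLM) would apply more judgment; this is
    # just the mechanical core of the idea.
    return [note for note in notes if any(api in note for api in used_apis)]
```

For example, `relevant_notes(["BREAKING: get_user() now returns None", "docs: fix typo"], {"get_user"})` keeps only the first entry.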
> If you need to write a proposal for something as a matter of ritual, give it AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.
No, no, no. You don't need to take that step. Whatever bullet-point list you're feeding in as the prompt is the relevant artifact you should be producing and adding to the bug, or sharing as an e-mail, or whatever.
It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone to use LLMs to generate ideas if you're trying to create interesting or novel ideas.
Explaining a design, problem, etc., and trying to find solutions is extremely useful.
I can bring novelty, what I often want from the LLM is a better understanding of the edge cases that I may run into, and possible solutions.
The moment I bring in a conversational element, I want a being that actually has problem comprehension and creativity which an LLM by definition does not.
But I definitely reach for a clear and concise way to describe that my brain and fingers are a firewall between the LLM and my code/workspace. I’m using it to help frame my thinking but I’m the one making the decisions. And I’m intentionally keeping context in my brain, not the LLM, by not exposing my workspace to it.
Unfortunately they can also validate some really bad ideas.
That being said I don't think LLMs are idea generators either. They're common sense spitters, which many people desperately need.
I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.
Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?
I guess I must feel it's slightly useful overall as I still do it.
I think the only reason it’s seen as good anywhere is there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.
If I offend anyone I will not be apologising for it.
What it considers best is what occurs most often, which can be the most average answer. Unless the service is tuned for search (Perplexity, or Google itself, for example), it will not provide as complete an answer.
How well we ask can make all the difference. It's like asking a coworker: providing too little information, or too much context, can give different responses.
Try asking the model not to provide its most common or average answer.
Been using it this way for 2, almost 3 years.
This eloquently states the problem with sending LLM content to other people: As soon as they catch on that you're giving them LLM writing, it changes the dynamic of the relationship entirely. Now you're not asking them to review your ideas or code, you're asking them to review some output you got from an LLM.
The worst LLM offenders in the workplace are the people who take tickets, have Claude do the ticket, push the PR, and then go idle while they expect other people to review the work. I've had to have a few uncomfortable conversations where I explain to people that it's their job to review their own submissions before submitting them. It's something that should be obvious, but the magic of seeing an LLM produce code that passes tests or writing that looks like it agrees with the prompt you wrote does something to some people's brains.
Art is where I choose to draw the line, for both ideation and content generation. That work report I leveraged AI to help flesh out isn't art, but my personal blog is, as is anything I must internalize (that is, thoroughly understand and remember). This is why I have the following disclaimer on my blog (and yes, the typo on this page is purposeful!): https://jasoneckert.github.io/site/about-this-site/
This distinction is important, because (1) writing is not the only way to facilitate thinking, and (2) writing is not necessarily even the best way to facilitate thinking. It's definitely not the best way (a) for everyone, (b) in every situation.
Audio can be a great way to capture ideas and thought processes. Rod Serling wrote predominantly through dictation. Mark Twain wrote most of his autobiography by dictation. Mark Duplass on The Talking Draft Method (1m): https://www.youtube.com/watch?v=UsV-3wel7k4
This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.
From there, you can (and should) leverage AI for transcripts, light transcript cleanups, grammar checks, etc.
Yes, this is my process:
Record yourself rambling out loud, and import the audio in NotebookLM.
Then use this system prompt in NotebookLM chat:
> Write in my style, with my voice, in first person. Answer questions in my own words, using quotes from my recordings. You can combine multiple quotes. Edit the quotes for length and clarity. Fix speech disfluencies and remove filler words. Do not put quotation marks around the quotes. Do not use an ellipsis to indicate omitted words in quotes.
Then chat with "yourself." The replies will match your style and will be source-grounded. In fact, the replies automatically get footnotes pointing to specific quotes in your raw transcripts.
This workflow may not save me time, but it helps me get started, or get unstuck. It helps me stop procrastinating and manage my emotions. I consider it assistive technology for ADHD.
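The disfluency cleanup that the prompt asks for can be approximated mechanically. A crude sketch (NotebookLM's actual processing is far more sophisticated; this is only the idea):

```python
import re

# Filler words to strip; extend to taste.
FILLERS = re.compile(r"\b(um+|uh+|you know)\b[,\s]*", re.IGNORECASE)

def clean_transcript(text: str) -> str:
    # Remove fillers, then collapse whatever whitespace is left behind.
    return re.sub(r"\s+", " ", FILLERS.sub("", text)).strip()
```

For example, `clean_transcript("So, um, I think, uh, this works")` returns `"So, I think, this works"`.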
I've definitely lost something since migrating my Artist's Way morning pages to the netbook. (Worth it, though, to enable grep—and, now, RAG).
It's not the same thing as talking to someone (or a group) about something.
Then once I feel like I have addressed all the areas, I ask for a "critical" review, which usually pokes holes in something that I need to fix. Finally have the AI draft up a document (Though you have to generally tell it to be as concise and clear as possible).
In essence, LLMs are a much better spell check.
Just last week I sent out hand written “c and f’s” as the league would call it. Remarkably, readership remains high. And when else am I handwriting letters to send across the country.
Sometimes an LLM can shortcut me through a bunch of those misunderstandings. It feels like an easy win.
But ultimately, lacking context for how I got to that point in the debugging litany always slows me down more than the time it saved me. I frequently have to go backwards to uncover some earlier insight the LLM glossed over, in order to "unlock" a later problem in the litany.
If the problem is simple enough the LLM can actually directly tell you the answer, it's great. But using it as a debugging "coach" or "assistant" is actively worse than nothing for me.
Docs written by agents are almost always mediocre.
[0] https://news.ycombinator.com/item?id=47579977
This applies at a business level (most software shops shouldn't have full-time book keepers on staff, for example), but applies even more in the AI age.
I use LLMs to help me code the boring stuff. I don't want to write CDK, I don't want to have to code the same boilerplate HTML and JS I've written dozens of times before - they can do that. But when I'm trying to implement something core to what I'm doing, I want to get more involved.
Same with writing. There's an old joke in the writing business that most people want to have written a book more than they want to go through the process of writing one. People who say they want to write don't actually want to do the work of writing, they just want the cocktail parties and the stroked ego of seeing their name in a bookshop or library. LLMs are making that more possible, but at a rather odd cost.
When I write, I do so because I want to think. Even when I use an LLM to rubber duck ideas off, I'm using it as a way to improve my thinking - the raw text it outputs is not the thing I want to give to others, but it might make me frame things differently or help me with grammar checks or with light editing tasks. Never the core thinking.
Even when I dabble with fiction writing: I enjoy the process of plotting, character development, dialogue development, scene ordering, and so on. Why would I want to outsource that? Why would a reader be interested in that output rather than something I was trying to convey. Art lives in the gap between what an artist is trying to say and what an audience is trying to perceive - having an LLM involved breaks that.
So yeah, coding, technical writing, non-fiction, fiction, whatever: if you're using an LLM you're giving up and saying "I don't care about this", and that might be OK if you don't care about this, but do that consciously and own it and talk about it up-front.
When you outsource the generation and thinking, you're also outsourcing the self-review that comes along with evaluating your own output.
In the office, that review step gets outsourced to your coworkers.
Having a coworker who has ChatGPT generate their slides, design docs, or PRs is terrible because you realize that their primary input is prompting Claude and then sending the output to other people to review. I could have done that myself. Reviewing their Claude or ChatGPT output so they can prompt Claude or ChatGPT to fix it is just a way to get me to do their work for them.
I will sometimes write a lesson and have an LLM generate a quiz and give me feedback on my content, searching for mistakes or unclear passages.
I have also used it to help me structure a document. I give it requirements, and it makes a general outline that I then just fill in with my own words.
I’m still not sure how to approach my students’ uses of an LLM. I am loath to make a hard and fast rule of no LLMs because that’s ridiculous. I want to encourage appropriate use. I don’t know what is appropriate use in the context of a student.
An LLM can be a great learning tool but it also can be a crutch.
With an LLM doing all the writing for you, you learn close to nothing.
Workers and managers in organizations are being overwhelmed by large numbers of documents because it's so easy to bang something off that's 'about right' and convincing enough.
But there's still some value in writing documents. I agree with the original article - it's all about thinking. My take on it is this: it's possible to use LLMs to write decent documents so long as you treat the process as a partnership (man and machine), and conduct the process iteratively. Work on it, and yes, think.
I have to disagree that it's good for LLMs to do the research, depending on the context.
If by "useful for research" you mean useful for tracking down sources that you, as the writer, digest and consider, then great.
If by "useful for research" you mean that it will fill in your citations for you, that's terrible. That sends a false signal to readers about the credibility of your work. It's critical that the author read and digest the things they are citing to.
Now? I am pushing so much of my writing into prompts into AI where I know the AI will understand me even with lots of typos and run-on sentences... Is that a bad thing? A good thing? I am able to be so much more effective by sheer volume of words, and the precision and grammar is mostly irrelevant. But I am able to insert nuances and sidetracks that ARE passing vital context to AI but may be lost on people. Or at least pre-prompt-writing people.
I've noticed this myself. Even in my Obsidian vault, which only I read and write in. I think it's a development into writing more imperatively, instinctually. Thinking more in instructions and commands than the speaking and writing habits I've developed organically over my life. Or just "talking to the computer" in plain English, after having to convert my thoughts to code anytime I want to make it do something.
I've been thinking about the role of "director" in media as an analogy to writing with LLMs. I'm working right now on an "essay," that I'm not sure I'll share with anyone, even family (who is my first audience). Right now, under the Authorship section, I wrote "Conceived, directed, and edited by Qaadika. Drafted by Claude", with a few sentences noting that I take responsibility for the content, and that the arguments, structure, audience, and editorial judgments are mine.
I had a unique idea and started with a single sentence prompt, and kept going from there until I realized it should be an essay. So the ideas in it are mine. The thesis is mine. I'm going back and forth with the LLM section by section. Some prompts are a sentence. Some are eight paragraphs. I can read the output and see exactly what was mine and what the LLM added. But my readers won't. They'll just see "Author: Qaadika" and presume every single word was mine. Or they'll sniff out the LLM-ness and stop reading.
I can make a film and call myself director without ever being seen in it. Is it the same if I direct the composition of words without ever writing any of the prose myself? Presuming I've written enough in prompts that it's identifiably unique from cheaper prompts and "LLM, fill in the blank".
We credit Steven Spielberg with E.T. But he didn't write the screenplay. He probably had comments on it, though. He didn't operate the camera. But he probably told the operators where to put it. He didn't act in it. But he probably told the actors where to stand and where to move and how to be. He didn't write the music. But he probably had a sense of when and where to place it in the audio. And he didn't spend every moment in the cutting room, placing every frame just so.
But his name is at the top. He must have done something, even if I can't point to anything specific. The "Vibe" of the film is Spielberg, but it's also the result of hundreds of minds, most of whom aren't named until the end of the film, and whose names probably go unread by most viewers.
His contribution to the film was instructions. Do this, don't do that. Let's move this scene to here. This shot would be better from this angle. The musical swell should be on this shot; cut it longer to fit.
So where, exactly, is "Spielberg" in E.T.? What can we objectively credit him with, aside from the finished product: E.T. the Extra-Terrestrial: Coming June 1982?
No. Don't pretend your taking shortcuts is less questionable because everyone else is doing it too. We're not. Own it yourself, don't get me involved.
> I am able to be so much more effective by sheer volume of words
If you think value comes from volume of words you really need to understand writing better.
I tend to do extensive research (that process in itself would involve LLMs too, sure) in a tech plan, a product spec, etc. and usually end up with a really solid idea in my head and like say, five critical key points about this tech plan or product spec that I absolutely must convey in this document.
Then I basically "brain dump" my critical key points (including everything about them: background/reasoning, why this or that way, what's counterintuitive, why each point is important, etc.) in pretty messy writing (but hitting all the important talking points) into an LLM prompt, asking it to produce the document I need (be it tech plan, product spec, whatever) based on my points.
The resulting document has all the important substance in it this way.
If you use an LLM to produce documents like this by way of a prompt like "Write a tech plan for the product feature XYZ I want to build", you're going to get a lot of fluff: no substance, plenty of mistakes, wrong assumptions, etc.
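That brain-dump workflow can be reduced to a prompt-assembly step. A hypothetical sketch (the function and its wording are illustrative; the constraint line is the important part: the LLM structures, it does not invent):

```python
def spec_prompt(key_points: list[str], doc_type: str = "tech plan") -> str:
    # The human supplies the substance; the model is only asked to
    # organize and polish it, not to add requirements of its own.
    bullets = "\n".join(f"- {point}" for point in key_points)
    return (
        f"Write a {doc_type} based strictly on the points below. "
        f"Do not invent requirements that are not listed.\n{bullets}"
    )
```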
When I asked it for alternatives/edits, they were not good however.
> LLM-generated writing undermines the authenticity of not just one’s writing but of the thinking behind it as well. If the prose is automatically generated, might the ideas be too?
Given your endorsement of using LLMs for generating ideas, isn't this the inverse of your thesis? The quote's issue with LLMs is the ideas that came out of them; the prose is the tell. I don't think they'd be happy with LLM generated ideas even if they were handwritten.
I feel like this post is missing the forest for the trees. Writing is thinking alright, but fueling your writing by brainstorming with an LLM waters down the process.
I've had projects that seemed tedious or obvious in my head, only to realize hidden complexity when trying to put their supposed triviality into written words. It really is a sort of meditation on the problem.
In the most important AI-assisted project I've shipped so far, I wrote the spec myself first, entirely. But feeding it through an LLM feedback loop felt just as transformational: it didn't only help me get an easier-to-parse document, but helped me understand both the problem and my own solution from multiple angles and allowed me to address gaps early on.
So I'll say: Do your own writing, first.
A large part of our work is about writing documents that no one will read, but you'll get 10 different reminders that they need to get done. These are documents that circulate, need approval from different stake holders. Everybody stamps their name on it, without ever reading it. I used to spend so much time crafting these documents. Now I use an LLM, the stakeholders are probably using an LLM to summarize it, someone is happy, they are filed for the records.
I call these "ceremonies" because they are a requirement we have, it helps no one, we don't know why we have to do it, but no one wants to question it.
[0] https://time.com/7295195/ai-chatgpt-google-learning-school/
LLMs write poorly because most people write poorly. They didn’t cause it, they simply emulate it.
> Essay structured like LLM output
Hmmm...