
To those who fired or didn't hire tech writers because of AI

352 points | theletterf | 1 month ago | passo.uno

266 comments


nicbou|1 month ago

I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This decides what I write about, and how to write about it. This sort of curation can only come from a thinking, feeling human being.

I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.

Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone bothered to write down, but I actually go out in the real world and ask questions.

I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.

People who think that AI can do this and the other things have an almost insulting understanding of the jobs they are trying to replace.

Nextgrid|1 month ago

The problem is that so many things have been monopolized or oligopolized by equally mediocre actors that quality ultimately no longer matters; it's not like people have any options.

You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer however has an immediate and quantifiable effect on the budget.

Apply the same to software (have you seen how bad tech is lately?) or basically any vertical with a nontrivial barrier to entry, where someone can't just say "this sucks and I'm gonna build a better one in a weekend".

GuB-42|1 month ago

And that's exactly the same for coding!

Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer, which is his actual job.

Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.

boilerupnc|1 month ago

Well said. I try to capture and express this same sentiment to others through the following expression:

“Technology needs soul”

I suppose this can be generalized to "__ needs soul". E.g., technical writing needs soul, user interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.

order-matters|1 month ago

Your ability to articulate yourself cleanly comes across in this post in a way that AI strives for and never quite reaches.

I completely agree that the ambitions of AI proponents to replace workers are insulting. You hit the nail on the head in pointing out that we simply don't write everything down. And the more common-sense / well-known something is, the less likely it is to be written down, yet the more likely it might be needed by an AI to align itself properly.

ChrisMarshallNY|1 month ago

Thanks so much for this!

Nicely written (which, I guess, is sort of the point).

ajuc|1 month ago

Replacement will be 80% worse, that's fine. As long as it's 90% cheaper.

See Duolingo :)

gausswho|1 month ago

I like the cut o' your jib. The local public transit guide you write, is that for work or for your own knowledge base? I'm curious how you're organizing this while keeping the human touch.

I'm exploring ways to organize my Obsidian vault such that it can be shared with friends, but not the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share it with others.

anal_reactor|1 month ago

In every single discussion AI-sceptics claim "but AI cannot make a Michelin-star five-course gourmet culinary experience" while completely ignoring the fact that most people are perfectly happy with McDonald's, as evidenced by its tremendous economic and cultural success, and the loudest complaint with the latter is the price, not the quality.

DeepSeaTortoise|1 month ago

Why shouldn't AI be able to sufficiently model all of this in the not-so-distant future? Why shouldn't it have sufficient access to new data and sensors to collect information on its own, or at least feed the system that supplies it?

Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.

IMO there is an entirely different problem, that's not going to go away just about ever, but could be solved right now easily. And whatever AI company does so first instantly wipes out all competition:

Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.

You know, just like the human it'd replace.

sevensor|1 month ago

See also: librarians, archivists, historians, film critics, doctors, lawyers, docents. The déformation professionnelle of our industry is to see the world in terms of information storage, processing, and retrieval. For these fields and many others, this is like confusing a nailgun for a roofer. It misses the essence of the work.

TimByte|1 month ago

The hard part is the slow, human work of noticing confusion, earning trust, asking the right follow-up questions, and realizing that what users say they need and what they actually struggle with are often different things

Stratoscope|1 month ago

Your philosophy reminds me of my friend Caroline Rose. One of Caroline's claims to fame was writing the original Inside Macintosh.

You may enjoy this story about her work:

https://www.folklore.org/Inside_Macintosh.html

As a counterpoint, the very worst "documentation" (scare quotes intended) I've ever seen was when I worked at IBM. We were all required to participate in a corporate training about IBM's Watson coding assistant. (We weren't allowed to use external AIs in our work.)

As an exercise, one of my colleagues asked the coding assistant to write documentation for a Python source file I'd written for the QA team. This code implemented a concept of a "test suite", which was a CSV file listing a collection of "test sets". Each test set was a CSV file listing any number of individual tests.

The code was straightforward, easy to read and well-commented. There was an outer loop to read each line of the test suite and get the filename of a test set, and an inner loop to read each line of the test set and run the test.

The coding assistant hallucinated away the nested loop and just described the outer loop as going through a test suite and running each test.
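For contrast, the structure the assistant flattened is only a few lines of code. A rough sketch of what's described above (the names `run_suite` and `run_test`, and the first-column layout, are my assumptions, not the original code):

```python
import csv

def run_suite(suite_path, run_test):
    """Hypothetical sketch of the nested-loop structure described above.

    Outer loop: each line of the suite CSV names a test-set CSV file.
    Inner loop: each line of a test set describes one individual test.
    """
    with open(suite_path, newline="") as suite_file:
        for suite_row in csv.reader(suite_file):
            test_set_path = suite_row[0]  # assume first column holds the filename
            with open(test_set_path, newline="") as set_file:
                for test_row in csv.reader(set_file):
                    run_test(test_row)  # run one individual test
```

An accurate summary has to mention both levels: the suite enumerates test sets, and the test sets enumerate tests. Describing only the outer loop, as the assistant did, loses half the design.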

There were a number of small helper functions with docstrings and comments and type hints. (We type hinted everything and used mypy and other tools to enforce this.)

The assistant wrote its own "documentation" for each of these functions in this form:

"The 'foo' function takes a 'bar' parameter as input and returns a 'baz'"

Dude, anyone reading the code could have told you that!

All of this "documentation" was lumped together in a massive wall of text at the top of the source file. So:

When you're reading the docs, you're not reading the code.

When you're reading the code, you're not reading the docs.

Even worse, whenever someone updates the actual code and its internal documentation, they are unlikely to update the generated "documentation". So it started out bad and would get worse over time.

Note that this Python source file didn't implement an API where an external user might want a concise summary of each API function. It was an internal module where anyone working on it would go to the actual code to understand it.

esafak|1 month ago

Are you working in the legal field or is that separate? How big is your company?

chiefalchemist|1 month ago

I don't write for a living, but I do consider communication / communicating a hobby of sorts. My observations - that perhaps you can confirm or refute - are:

- Most people don't communicate as thoroughly and completely - written and verbal - as they think they do. Very often there is what I call "assumptive communication". That is, the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - as it's done all the time - but not always. The resolution doesn't change the fact that there was ambiguity at the root.

Next time you're communicating, listen carefully. Make note of how often the other person sends something that could be interpreted differently, how often you assume by using the default of "what they likely meant was..."

- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills and/or willingness to communicate at the level AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.

p.s. One of my fave comms' heuristics is from Frank Luntz*:

"It's not what you say, it's what they hear." (<< edit was changing to "say" from "said".)

One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that, and be accountable, especially when assumptive communication is a viable shortcut.

* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.

rasmus-kirk|1 month ago

Spot on! I think LLMs can help greatly in quickly putting that knowledge in writing, including using them to review written materials for hidden prerequisite assumptions that readers might not be aware of. They can also help newer hires learn to write more clearly. LLMs are clearly useful for increasing productivity, but management that thinks they are even close to ready to replace large sections of practically any workforce is delusional.

observationist|1 month ago

I think you fundamentally misunderstand how the technology can be used well.

If you are in charge of a herd of bots that are following a prompt scaffolding in order to automate a work product that meets 90% of the quality of the pure human output you produce, that gives you a starting point with only 10% of the work to be done. I'd hazard a guess that if you spent 6 months crafting a prompt scaffold you could reach 99% of your own quality, with the odd outliers here and there.

The first person or company to do that well then has an automation framework, and they can suddenly achieve 10x or 100x the output with a nominal cost in operating the AI. They can ensure that each and every work product is lovingly finished and artisanally handcrafted, go the extra mile, and maybe reach 8x to 80x output with a QA loss.

In order to do 8-80x one expert's output, you might need to hire a bunch of people to do segmented tasks - some to do interviews, build relationships, the other things that require in person socialization. Or, maybe AI can identify commonalities and do good enough at predicting a plausible enough model that anyone paying for what you do will be satisfied with the 90% as good AI product but without that personal touch, and as soon as an AI centric firm decides to eat your lunch, your human oriented edge is gone. If it comes down to beancounting, AI is going to win.

I don't think there's anything that doesn't require physically interacting with the world that isn't susceptible to significant disruption, from augmentation to outright replacement, depending on the cost of tailoring a model to the tasks.

For valuable enough work, companies will pay the millions to fine-tune frontier models, either through OpenAI or open source options like Kimi or DeepSeek, and those models will give those companies an edge over the competition.

I love human customer service, especially when it's someone who's competent, enjoys what they do, and actually gives a shit. Those people are awesome - but they're not necessary, and the cost of not having them is less than the cost of maintaining a big team of customer service agents. If a vendor tells a big company that they can replace 40k service agents being paid ~$3.2 billion a year with a few datacenters, custom AI models, AI IT and Support staff, and totally automated customer service system for $100 million a year, that might well be worth the reputation hit and savings. None of the AI will be able to match the top 20% of human service agents in the edge cases, and there will be a new set of problems that come from customer and AI conflict, etc.

Even so. If your job depends on processing information - even information in a deeply human, emotional, psychologically nuanced and complex context - it's susceptible to automation, because the ones with the money are happy with "good enough." AI just has to be good enough to make more money than the human work it supplants, and frontier models are far past that threshold.

rtgfhyuj|1 month ago

sounds like a bunch of agents can do a good amount of this. A high horse isn’t necessary

PlatoIsADisease|1 month ago

>insulting

As a writer, you know this makes it seem emotional rather than factual?

Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which end up being inspiration rather than copy-paste).

block_dagger|1 month ago

…says every charlatan who wanted to keep their position. I’m not saying you’re a charlatan but you are likely overestimating your own contributions at work. Your comment about feeding on data - AI can read faster than you can by orders of magnitude. You cannot compete.

GuB-42|1 month ago

AI works well for one kind of documentation.

The kind of documentation no one reads, that is just there to please some manager or meet some compliance requirement. These are, unfortunately, the most common kind I see, by volume. Usually, they are named something like QQF-FFT-44388-IssueD.doc and they are completely outdated with regard to the thing they document despite having seen several revisions, as evidenced by the inconsistent style.

Common features are:

- A glossary that describes terms that don't need describing, such as CPU or RAM, but not ambiguous and domain-specific terms, of which there are many

- References to documents you don't have access to

- UML diagrams, not matching the code of course

- Signatures by people who left the project long ago and are nowhere to be seen

- A bunch of screenshots, all with different UIs taken at different stages of development, which would be of great value to archeologists

- Wildly inconsistent formatting, some people realize that Word has styles and can generate a table of contents, others don't, and few care

Of course, no one reads them, besides maybe a depressive QA manager.

glemion43|1 month ago

I let it generate README.md files for my projects, and they look awesome, read nicely, and are theoretically helpful for everyone new.

And LLMs are really good at reading your docs to help someone, so I make sure to add more concrete examples to them.

asah|1 month ago

not true! it's read by other LLMs! /s

drob518|1 month ago

The best tech writers I have worked with don’t merely document the product. They act as stand-ins for actual users and will flag all sorts of usability problems. They are invaluable. The best also know how to start with almost no engineering docs and to extract what they need from 1-1 sit down interviews with engineering SMEs. I don’t see AI doing either of those things well.

seanwilson|1 month ago

> They act as stand-ins for actual users and will flag all sorts of usability problems.

I think everyone on the team should get involved in this kind of feedback, because raw first impressions of new content (which you can only experience once, and which will be somewhat similar to those of impatient new users) are super valuable.

I remember as a dev flagging some tech marketing copy aimed at non-devs as confusing and being told by a manager not to give any more feedback like that because I wasn't in marketing... If your own team that's familiar with your product is a little confused, you can probably x10 that confusion for outside users, and multiply that again if a dev is confused by tech content aimed at non-devs.

I find it really common as well that you get non-tech people writing about tech topics for marketing and landing pages, and because they only have a surface-level understanding of the tech, the text becomes really vague with little meaning.

And you'll get lots of devs and other people on the team agreeing in secret that, e.g., the product homepage content isn't great, but they're scared to say anything because they feel they have to stay inside their bubble and there isn't a culture of sharing feedback like that.

loudmax|1 month ago

AI may never be able to replace the best tech writers, or even pretty good tech writers.

But today's AI might do better than the average tech writer. AI might be able to generate reasonably usable, if mediocre, technical documentation based on a halfheartedly updated wiki and the README files and comments scattered in the developers' code base. A lot of projects don't just have poor technical documentation, they have no technical documentation.

TimByte|1 month ago

In my experience, great tech writers quietly function as a kind of usability radar. They're often the first people to notice that a workflow is confusing

throwaw12|1 month ago

> They act as stand-ins for actual users and will flag all sorts of usability problems

True, but it raises another question: what were your Product Managers doing in the first place if the tech writer is the one finding usability problems?

falcor84|1 month ago

> I don’t see AI doing either of those things well.

I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.

Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or possibly never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feel that this wording is a bit too magical to be useful.

killerstorm|1 month ago

True.

Also true that most tech writers are bad. And companies aren't going to spend >$200k/year on a tech writer until they hit tens of millions in revenue. So AI fills the gap.

As a horror story, our docs team didn't understand that having correct installation links should be one of their top priorities. Obviously, if a potential customer can't install the product, they'll assume it's BS and try to find an alternative. It's so much more important than, e.g., grammar in the middle of some guide.

tech_tuna|1 month ago

Agreed, they overlap with QA engineers and Product Managers, with some level of technical skill on their own e.g. they might know Python pretty well.

DeborahWrites|1 month ago

Yeah. AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement. The companies with the best docs will absolutely still have tech writers, just with some AI assistance.

Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no, if you'll excuse the slight self-promotion but it saves me repeating myself https://deborahwrites.com/blog/nobody-can-write/)

In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.

I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.

FeteCommuniste|1 month ago

> AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement.

[insert Pawn Stars meme]: "GOOD docs? Sorry, best I can do is 'slightly better than useless.'"

shiroiuma|1 month ago

>AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement.

That's fine, though: as long as the AI's output is better than "completely and utterly useless", or even "nonexistent", it'll be an improvement in many places.

sehugg|1 month ago

The best tech writers I've known have been more like anthropologists, bridging communication between product management, engineers, and users. With this perspective they often give feedback that makes the product better.

oenton|1 month ago

> bridging communication between product management, engineers, and users.

Thank you for putting this so eloquently into words. At my work (FAANG) tech writers are being let go and their responsibilities are being pushed on developers, who are now supposed to “use AI” to maintain customer facing documentation.

Is this the promised land? It sure doesn't feel like it.

TimByte|1 month ago

AI can help with synthesis once those insights exist, but it doesn't naturally occupy that liminal space between groups, or sense the cultural and organizational gaps

ainiriand|1 month ago

And here I am in 2026, and one of my goals for this year is to learn to write better, communicate more fluently, and convey my ideas in a more attractive way.

I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.

In my humble opinion, this is what we will be losing from people: the upscaling of skills will be lost for sure, and that human upscaling is the real loss.

jraph|1 month ago

> but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.

Yep, and reading you will feel less boring.

The uniform style of LLMs gets old fast and I wouldn't be surprised if it were a fundamental flaw due to how they work.

And it's not even certain that the speed gains from using LLMs make up for the skill loss in the long term.

entontoent|1 month ago

Was anyone else confused by this title?

I thought it was saying "a letter to those who fired tech writers because they were caught using AI," not "a letter to those who fired tech writers to replace them with AI."

The whole article felt imprecise with language. To be honest, it made me feel LESS confident in human writers, not more.

I was having flashbacks to all of the confusing docs I've encountered over the years, tightly controlled by teams of bad writers promoted from random positions within the company, or coming from outside but having a poor understanding of our tech or how to write well.

I'm writing this as someone who majored in English Lit and CS, taught writing to PhD candidates for several years, and maintains most of my own company's documentation.

ThrowawayR2|1 month ago

Given the steady parade of headlines on HN about workers supposedly being replaced by AI, it seems fairly self-evident that the first interpretation is the less likely of the two.

motbus3|1 month ago

I like the post but we can learn from insurance companies.

They have AI finding reasons to reject totally valid requests.

They are arguing in court that this is a software bug and that they should not be liable.

That will be the standard excuse. I hope it does not work.

InMice|1 month ago

Is it expected that LLMs will continue to improve over time? All the recent articles like this one just seem to describe this technology's faults as fixed and permanent. Basically saying "turn around and go no further". Honestly asking because their arguments seem to be dependent on improvement never happening and never overcoming any faults. It feels shortsighted.

marcosdumay|1 month ago

> Is it expected that LLMs will continue to improve over time?

By whom?

Your expectations aren't the same everybody has.

LtWorf|1 month ago

The LLM can't actually use the product and realise that the description is wrong.

tmvnty|1 month ago

I’m already seeing colleagues at work using AI to generate documentations and then call it a day. It’s like they are oblivious to how _ugly_ and _ineffective_ the AI generated AI slops are:

- too many emojis
- overly verbose text
- they lack the context of what's important
- critical business and historical context is lost
- etc.

They used AI to satisfy the short-term gain: “we have documentation”, without fully realising the long-term consequences of low quality. As a result, imo we’ll see the down spiral effects of bugs, low adoption, and unhappy users.

shiroiuma|1 month ago

>I’m already seeing colleagues at work using AI to generate documentations and then call it a day. It’s like they are oblivious to how _ugly_ and _ineffective_ the AI generated AI slops are:

I'm sure their slop looks FAR better than the garbage my coworkers write. I really wish my coworkers would use AI to edit their writing, because then it might actually be comprehensible.

aniou|1 month ago

First, we've fallen into a nomenclature trap, as so-called "AI" has nothing to do with "intelligence." Even its creators admit this, hence the name "AGI," since the appropriate acronym has already been used.

But when we use the "AI" acronym, our brains still register the "intelligence" attribute and tend to perceive LLMs as more powerful than they actually are.

Current models are like trained parrots that can draw colored blocks and insert them into the appropriate slots. Sure, much faster and with incomparably more data. But they're still parrots.

This story and the discussions remind me of reports and articles about the first computers. People were so impressed by the speed of their mathematical calculations that they called them "electronic brains" and considered, even feared, "robot intelligence."

Now we're so impressed by the speed of pattern matching that we call it "artificial intelligence," and we're back where we started.

aurareturn|1 month ago

But you might not need 5 tech writers anymore. Just 1 who controls an LLM.

theletterf|1 month ago

Perhaps. Could the same be said for engineers?

TimByte|1 month ago

The failure mode isn't just hallucinations, it's the absence of judgment: what not to document, what to warn about, what's still unstable, what users will actually misunderstand

ChrisMarshallNY|1 month ago

Good points.

I suspect a lot of folks are asking ChatGPT to summarize it…

I can’t imagine just letting an LLM write an app, server, or documentation package, wholesale and unsupervised, but have found them to be extremely helpful in editing and writing portions of a whole.

The one thing that could be a light in the darkness, is that publishers have already fired all their editors (nothing to do with AI), and the writing out there shows it. This means there’s the possibility that AI could bring back editing.

groovy2shoes|1 month ago

as a writer, i have found AI editing tools to be woefully unhelpful. they tend to focus on specific usage guidelines (think Strunk & White) and have little to offer for other, far more important aspects of writing.

i wrote a 5 page essay in November. the AI editor had sixty-something recommendations, and i accepted exactly one of them. it was a suggestion to hyphenate the adjectival phrase "25-year-old". i doubt that it had any measurable impact on the effectiveness of the essay.

thing is, i know all the elements of style. i know proper grammar and accepted orthographic conventions. i have read and followed many different style guides. i could best any English teacher at that game. when i violate the principles (and i do it often), i do so deliberately and intentionally. i spent a lot of time going through suggestions that would only genericize my writing. it was a huge waste of my time.

i asked a friend to read it and got some very excellent suggestions: remove a digressive paragraph, rephrase a few things for persuasive effect, and clarify a sentence. i took all of these suggestions, and the essay was markedly improved. i'm skeptical that an LLM will ever have such a grasp of the emotional and persuasive strength of a text to make recommendations like that.

throwaw12|1 month ago

I will share my experience, hopefully it answers some questions to tech writers.

I was a terrible writer, but we had to write good docs and make it easy for our customers to integrate with our products. So I prepared the context for our tech writers, and they created nice documentation pages.

The cycle was (reasonably takes 1 week, depending on tech writer workload):

    1. prepare context
    2. create ticket to tech writers, wait until they respond
    3. discuss messaging over the call
    4. couple days later I get first draft
    5. iterate on draft, then finally publish it
Today it's different:

    1. I prepare all the context and style guide, then feed them into LLM.
    1.1. context is extracted directly from code by coding agents 
    2. I proofread it and 97% of cases accept it, because it follows the style guide and mostly transforms my context correctly into customer consumable content
    3. Done. less than 20 minutes
Tech writers were doing an amazing job, of course, but I can get 90-95% of the quality in 1% of the time spent on that work.

arionmiles|1 month ago

If you're getting such value out of LLMs, I'm intrigued to learn more about what exactly it is that you're feeding them.

People boast about the gains with LLMs all the damn time and I'm sceptical of it all unless I see their inputs.

anonymous_sorry|1 month ago

Your docs are probably read many more times than they are written. It might be cheaper and quicker to produce them at 90% quality, but surely the important metric is how much time it saves or costs your readers?

codesparkle|1 month ago

It’s not so much that AI is replacing “tech writers”; with all due respect to the individuals in those roles, it was never a good title to identify as.

Technical writing is part of the job of software engineering. Just like “tester” or “DBA”, it was always going to go the way of the dodo.

If you’re a technical writer, now’s the time to reinvent yourself.

viraptor|1 month ago

The specialisations will always exist. A good software engineer can't replace a good tester, DBA, or writer. There are specific extra skills necessary for those roles. We may not need those full skills in every environment (most companies will be just fine without a DBA), but they sure are not going away globally.

You're going to get some text out of a typical engineer, but the writing quality, flow, and fit for the given purpose is not going to come close to someone who does it every day.

EagnaIonat|1 month ago

> Technical writing is part of the job of software engineering.

Where I work we have professional technical writers and the quality vs your typical SW engineer is night and day. Maybe you got lucky with the rare SW engineer that can technical write.

elzbardico|1 month ago

A somewhat related anecdote:

Two years ago, I asked ChatGPT to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.

You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context may be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.

Just an example: it may be able to generate text that recognizes that a PhD is a step above a Master's degree, but sometimes it won't be able to translate this fact (as opposed to the description of this fact) into the subtle differences in attention and emphasis we use in written text to reflect those real-world hierarchies of value. It can repeat the fact to you, can even kind of generalize it, but it won't make a decision based on it.

It can, even more now, produce a very close simulation of this, because the relative importance of things has been semantically captured, and it is very good at capturing those subtle semantic relationships. But, in linguistic terms, it absolutely sucks at pragmatics.

An example: let's say in one of your experiences you improved a model that detected malignancy in a certain kind of tumor image, improving its false negative rate to something like 0.001%, and in the same experience you casually mention that you once tied the CEO's toddler's tennis shoes. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant shoe-tying activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.

So in the end, you end up with some bizarre stuff that looks like:

"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"

squigz|1 month ago

You had an LLM rewrite your resume, and then sent it to employers... without proofreading it? That was certainly a choice.

ponector|1 month ago

To get through the hiring process nowadays you actually need an AI-written CV, because no one is reading it except the AI-powered ATS used by the HR department.

jr-throw|1 month ago

None of the ten or so staff tech writers I have worked closely with over the years have honestly been great. This has been disappointing.

Always had to contract external people to get stuff done really well. One was a bored CS university professor, another was a CTO in a struggling tiny startup who needed cash.

prakashn27|1 month ago

I have not fired a technical writer, but writing documentation that understands and maintains users' focus is hard even with an LLM. I am trying to write documentation for my startup and it is harder than I expected, even with an LLM.

Kudos to all the technical writers who made my job as a software engineer easier.

lubujackson|1 month ago

Based on the trajectory of LLMs I bet a good tech writer will soon be a more valuable engineer than a "leetcode-hard" engineer for most teams.

Obviously we still need people to oil the machine, but... a person who deeply understands the product, can communicate shortcomings in process or user flows, can quickly and effectively organize their thoughts and communicate them, can navigate up and down abstraction levels and dive into details when necessary - these are the skills LLMs require.

EagnaIonat|1 month ago

Nice read after the earlier post saying fire all your tech writers. Good post.

One thing to add is that the LLM doesn't know what it can't see. It just amplifies what is there. Assumed knowledge is quite common with developers and their own code. Or the more common "it works on my machine" because something is set outside of the code environment.

Sadly other fields are experiencing the same issue of someone outside their field saying AI can straight up replace them.

theletterf|1 month ago

> after the earlier post saying fire all your tech writers

What post was that?

gettingoverit|1 month ago

Not their manager here, but

We fired our professional tech writers. They've been using AI all the time (with horrible results), and were basically incapable of tech writing without it at all.

Looking for tech writers on the market is nigh impossible. Even people with a decent portfolio tend to be very bad at their job.

The only good option now is to hire a software developer to do the writing. There's a decent amount of them who have experience with that. Obviously devs won't like to have it on their CV instead of proper development.

Honestly this is a catastrophe. If you're firing a tech writer that writes something even semi-decent, you're ruining your business.

Reminder: AI is only good at things that existed in bulk during its training, such as README files, configs that always look the same (package.json, Dockerfile), and tests. The documentation for your product, or for products of its kind, or even documentation in general, either never existed or was never enough of a commodity for AI to generate it well.

theletterf|1 month ago

So you're generalizing from your bad experience and your less than optimal recruitment process.

maxdo|1 month ago

I'm on the engineering side. We are in the same boat.

Writers become more productive = fewer writers needed. Not zero, but fewer.

That's the current step. Now, if Cursor delivers on its promise of completely automating multi-week system work, all the internal docs become AI-driven.

So the only exception is external docs. But... if all software is written by machines, there are no readers.

This is obviously a vector, not the current state :( Very dark and gloomy.

stackedinserter|1 month ago

> So here’s my request for you: Reconsider

Why should I hire a dedicated writer if I have people with a better understanding of the system? Also worth noting that, like in any profession, most writers are... mediocre. Especially when you hire someone on contract. I've had mostly bad experiences with them in the past. They happily charge $1000 for a few pages of garbage that is not even LLM-quality. No creativity, just pumping out words.

I can chip in like $20 to pay some "good writer" that "observes, listens and understands" for writing documentation on something and compare it with LLM-made one.

"Write a manual for air travel for someone who never flew. Cover topics like buying a ticket, preparing for travel, getting to airport, doing things in the airport, etc"

Let's compare!

the_af|1 month ago

> Why should I hire a dedicated writer if I have people with better understanding of the system?

Many engineers are terrible at documentation, not just because they find it boring or cannot put it into words (that's the part an LLM could actually help with) but because they cannot tell what to document, what is unneeded detail, how best to address the target audience (or what is the profile of the target audience to begin with; something you can tell an LLM but which it cannot find on its own), etc, etc. The Fine Article goes into these nuances; it's the whole point of it.

> "Write a manual for air travel for someone who never flew. Cover topics like buying a ticket, preparing for travel, getting to airport, doing things in the airport, etc"

Air travel is a well-known thing, surely different from your bespoke product.

adrian_b|1 month ago

While I agree with the article, reducing the number of technical writers in the belief that their absence can be compensated for by AI is just the most recent step in a continuous degradation of technical documentation over the last 3 decades.

During the nineties of the last century I was still naive enough to believe that the great improvements in technology, i.e. the widespread availability of powerful word processors and of the Internet for extremely cheap distribution, would lead to an improvement in the quality of technical documentation and to easy access to it for everybody.

The reverse has happened: the quality of technical documentation has become worse and worse, with very rare exceptions, and access to much of what remains has become very restricted, either by requiring NDAs or by requiring very high prices (e.g. big annual membership fees for some industry standards organization).

A likely explanation for the worse and worse technical documentation is a reduction in the number of professional technical writers.

It is very obvious that the current management of most big companies does not understand at all the value of competent technical writers and of good product documentation; not only for their customers and potential customers, but also for their internal R&D teams or customer support teams.

I have worked for several decades at many companies, very big and very small, on several continents, but unfortunately only at one of them was the importance of technical documentation well understood by management, so the hardware and software developers had an adequate amount of time for writing documentation planned into their product-development schedules. Despite the fact that the project schedules at that company appeared to allocate much more time for "non-productive tasks" like documentation than other places did, in reality it was there that R&D projects were completed fastest and with the smallest delays over the initially estimated completion time, one important factor being that every developer understood very well what must be done in the future, what has already been done, and why.

wagwang|1 month ago

The obvious explanation is that the pace of writing software has sped up 100x but documentation has remained slow... until now.

vasco|1 month ago

With every job replaced by AI the best people will be doing a better job than the AI and it'll be very frustrating to be replaced by people that can't tell the difference.

But most people aren't that great at their jobs.

NemoNobody|1 month ago

Ahh, that was well written by a human, well done!

If that was more technical tho, like something more similar to technical writing... I would have had Copilot summarise it for me.

You are correct, the future is collaborative with AI, but not everything will still need to be collaborative...

Technical writing, like manuals and whatnot, is akin to a math problem that, post-calculator, has always been calculated by calculators, even by people who didn't need them.

It will not be better, there is absolutely loss, but it will still happen.

burroisolator|1 month ago

"Productivity gains are real when you understand that augmentation is better than replacing humans..." Isn't this where the job losses happen? For example, previously you needed 5 tech writers but now you only need 4 to do the same work. Hopefully it just means that the 5th person finds more work to do, but it isn't clear to me that Jevons paradox kicks in for all cases.

nlawalker|1 month ago

This was at #1 on the front page like an hour ago, and now after almost 100 new comments it’s off the front page at #40. What happened?

dang|1 month ago

The answer could be: (1) users flagged it; (2) mods downweighted it; and/or (3) it set off the flamewar detector, a.k.a. the overheated discussion detector.

In this case it was #3.

That's one of the ways the system autocorrects. A sensational/indignant post attracts upvotes because that's how upvotes work (this is a weakness of the upvoting system), and this triggers an overheated discussion, which trips the flamewar detector which penalizes the post. It's about as simple a feedback mechanism as a thermostat.

That's why it's not uncommon for something to be at #1 and have tons of upvotes and comments, and then suddenly plummet. We do review all the threads that get that particular penalty but sometimes it takes a while.

Edit: ok, I've reviewed it. In this case, the thread is actually pretty good. I'm not sold on the article*, but a good thread is enough to turn off the flamewar penalty in this case, and I've done so.

(* not a judgment about article quality in general, only about how good a fit it is or isn't for HN)

theletterf|1 month ago

I don't know how this works. @dang might have an explanation?

tlogan|1 month ago

However, the writing is on the wall: AI will completely replace technical writers.

The technology is improving rapidly, and even now, with proper context, AI can write technical documentation extremely well. It can include clear examples (and only a very small number of technical writers know how to do that properly), and it can also anticipate and explain potential errors.

ninalanyon|1 month ago

I remember the days when every large concern employed technical writers and didn't expect us programmers and engineers to write for end users. But that stopped decades ago in most places, at least as far as in-house applications are concerned, long before AI could be used as an excuse for firing technical writers.

Barathkanna|1 month ago

I agree with the core concern, but I think the right model is smaller, not zero. One or two strong technical writers using AI as a leverage tool can easily outperform a large writing team or pure AI output. The value is still in judgment, context, and asking the right questions. AI just accelerates the mechanics.

agentultra|1 month ago

A lot of this applies to programming as well. And pretty much everything people are using GenAI for.

If you want to see how well you understand your program or system, try to write about it and teach someone how it works. Nature will show you how sloppy your thinking is.

theshrike79|1 month ago

Documentation needs to be tested.

Someone has to turn off their brain completely and just follow the instructions as-is. Then log the locations where the documentation wasn't clear enough or assumed some knowledge that wasn't given in the docs.

beej71|1 month ago

I think using AI for tech documentation is great for people who don't really give a shit about their tech documentation. If you were going to half-ass it anyway, you can save a lot of money half-assing it with AI.

threethirtytwo|1 month ago

I'm not kidding when I say the tone feels AI-generated.

It's obviously not AI-generated, but I'm speaking more to the tonality of the latest GPT. It's now extremely hard to tell the difference.

theletterf|1 month ago

Author here. I'm human. I wrote the full thing. :)

Havoc|1 month ago

All valid points but I fear our brave new world cares not

billy99k|1 month ago

Getting emotional about it won't work. Companies only see results. If replacing your job with AI works, your job will be replaced.

j45|1 month ago

The fired writers should get together and start their own publications.

AI can’t generate insights far beyond what it’s trained on.

Their writing will be a different moat.

leosanchez|1 month ago

> The fired writers should get together and start their own publications.

What if the next version of AI model gets trained on their work ?

1980phipsi|1 month ago

How is this for an ordering:

Good human written docs > AI written docs > no docs > bad human written docs

osigurdson|1 month ago

>> liability doesn’t vanish just because AI wrote it

I think this is going to be a defining theme this year.

Starlevel004|1 month ago

Hey Claude, summarise this letter for me and write a response.

alameenpd|1 month ago

This was sooo well written (that’s the point I guess)

NitpickLawyer|1 month ago

Meh. A bit too touchy-feely for my taste, and not much in the way of good arguments. Some of the things touched on in the article are either extreme romanticizations of the craft or rather naive takes (docs are product truth? Really?! That hasn't been the case in ages, even with docs for multi-billion-dollar solutions written by highly paid, grass-fed, you-won't-believe-they're-not-human writers!)...

The parts about hallucinations and processes are also a bit dated. We're either at, or very close to, the point where "agentic" stuff works in a "GAN" kind of way (produce docs -> read docs and try to reproduce -> resolve conflicts -> loop back), which will "solve" both hallucinations and processes, at least at the quality of human-written docs. My bet is actually better in some places. Bitter lesson and all that. (At least for the 80% of projects where current human-written docs are horrendous. YMMV; artisan projects not included.)

What I do agree with is that you'll still want someone to hold accountable. But that's just normal business. This has been the case for integrators / 3rd party providers since forever. Every project requiring 3rd party people still had internal folks that were held accountable when things didn't work out. But, you probably won't need 10 people writing docs. You can hold accountable the few that remain.

PlatoIsADisease|1 month ago

I love AI and use it daily, but I still run into hallucinations, even with CoT/thinking models. I don't think hallucinations are as bad as people make them out to be, but I've been using AI since GPT-3, so I'm hyper-aware.

friartuck69|1 month ago

[deleted]

marstall|1 month ago

are you talking about the hashes (##, ###) etc in the subheadings? I think that's an intentional design thing, a bit of a nod to the back row, if you will.

murderfs|1 month ago

I don't think I've ever seen documentation from tech writers that was worth reading: if a tech writer can read code and understand it, why are they making half or less of what they would as an engineer? The post complains about AI making things up in subtle ways, but I've seen exactly the same thing happen with tech writers hired to document code: they documented what they thought should happen instead of what actually happened.

DeborahWrites|1 month ago

You sound unlucky in your tech writer encounters!

There are plenty of people who can read code who don't work as devs. You could ask the same about testers, ops, sysadmins, technical support, some of the more technical product managers etc. These roles all have value, and there are people who enjoy them.

Worth noting that the blog post isn't just about documenting code. There's a LOT more to tech writing than just that niche. I still remember the guy whose job was writing user manuals for large ship controls, as a particularly interesting example of where the profession can take you.

imtringued|1 month ago

A tech writer isn't a class of person. "Tech writer" is a role or assignment. You can be an engineer working as a tech writer.

Also, the primary task of a tech writer isn't to document code. They're supposed to write tutorials, user guides, how to guides, explanations, manuals, books, etc.

parados|1 month ago

> they documented what they thought should happen instead of what actually happened.

The other way around. For example the Python C documentation is full of errors and omissions where engineers described what they thought should happen. There is a documentation project that describes what actually happens (look in the index for "Documentation Lacunae"): https://pythonextensionpatterns.readthedocs.io/en/latest/ind...

saagarjha|1 month ago

Not everyone wants to write code.

jillesvangurp|1 month ago

I'm currently in the middle of restructuring our website. 95% of the work is being done by Codex: content writing, design work, implementation work, etc. It's still a lot of work for me, because I am critical about things like wording/phrasing and about not hallucinating things we don't actually do. But it's editorial work, not writing or programming work, and Codex is doing a pretty great job. Having a static website with a site generator means I can make lots of changes quickly via agentic coding.

My advice to tech writers would be to get really good at directing and orchestrating AI tools to do the heavy lifting of producing documentation. If you are stuck using content management systems or word processors, consider adopting a more code-centric workflow; the AI tools can work with it a lot better. And you can't afford to keep doing things manually that an AI does faster and better. Your value is in making sure the right documentation gets written and produced correctly, and in correcting things that need correcting or perfecting. It's not in doing everything manually; you need to cherry-pick where your skills still add value.

Another bit of insight is that a lot of technical documentation now has AIs as its main consumer. A friend of mine who runs a small SaaS has been complaining that nobody actually reads his documentation (which is pretty decent); instead they rely on LLMs to read it for them. The more documentation you have, the less people will read all of it. Or any of it.

But you still need documentation. It's easier than ever to produce it. The quality standards for that documentation are high and increasing. There are very few excuses for not having great documentation.