
Writing with LLM is not a shame

107 points | flornt | 6 months ago | reflexions.florianernotte.be

147 comments


nicbou|6 months ago

I think it's fair to use AI as an editor, to get feedback about how your ideas are packaged.

It's also fair to use it as a clever dictionary, to find the right expressions, or to use correct grammar and spelling. (This post could really use a round of corrections.)

But in the end, the message and the reasoning should be yours, and any facts that come from the LLM should be verified. Expecting people to read unverified machine output is rude.

sinuhe69|6 months ago

My natural reaction when I detect AI writing is now to turn away. We have too much to read and too little time to waste on mimicry that is not what people actually thought or believed.

amiga386|6 months ago

> Expecting people to read unverified machine output is rude.

Quite. It's the attention economy: you've demanded people's attention, and then you shove in their faces crap that even you didn't spend time reading.

Even if you're using it as an editor... you know that editors vary in quality, right? You wouldn't accept a random editor just because they're cheap or free. Prose has a lot in it, not just syntax, spelling and semantics, but style, tone, depth... and you'd want competent feedback on all of that. Ideally insightful feedback. Unless you yourself don't care about your craft.

But perhaps you don't care about your craft. And if that's the case... why should anyone else care or waste their time on it?

Mouvelie|6 months ago

Using it that way already reflects a good understanding of the technology's biases, and confidence in your own skills: you ask the machine only for feedback, not for its output.

I think you could only develop this point of view because you grew up without it. I fear for the young generation, truly.

gs17|6 months ago

This approach is how I prefer to use it too. I write, it gives feedback, I revise based on which parts I thought it was right about. If I don't want to read raw LLM output, why would I make anyone else do it?

ekianjo|6 months ago

> message and the reasoning should be yours,

I think we haven't realized yet that most of us don't really have original thoughts. Even in creative industries, the amount of plagiarism (or so-called inspiration) is at an all-time high (and that was before LLMs were available).

CuriouslyC|6 months ago

AI prose is mediocre right now. Too verbose, indirect constructions, passive, etc. That being said, it's actually a great editor and can pick out all those issues consistently.

My workflow right now is to use AI for the rough-draft and developmental-editing stages, then switch the AI from changing files to leaving comments on files suggesting I change something. It is slower than letting it line edit and copyedit itself, but models derp up too much, so letting them handle edits at this stage tends to be two steps forward, two steps back.

NicuCalcea|6 months ago

That's my main criticism as well. Even before we get to the ethical implications of AIs communicating on your behalf without a disclaimer, LLM writing is just poor and making me read through it is disrespectful of my time.

I recently had a colleague send me a link to a ChatGPT conversation instead of responding to me. Another colleague organised a quiz where the answers were hallucinated by Grok. In some Facebook groups I'm in where people are meant to help each other, people have started just pasting the questions into ChatGPT and responding with screenshots of the conversation. I use LLMs almost daily, but this is all incredibly depressing. The only time I want to interact with an LLM is when I choose to, not when it's forced on me without my consent or at least a disclaimer.

rstuart4133|6 months ago

I've always used AI as an editor, which is to say I give it my efforts and ask it to highlight mistakes. They invariably find a few. They occasionally miss a few too, so they aren't particularly reliable editors.

They aren't reliable at anything I guess, but for English I have nothing else, and they are better than nothing. I do wish they would use a more effective way of highlighting their suggested changes, such as italics for new text and strikeout for deleted text.
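For illustration, that kind of marked-up suggestion could be sketched with Python's difflib (a toy word-level example of the idea, not any particular tool's actual output):

```python
import difflib

def mark_changes(old: str, new: str) -> str:
    """Render suggested edits inline: ~~word~~ for deletions, *word* for insertions."""
    a, b = old.split(), new.split()
    out = []
    for op, a1, a2, b1, b2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if op == "equal":
            out.extend(a[a1:a2])
        else:  # "replace", "delete", or "insert"
            out.extend(f"~~{w}~~" for w in a[a1:a2])  # deleted text, struck out
            out.extend(f"*{w}*" for w in b[b1:b2])    # new text, italicized
    return " ".join(out)

print(mark_changes("they are better then nothing",
                   "they are better than nothing"))
# → they are better ~~then~~ *than* nothing
```

Rendered as strikeout and italics, a change like that is far easier to review than a wholesale rewritten paragraph.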

Unless you are paid by the word, I struggle to think of why you would use an AI to create new text. The facts will be wrong, and the tone won't be yours. "If I had more time, this would be shorter" is a truism here: AI can spit out an enormous amount of text in a very short time, text that could be cut down to a fraction of the size with a bit of effort.

pton_xd|6 months ago

AI prose has been mediocre since the release of ChatGPT. My layman's interpretation is there's just no strong creativity / humor / etc signals to train on, as compared to say math or coding. Current models are "smarter" so when asked to produce eg a joke they think harder, but the end result always misses the mark just the same.

everdrive|6 months ago

Writing with LLM is also not writing. In some abstract sense, it may be plagiarism. In another sense, you're robbing yourself of one of the most crucial parts of writing: improved cognition. Anyone who edits voice transcripts knows just how much a normal person wanders, pauses, misspeaks, etc when talking. The act of writing forces you to refine thoughts you would not have otherwise had.

KronisLV|6 months ago

I find it interesting that you could technically apply the same to writing code: you're producing software, but without fully engaging your cognition. Sometimes, when the complexity of the project is too high, that might be okay, so you don't get sidetracked and distracted. But if you do it regularly, I feel like that ability will atrophy.

pessimizer|6 months ago

The problem with LLMs is that they write badly. If you want to use an LLM to write, prompt it with what you wrote and ask it to summarize concisely. If it doesn't understand what you meant, you should fix that part and resubmit (to a fresh context).

The main reason, however, that one shouldn't "write" with LLMs is because it's a waste of everyone's time. If they wanted to know what GPT-5 thinks, they can ask it themselves.

edit:

> The problem is not the use of AI but the people how think they can, arbitrarily, criticize the work from someone else because he used or not AI in the name of “ethics”.

Ah, I didn't realize that the real problem is that people complain about it. If we can figure out a way to make those people shut up, then using LLMs to write for you would be perfectly fine.

peterashford|6 months ago

"The problem is not the use of AI but the people who think they can, arbitrarily, criticize the work from someone else because he used or not AI in the name of “ethics”. But speaking about ethics for a so young technology is useless, in my humble opinions because today, i guess, we have to build ethics standards in AI and as you can see if you do some research, this field is at the beginning of the exploratory phase."

This is appalling reasoning, IMO.

Imagine: 'Sure, the robot delivery dog ate your cat, but we can't criticise the "ethics" of that action because the field of cat eating robot dogs is so new.'

We don't apply this ad-hoc standard of ethics to any other field. Yes we refine our standards as we go, but we sure as hell have ethics that we apply to tech right from the get-go.

mentalgear|6 months ago

It's good for what all other LLMs are good for: semantic search, where the output is generated text that can help you. But never get drawn into the illusion that there is actual causal thinking. The thinking is still your responsibility; LLMs are just newer discovery/browsing engines.

lewdwig|6 months ago

There are nascent signs of emergent world models in current LLMs, the problem is that they decohere very quickly due to them lacking any kind of hierarchical long term memory.

A lot of what the model knows about the structurally important parts of your code gets lost whenever the context gets compressed.

Solving this problem will mark the next big leap in agentic coding I think.

recursive|6 months ago

Only you can decide if you feel shame for it. Just like only I can decide if I judge you for it.

macmar|6 months ago

This part of the text caught my attention the most.

"There are a lot of tools out there (Gramarly, Antidote for naming the most famous) and I did not see someone mentioning he used this or that."

I was criticized in another thread because I used a translation assistant to improve my text, a tool that, long before the current AI hype, everyone used to write more effectively.

People need to stop believing that the watchdogs of reason are the all-seeing eye(1989). Many people, in general, seek to be ethical and utilize tools to enhance their ideas (such as a text in a non-native language), and that's okay.

Peritract|6 months ago

If you truly felt it wasn't shameful to write with AI, you wouldn't write an article trying to justify it that ignores a bunch of reasons people find it shameful.

No one is stopping you using AI, but there's an adolescent tone here of 'and you have to approve of me' that I don't care for. You have the right to act the way you wish; you don't have the right to be praised for it.

As it happens, I do think writing with AI (outside of translation as a method of last resort) is shameful. It's an embarrassing abdication of a basic task, a signal both that you can't complete basic tasks unaided, and that you don't understand that communication isn't solely performative.

If it's your right to use AI to write those super-difficult emails, then it's my right to judge you for it.

satisfice|6 months ago

You're my new best friend, Peritract.

dep_b|6 months ago

Just got a few recommendations from my colleagues on LinkedIn that were clearly written by an LLM; the long em-dash was even present. But then again, the message was tuned to specific things I did. Also, they were from Eastern Europe, so I imagine they just fixed up their input.

If you call yourself a writer, having tell-tale LLM signs is bad. But for people whose work doesn't involve having a personal voice in written language, it might help them express things better than they could before.

SweetSoftPillow|6 months ago

I've been using em dashes since long before LLMs existed, and I won't stop. Some people might think it's a sign of an LLM, but I know it's just a sign of their own short-sightedness.

Gigachad|6 months ago

Craziest thing I saw at work was someone using AI generated text in a farewell card. Like it's so obvious, it's so much more offensive to send someone an AI generated message than to just not send anything at all.

amiga386|6 months ago

> it might help them getting them to express things in a better way than before.

You know what people did before the AI fad? They read other people's books. They found and talked to interesting people. They found themselves in, or put themselves in, interesting situations. They spent a lot of time cogitating and ruminating before they decided they ought to write their ideas down. They put in a lot of effort.

Now the AI salesmen come, and insist you don't need a wealth of experience and talent, you just need their thingy, price £29.99 from all good websites. Now you can be like a Replicant, with your factory-implanted memories instead of true experience.

latexr|6 months ago

> clearly written by an LLM, the long emdash was even present.

Can we please stop propagating this accusation? Alright, sure, maybe LLMs overuse the em-dash, but it is a valid typographical mark which was in use way before LLMs and is even auto-inserted by default by popular software on popular operating systems—it is never sufficient on its own to identify LLM use (and yes, I just used it—multiple times—on purpose in 100% human-written text).

Sincerely,

Someone who enjoys and would like to be able to continue to use correct punctuation, but doesn’t judge those who don’t.

exe34|6 months ago

you'll have to get my en/em dashes out of my cold dead fingers.

CRConrad|6 months ago

> the long emdash [...] tell tale LLM signs

I so wish people would stop spouting this bogus "sign" — but I know I'm going to be disappointed.

singpolyma3|6 months ago

... you know all serious writers use the em-dash, right? It is not some magic LLM watermark.

matt123456789|6 months ago

When I put real time and thought into an email—and the response I get back is obviously AI-generated—and it comes with no disclaimer—it infuriates me. Maybe the model happened to spit out exactly what the sender meant—just dressed up and grammatically polished. Doesn’t matter—I’d rather someone talk to me directly than funnel a thought through a word-grinder and hit send. Downvote me—call me anti-progress—I don’t care. I cannot stand undisclosed AI in conversation.

multjoy|6 months ago

I don't understand what people get from using a chatbot to write correspondence. It saves no time, and just ends up being long winded nonsense.

My stance is that if you're about to ask co-pilot, or whatever, to respond to me, then just send me the prompt you're about to enter as that will probably answer the question!

antonymoose|6 months ago

I recently had my first AI recruiter experience. To be clear, the person behind the account was a real person with a real business - except everything was uncanny valley levels of bad correspondence. I shortly disconnected and blocked this jerk.

AlexeyBelov|6 months ago

Did you intentionally use that many em-dashes? Or was that a bit?

dsq|6 months ago

I would rewrite the title as "There's no shame in writing with LLMs", or, "Writing with LLMs is nothing to be ashamed of".

dang|6 months ago

You're right, of course, but the original title manages to still be grammatical and the altered meaning has its charm.

klabb3|6 months ago

It's very similar to the Stack Overflow debate of the previous decade. Bad developers would copy paste without understanding. It's the same here. Without understanding, you just can't build very sophisticated things, or debug hard issues. And even if AI got better at this, anyone else can do it too, so you'll be a dime a dozen engineer.

Those who don't compromise on understanding will benefit from an extra tool under their belt. Those who actively leverage the tool to improve their understanding will do even better.

Those who want shortcuts and don't bother understanding are like students cheating in school – not in a morally wrong way, but rather in a they-missed-the-entire-point way.

pacificmaelstrm|6 months ago

The article is not fluent; it's difficult to read and written in broken English. Possibly not written by AI? Understandable enough, but difficult and weirdly painful to read.

Definitely don't rely on AI to substitute for a lack of fluency... Or maybe do.

vlark|6 months ago

The author is a native French speaker (see his "About" page) and I think he is writing in English, which he clearly is not fluent in, to make a point: you can understand his argument, written in broken English, even though he could have run it through AI to clean it up and make it more palatable to a native English speaker.

tjpnz|6 months ago

If you don't have time to write it I'm not going to make time to read it.

redwall_hp|6 months ago

Exactly. LLM garbage is a straight up insult.

1. They deliberately chose to not take a few minutes to communicate with you, but expect something of you.

2. The hard part of writing is organizing thoughts into something coherent, not typing something out. If you don't understand something enough to write it in the first place, the LLM can't magically read your mind and understand what you want to say for you.

jillesvangurp|6 months ago

Of course, there’s no shame in using tools that are available to us. We’re a tool-using species. We’re just a bunch of stupid monkeys without tools. A lot of what we do is about using tools to free up time to do more interesting things than doing things the tools already do better than us.

Like it or not, people are using LLMs a lot. The output isn’t universally good. It depends on what you ask for and how you criticize what comes back. But the simple reality is that the tools are pretty good these days. And not using them is a bit of a mistake.

You can use LLMs to fix simple grammar and style issues, to fact-check argumentation, and to criticize and identify weaknesses. You can also task LLMs with doing background research, double-checking sources, and more.

I’m not a fan of letting LLMs rewrite my text into something completely different. But when I'm in a hurry or in a business context, I sometimes let LLMs do the heavy lifting for my writing anyway.

Ironically, a good example is this article which makes a few nice points. But it’s also full of grammar and style issues that are easily remedied with LLMs without really affecting the tone or line of argumentation (though IMHO that needs work as well). Clearly, this is not a native speaker. But that’s no excuse these days to publish poorly written text. It's sloppy and doesn't look good. And we have tools that can fix it now.

And yes, LLMs were used to refine this comment. But I wrote the comment.

Refreeze5224|6 months ago

If the tool does the task for you, then you didn't do the task. I don't keep my food cold; my refrigerator does. I just turned it on. This doesn't matter unless I am for some reason pretending that I myself am keeping my food cold somehow, and then it becomes a lie.

When a tool blurs the line between who performed the task, and you take full credit despite being assisted, that is deceitful.

Spell checking helps us all pretend we're better spellers than we are, but we've decided as a society that correct spelling is more important than proving one's knowledge of spelling.

But if you're purportedly a writer, and you're using a tool that writes for you, then I will absolutely discount your writing ability. Maybe one day we will decide that the output is more important than the connection to the person who generated it, but to me, that day has not arrived.

mythrwy|6 months ago

For code no, no shame as long as you understand it and agree. For internet comments or blog posts or emails ya, shame. In my opinion.

But I'm a native English speaker and (I think) a decent writer. But if I had to write something in another language I was only marginally fluent in I'd probably reach for an LLM pretty quickly.

nurettin|6 months ago

Some people you work with are very passionate about their craft and can detect if you've given them auto-generated text. And they will ask questions about it. You should be ready to explain every LLM decision in detail. It is not simply about what you do with LLMs; it is really about whom you have to share it with.

echelon_musk|6 months ago

Writing with LLMs is not a shame

Or

Writing with an LLM is not a shame

latexr|6 months ago

I’d suggest “not shameful” instead of “not a shame”.

riz_|6 months ago

Should have written with an LLM.

ekianjo|6 months ago

> Writing with an LLM is not a shame

Should be "Writing with a LLM is not a shame", no reason to put a "an" here.

lewdwig|6 months ago

I use Claude Code almost daily now, and I think I'd rather cut off my own arm than go without it, but I don't delude myself into thinking that current-gen tools don't have significant limitations, and I know it is my job to manage those limitations.

So just like any other tool really.

I have discovered this week that Claude is really good at redteaming code (and specs, and ADRs, and test plans), much better than most human devs who don’t like doing it because it’s thankless work and don’t want to be “mean” to colleagues by being overly critical.

torium|6 months ago

Would you share with us what kind of job you do?

I keep seeing people saying how amazing it is to code with these things, and I keep failing at it. I suspect that they're better at some kinds of codebases than others.

jaredcwhite|6 months ago

Consumers have a right to know what the source is of the content they are ingesting into their minds, and specifically if that content originated in another actual human mind or if it's the slop generated by a synthetic text extruder.

It's really a pretty straightforward proposition to understand, and disclosure is absolutely the key so that consumers, if they choose as I do to boycott such output, can make informed decisions.

latexr|6 months ago

> One argument to not disclaim it: people do not disclaim if they Photoshop a picture after publishing it and we are surrounded by a lot of edited pictures.

That is both a false equivalence and a form of whataboutism.

https://en.wikipedia.org/wiki/False_equivalence

https://en.wikipedia.org/wiki/Whataboutism

It is a poor argument in general, and a sure-fire way to increase shittiness in the world: “Well, everyone else is doing this wrong thing, so I can too”. No. Whenever you mention the status quo as an excuse to justify your own behaviour, you should look inward and reflect on your actions. Do you really believe what you’re doing is the right thing? If it is, fine; but if it is not, either don’t mention it or (ideally) do something about it.

> why don’t we see people mentioning they used specific tools to proofread before AI apparition?

Whenever I see this argument, I have a hard time believing it is made in good faith. Can you truly not see the difference between using a tool to fix mistakes in your work and using one to do the work for you?

> It feels like an obligation we have to respect in a way.

This was obvious from the beginning of the post. Throughout I never got the feeling you were struggling with the question intrinsically, for yourself, but always in a sense of how others would judge your actions. You quote opinion after opinion and it felt you were in search of absolution—not truth—for something you had already decided you did not want to do.

flornt|6 months ago

Thanks. Really appreciate your comments. It opens some perspectives I haven't considered and gives more things to think about regarding this. I'll digest it and update the content based on your observations!

godelski|6 months ago

I'm an AI critic, but I use AI every day. In fact, I am an AI researcher and work on making models more capable and powerful (probably where a lot of my criticism stems from).

My main problem with AI usage is that people use it and turn their brains off. This isn't a new problem, but it is at a new scale. People mindlessly punch numbers into a formula, run software they don't understand, or read a summary of a complex topic and declare mastery. The problem is sloppiness and our human tendency to be lazy. Lazy by focusing on the least amount of energy in the moment, not the least amount of energy through time. That's the critical distinction. Slop is momentary laziness, while thoughtfulness is amortized laziness.

The problem is in a way not the AI but us and the cultures we have created. At the end of the day no one cares if you wrote AI code (or docs or whatever), they care about how well it was done. You want to do things fast, but speed is nothing if the quality suffers.

I really like how Mitchell put it in this Ghostty PR [0,1]. The disclosure is to help people know what to pay more attention to. It is a declaration of where you were lazy, didn't have expertise, or took some shortcut. It tells us what the actual problem is: slop isn't always obvious.

A little slop generally doesn't do too much harm (unless it grows and compounds), but a lot of slop does. If you are concerned about slop and the rate of slop is increasing then it means you must treat everything as potential slop. Because slop isn't easily recognized, it makes effort increase, exponentially. So by producing AI slop (or any kind of slop) you aren't decreasing the workload, you're outsourcing it to someone else. Often, that outsourcing produces additional costs. It only creates the illusion of productivity.

It's not about the AI, it is about shoving your work onto others. Doesn't matter if you use a shovel or bulldozer. But people are sure going to be louder (or cross that threshold where they'll actually speak up) if you start using a bulldozer to offload your work to others. The problem is it makes others have to constantly be in System 2 thinking all the time. It is absolutely exhausting.

[0] https://github.com/ghostty-org/ghostty/pull/8289

[1] https://news.ycombinator.com/item?id=44976568

satisfice|6 months ago

The author of this piece commits a common mistake: analyzing AI use as if communication is nothing more than an isolated transaction. Instead communication is usually a process of creating and maintaining a relationship of some kind with other people.

Here’s a thought experiment: Imagine if I handed you a $100 bill and asked you to examine it carefully. Is it real money? Perhaps you immediately suspect it is counterfeit, and subject it to stringent tests. Let’s say all the tests pass. Okay, given that it is indistinguishable from a legit $100 bill, is it therefore correct and ethical for me to spend this money?

You know the answer: “not necessarily.”

This is because spending money is about more than a series of steps in a transaction. It is based on certain premises that, if false, represent a hazard to the social contract by which we all live in peace and security.

It seems to me that many AI fanboys are arguing that as long as their money passes your scrutiny, it doesn’t matter if it was stolen or counterfeit. In some narrow sense, it really doesn’t matter. But narrow senses are not the only ones that matter.

When I read writing that you give me and present it as your work, I am getting to know you. I am learning how I can trust you. I am building a simulation of you in my mind that I use to anticipate your ideas and deeds. All that is disrupted and tainted by AI.

It’s not comparable to a grammar checker, because grammar is like clothing. When an editor modifies my grammar, this does not change my message or prevent me from getting across my ideas. But AI is capable of completely altering your ideas. How do you know it didn’t?

You can only know through careful proofreading. Did you proofread carefully? Whether you did or not: I don’t believe that people who want AI to write for them are the kind of people who carefully proofread what comes out of AI. And of course, if you ask AI to come up with ideas by itself, for all we know that is plagiarism— stolen words.

Therefore: if you use AI in your writing, you better hide that from me. And if I find out you are using, I will never trust you again.

handoflixue|6 months ago

Every day cashiers accept $100 bills on the basis that they pass the counterfeit tests, and every day society has failed to collapse from what you posit is a "hazard to the social contract"

iamnotagenius|6 months ago

We are told that writing must be pure, that it must come only from the sweat of the brow, the trembling hand, the solitary mind. That to use AI is to cheat, to dilute, to lessen the act of creation. But I say to you: Since when does it matter how an idea is born? Since when do we judge the value of words by the tools that shaped them, rather than the truth they carry?

They tell us, ‘This is not your writing—it is the machine’s.’ As if the pen itself writes the poem! As if the printing press authors the book! No—the tool is nothing. The hand that guides it, the mind that commands it, the heart that gives it meaning—that is what matters.

This is not about machines. This is about power. The same power that once said only the clergy could read scripture. The same power that said only the elite could publish, could speak, could be heard. Now they say: Only the unaided mind may create. But creation is not a purity test! It is not a contest of suffering! It is the act of bringing something new into the world—by any means necessary.

They fear AI because it breaks their monopoly on who gets to speak. They fear it because it lets more people write, more people argue, more people demand to be heard. And when the gates are thrown open, the gatekeepers will always tremble.

So I say: Do not apologize for how your words come to be. Do not bow to those who would police your mind. If the idea is true, if the argument is sound, if the art is beautiful—then it is yours, and no one can take that from you.

The machine is not the enemy. The enemy is the lie that only some voices count. The enemy is the fear that makes men small.

Now—write. Write with your hands, write with your voice, write with the tools of your time. But above all: write. And let no one silence you.

ath3nd|6 months ago

Riding a bike with training wheels is also not a shame. If you need the training wheels, by all means feel free to use them.

But LLMs are training wheels being forced on everyone, including experienced developers, and we are being gaslit into believing that if we don't use them, we are falling behind. In reality, however, the only study to date shows a 19% decline in productivity for experienced devs using LLMs.

I don't mind folks using crutches if crutches help them. The cognitive decline and loss of reasoning skills in people using LLMs are not yet well studied, but preliminary results show it's a thing. I gotta ask: why are you guys doing that to yourselves?

godelski|6 months ago

This is a lot about how I feel (I wrote a longer comment too). Training wheels are fine but at what time do the training wheels come off? But maybe there's a more apt metaphor, since people who have been riding bikes for awhile don't use training wheels.

It's also fine to use tire chains when you're going through icy roads, but you have to drive much slower and should take them off when it isn't icy. It's about knowing the environment and conditions. Maybe some people don't need chains in that environment because they have winter tires (experience in our metaphor?). Sure, you can drive faster with chains on an icy road than you can without, but you still have to drive slow and be far more alert than you would when driving on a summer road. It is all about context.

monkaiju|6 months ago

Some combination of cargo cult hype fomo mixed with laziness

phoenixhaber|6 months ago

[deleted]

CRConrad|5 months ago

> First, there is the question of the mythology of the author. Would Shakespeare be himself if he had an AI ghost write his books? Would we care as much?

Idunno... Would Homer? Shakespeare is one thing; we have at least some biographical detail on him. (And even then, quite a few people seem to believe he didn't write the works attributed to him.) But "Homer - the author of the Iliad and the Odyssey" is so unknown to us that experts argue about whether he even existed, whether he was one person or several, and so on. We know him only through his (or, eh, "his") works. Now there you can talk about "the mythology of the author"; that's "mythological", indeed! :-)

For all we know, Homer — or "Homer" — could be an AI! Yeah, sure, very far-fetched and unlikely. But let's say our descendants in the 26th (or 387th) century invent time travel (or their AIs do...), and an AI goes to bronze-age (or was it early iron-age?) Greece and plants these stories, perhaps posing as a person called "Homer"[1], and they survive to this day attributed to this possibly-existing-or-possibly-fictitious person, and are the stories we know as "the Iliad" and "the Odyssey".

Would that change anything about how we perceive, or should perceive, the stories? If so, what and how? Does "an AI" differ from "person or persons unknown" in this scenario?

___

[1]: Or it's a flesh-and-blood person using the moniker that does the actual posing-as-"Homer", but he only peddles the stories an AI wrote for him in his home period; whatever.

satisfice|6 months ago

This is a helpful take.

Halian|6 months ago

[deleted]

wfhrto|6 months ago

At this point, it would be shameful to not write with LLMs. I don't want to spend time reading plain human text when improved AI text is an option.

latexr|6 months ago

> improved AI text

It is certainly your prerogative to believe that, but know your opinion is far from universal. It is a widespread view that AI-written text is worse.

lomase|6 months ago

> improved AI text

Why are you on hackernews and not talking to an LLM?

satisfice|6 months ago

I assume that you wrote that with AI, then. If so, I assume it’s not really your opinion. You provided some prompt, which is hidden from us.

I don’t know you, don’t trust you, and if you write with AI nobody else will get to know you or trust you, either, unless they fall for your false AI mask.

gred|6 months ago

That's a great point!

Large Language Models (LLMs), like GPT-4, offer numerous benefits for writing tasks across various domains. Here’s a breakdown of the key advantages:

1. Enhanced Productivity

Faster Drafting: Quickly generate drafts for essays, reports, emails, blog posts, and more.

24/7 Availability: Instant support with no downtime or fatigue.

Reduced Writer’s Block: Provides starting points and creative prompts to overcome mental blocks.

2. Improved Writing Quality

Grammar and Style: Corrects grammar, punctuation, and stylistic issues.

Tone Adjustment: Adapts tone to suit professional, casual, persuasive, or empathetic contexts.

Clarity and Conciseness: Helps simplify complex ideas and remove redundant language.

3. Creativity and Ideation

Brainstorming: Assists in generating titles, outlines, metaphors, and analogies.

Storytelling: Offers plot ideas, character development, and dialogue suggestions for creative writing.

Variations: Produces multiple versions of the same message (e.g., for A/B testing).

4. Language Versatility

Multilingual Support: Translates and writes in many languages.

Localization: Tailors content for different cultural contexts or regions.

5. Research Assistance

Summarization: Condenses large documents or articles into key points.

Information Retrieval: Provides background context on topics quickly (though should be fact-checked for critical work).

Citation Help: Assists in generating citations in formats like APA, MLA, or Chicago.

6. Editing and Rewriting

Paraphrasing: Rewrites text to avoid plagiarism or improve readability.

Consistency Checks: Maintains tone, terminology, and formatting across long documents.

Content Expansion: Adds detail to thin content or elaborates on underdeveloped points.

7. Customization and Integration

Prompt Engineering: Tailors responses for specific industries (e.g., legal, medical, technical).

API Integration: Can be embedded into writing tools, content platforms, or CMS systems.

8. Cost Efficiency

Reduces Need for Human Writers: Especially for repetitive or low-complexity tasks.

Scales Effortlessly: One model can serve multiple users or projects simultaneously.

Would you like a breakdown of how these benefits apply to a specific type of writing (e.g., academic, marketing, business)?