I am 100% behind this. I've been browsing Hacker News since I started in tech; it's the only forum I regularly browse and participate in, simply because the quality of submissions and conversations is so high. There have been more AI-related articles this past year, and it only seems to be ramping up. I personally haven't found the AI side of the comments to be as big of a deal, but dang and tom might be doing more than I realize on that front.
Though I do wish we'd see fewer AI-related posts on the front page. They simply aren't sparking curiosity; it's the same thing wrapped in a different format: a different person commenting on our struggles and wins with AI, the 10th piece of software "rewritten" by an AI.
At this point there should nearly be a "tax" on the category; as of this moment I count 8-10 posts on the front page related to AI / LLMs. It's a hot field, but I come to Hacker News to partake in discussions about things that are interesting, and many of those just don't cut it, in my opinion.
The dynamics of content production are shifting hard right now. Things that used to signal something interesting are being generated in minutes with little thought. It's getting democratized, but also commoditized.
It's too soon to know how this is going to shake out, so we should resist the temptation to impose rules prematurely. And we should especially not do so out of resistance to change (when has that ever worked out?)
But we'll do what we need to do to keep our heads above water. Example: https://news.ycombinator.com/showlim. I figure pragmatics are fine as long as one keeps adjusting.
I feel the same and find myself extending it beyond forums. I've started skipping over articles about AI more and more from authors I normally enjoy reading because so few of those articles end up being particularly interesting or insightful.
AI is obviously an important topic but it has been discussed to absolute death the past couple years and very few people have anything useful to add at this point. Things will of course evolve and change in the near term but someone speculating that maybe this will happen or that will happen isn't very useful.
Given the risks and unknowns I think we should collectively be treating it as a major risk to our economic and national security, and figuring out how to mitigate the downside risks without stifling the upside. But most of the people in power have zero interest in doing that so we're all going to YOLO this in real time.
> Though I do wish we'd see fewer AI-related posts on the front page. They simply aren't sparking curiosity; it's the same thing wrapped in a different format: a different person commenting on our struggles and wins with AI, the 10th piece of software "rewritten" by an AI.
Exactly. I feel like HN has never been this boring. Enough of the slop, let’s talk about interesting stuff again!
If you haven't yet checked it out, I'd recommend taking a look at Tildes for similarly high quality submissions/conversations as on HN. It really is such a breath of fresh air compared to most other platforms.
I personally joined HN because of various AI discussions.
Comparatively, other sites such as Reddit, Twitter, and YouTube just shill content, applications, or products. A ton of the posts on Reddit are just AI-written ffmpeg wrappers that no one should care about, but apparently people do...
Using AI to write content is seen so harshly because it violates the previously held social contract that it takes more effort to write messages than to read messages. If a person goes through the trouble of thinking out and writing an argument or message, then reading is a sufficient donation of time.
However, with the recent chat based AI models, this agreement has been turned around. It is now easier to get a written message than to read it. Reading it now takes more effort. If a person is not going to take the time to express messages based on their own thoughts, then they do not have sufficient respect for the reader, and their comments can be dismissed for that reason.
This is very well put, and captures my feelings on it. I take it as disrespect that someone would have any expectation for me to read something they can’t be bothered to write. LinkedIn is a great example - my entire professional network is just spamming at this point, which drowns out others that DO put in any effort.
If it takes longer to read, it's not an AI problem, but the author failing to catch that the comment is too drawn out. I don't see how it is a problem to have AI write a comment if you agree with the content. If it is bad content, it will eventually reflect badly on the author anyway.
When I have AI write things for me, I'm spending a good amount of time on it - certainly longer than it takes to read. I'm also usually editing it quite a bit. Maybe I'm an outlier, but I still don't think it's appropriate to make a blanket statement about using AI to write content violating this social contract you described.
Where does the line fall? I can use an LLM to help form new and novel thoughts into prose, right? To structure and present it in conventional language rather than stream of thought. Is that disrespectful? It doesn't feel so.
I guess, in theory, this can eventually be countered by people using LLM browser integrations to tell them whether comments are worth reading (and maybe to summarize long comments). Is anyone currently working on that? It might be interesting to see.
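A minimal sketch of what such a filter could look like, assuming some pluggable scoring backend (the `score_fn` interface, the 0-10 scale, and the stand-in `length_score` backend are invented for illustration; a real integration would call an LLM API instead):

```python
# Sketch of an "is this comment worth reading?" filter. The scoring
# backend is injected so the filtering logic stays testable; a real
# browser integration would plug in an LLM call here.

from typing import Callable

def worth_reading(comment: str, score_fn: Callable[[str], float],
                  threshold: float = 5.0) -> bool:
    """Return True if the backend scores the comment at or above threshold."""
    return score_fn(comment) >= threshold

def filter_thread(comments: list[str], score_fn: Callable[[str], float],
                  threshold: float = 5.0) -> list[str]:
    """Keep only the comments the backend considers worth reading."""
    return [c for c in comments if worth_reading(c, score_fn, threshold)]

# A trivial stand-in backend: longer comments score higher.
# (Obviously not a real quality signal; it only shows the plumbing.)
def length_score(comment: str) -> float:
    return min(10.0, len(comment.split()) / 5)
```

Whether people would trust a model to pre-filter human conversation is, of course, the open question.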
It's not just about the increase in volume, it's about the delta between the prompt and the generation.
If the generation merely restates the prompt (possibly in prettier, cleaner language), then usually it's the case that the prompt is shorter and more direct, though possibly less "correct" from a formal language perspective. I've seen friends send me LLM-generated stuff and when I asked to see the prompt, the prompts were honestly better. So why bother with the LLM?
But if you're using the LLM to generate information that goes beyond the prompt, then it's likely that you don't know what you're talking about. Because if you really did, you'd probably be comfortable with a brief note and instructions to go look the rest up on one's own. The desire to generate more comes from either laziness or else a desire to inflate one's own appearance. In either case, the LLM generation isn't terribly useful since anyone could get the same result from the prompt (again).
So I think LLMs contribute not just to a drowning out of human conversation but to semantic drift, because they encourage those of us who are less self-assured to lean into things without really understanding them. A danger in any time but certainly one that is more acute at the moment.
We've all heard the phrase "the sum of all human knowledge".
I've been feeling more and more that generative AI represents the average of all human knowledge. Which has its place. But a future in which all thought and creativity is averaged away is a bleak one. It's the heat death of thought.
Thought and creativity won't be averaged away because human beings have a drive for these things. This just raises the bar for it. And why not? We get complacent when not pushed.
Dostoevsky said that if all human knowledge could ever be reduced to 2 + 2 = 4, man would stick out his tongue and insist that 2 + 2 = 5. That was a 19th century formulation—he was a contemporary of Boole. I wonder what the equivalent would be for the LLM era.
Perhaps closer to “the mean vector: the point such that the outbound vectors to the different training texts are, in sum, the smallest”? I assume that’s a property of neural networks anyway, though I’m out of date on the current math for them.
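That intuition can be made precise: the centroid is exactly the point whose summed squared distance to a set of vectors is smallest. This is a one-line calculus fact, not specific to any particular network:

```latex
% Let x_1, ..., x_N be training-text embeddings and define
f(\mu) = \sum_{i=1}^{N} \lVert x_i - \mu \rVert^2
% Setting the gradient to zero:
\nabla f(\mu) = -2 \sum_{i=1}^{N} (x_i - \mu) = 0
\quad\Longrightarrow\quad
\mu^{*} = \frac{1}{N} \sum_{i=1}^{N} x_i
```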
> I've been feeling more and more that generative AI represents the average of all human knowledge.
No, it's far worse. It's the mode of all human knowledge. The amount of effort you have to put into an LLM to get it to choose an option that isn't the most salient example of anything that could fit as a response is monumental. They skip exact matches for most common matches; it's basically a continuation of when search engines stopped listening to your queries and just decided what query they wanted to respond to - and it suddenly became nearly impossible to search for people who had the same first name as anyone famous or in the news.
I've tried a dozen times to get LLMs to find authors or papers for me, where I describe what I remember about them fairly exactly. They deliver me a bunch of bestsellers and popular things, over and over again, that don't match large numbers of the criteria I've laid out at all.
It's why they're dumb and can't accomplish anything original. It's structural. They're inherently biased to deliver lowest common denominator work. If you're trying to deliver something original or unusual, what bubbles up is samplings of the slop that surrounds us every day. They're fed everything, meaning everything in proportion to its presence in the world. The vast majority of things are shit, or better said, repetitions of the same shit that isn't productive. The things that are most readily available are already tapped out. The things that are productive are obscure.
You can't even get LLMs to say some words by asking them to "say word X." They just will always find a word that will fill that slot "better." As I said, this is just google saying "did you mean Y?" But it's not asking anymore, it's telling.
edit: It's also why asking it to solve obscure math problems is a dumb test. If the math problem is obscure enough, and there's only one way to possibly solve it, and somebody did it once, somewhere, or referred to the possibility of solving it that way, once, somewhere, you're going to have a single salient example. It's not a greenfield, it's not a white sheet of paper: it's a green field with one yellow flower on it, or a piece of white paper with one black sentence on it, and you're asking it to find the flower or explain the sentence.
edit: https://news.ycombinator.com/item?id=47346901 - I'm late and long-winded.
I feel the same about Claude Code. It's a fast but average developer at just about everything and there are some things that average developers are just consistently bad at and therefore Claude is consistently bad at.
> I've been feeling more and more that generative AI represents the average of all human knowledge.
Have you tried the paid versions of frontier models? They certainly do not feel like they spew the average of all human knowledge. It's not uncommon for them to find and interpret the cutting edge of papers in any of the domains that I've asked them questions about.
Pooling, as it is called, is essentially averaging; it has nothing to do with swimming, really. It happens all the time in latent space. It is a tool, not a side effect.
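As a concrete illustration of what mean pooling does (a toy sketch with made-up 3-dimensional vectors, not any particular model's embeddings):

```python
# Mean pooling in plain Python: collapse a sequence of token vectors
# into one "average" vector by taking the component-wise mean.

def mean_pool(vectors: list[list[float]]) -> list[float]:
    """Component-wise average of equal-length vectors."""
    n = len(vectors)
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / n for d in range(dim)]

# Three toy token embeddings -> one pooled "sentence" embedding.
tokens = [[1.0, 0.0, 2.0],
          [3.0, 0.0, 4.0],
          [2.0, 3.0, 0.0]]
pooled = mean_pool(tokens)  # [2.0, 1.0, 2.0]
```

Real models pool hidden states the same way; only the dimensionality changes.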
I feel a little bit of irony in this post from a company/forum that is asking its users not to use AI while simultaneously trying to fund countless companies that are responsible for ruining the internet as we speak.
Fortunately I found some things we could cut as well, so https://news.ycombinator.com/newsguidelines.html actually got shorter.
---
Edit: here are the bits I cut:
Videos of pratfalls or disasters, or cute animal pictures.
It's implicit in submitting something that you think it's important.
I hate cutting any of pg's original language, which to me is classic, but as an editor he himself is relentless, and all of those bits—while still rules—no longer reflect risks to the site. I don't think we have to worry about cute animal pictures taking over HN.
---
Edit 2: ok you guys, I hear you - I've cut a couple of the cuts and will put the text back when I get home later.
There should be a "flag as AI" link in addition to "flag", and then a setting for people to show comments flagged as AI. Once the flagged-as-AI count reaches a certain threshold, the comment disappears unless you enable "Show AI".
Maybe once enough posts have been flagged like that then that corpus could be used to train an AI to automatically detect content generated by AI.
That would be cool.
Maybe the HN site wouldn't add this feature but if someone wrote a client then maybe it could be added there.
A nice side effect is that it will double as a confirmation step, solving the FFF (fat finger flagging) problem.
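The thresholding described could be sketched as follows (the `AI_FLAG_THRESHOLD` value and the field names are invented for illustration; HN exposes no such API):

```python
# Toy model of threshold-based hiding: a comment accumulates "flag as AI"
# votes, and once it crosses the threshold it is hidden from everyone
# except readers who have opted in to "Show AI".

from dataclasses import dataclass

AI_FLAG_THRESHOLD = 5  # hypothetical site-wide setting

@dataclass
class Comment:
    text: str
    ai_flags: int = 0

def visible(comment: Comment, show_ai: bool) -> bool:
    """Hide a comment once its AI-flag count crosses the threshold,
    unless the reader has enabled 'Show AI'."""
    if comment.ai_flags >= AI_FLAG_THRESHOLD:
        return show_ai
    return True
```

A client-side implementation would only need the flag counts, which is exactly the data a third-party client wouldn't have; that's the practical catch with doing this outside the site itself.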
For quite a while I liked using an LLM to refine my writing and fix my grammar issues, but my colleagues and professors reminded me that it was way too obvious. They said they can tolerate some mistakes in my words, but have no tolerance for AI-generated content.
What a welcome post. The whole reason I come here is to get thoughtful input from smart people, not what I could get myself from an LLM. While we're at it: think your own thoughts as well :) I know how easy it is to "let it come up with a first draft" instead of spending the real effort of thinking for yourself, but you'll find it's a road to perdition if you let yourself slip into the habit. Thanks to all the humans still here!!
I finished reading the thin book "Systemantics" by John Gall yesterday (thanks @dang).
I realized that the problem of AI generated/edited content flooding everywhere around us is a symptom of something wrong with the System.
It might have something to do with sensory deprivation. Here is a quote from the book that caught my attention because of the word "hallucination":
> As we all know, sensory deprivation tends to produce hallucinations.
> FUNCTIONARY’S FAULT: A complex set of malfunctions induced in a Systems-person by the System itself, and primarily attributable to sensory deprivation.
(As I typed the text above on my iPhone, I was fighting auto completion because AI was trying to “correct” the voice of John Gall and mine to conform to the patterns in its training data. Every new character is a fight against Gradient Descent.)
All you need is attention but the cost of attention is getting higher and higher when there is little worth our attention.
My only caution is that good writers and LLMs look very similar, because LLMs were trained on a corpus of good writers. Good writers use semicolons and em-dashes. Sometimes we use bulleted lists or Oxford commas.
So we should make sure to follow that other HN rule, and assume the person on the other end is a good faith actor, and be cautious about accusing someone of using AI.
(I've been accused multiple times of being an AI after writing long, well-written comments 100% by hand)
How about comments that include AI output if labeled?
Earlier today I remembered that there was a Supreme Court case I'd heard about 35 years ago that was relevant to an ongoing HN discussion, but I could not remember the name of the case, nor could I find it by Googling (Google kept finding later cases involving similar issues that were not relevant to what I was looking for).
I asked Perplexity, and given my recollection and when I heard about the case, it suggested a candidate and gave a summary. The summary matched my recollection, and a quick look at the decision itself verified it had found the right case and done a good job summarizing it--probably better than I would have done.
I posted a cite to the case and a link to the decision. I normally would have also linked to the Wikipedia article on the case, since those usually have a good summary, but there was no Wikipedia article for this one.
I thought of pasting in Perplexity's summary, saying it was from Perplexity but that I had checked and it was a good summary.
Would that be OK or would that count as an AI written comment?
I have also considered, but not yet actually tried, running some of my comments through an AI for suggested improvements. I've noticed I have a tendency to do three things that I probably should do less of:
1. Run-on sentences. (Maybe that's why, of all the people in the 11th-100th spots on the karma list, I have the highest ratio of words/karma, with 42+ words per karma point [1].)
2. Use too many commas.
3. Write "server" when I mean "serve". I think I add "r" to some other words ending in "e" too.
I was thinking those would be something an AI might be good at catching and suggesting minimal fixes for.
[1] https://news.ycombinator.com/item?id=46867167
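The mechanical part of those checks doesn't even need an AI. A deterministic toy sketch (the confusion list and the run-on cutoff are invented for illustration; a real assistant would use a language model):

```python
# Toy comment linter: flag very long sentences as possible run-ons and
# catch entries from a personal confusion list like "server" vs "serve".

import re

CONFUSABLES = {"server": "serve"}  # words the author tends to add an "r" to
RUN_ON_WORDS = 40                  # arbitrary cutoff for "maybe a run-on"

def lint(text: str) -> list[str]:
    warnings = []
    # Split on sentence-ending punctuation followed by whitespace.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = sentence.split()
        if len(words) > RUN_ON_WORDS:
            warnings.append(f"possible run-on ({len(words)} words)")
        for w in words:
            lw = w.lower().strip(".,;:!?")
            if lw in CONFUSABLES:
                warnings.append(f"did you mean '{CONFUSABLES[lw]}' instead of '{lw}'?")
    return warnings
```

The obvious limitation is false positives (sometimes "server" really is the intended word), which is exactly where an AI pass would add value over a word list.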
Honest question: why were folks posting AI-generated comments in the first place? There's such high inertia to commenting. I only comment when I have something to contribute OR find something incredibly interesting.
So I'm just baffled, why anyone was using AI to generate comments. Like what was the incentive driving the behavior?
Don’t be afraid to make grammar mistakes or misspell stuff. Others will understand. You’re a human after all. It’s okay to make mistakes and to feel uncomfortable about that.
I use AI for the elements I feel are weak or unclear in the transcription. Sometimes I copy-paste a paragraph into ChatGPT or whatever, to ensure my (aging) thoughts are being communicated in a crystal clear manner. I cannot always point out why I think they are unclear or jumbled.
I don't feel this is an imposition on others. I think it's the opposite. It enhances signal by reducing nitpicking, spelling/grammar errors that might muddle intent, and reminds me of proper sentence structure.
Many of us are guilty of run-ons, fragments, overly large blocks of text[1] because it's closer to how people often converse, verbally. Posts on the internet are not casual conversation between humans. They are exchanges of ideas.
[1] This is a classic example where I had to go back and edit it to ensure it was readable. As you do self-review with any commit ^^
As a type nerd, I was very happy with Grammarly swapping my dashes to em dashes. But now that everyone associates em dashes with AI, I can no longer enjoy that luxury.
Now that it's in the rules, I hope we also see less of "your comment was obviously AI generated so I won't respond" (ironically, in a response comment).
If you suspect it to be a bot, flag it and move on! If it is indeed a bot and you comment that it's a bot, it doesn't care! If it is not a bot and you call it a bot, you may have offended someone. If it's a human using AI, I don't think a comment will make them change their ways. In any case though, I think it's a useless comment.
The most telling sign of a human commenter is brevity.
Consequently, I hardly ever spend the time to write out long and detailed HN comments like I used to in the pre-LLM era. People nowadays have a much harder time believing that an Internet stranger is meticulously crafting a detailed and grammatically-airtight message to another Internet stranger without AI assistance.
This is interesting to me because I'm a degenerate "massive comment" guy. People have gotten mad at me for it before, I'll take a comment from them, break it down, address it portion by portion with citations, and then ask their thoughts. It's probably an obsessive level of engagement that people aren't really interested in, which is fair, but I don't know how else to get my point across in its totality.
Also, there's some subset of users on this site who are rate limited, such as me. For me that manifests in avoiding post-for-post conversations and instead seeking an exchange of essays, where I try to predict future points and address them to save comments, which obviously results in long comments.
Not quite. Brevity is more like a modern virtue, not an absolute sign of human-ness. Often longer sentences are necessary to express comprehensive logic more tightly. TBH, these days I feel like I'm being penalized by the rise of LLMs, because my writing style used to be a bit similar to that of an LLM: it emphasizes accurate logical connection (not that an LLM's logic is reliable), uses em-dashes (yes, I did use them, though I had to stop), and includes a bit of mumbling.
That reminds me of the Gmail LLM features, where AI writes your emails for you and also summarizes incoming ones. Maybe we lost the thread somewhere...
It literally is. I'm fairly sure that, mathematically, it's a fancier regression/prediction, so it is a form of average.
It takes a lot of effort to be human.
99% of rule enforcement, both IRL and online, comes down to individuals accepting the culture.
Rules aren’t really for adversaries, they are for ordinary situations. Adversaries are dealt with differently.
As a Polish man, I am repulsed when I hear an AI-generated Polish voice in a commercial, but I see no problem with AI-generated English speech.
Im of course exaggerating, but it is so easy just to run the text through an AI to make it sound "better" without changing what im trying to express.
---
I’m not a native speaker, so AI helps me get my point across more clearly. It’s hard not to come across like a dummy otherwise.
Of course I’m exaggerating, but it’s really easy to run the text through AI to make it sound better without changing what I’m trying to say.