rising-sky | 6 months ago
> "This pretty negative post topping Hacker News last month sparked these questions, and I decided to find some answers, of course, using AI"
The "pretty negative post" cited is https://tomrenner.com/posts/llm-inevitabilism/. I went ahead and read it, and found it, imo, fair. It doesn't make any directly negative claims about AI, although it's clear the author has concerns. The thrust is an invitation to the reader not to fall into the trap of the current framing by AI proponents, and instead to first question whether the future being peddled is actually what we want. That seems a fair question to ask if you're unsure?
It concerned me that this was framed as a "pretty negative post", and it colored my read of the rest of this author's article.
throw10920|6 months ago
For instance, in a lot of threads on some new technology or idea, one of the top comments is "I'm amazed by the negativity here on HN. This is a cool <thing> and even though it's not perfect we should appreciate the effort the author has put in" - where the other toplevel comments are legitimate technical criticism (usually in a polite manner, no less).
I've seen this same comment, in various flavors, at the top of dozens of HN threads in the past couple of years.
Some of these people are being genuine, but others are literally just engaging in amygdala-hijacking because they want to shut down criticism of something they like, and that contributes to the "everything that isn't gushing positivity is negative" effect that you're seeing.
mrexroad|6 months ago
Not sure if this is part of a broader trend or simply a reflection of it, but when mentoring/coaching middle and high school aged kids, I'm finding they struggle to accept feedback in any way other than "I failed." A few years back, the same age group was more likely to view feedback as an opportunity so long as you led with praising strengths. Now it's like threading a needle every time.
duxup|6 months ago
I get it to some extent: a lot of people looking to inject doubt and their own ideas show up with some sort of Socratic method that is really meant to drive the conversation to a specific point; it's not honest.
But it also means genuinely honest questions are often downvoted or shouted down.
It seems like the methodology of discussion on the internet now only allows everyone to show up with very concrete opinions, which will then be judged. Have no opinion, or ask an honest question, and citizens of the internet assume the worst unless you're in lockstep with them.
zahlman|6 months ago
And most people here seem to think that's fine; but it's not in line with what I understood when I read the guidelines, and it absolutely strikes me as negativity.
popalchemist|6 months ago
So the emotional process behind the knee-jerk reactions to even the slightest and most valid critiques of AI (and of the value structure underpinning Silicon Valley's pursuit of AGI) comes from the same place religious nuts come from when they perceive an infringement upon their own agenda (Christianity, Islam, pick your flavor -- the reactivity is the same).
Eddy_Viscosity2|6 months ago
Now of course I'm not including aggressive or rude posts, because they are a different category.
benreesman|6 months ago
Though it does sort of show the Overton window when a pretty bland argument against always believing some rich dudes gets bucketed as negative, even in the sentiment-analysis sense.
I think a lot of people have like half their net worth in NVIDIA stock right now.
epolanski|6 months ago
The only subset where HN gets overly negative, way more than it should, is coding.
srcreigh|6 months ago
The author (Tom) tricked you. His article is flame bait. AI is a tool that we can use and discuss. It's not just a "future being peddled." The article manages to say nothing about AI, casts generic doubt on AI as a whole, and pits people against each other. It's a giant turd for any discussion about AI, a sure-fire curiosity destruction tool.
sensanaty|6 months ago
Instead it's being shoved down our throats at every turn and is being marketed to the world as the Return of Christ. Whenever anyone says anything even slightly negative, the evangelists crawl out of the woodwork to tell you how you're using the wrong model, or not prompting well enough, or long enough, or short enough, or "Well I've become a 9000000x developer using 76 agents in parallel!" type of posts.
sumeno|6 months ago
Any number of Sam Altman quotes display this:
- "A child born today will never be smarter than an AI"
- "We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence"
- "ChatGPT is already more powerful than any human who has ever lived"
- "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."
Every bit of this is nonsense being peddled by the guy selling an AI future, because convincing enough people it will come true (or, much much much less likely, it actually coming true) would make him one of the richest people alive.
That's just from 10 minutes of looking at statements by a single one of these charlatans.
johnfn|6 months ago
It’s certainly not the worst article I’ve read here. But that’s why I didn’t really like it.
xelxebar|6 months ago
- Positive → AI Boomerist
- Negative → AI Doomerist
Still not great, IMHO, but at the very least the referenced article is certainly not AI Boomerist, so by process of elimination... probably more ambivalent? How does one quickly characterize "not boomerist and not really doomerist either, but somewhat ambivalent on that axis, while definitely pushing against boomerism" without belaboring the point? Seems reasonable to read that as some degree of negative pressure.