top | item 46482769

etamponi|1 month ago

I spent the last 14 days chasing an issue with a Spark transform. Gemini and Claude were exceptionally good at giving me answers that looked perfectly reasonable: none of them worked, and they were almost always completely off track.

Eventually I tried something else and found a question on Stack Overflow, luckily with an answer. That was the game changer, and eventually I was able to find the right doc on the Spark (actually Iceberg) website that gave me the final fix.

This is to say that LLMs might be friendlier. But losing SO means that we're getting a friendly idiot with a lot of credible but wrong answers in place of a grumpy and possibly toxic guy who, however, actually answered our questions.

Not sure why anyone thinks this is a good thing.

specproc|1 month ago

What I always appreciate about SO is the dialogue between commenters. LLMs give one answer, or bullet points around a theme, or just dump a load of code in your IDE. SO gives a debate, in which the finer points of an issue are thrashed out, with the best answers (by and large) floating to the top.

SO, at its best, is numerous highly-experienced and intelligent humans trying to demonstrate how clever they are. A bit like HN, you learn from watching the back and forth. I don't think this is something that LLMs can ever replicate. They don't have the egos and they certainly don't have the experience.

Whatever people's gripes about the site, I learned a hell of a lot from it. I still find solutions there, and think a world without it would be worse.

NewJazz|1 month ago

The fundamental difference between asking on SO and asking an LLM is that SO is a public forum, while an LLM is communicated with in private. This has a lot of implications, most of which stem from the ability of other people to review and correct bad information.

andy81|1 month ago

SO also isn't afraid to tell you that your question is stupid and you should do it a better way.

Some people take that as a personal attack, but it can be more helpful than a detailed response to the wrong question.

zahlman|1 month ago

> What I always appreciate about SO is the dialogue between commenters.

Stack Overflow is explicitly not for "dialogue", recent experiments (which are generally not well received by the regulars on the meta site) notwithstanding. The purpose of the comments on questions is to help refine the question and ensure it meets standards, and in some cases to serve other meta purposes, like pointing at different-but-related questions to help future readers find what they're looking for. Comments are generally subject to deletion at any time and were originally designed to be visually minimal. They are not part of the core experience.

Of course, the new ownership is undoing all of that, because of engagement metrics and such.

djfergus|1 month ago

> I don't think this is something that LLMs can ever replicate. They don't have the egos and they certainly don't have the experience

Interesting question - the result is just words, so surely an LLM can simulate an ego. Feed it the Linux kernel mailing list?

Isn’t back and forth exactly what the new MoE thinking models attempt to simulate?

And if they don’t have the experience that is just a question of tokens?

dpkirchner|1 month ago

I don't know if this is still the case but back in the day people would often redirect comments to some stackoverflow chat feature, the links to which would always return 404 not found errors.

n49o7|1 month ago

This comment and the parent one make me realize that people who answer probably value the exchange between experts more than the answer.

Perhaps the antidote involves a drop of the poison.

Let an LLM answer first, then let humans collaborate to improve the answer.

Bonus: if you can safeguard it, the improved answer can be used to train a proprietary model.

solumunus|1 month ago

You can ask an LLM to provide multiple approaches to solutions and explore the pros and cons of each, then you can drill down and elaborate on particular ones. It works very well.

bluedino|1 month ago

There are so many "great" answers on StackOverflow, giving the why and not just the answer.

ianbutler|1 month ago

It's flat wrong to suggest SO had the right answer all the time, and in fact in my experience for trickier work it was often wrong or missing entirely.

LLMs have a better hit rate with me.

paganholiday|1 month ago

The example wasn't even about finding a right answer, so I don't see where you got that.

Searching questions/answers on SO can surface correct paths in situations where the LLMs will keep giving you variants of a few wrong solutions, kind of like the toxic duplicate closers. Ironically, if SO pruned its history to remove all the failures to match its community standards, it would have the same problem.

vultour|1 month ago

It entirely depends on the language you were using. The difference in quality of both questions and answers between e.g. Go and JavaScript is incredible. Even as a relative beginner in JS I could not believe the amount of garbage I came across, something that rarely happened with Go.

jtrn|1 month ago

No point in arguing with people who bring a snowball into Congress to disprove global warming.

danver0|1 month ago

[deleted]

solumunus|1 month ago

Because what you're describing is the exception. Almost always with LLMs I get a better solution, or a helpful pointer in the direction of a solution, and I get it much faster. I honestly don't understand how anyone could prefer Google/SO, and in fact the numbers show that they don't. You're in an extreme minority.

znpy|1 month ago

> But losing SO means that we're getting an idiot friendly guy with a lot of credible but wrong answers in place of a grumpy and possibly toxic guy which, however, actually answered our questions.

Which by the way is incredibly ironic to read on the internet after like fifteen years of annoying people left and right about toxic this and toxic that.

Extreme example: Linus Torvalds used to be notoriously toxic.

Would you still defend your position if the “grumpy” guy answered in Linus’ style?

etamponi|1 month ago

> Would you still defend your position if the “grumpy” guy answered in Linus’ style?

If they answered correctly, yes.

My point is that providing _actual knowledge_ is by itself so much more valuable compared to _simulated knowledge_, in particular when that simulated knowledge is hyper realistic and wrong.

johnnyanmac|1 month ago

Sadly, an accountable individual representing an organization is different from a community of semi-anonymous users, with a bunch of bureaucracy that can't or doesn't care about every semi-anonymous user.

johnsmith1840|1 month ago

You still get the same thing, though?

That grumpy guy is using an LLM and debugging with it. He solves the problem. The AI provider fine-tunes their model with this. You now have his input baked into its response.

How do you think these things work? It's either direct human input the model is remembering, or an RL environment made by a human to solve the problem you are working on.

Nothing in it is "made up"; it's just a resolution problem, which will only get better over time.

nprateem|1 month ago

How does that work if there's no new data for them to train on, only AI slurry?

RobinL|1 month ago

I'm hoping that increasingly we'll see agents helping with this sort of issue. I would like an agent that would do things like pull the Spark repo into the working area and consult the source code, cross-referencing it against what you're trying to do.

One technique I've used successfully is to do this 'manually', to ensure codex/Claude Code can grep around the libraries I'm using.
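A minimal sketch of that manual approach, in case it's useful to anyone. The directory name `.agent-sources` and the grep target are just illustrative choices (`rewrite_data_files` is an Iceberg maintenance procedure, but pick whatever symbol you're chasing):

```shell
# Clone the libraries you depend on into a scratch area inside the
# workspace, so the coding agent can grep real source instead of
# guessing at APIs. Shallow clones keep it fast.
mkdir -p .agent-sources
git clone --depth 1 https://github.com/apache/spark .agent-sources/spark
git clone --depth 1 https://github.com/apache/iceberg .agent-sources/iceberg

# Keep the vendored sources out of version control.
echo ".agent-sources/" >> .gitignore

# Now the agent (or you) can search the actual implementation:
grep -rn "rewrite_data_files" .agent-sources/iceberg --include="*.java" | head
```

The shallow clone matters less for you than for the agent: it keeps the tree small enough that recursive greps stay cheap.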

kurtis_reed|1 month ago

Q&A isn't going away. There's still GitHub Discussions.