etamponi|1 month ago
Eventually I tried with something else, and found a question on stackoverflow, luckily with an answer. That was the game changer and eventually I was able to find the right doc in the Spark (actually Iceberg) website that gave me the final fix.
This is to say that LLMs might be friendlier. But losing SO means we're getting a friendly guy with a lot of credible but wrong answers in place of a grumpy and possibly toxic guy who, however, actually answered our questions.
Not sure why anyone thinks this is a good thing.
specproc|1 month ago
SO, at its best, is numerous highly-experienced and intelligent humans trying to demonstrate how clever they are. A bit like HN, you learn from watching the back and forth. I don't think this is something that LLMs can ever replicate. They don't have the egos and they certainly don't have the experience.
Whatever people's gripes about the site, I learned a hell of a lot from it. I still find solutions there, and think a world without it would be worse.
andy81|1 month ago
Some people take that as a personal attack, but it can be more helpful than a detailed response to the wrong question.
zahlman|1 month ago
Stack Overflow is explicitly not for "dialogue", recent experiments (which are generally not well received by the regulars on the meta site) notwithstanding. The purpose of the comments on questions is to help refine the question and ensure it meets standards, and in some cases serve other meta purposes like pointing at different-but-related questions to help future readers find what they're looking for. Comments are generally subject to deletion at any time and were originally designed to be visually minimal. They are not part of the core experience.
Of course, the new ownership is undoing all of that, because of engagement metrics and such.
djfergus|1 month ago
Interesting question - the result is just words, so surely an LLM can simulate an ego. Feed it the Linux kernel mailing list?
Isn’t back and forth exactly what the new MoE thinking models attempt to simulate?
And if they don’t have the experience that is just a question of tokens?
n49o7|1 month ago
Perhaps the antidote involves a drop of the poison.
Let an LLM answer first, then let humans collaborate to improve the answer.
Bonus: if you can safeguard it, the improved answer can be used to train a proprietary model.
ianbutler|1 month ago
LLMs have a better hit rate with me.
paganholiday|1 month ago
Searching questions/answers on SO can surface correct paths in situations where LLMs keep giving you variants of a few wrong solutions, kind of like the toxic duplicate closers. Ironically, if SO pruned its history to remove everything that failed to match its community standards, it would have the same problem.
znpy|1 month ago
Which, by the way, is incredibly ironic to read on the internet after some fifteen years of people nagging everyone left and right about toxic this and toxic that.
Extreme example: Linus Torvalds used to be notoriously toxic.
Would you still defend your position if the “grumpy” guy answered in Linus’ style?
etamponi|1 month ago
If they answered correctly, yes.
My point is that providing _actual knowledge_ is by itself so much more valuable compared to _simulated knowledge_, in particular when that simulated knowledge is hyper realistic and wrong.
johnsmith1840|1 month ago
That grumpy guy is using an LLM and debugging with it. He solves the problem. The AI provider fine-tunes their model with this. You now have his input baked into its responses.
How do you think these things work? It's either direct human input it's remembering, or an RL environment made by a human to solve the problem you are working on.
Nothing in it is "made up"; it's just a resolution problem, which will only get better over time.
RobinL|1 month ago
One technique I've used successfully is to do this 'manually', to ensure Codex/Claude Code can grep around the libraries I'm using.
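A minimal sketch of that 'manual grep' idea, assuming a Python project: find where a library's source actually lives on disk, then point grep (or the agent) at it. The library (`json`) and search string here are placeholders, not anything from the thread.

```shell
# Locate the installed source directory of a library so a coding agent
# (or you) can grep it directly. 'json' is a stand-in; substitute the
# library you're actually debugging.
SRC=$(python3 -c "import json, os; print(os.path.dirname(json.__file__))")

# List source files that define the function you're chasing.
grep -rln "def dumps" "$SRC"
```

The same trick works for compiled-language projects by vendoring the dependency's source tree into the repo so the agent's file search can reach it.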