top | item 46483567

omneity | 1 month ago

Thinking from first principles, a large part of the content on stack overflow comes from the practical experience and battle scars worn by developers sharing them with others and cross-curating approaches.

Privacy concerns notwithstanding, one could argue that having LLMs with us every step of the way - coding agents, debugging, devops tools, etc. - will make them a shared interlocutor with vast swaths of experiential knowledge, collected and redistributed at an even larger scale than SO and forum-style platforms allow for.

It does remove the human touch, so it's quite a different dynamic, and the amount of data to collect is staggering and challenging from a legal point of view. Still, I suspect a lot of the knowledge used to train LLMs over the next ten years will come from large-scale telemetry and from millions of hours of RL self-play in which LLMs learn to scale and debug code, from fizzbuzz to Facebook- and Twitter-like distributed systems.

inejge|1 month ago

> Privacy concerns notwithstanding, one could argue that having LLMs with us every step of the way - coding agents, debugging, devops tools, etc.

That might work until an LLM encounters a question it's programmed to regard as suspicious for whatever reason. I recently wanted to exercise an SMTP server I've been configuring, and wanted to do it with an expect script, which I don't write regularly. Instead of digging through the docs, I asked Google's Gemini (whatever the current free version is) to write a bare-bones script for an SMTP conversation.

It flatly refused.

The explanation was along the lines of "it could be used for spamming, so I can't do that, Dave." I understand the motivation, and can even sympathize a bit, but what are the options for someone who has a legitimate need for an answer? I know how to get one by other means; what's the end game when it's LLMs all the way down? I certainly don't wish to live in such a world.
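(For reference, the kind of script being refused here is trivial to write by hand. A minimal sketch in Python rather than expect - with placeholder hostname and addresses, not anything from the original setup - that walks through a bare-bones SMTP conversation against a local server might look like:)

```python
import socket

# The bare-bones SMTP exchange described above. All addresses and the
# hostname are illustrative placeholders.
SMTP_DIALOGUE = [
    "HELO client.example.com",
    "MAIL FROM:<test@example.com>",
    "RCPT TO:<postmaster@example.com>",
    "DATA",
    "Subject: test\r\n\r\nHello from the test script.\r\n.",
    "QUIT",
]

def exercise_smtp(host="localhost", port=25, dialogue=SMTP_DIALOGUE):
    """Send each command in turn and collect the server's reply lines."""
    replies = []
    with socket.create_connection((host, port), timeout=10) as sock:
        f = sock.makefile("rwb")
        # Server greets first, e.g. "220 mail.example.com ESMTP ..."
        replies.append(f.readline().decode())
        for cmd in dialogue:
            f.write((cmd + "\r\n").encode())
            f.flush()
            replies.append(f.readline().decode())
    return replies
```

(Run against a test server, each reply should start with a 2xx/3xx status code; a real harness would check those codes instead of just collecting them.)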

Boltgolt|1 month ago

I don't know how others use LLMs, but once I find the answer to something I'm stuck on, I do not tell the LLM that it's fixed. This was a problem on forums as well, but I think even fewer people are going to give that feedback to a chatbot.

pigpop|1 month ago

The problem that you worked out is only really useful if it can be recreated and validated, which in many cases it can be by using an LLM to build the same system and write tests that confirm the failure and the fix. Your response telling the model that its answer worked is more helpful for measuring your level of engagement, not so much for evaluating the solution.

firesteelrain|1 month ago

You can also turn off the feature that allows ChatGPT to learn from your interactions. Not many people do, but those who do would also starve OpenAI of information, assuming it respects that setting.

llbeansandrice|1 month ago

Am I the only one that sees this as a hellscape?

No longer interacting with your peers but with an LLM instead? The knowledge centralized via telemetry and spying on every user's every interaction, and only available through an enshittified subscription to a model that's been trained on this stolen data?

cornel_io|1 month ago

Asking questions on SO was an exercise in frustration, not "interacting with peers". I've never once had a productive interaction there; everything I've ever asked was either closed for dumb reasons or not answered at all. The library of past answers was more useful, but fell off hard for more recent tech, I assume because everyone was having the same frustrations I was and just stopped going there to ask anything.

I have plenty of real peers I interact with, I do not need that noise when I just need a quick answer to a technical question. LLMs are fantastic for this use case.

martin-t|1 month ago

Y'know how "users" of modern tech are the product? And how the developers were completely fine with creating such systems?

Well, turns out developers are now the product too. Good job everyone.

llbeansandrice|1 month ago

Replying to my own comment, surprised that everyone is latching on to just the poor moderation of a single site and ignoring the wealth of other options for communication and problem solving: Slack communities, Reddit, blog posts, running a site like SO but with a better/different moderation policy - the list goes on and on.

I've seen this trend a number of times on HN, and it feels strawman-y: taking the worst possible example of the status quo while yada-yadaing or outright ignoring the massive risks of the tech du jour.

The comment I'm replying to hand-waves over "legal issues" and totally ignores the fact that this hypothetical (and idealized) version of AI fundamentally destroys core aspects of community problem solving and centralizes the existing knowledge into a black-box subscription, all for the benefit of a clunky UX and an underlying product that has yet to prove itself effective enough to justify the negative externalities.

QuesnayJr|1 month ago

I actively hated interacting with the power users on SO, and I feel nothing about an LLM, so it's a definite improvement in QoL for me.

CamperBob2|1 month ago

The "human touch" on StackOverflow?! I'll take the "robot touch," thanks very much.

stackghost|1 month ago

The UX sounds better than Stack Overflow.

casey2|1 month ago

How is it much different from trading, say, a bar for a livestream? For any org, if you can remove the human meatware, you should; otherwise you are just creating a bunch of busywork that excludes people from using your service.

Just through the act of existing, meatware prevents other humans from joining. The reasons may be shallow or well thought out. 95+% of answers on Stack Overflow are written by men, so for most women Stack Overflow is already a hellscape.

If companies did more work on bias (or at least weren't so offensive to various identities), that benefit - of distributing knowledge/advice/RTFM - could be even greater.