omneity|1 month ago
Privacy concerns notwithstanding, one could argue for having LLMs with us every step of the way: coding agents, debugging, devops tools, etc. They will become a shared interlocutor with vast swaths of experiential knowledge, collected and redistributed at an even larger scale than SO and forum-style platforms allow for.
It does remove the human touch, so it's quite a different dynamic, and the amount of data to collect is staggering and challenging from a legal point of view. Still, I suspect a lot of the knowledge used to train LLMs in the next ten years will come from large-scale telemetry and millions of hours of RL self-play, where LLMs learn to scale and debug code from fizzbuzz up to Facebook- and Twitter-like distributed systems.
inejge|1 month ago
That might work until an LLM encounters a question it's programmed to regard as suspicious for whatever reason. I recently wanted to exercise an SMTP server I've been configuring, using an expect script, which I don't do regularly. Instead of digging through the docs, I asked Google's Gemini (whatever the current free version is) to write a bare-bones script for an SMTP conversation.
It flatly refused.
The explanation was along the lines of "it could be used for spamming, so I can't do that, Dave." I understand the motivation, and can even sympathize a bit, but what are the options for someone who has a legitimate need for an answer? I know how to get one by other means; what's the end game when it's LLMs all the way down? I certainly don't wish to live in such a world.
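For reference, the kind of thing that was refused is only about a dozen lines of expect. A minimal sketch, with the hostname and addresses as placeholders (it assumes `nc` is available and the server speaks plain SMTP on port 25 with `250` success replies; a real script would match the reply codes more carefully):

```tcl
#!/usr/bin/expect -f
# Bare-bones SMTP conversation via netcat; host and addresses are placeholders.
set timeout 10
spawn nc mail.example.com 25

expect "220"                              ;# server greeting banner
send "EHLO client.example.com\r"
expect "250"
send "MAIL FROM:<test@example.com>\r"
expect "250"
send "RCPT TO:<postmaster@example.com>\r"
expect "250"
send "DATA\r"
expect "354"                              ;# server is ready for the message body
send "Subject: test\r\rThis is a test.\r.\r"
expect "250"
send "QUIT\r"
expect "221"
```

Run with `expect smtp-test.exp` against the server being configured; swapping `nc` for `openssl s_client -starttls smtp -connect host:25` is a common variation when testing TLS.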
llbeansandrice|1 month ago
No longer interacting with your peers but with an LLM instead? The knowledge centralized via telemetry, spying on every user's every interaction, and only available through an enshittified subscription to a model that's been trained on this stolen data?
cornel_io|1 month ago
I have plenty of real peers I interact with, I do not need that noise when I just need a quick answer to a technical question. LLMs are fantastic for this use case.
martin-t|1 month ago
Well, turns out developers are now the product too. Good job everyone.
llbeansandrice|1 month ago
I’ve seen this trend a number of times on HN, and it feels strawman-y: taking the worst possible example of the status quo while yada-yadaing or outright ignoring the massive risks of the tech du jour.
The comment I’m replying to hand-waves over “legal issues” and totally ignores the fact that this hypothetical (and idealized) version of AI fundamentally destroys core aspects of community problem solving and centralizes the existing knowledge into a black-box subscription, all for the benefit of a clunky UX and an underlying product that has yet to be proven effective enough to justify all the negative externalities.
casey2|1 month ago
Just through the act of existing, meatware prevents other humans from joining. The reasons may be shallow or well thought out. 95+% of answers on Stack Overflow are written by men, so for most women Stack Overflow is already a hellscape.
If companies did more work on bias (or at least weren't so offensive to various identities), that benefit of distributing knowledge/advice/RTFM could be even greater.