(no title)
sbt | 7 months ago
My non-technical friends are essentially using ChatGPT as a search engine. They like the interface, but in the end it's used to find information. I personally still just use a search engine, and I almost always go straight to Wikipedia, where I think the real value is. Wikipedia has added much more value to the world than AI, but you don't see it reflected in stock market valuations.
My conclusion is that the technology is currently very overhyped, but I'm also excited for where the general AI space may go in the medium term. For chatbots (including voice) in particular, I think it could already offer some very clear improvements.
oezi|7 months ago
danbruc|7 months ago
tom_m|7 months ago
But yes, AI put the nail in its coffin. Sadly, and ironically, AI trained off it. I mean, AI quite literally stole from Stack Overflow and others and somehow got away with it.
That's why I don't really admire people like Sam Altman or the whack job doomer at Anthropic whatever his name is. They're crooks.
lithos|7 months ago
So it makes sense that SO-like users will use AI, not to mention they get the benefit of avoiding the neurotic moderator community at SO.
logicchains|7 months ago
I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did. This saves so much time, especially since multiple such LLM tasks can be run simultaneously. But maybe it's because I'm not working on giant, monolithic code bases.
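The parallel-task workflow described above can be sketched roughly like this. Note that `request_change` is a stand-in for whatever LLM API or coding agent is actually used; it is illustrative, not any specific product's interface:

```python
from concurrent.futures import ThreadPoolExecutor

def request_change(description: str) -> str:
    # Stand-in for a real LLM call that takes a detailed textual
    # description of a change and returns a proposed patch.
    return f"patch for: {description}"

def run_tasks(descriptions):
    # Kick off several independent LLM edit tasks at once; the human
    # review step afterwards remains serial and is the real bottleneck.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(request_change, descriptions))

patches = run_tasks([
    "add input validation to the /users endpoint",
    "write unit tests for the retry helper",
])
```

The key point is that the tasks are independent, so only the human review of each returned patch has to happen one at a time.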
surgical_fire|7 months ago
I use it in much the same way you describe, but I find that it doesn't save me that much time. It may save some brain processing power, but that is not something I typically need saving.
I get more out of an LLM by asking it to write code I find tedious to write (unit tests, glue code for APIs, scaffolding for new modules, that sort of thing). Recently I started asking it to review the code I write and suggest improvements, try to spot bugs, and so on (which I also find useful).
Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.
Running tasks simultaneously doesn't help much unless you are giving it instructions so general that they take a long time to execute - and the bottleneck will be your ability to review all the output anyway. I also find that the broader the scope of what I need it to do, the less precise it tends to be. I achieve the most success by being more granular in what I ask of it.
My take is that while LLMs are useful, they are massively overhyped, and the productivity gains are largely overstated.
Of course, you can also "vibe code" (what an awful term) and not inspect the output. I find that unacceptable in professional settings, where you are expected to release code of some minimum quality.
rurp|7 months ago
sandworm101|7 months ago
(Most AI will simply find where Twitter disagrees with Wikipedia and spout ridiculous conspiracy junk.)
tom_m|7 months ago
However, that doesn't mean AI will go away. AI is really useful. It can actually do a lot. Adoption is slow because it's somehow not the most intuitive thing to use. I think that may have a lot to do with tooling and human communication style - or the way we use it.
Once people learn how to use it, I think it'll just become ubiquitous. I don't see it taking anyone's job. The doomers who like to say that are people pushing their own agenda, trolling, or explaining away mass layoffs that were happening BEFORE AI. The layoffs are a result of losing a tax credit for R&D, overhiring, and the economy. Forgetting the tax thing for a moment, is anyone really surprised that companies overhired?? I mean come on. People BARELY do any work at all at large companies like Google, Apple, Amazon, etc. OK, that's not quite fair. Don't get me wrong, SOME people there do. They work their tails off and do great things. That's not all of a company's employees though. So what do you expect is going to happen? Eventually the company prunes. They go and mass hire again years later, see who works out, and they prune again. This strategy is why hiring is broken. It's a horrible grind.
Sorry, back to AI adoption. AI is now seen by some caught in this grind as the "enemy." So that's another reason for slow adoption. A big one.
It does work though. I can see how it'll help and I think it's great. If you know how everything gets put together, then you can provide the instructions for it to work well. If you don't, you're not going to get great results. Sorry, but you won't, if you don't know how software is built, what good code looks like, AND how to "rub it the right way" - or, as people say, "prompt engineering."
I think for writing blog posts and getting info, it's easier. Though there are EXTREME dangers with it for other use cases. It can give incredibly dangerous medical advice. My wife is a psychiatrist and she's been keeping an eye on it, testing it, etc. To date, AI has done more to harm people than to help them in terms of mental health. It's also too inaccurate to use for mental health. So that field isn't adopting it so quickly. BUT they are trying and experimenting. It's just going to take some time, and rightfully so. They don't want to rush into using something that hasn't been tested and validated. It's an understaffed field though, so I'm sure they'll welcome any productivity gain and help they can get.
All said, I don't know what "slow" means for adoption. It feels like it's progressing quickly.
fauigerzigerk|7 months ago
I guess it had to happen at some point. If a site is used as ground truth by everyone while being open to contributions, it has to become a magnet and a battleground for groups trying to influence other people.
LLMs don't fix that, of course. But at least they are not as much of a single point of failure as a specific site can be.
notarobot123|7 months ago
Yes, network effects and hyperscale produce perverse incentives. It sucks that Wikipedia can be gamed. That said, you'd need to be actively colluding with other contributors to maintain control.
Imagining that AI is somehow more neutral or resistant to influence is incredibly naive. Isn't it obvious that they can be "aligned" to favor the interests of whoever trains them?
ramon156|7 months ago
Panzer04|7 months ago
Yeah, you can download the entirety of Wikipedia if you want to. What's the single point of failure?