top | item 44602053

sbt | 7 months ago

I have been using it for coding for some time, but I don't think I'm getting much value out of it. It's useful for some boilerplate generation, but for more complex stuff I find that it's more tedious to explain to the AI what I'm trying to do. The issue, I think, is lack of big picture context in a large codebase. It's not useless, but I wouldn't trade it for say access to StackOverflow.

My non-technical friends are essentially using ChatGPT as a search engine. They like the interface, but in the end it's used to find information. I personally still just use a search engine, and I almost always go straight to Wikipedia, where I think the real value is. Wikipedia has added much more value to the world than AI, but you don't see it reflected in stock market valuations.

My conclusion is that the technology is currently very overhyped, but I'm also excited for where the general AI space may go in the medium term. For chat bots (including voice) in particular, I think it could already offer some very clear improvements.

oezi|7 months ago

One of the issues certainly is that Stackoverflow is absolutely over. Within the last twelve months the number of users just fell off a cliff.

danbruc|7 months ago

That might be a good thing after all, at least in a certain sense. Stack Overflow has been dying for the last ten years or so. In the first years there were a lot of good questions that were interesting to answer, but that changed with popularity and it became an endless sea of low-effort do-my-homework duplicates that were not interesting to answer and annoying to moderate. If those now get handled by large language models, it could become similar to the beginning again: only the questions that are not easily answerable by looking into the documentation or asking a chat bot will end up on Stack Overflow, and it could be fun to answer questions there again. On the other hand, if nobody looks things up on Stack Overflow, it will be hard to sustain the business, maybe even when downscaled accordingly.

tom_m|7 months ago

To be honest, I used Stackoverflow less and less over the years. Not sure whether that was because I learned more. I just think most times I went there I was looking for snippets to save time with boilerplate. As better frameworks, tools, packages, and languages came along, I just had less need to go to Stackoverflow.

But yes, AI put the nail in its coffin. Sadly and ironically, AI trained off it. I mean AI quite literally stole from Stackoverflow and others and somehow got away with it.

That's why I don't really admire people like Sam Altman or the whack job doomer at Anthropic whatever his name is. They're crooks.

lithos|7 months ago

SO sold off their own data and made it insanely web-crawlable.

So it makes sense that SO-like users will use AI, not to mention they get the benefit of avoiding the neurotic moderator community at SO.

logicchains|7 months ago

>I have been using it for coding for some time, but I don't think I'm getting much value out of it.

I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did. This saves so much time, especially since multiple such LLM tasks can be run simultaneously. But maybe it's because I'm not working on giant, monolithic code bases.

surgical_fire|7 months ago

> I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did.

I use it in much the same way you describe, but I find that it doesn't save me that much time. It may save some brain processing power, but that is not something I typically need saving.

I get more out of an LLM by asking it to write code I find tedious to write (unit tests, glue code for APIs, scaffolding for new modules, that sort of thing). Recently I started asking it to review the code I write and suggest improvements, try to spot bugs and so on (which I also find useful).

Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.

Running tasks simultaneously doesn't help much unless you are giving it instructions so general that they take a long time to execute - and the bottleneck will be your ability to review all the output anyway. I also find that the broader the scope of what I need it to do, the less precise it tends to be. I achieve the most success by being more granular in what I ask of it.

My take is that while LLMs are useful, they are massively overhyped, and the productivity gains are largely overstated.

Of course, you can also "vibe code" (what awful terminology) and not inspect the output. I find that unacceptable in professional settings, where you are expected to release code of some minimum quality.

rurp|7 months ago

If you ever end up working on large complicated code bases you'll likely have an easier time relating to the sentiment. LLMs are vastly better at small greenfield coding than for working on large projects. I think 100% of the people I've heard rave about AI coding are using them for small isolated projects. Among people who work on large projects sentiment seems to range from mildly useful to providing negative value.

sandworm101|7 months ago

Fun test: ask ChatGPT to find where Wikipedia is wrong about a subject. It does not go well, proving that it is far less trustworthy than Wikipedia alone.

(Most AI will simply find where Twitter disagrees with Wikipedia and spout out ridiculous conspiracy junk.)

tom_m|7 months ago

It is overhyped for sure. This is among the biggest hype cycles we've seen yet. When it bursts, it'll be absolutely devastating. Make no mistake. Many companies will go out of business, and many people will be affected.

However, that doesn't mean AI will go away. AI is really useful. It can actually do a lot. Adoption is slow because it's somehow not the most intuitive to use. I think that may have a lot to do with tooling and human communication style - or the way we use it.

Once people learn how to use it, I think it'll just become ubiquitous. I don't see it taking anyone's job. The doomers who like to say that are pushing their own agenda, trolling, or explaining away mass layoffs that were happening BEFORE AI. The layoffs are a result of losing a tax credit for R&D, over-hiring, and the economy. Forgetting the tax thing for a moment, is anyone really surprised that companies over-hired? I mean, come on. People BARELY do any work at all at large companies like Google, Apple, Amazon, etc. OK, that's not quite fair. Don't get me wrong, SOME people there do. They work their tails off and do great things. That's not all of the company's employees, though. So what do you expect is going to happen? Eventually the company prunes. Then they mass hire again years later, see who works out, and prune again. This strategy is why hiring is broken. It's a horrible grind.

Sorry, back to AI adoption. AI is now seen by some caught in this grind as the "enemy." So that's another reason for slow adoption. A big one.

It does work though. I can see how it'll help, and I think it's great. If you know how everything gets put together, then you can provide the instructions for it to work well. If you don't know how software is built, what good code looks like, AND how to "rub it the right way" - or, as people say, "prompt engineering" - then you're not going to get great results.

I think for writing blog posts and getting info, it's easier. Though there are EXTREME dangers with it for other use cases. It can give incredibly dangerous medical advice. My wife is a psychiatrist and she's been keeping an eye on it, testing it, etc. To date, in terms of mental health, AI has done more to harm people than it has helped them. It's also too inaccurate to use for mental health. So that field isn't adopting it so quickly. BUT they are trying and experimenting. It's just going to take some time, and rightfully so. They don't want to rush into using something that hasn't been tested and validated. That's an understaffed field though, so I'm sure they will love any productivity gain and help they can get.

All said, I don't know what "slow" means for adoption. It feels like it's progressing quickly.

fauigerzigerk|7 months ago

I used to donate to Wikipedia, but it has been completely overrun by activists pushing their preferred narrative. I don't trust it any more.

I guess it had to happen at some point. If a site is used as ground truth by everyone while being open to contributions, it has to become a magnet and a battleground for groups trying to influence other people.

LLMs don't fix that of course. But at least they are not as much a single point of failure as a specific site can be.

notarobot123|7 months ago

> at least they are not as much a single point of failure

Yes, network effects and hyper scale produce perverse incentives. It sucks that Wikipedia can be gamed. That said, you'd need to be actively colluding with other contributors to maintain control.

Imagining that AI is somehow more neutral or resistant to influence is incredibly naive. Isn't it obvious that they can be "aligned" to favor the interests of whoever trains them?

ramon156|7 months ago

Can I ask for some examples? I'm not this active on Wikipedia, so I'm curious where a narrative is being spread

Panzer04|7 months ago

Single point of failure?

Yeah, you can download the entirety of Wikipedia if you want to. What's the single point of failure?