top | item 43733095

cafed00d | 10 months ago

To me, AI hype seems to be the most tangible/real hype in a decade.

Ever since the mobile and cloud eras hit their peaks around 2012 or 2014, we’ve had crypto, AR, VR, and now AI.

I have some pocket-change Bitcoin and Ethereum, and I played around for two minutes on my dust-gathering Oculus & Vision Pro; but man, oh man, am I hooked on ChatGPT or what!

It’s truly remarkably useful!

You just couldn’t get this kind of answer in one click before.

For example, here’s my latest engineering productivity boosting query: “when using a cfg file on the cmd line what does "@" as a prefix do?”
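For readers curious about the answer: in many command-line tools, an `@` prefix tells the parser to read further arguments from the named file (GCC calls these "response files"; Python's `argparse` supports the same convention via `fromfile_prefix_chars`). A minimal sketch using the standard-library `argparse`, with a hypothetical `args.cfg` filename:

```python
import argparse
import tempfile, os

# Any argument starting with '@' is replaced by the contents of that
# file, read one argument per line.
parser = argparse.ArgumentParser(fromfile_prefix_chars='@')
parser.add_argument('--name')
parser.add_argument('--count', type=int)

# Write a config file holding the arguments (hypothetical example file).
cfg_path = os.path.join(tempfile.mkdtemp(), 'args.cfg')
with open(cfg_path, 'w') as f:
    f.write('--name\nwidget\n--count\n3\n')

# '@args.cfg' expands into the four arguments stored in the file.
ns = parser.parse_args(['@' + cfg_path])
print(ns.name, ns.count)  # widget 3
```

The file format (one argument per line) is `argparse`'s default; other tools such as GCC split on whitespace instead, so the exact convention varies by program.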

mcdeltat | 10 months ago

It's astonishing how the two camps of LLM believers vs LLM doubters have evolved, even though we as people are largely very similar, doing similar work.

Why is it that e.g. you believe LLMs are truly revolutionary, whereas e.g. I think they are not? What are the things you are doing with LLMs day to day that are life changing, which I am not doing? I'm so curious.

When I think of things that would be revolutionary for my job, I imagine: something that could input a description + a few resources, and write all the code, docs, etc for me - creating an application that is correct, maintainable, efficient, and scalable. That would solve 80% of my job. From my trials of LLMs, they are nowhere near that level, and barely pass the "correct" requirement.

Further, the cynic in me wonders what work we can possibly be doing where text generation is revolutionary. Keeping in mind that most of our jobs are ultimately largely pointless anyway, so that implies a limit on the true usefulness of any tool. Why does it matter if I can make a website in 1/10th the time if the website doesn't contribute meaningfully to society?

bredren | 10 months ago

> I imagine: something that could input a description + a few resources, and write all the code, docs, etc for me

It could be that you’re falling into a complete-solution fallacy. LLMs can already be great at working on each of these problems individually; it helps to tackle a small piece at a time. Any sufficiently complicated problem takes practice and multiple attempts.

But the more you practice with them, the more you get a feel for it, and these things start to eat away at the 80% you’re describing.

It is not self-driving. If anything, software-engineering automation is only accessible to those who nerd out at it, the same way using a PC once was, whether for sending email or for programming.

A lot of the attention is on being able to run increasingly capable models on machines with fewer resources. But there’s not much use in fussing over Gemini 2.5 Pro if you don’t already have a pretty good feel for deep interaction with Sonnet or GPT-4o.

It is already impressive and can seriously accelerate software engineering.

ryandrake | 10 months ago

I think the difference is between people who accept nondeterministic behavior from their computers and those who don’t. If you accept your computer being confidently wrong some unknowable percentage of the time, then LLMs are miraculous and game changing software. If you don’t, then the same LLMs are defective and unreliable toys, not suitable as serious tools.

People have different expectations out of computers, and that accounts for the wildly different views on current AI capabilities.

mmcnl | 10 months ago

I guess everyone has a different interpretation of revolutionary. Some people think ChatGPT is just faster search. But 10x faster search is revolutionary in terms of productivity.

lgrapenthin | 10 months ago

Your example is a better search engine. The AI hype however is the promise that it will be smarter (not just more knowledgeable) than humans and replace all jobs.

And it isn't on the way there. Just today, a leading state-of-the-art model, one that supposedly passed all the most difficult math entrance exams and whatever else they "benchmark", reasoned from the assumption of "60 days in January". It simply assumed that and drew conclusions, as if that were normal. It also wasn't able to correctly fill out all possible scores in a two-player game with four moves and three rules that I made up. It got them wrong over and over.

kristianp | 10 months ago

It's not a better search engine; it's qualitatively different from search. An LLM composes its answers based on what you ask it. Search returns pre-existing texts to you.