cafed00d|10 months ago
Ever since the mobile and cloud eras peaked around 2012–2014, we’ve had crypto, AR, VR, and now AI.
I have some pocket-change Bitcoin and Ethereum, and I’ve played around for two minutes on my dust-gathering Oculus and Vision Pro; but man, oh man, am I hooked on ChatGPT or what!
It’s truly remarkably useful!
You just couldn’t get this kind of answer in one click before.
For example, here’s my latest engineering productivity boosting query: “when using a cfg file on the cmd line what does "@" as a prefix do?”
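(For context on that query: in many CLI tools, an `@` prefix conventionally marks a "response file" whose contents are expanded in place as command-line arguments. Python's `argparse` exposes the same convention via `fromfile_prefix_chars`; a minimal sketch, with the file name `opts.cfg` and the flags chosen purely for illustration:)

```python
import argparse

# Enable the "@file" convention: any argument starting with "@" is
# replaced by arguments read from that file, one per line by default.
parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
parser.add_argument("--level")
parser.add_argument("--verbose", action="store_true")

# Write a config file with one argument per line (argparse's default format).
with open("opts.cfg", "w") as f:
    f.write("--level\n3\n--verbose\n")

# "@opts.cfg" expands to: --level 3 --verbose
args = parser.parse_args(["@opts.cfg"])
print(args.level, args.verbose)  # → 3 True
```

Compilers such as GCC and MSVC support the same `@file` syntax, which is handy when an argument list grows past shell limits.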
mcdeltat|10 months ago
Why is it that you, for example, believe LLMs are truly revolutionary, whereas I, for example, think they are not? What are the things you are doing with LLMs day to day that are life-changing, which I am not doing? I'm so curious.
When I think of things that would be revolutionary for my job, I imagine: something that could input a description + a few resources, and write all the code, docs, etc for me - creating an application that is correct, maintainable, efficient, and scalable. That would solve 80% of my job. From my trials of LLMs, they are nowhere near that level, and barely pass the "correct" requirement.
Further, the cynic in me wonders what work we can possibly be doing where text generation is revolutionary. Keeping in mind that most of our jobs are ultimately largely pointless anyway, so that implies a limit on the true usefulness of any tool. Why does it matter if I can make a website in 1/10th the time if the website doesn't contribute meaningfully to society?
bredren|10 months ago
It could be that you’re falling into a complete-solution fallacy. LLMs can already be great at working on each of these problems. It helps to work on a small piece of them at a time. It does take practice, and any sufficiently complicated problem will require multiple attempts.
But the more you practice with them, the more you get a feel for it, and these things start to eat away at the 80% you’re describing.
It is not self-driving. If anything, software-engineering automation is only accessible to those who nerd out at it, the same way using a PC for sending email or programming used to be.
A lot of the attention is on being able to run increasingly capable models on machines with fewer resources. But there’s not much point fussing over Gemini 2.5 Pro if you don’t already have a pretty good feel for deep interaction with Sonnet or GPT-4o.
It is already impressive and can seriously accelerate software engineering.
ryandrake|10 months ago
People have different expectations out of computers, and that accounts for the wildly different views on current AI capabilities.
mmcnl|10 months ago
lgrapenthin|10 months ago
And it isn't on the way there. Just today, a leading state-of-the-art model, one that supposedly passed all the most difficult math entrance exams and whatever else they "benchmark", reasoned from the assumption that January has 60 days. It simply assumed that and drew conclusions, as if that were normal. It also wasn't able to correctly fill out all possible scores in a two-player game with four moves and three rules that I made up. It got them wrong over and over.
kristianp|10 months ago