(no title)
xkbarkar|4 months ago
Current models are the embryos of what is to come.
The code quality of current models is not replacing skilled software, network, or ops engineers.
Tomorrow's models may well do that, though.
Venting these frustrations is all very well, but I sincerely hope that those who wish to stay in the industry learn to get ahead of AI and to use and control it.
Set industry standards (now) and fight technically incompetent lawmakers before they steer us into disaster.
We have no idea what effect tomorrow's LLMs are going to have; autonomous warfare, for example, is not that far away.
All while today's tech talent spends its energy bickering on HN about the loss of being the code review king.
Everyone hated the code review royalty anyway. No one mourns them. Move on.
awesan|4 months ago
sirwhinesalot|4 months ago
GPT-5 and other SoTA models are only slightly better than their predecessors, and not for every problem (while being worse in other metrics).
Assuming there is no major architectural breakthrough[1], the trajectory only seems to be slowing down.
Not enough new data, new data that is LLM generated (causing a "recompressed JPEG" sort of problem), and absurd compute requirements for training that are only getting more expensive. At some point you hit hard physical limits like electricity usage.
[1]: If this happens, one side effect is that local models will be more than good enough, which in turn means all these AI companies will go under because the economics don't add up. Fun times ahead, whichever direction it goes.
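The "recompressed JPEG" analogy can be made concrete with a toy simulation (my sketch, not from any commenter): fit a distribution, generate samples from the fit, refit on those samples, and repeat. Each round, estimation noise from the previous generation gets baked in as if it were signal, so the learned distribution drifts away from the original data.

```python
import random
import statistics

# Toy illustration of training on model-generated data.
# Start from the "real" data distribution: a standard Gaussian.
random.seed(0)
mu, sigma = 0.0, 1.0

for generation in range(10):
    # "Generate" a dataset from the current model...
    samples = [random.gauss(mu, sigma) for _ in range(200)]
    # ...then "train" the next model on that generated data.
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)

# mu and sigma now carry ten generations of compounded sampling
# error; with smaller samples or more rounds the drift away from
# (0, 1) gets worse, like re-saving a JPEG over and over.
print(mu, sigma)
```

The effect is mild with 200 samples per round, but shrinking the sample size or adding generations makes the fitted distribution degrade faster, which is the worry with LLM output leaking into training corpora.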
Macha|4 months ago
jamil7|4 months ago
bagacrap|4 months ago
hitarpetar|4 months ago
yeah, and they also may not. I give it about a 50/50 chance, it either happens or it doesn't
throw-10-13|4 months ago
The marketing around AI as a feature complete tool ready for production is disingenuous at best, and outright fraud in many cases.