(no title)
asielen | 8 months ago
When I hear a CEO say this, what I hear is that they are going to use AI as an excuse to do massive layoffs to juice the stock price, then cash out before the house of cards comes tumbling down. Every public-company CEO's dream. The GE model in the age of AI.
Will AI drastically reshape industries and careers? Absolutely. Do current CEOs understand or even care how (outside of making them richer in the next few quarters)? No.
CEOs are just marketing to investors with ridiculous claims because their products have stagnated. (See Benioff's recent claim that 50% of work at Salesforce is done by AI. Maybe that's why it sucks so much.)
dpoloncsak | 8 months ago
Sure, you can have all of Salesforce run entirely by AI, but people may just find a better solution that actually works. Claude ran a vending machine, after all, and it was deemed a failure.
So yeah, maybe there's a rocky month or two, and I'm not trying to downplay the severity of that... but the demand for the roles these services fulfill will still exist, and will become market opportunities.
afinlayson | 8 months ago
We could also see CEO wages fall, as their jobs can be done by anyone with AI.
happymellon | 8 months ago
It's happened before, it'll happen again, and ~~Visual Basic~~ AI may or may not change the landscape. I'm not that impressed with the current iteration, but after a few revisions it may be better.
impossiblefork | 8 months ago
Literally everything was hallucinated, even basic things like which named parameters a function had, etc.
It made me think that the core benefit of LLMs is that, even if they aren't smart, at least they've read what they need to answer the question you have. But if they haven't (if there isn't much data on the software framework, not very many examples, etc.), then nothing else matters, and you can't really feed in the whole of vLLM. You actually need the companies running the AI to come up with training exercises for it: train it on the code, train it on answering questions about the code, ask it to write simple variations of things in the code, and so on.
So you really need to use LLMs to see their limits, and use them on 'weird' stuff: frameworks no one imagines anyone will mess with. Even being a researcher who fiddles with improving LLMs every day may not be enough to see those limits, because they arrive very suddenly, and then any accuracy or relevance disappears completely.
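(A rough sketch of what "training exercises generated from the code" might look like: walk a codebase with Python's `ast` module and turn each function signature into a question/answer pair about its named parameters, i.e. exactly the detail that gets hallucinated. The `schedule_batch` function below is a made-up stand-in for a low-data framework, not real vLLM code.)

```python
import ast
import textwrap

def signature_qa_pairs(source: str) -> list[tuple[str, str]]:
    """Turn each function definition in `source` into a (question, answer)
    training pair about its named parameters."""
    pairs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            params = [a.arg for a in node.args.args]
            question = f"What named parameters does `{node.name}` take?"
            answer = ", ".join(params) if params else "(none)"
            pairs.append((question, answer))
    return pairs

# Hypothetical framework code standing in for "a framework with few examples"
example = textwrap.dedent("""
    def schedule_batch(max_tokens, prefill_only, priority):
        pass
""")

for q, a in signature_qa_pairs(example):
    print(q, "->", a)
```

The same idea extends to the other exercise types mentioned above: generating Q&A about docstrings, or asking the model to produce small variations of each function and checking them against the real signatures.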