computerdork | 2 months ago

Hmm, actually lines up for me at least. It was a pretty big news item a few months ago when Salesforce did this drastic reduction in their Customer Service department, and Marc Benioff raved about how great AI was (you might have just missed it):

  https://www.ktvu.com/news/salesforce-ai-layoffs-marc-benioff

At the time, it was such a big deal to a lot of us because it was a signal of what could eventually happen to the rest of us white-collar workers.

Of course, it could still happen, as maybe AI systems just need another few years to mature before they can fully replace jobs like this...

... although, one thing I agree with you on is that there isn't much info online about these quotes from Salesforce executives, so they could be made up.

DougN7 | 2 months ago

I’m beginning to doubt very much that will happen. AI/LLMs are already trained on 99% of all accessible text in the world (I made that stat up, but I think I’m not far off). Where will the additional intelligence come from that Salesforce needs for the long tail, the nuance, and the tough cases? AI is good at what it’s already good at - I predict we won’t see another order-of-magnitude improvement from the current approaches.

computerdork | 2 months ago

Hmm, am no LLM expert, but I agree with you that the models themselves seem to be reaching their peaks in the individual subject domains (writing, solving math, coding, music gen...), and the improvements are becoming a lot less dramatic than a couple of years ago.

But, feel like combining LLMs with other AI techniques could do so much more...

... As mentioned, am no expert, but it seems like one of the next major focuses for LLMs is verification of their answers, and beyond that, giving LLMs a sense of when their results are right or wrong. Yeah, feel like the ability of an LLM to introspect so it can understand how it got to its answer might help it know whether that answer is right (think Anthropic has been working on this for a while now), as well as scoring the reliability of its information sources.
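
One crude version of that "sense of when it's right" is what people call self-consistency: ask the same question a bunch of times and treat agreement as a confidence score. A toy sketch of the idea in Python (ask_llm() here is a made-up stand-in for whatever model API you'd actually call):

  from collections import Counter

  def ask_llm(prompt: str) -> str:
      # Hypothetical stand-in for a real model API call,
      # sampled at a nonzero temperature so answers can vary.
      raise NotImplementedError

  def answer_with_confidence(question: str, samples: int = 10) -> tuple[str, float]:
      # Sample the model several times; the fraction of samples that
      # agree on the most common answer is a crude confidence score.
      answers = [ask_llm(question) for _ in range(samples)]
      top_answer, count = Counter(answers).most_common(1)[0]
      return top_answer, count / samples

Then anything below some agreement threshold gets routed to a human instead of being trusted.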

And, they could also mix in a formal verification step, using some form of proof to show that the results are right (for those answers that lend themselves to formal verification).
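
The nice property there is that checking an answer can be way cheaper than producing it. Another toy sketch (again with a made-up ask_llm(); here the "proof" is just sympy confirming a claimed factorization by expanding it):

  import sympy

  def ask_llm(prompt: str) -> str:
      # Hypothetical stand-in for a real model API call.
      raise NotImplementedError

  def verified_factorization(poly_str: str, max_attempts: int = 3) -> str | None:
      original = sympy.sympify(poly_str)
      for _ in range(max_attempts):
          answer = ask_llm(f"Factor this polynomial: {poly_str}")
          try:
              candidate = sympy.sympify(answer)
          except sympy.SympifyError:
              continue  # unparseable output counts as a failed attempt
          # The check is mechanical: expand the claimed factorization
          # and confirm it matches the original polynomial exactly.
          if sympy.expand(candidate - original) == 0:
              return answer
      return None  # never verified - escalate rather than guess

Obviously only a tiny slice of customer-service answers lend themselves to this, but for the ones that do, the model never gets to grade its own homework.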

Am sure all of this is currently being tried. So any AI experts out there, feel free to correct me. Thanks!