There is a huge backlash coming when the general public learns AI is plagued with errors and hallucinations. Companies are out there straight up selling snake oil to them right now.
Observing the realm of politics should be enough to disabuse anyone of the notion that people generally assign any value at all to truthfulness.
People will clamor for LLMs that tell them what they want to hear, and companies will happily oblige. The post-truth society is about to shift into overdrive.
It depends on the situation. People want their health care provider to be correct. The same goes for a chatbot when they're trying to get support.
On the other hand, at the same time they might not want to be moralized to, like being told that they should save more money, spend less, or go on a diet...
AI providing incorrect information when dealing with regulations, law, and so on can have significant real-world impact, and such impact is unacceptable. For example, you cannot have a tax authority or government chatbot be wrong about some regulation or tax law.
This is shockingly accurate. Other than professional work, AI just has to learn how to respond to the individual's tastes and established beliefs to be successful. Most people want the comfort of believing they're correct, not being challenged in their core beliefs.
It seems like the most successful AI business will be one in which the model learns about you from your online habits and presence before presenting answers.
Exactly. This is super evident when you start asking more complex questions in CS, and when asking for intermediate-level code examples.
Also the same for asking about apps/tools. Unless it is a super well-known app like Trello, which has been documented and written about to death, the LLM will give you all kinds of features for a product that it doesn't actually have.
It doesn’t take long to realize that half the time all these LLMs just give you text for the sake of giving it.
Asking LLMs for imaginary facts is the wrong thing here, not the hallucination of the LLMs.
LLMs have constraints: computation power and model size. Just as a human would get overwhelmed if you asked for too much with vague instructions, LLMs also get overwhelmed.
We need to learn how to write efficient prompts to use LLMs. If you don't understand the matter well enough to provide sufficient context, the LLM hallucinates.
Currently, criticising LLMs for hallucinations by asking factual questions is akin to saying "I tried to divide by zero on my calculator and it doesn't work." LLMs were not designed to provide factual information without context; they are thinking machines that excel at higher-level intellectual work.
No. From my experience, many people think that AI is an infallible assistant, and some are even saying that we should replace any and all tools with LLMs and be done with it.
The art part is actually pretty nice, because everyone can see directly whether the generated art fits their taste, and the back-and-forth with the bot to get what you want is actually pretty funny.
It gets frustrating sometimes, but overall it's decent as a creative activity, and people don't expect art to be knowledge.
Yes, calling an LLM "AI" was the first HUGE mistake.
A statistical model that can guess the next word is in no way "intelligent", and Sam Altman himself agrees this is not a path to AGI (what we used to call just "AI").
Please define the word "intelligent" in a way accepted by doctors, scientists, and other professionals before engaging in hyperbole, or you're just as bad as the "AGI is already here" people. Intelligence is a gradient of problem-solving ability, and our software is creeping up that gradient in its capabilities.
No, AI also needs to fail in similar ways to humans. A system that makes 0.001% errors that are all totally random and uncorrelated will behave very differently in production from a system that makes 0.001% errors systematically and consistently (random errors are generally preferable).
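A toy sketch of that difference (the two systems and the retry wrapper are hypothetical, purely for illustration): two systems with the same headline error rate, one failing at random and one failing on a fixed slice of inputs. Simple retries wash out the uncorrelated errors but do nothing against the systematic ones.

```python
import random

random.seed(42)
N = 100_000
RATE = 0.001  # 0.1% error rate for both systems

def random_system(x):
    """Fails on ~0.1% of calls, uniformly at random."""
    return None if random.random() < RATE else x

def systematic_system(x):
    """Fails on a fixed 0.1% slice of inputs, every single time."""
    return None if x % 1000 == 0 else x

def with_retry(fn, x, attempts=3):
    """Retry a flaky call a few times before giving up."""
    for _ in range(attempts):
        out = fn(x)
        if out is not None:
            return out
    return None

rand_failures = sum(with_retry(random_system, x) is None for x in range(N))
sys_failures = sum(with_retry(systematic_system, x) is None for x in range(N))

# Retries drive uncorrelated failures toward zero (0.001^3 per input)...
print("random, after retries:", rand_failures)
# ...but the systematic system fails the same 100 inputs on every attempt.
print("systematic, after retries:", sys_failures)
```

The point is not the exact numbers but that two systems with identical aggregate error rates need very different operational handling.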
llamaimperative|1 year ago
I don’t think defeatism is helpful (or correct).
duxup|1 year ago
Every AI tool I use comes with a big warning.
The internet is full of lies and I still use it.
drewcoo|1 year ago
Well snake oil sells. And the margins are great!