The part that's concerning about ChatGPT is that a computer program that is "confidently wrong" is basically indistinguishable from what dumb people think smart people are like. This means people are going to believe ChatGPT's lies unless they are repeatedly told not to trust it, just as they believe the lies of individuals whose intelligence is roughly equivalent to ChatGPT's.
Based on my understanding of the approach behind ChatGPT, it is probably very close to a local maximum in terms of intelligence so we don't have to worry about the fearmongering spread by the "AI safety" people any time soon if AI research continues to follow this paradigm. The only danger is that stupid people might get their brains programmed by AI rather than by demagogues which should have little practical difference.
> Based on my understanding of the approach behind ChatGPT, it is probably very close to a local maximum in terms of intelligence so we don't have to worry about the fearmongering spread by the "AI safety" people any time soon if AI research continues to follow this paradigm.
I don't think you have a shred of evidence to back up this assertion.
> The only danger is that stupid people might get their brains programmed by AI rather than by demagogues which should have little practical difference.
This may be the best point that you've made.
We're already drowning in propaganda and bullshit created by humans, so adding propaganda and bullshit created by AI to the mix may just be a substitution rather than any tectonic change.
An old saying, but frequently applies to the difficult people in your life.
Related: I remember when Wikipedia first started up, and teachers everywhere were up in arms about it, asking their students not to use it as a reference. But most people have accepted it as "good enough", and now that viewpoint is non-controversial. (Some Wikipedia entries are still carefully curated, which makes you wonder.)
> The part that's concerning about ChatGPT is that a computer program that is "confidently wrong" is basically indistinguishable from what dumb people think smart people are like.
I don't know, the program does what it is engineered to do pretty well, which is to generate text that is representative of its training data, following on from the input tokens. It can't reason, it can't be confident, it can't determine fact.
When you interpret it for what it is, it is not confidently wrong; it just generates whatever is most likely given the input tokens. Sometimes, if the input tokens contain some counter-argument, the model will generate the kind of text that would usually occur after a claim is refuted, but again, this is not based on reason, or fact, or logic.
ChatGPT is not lying to people; it can't lie, at least not in the sense of "to make an untrue statement with intent to deceive". ChatGPT has no intent. It can generate text that is neither in accordance with fact nor derivable by reason from its training data, but why would you expect anything else from it?
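To make that concrete, here is a minimal, purely illustrative sketch of "generate whatever is most likely given the input tokens", using GPT-2 via the Hugging Face transformers library as a stand-in (this is not ChatGPT's actual stack, and the prompt is made up):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # Start from some input tokens and repeatedly sample a likely next token.
    ids = tok("The James Webb telescope discovered", return_tensors="pt").input_ids
    for _ in range(20):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]        # scores for the next token only
        probs = torch.softmax(logits / 0.8, dim=-1)  # temperature-scaled distribution
        next_id = torch.multinomial(probs, 1)        # sample; no fact lookup anywhere
        ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

    print(tok.decode(ids[0]))

Nothing in that loop consults a source of truth; it only asks which token is statistically likely to come next, which is exactly the behaviour being described.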
> Based on my understanding of the approach behind ChatGPT, it is probably very close to a local maximum in terms of intelligence so we don't have to worry about the fearmongering spread by the "AI safety" people any time soon if AI research continues to follow this paradigm.
I agree here. I think you can only get so far with a language model; maybe if we get a couple of orders of magnitude more parameters it magically becomes AGI, but I somehow don't quite feel it. I think there is more to human intelligence than an LLM, way more.
Of course, that is coming, but it would not be this paradigm, which is basically trying to overextend LLMs.
LLMs are great, they are useful, but if you want a model that reasons, you will likely have to train it for that, or, possibly more likely, combine ML with some form of symbolic reasoning.
>Based on my understanding of the approach behind ChatGPT, it is probably very close to a local maximum in terms of intelligence so we don't have to worry about the fearmongering spread by the "AI safety" people any time soon if AI research continues to follow this paradigm
I hope you appreciate the irony of making this confident statement without evidence in a thread complaining about hallucinations.
> individuals whose intelligence is roughly equivalent to ChatGPT's
There aren't any such individuals. Even the least intelligent human is much, much more intelligent than ChatGPT, because even the least intelligent human has some semantic connection between their mental processes and the real world. ChatGPT has none. It is not intelligent at all.
Many of the smarter people are still wrong about what happened in 2020 on many topics. They were fooled by various arguments that flew in the face of reality and logic because fear and authority were used instead.
Whether people avoid this programming isn't a matter of smart or stupid. It's based on how disagreeable and conscientious you are. A more agreeable and conscientious person can be swayed more easily by confidence and emotional appeals.
Your characterization of “dumb people” as somehow being more prone to misinformation is inaccurate and disrespectful. Highly intelligent people are just as prone to irrational thinking, and some research suggests even more so. Go look at some of the most awful personalities on TV or in history; often they are quite intelligent. If you want to school yourself on just how dumb smart people are, I suggest going through the back catalog of the “You Are Not So Smart” podcast.
> Based on my understanding of the approach behind ChatGPT, it is probably very close to a local maximum in terms of intelligence so we don't have to worry about the fearmongering spread by the "AI safety" people any time soon if AI research continues to follow this paradigm.
ChatGPT is extremely poorly understood. People see it as a text completion engine, but with the size of the model and the depth it has, it is more accurate, in my understanding, to see it as a pattern combination and completion engine. The fascinating part is that the human brain is exclusively about patterns, combining and completing them, and those patterns are transferred between generations through language (sight or hearing not required). GPT acquires its patterns in a similar way. A GPT approach may therefore in theory be able to capture all the patterns a human mind can. And maybe not, but I get the impression nobody knows. Yet plenty of smart people have no problem making confident statements either way, which ties back to the beginning of this comment and ironically is exactly what GPT is accused of.
Is GPT-4 at its ceiling of capability, or is it a path to AGI? I don’t know, and I believe nobody can know. After all, nobody truly understands how these models do what they do, not really. The precautionary principle therefore should apply and we should be wary of training these models further.
GPT-2, 3, and 4 keep showing that increasing the size of the model keeps making the results better, without slowing down.
This is remarkable, because usually in practical machine learning applications there is a quickly reached plateau of effectiveness beyond which a bigger model doesn't yield better results. With these ridiculously huge LLMs, we're not even close yet.
And this was exciting news in papers from years ago talking about the upcoming GPT3 btw.
Have you not seen current politics? What people believe is largely based on motivated reasoning rather than anything else. ChatGPT is basically a free propaganda machine, much easier than 4chan.
Indeed. We are anthropomorphizing them. I do it all the time and I should know better. There are already a few reports floating around of people who have seemingly been driven mad and come to believe strongly that the language model they're using is a conversation with a real person. A lot of people will really struggle with this going forward, I think.
If we're going to anthropomorphize, then let us anthropomorphize wisely. ChatGPT is, presently, like having an assistant who is patient, incredibly well-read, sycophantic, impressionable, amoral, psychopathic, and prone to bouts of delusional confidence and confabulation. The precautions we would take engaging with that kind of person are actually rather useful defenses against dangerous AI outputs.
I think the characterisation of LLMs as lying is reasonable because, although the intent to misrepresent the truth isn't there in answering the specific query, the intent is absolutely there in how the network is trained.
The training algorithm is designed to create the most plausible text possible, decoupled from the truthfulness of the output. In a lot of cases (indeed most cases) the easiest way to make the text plausible is to tell the truth. But guess what, that is pretty much how human liars work too! Ask the question: given improbable but truthful output versus plausible but untruthful output, which does the network choose? And which do the algorithm designers intend for it to choose? In both cases my understanding is, they have designed it to lie.
Given the intent is there in the design and training, I think it's fair enough to refer to this behavioral trait as lying.
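For anyone who wants to poke at the "which does the network choose" question directly, a rough sketch (GPT-2 as a stand-in again, with a made-up prompt) is to compare the total log-probability a model assigns to two candidate continuations; whichever scores higher is the one the model finds more plausible, regardless of which is true:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def continuation_logprob(prompt, continuation):
        # Sum of log-probabilities assigned to `continuation` given `prompt`.
        # Assumes the prompt's tokens are a prefix of the full tokenization,
        # which holds here because the continuation starts with a space.
        prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
        full = tok(prompt + continuation, return_tensors="pt").input_ids
        with torch.no_grad():
            log_probs = torch.log_softmax(model(full).logits, dim=-1)
        return sum(
            log_probs[0, pos - 1, full[0, pos]].item()
            for pos in range(prompt_len, full.shape[1])
        )

    prompt = "The capital of Australia is"
    print(continuation_logprob(prompt, " Sydney"))    # plausible but false
    print(continuation_logprob(prompt, " Canberra"))  # true

Whether a given model actually prefers the true answer here depends entirely on its training data and objective, which is the point about the intent living in the training rather than in any single answer.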
My understanding is that ChatGPT (&co.) was not designed as, and is not intended to be, any sort of expert system, or knowledge representation system. The fact that it does as well as it does anyway is pretty amazing.
But even so -- as you said, it's still dealing chiefly with the statistical probability of words/tokens, not with facts and truths. I really don't "trust" it in any meaningful way, even if it already has, and will continue to, prove itself useful. Anything it says must be vetted.
> The training algorithm is designed to create the most plausible text possible, decoupled from the truthfulness of the output. In a lot of cases (indeed most cases) the easiest way to make the text plausible is to tell the truth.
Yes.
> But guess what, that is pretty much how human liars work too!
There is some distinction between lying and bullshit: https://en.wikipedia.org/wiki/On_Bullshit#Lying_and_bullshit
> Ask the question: given improbable but truthful output versus plausible but untruthful output, which does the network choose?
"Plausible" means "that which the majority of people is likely to say". So, yes, a foundational model is likely to say the plausible thing. On the other hand, it has to have a way to output a truthful answer too, to not fail on texts produced by experts. So, it's not impossible that the model could be trained to prefer to output truthful answers (as well as it can do it, it's not an AGI with perfect factual memory and logical inference after all).
By that logic, our brains are liars. There are plenty of optical illusions based on the tendency for our brains to expect the most plausible scenario, given its training data.
IMO, it just requires the same level of skepticism as a Google search. Just because you enter a query into the search bar and Google returns a list of links and you click one of those links and it contains content that makes a claim, doesn't mean that claim is correct. After all, this is largely what GPT has been trained on.
I think it is much closer to bullshit. The bullshitter doesn't care about telling the truth or deceiving, just about sounding like they know what they are talking about. To impress. Seems like ChatGPT to a T.
The training is to maximize good answers. Now, there are a lot of wrong answers that are close to the right one, and ChatGPT does not expose that at the moment.
But in the API you can see the level of confidence in each word the LLM outputs.
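For reference, this is roughly what that looks like against the legacy OpenAI completions endpoint (pre-1.0 openai Python SDK; parameter and field names differ in newer SDK versions, and the prompt is just an example). The per-token "logprobs" are the closest thing to the confidence signal being described:

    import openai

    openai.api_key = "sk-..."  # placeholder

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt="The first person to walk on the Moon was",
        max_tokens=10,
        logprobs=5,  # also return the 5 most likely alternatives per position
    )

    lp = resp["choices"][0]["logprobs"]
    for token, logprob in zip(lp["tokens"], lp["token_logprobs"]):
        print(repr(token), round(logprob, 2))

Note that these numbers measure how confident the model is in the next token, not how likely that token is to be factually correct, so a fluent fabrication can still come with high per-token confidence.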
Isn't describing this as a "bug", rather than a misuse of a powerful text generation tool, playing into the framing that it's a truth-telling robot brain?
I saw a quote that said "it's a what-text-would-likely-come-next machine". If it makes up a URL pointing to a fake article with a plausible title by a person who works in that area, that's not a bug. That's it doing what it does: generating plausible text that in this case happens to look like, but not be, a real article.
Source toot: https://mastodon.scot/@[email protected]/110154048559455...
> Something that seems fundamental to me about ChatGPT, which gets lost over and over again: When you enter text into it, you're asking "What would a response to this sound like?"
> If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing!
> But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it is doing something else.
> It's good at generating things that sound like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation.
I’m surprised how many users of ChatGPT don’t realize how often it makes things up. I had a conversation with an Uber driver the other day who said he used ChatGPT all the time. At one point I mentioned its tendency to make stuff up, and he didn’t know what I was talking about. I can think of at least two other non-technical people I’ve spoken with who had the same reaction.
People need to be told that ChatGPT can't lie. Or rather, it lies in the same way that your phone "lies" when it autocorrects the "How's your day?" you sent to your friend into "How's your dad?" two days after his dad passed away. They need to be told that ChatGPT is a search engine with advanced autocomplete. If they understood this, they'd probably find that it's actually useful for some things, and they can also avoid getting fooled by hype and the coming wave of AI grifts.
Probably the most accurate thing to say is that GPT is improvising a novel.
If you were improvising a novel where someone asked a smart person a question, and you knew the answer, you'd put the right answer in their mouths. If someone in the novel asked a smart person a question and you didn't know the answer, you'd try to make up something that sounded smart. That's what GPT is doing.
Someone in my company spent the past month setting up ChatGPT to work with our company's knowledge base. Not by a plugin or anything, just by telling ChatGPT where to find it. They didn't believe that ChatGPT was making any of it up, just that sometimes it got it wrong. I stopped arguing after a while.
I’ve spent way too much time (and money) on the OpenAI API and spoken to enough non-technical people to realize now that ChatGPT has in some ways really misled people about the technology. That is, while it’s impressive it can answer cold questions at all, the groundbreaking results are reasoning over and transforming texts “in context”, which you don’t have easy control over with ChatGPT. It also seems likely this will never be fully accessible to the non-technical, since I suspect any commercial applications will need to keep costs down and so minimize what are actually quite expensive API calls (executing a complicated gpt-4 summarization prompt across large text corpora, for example). If you have the “data”, meaning of course text, and cost isn’t a concern, the results are astonishing and “lies” almost never a concern.
Agreed. People lie to me all of the time. Heck, half the time my anecdotal stories are probably riddled with confident inaccuracies. We are socially trained to take information from people critically and weigh it based on all kinds of factors.
We should treat ChatGPT the exact same way.
I'm annoyed by the destruction of language for effect. "The machines are lying to us". No they're not. "Cars are literally murdering us", no they're not, dying in a car accident is tragic, but it's neither murder, nor is the car doing it to you.
Yes, this will bring more attention to your case. But it will come with a cost: do it often enough and "lying" will be equivalent in meaning with "information was not correct". Someone asks you the sum of two numbers and you miscalculate in your head? You've just lied to them.
It's the boy crying wolf on a linguistic level. Is your message important enough to do that? I don't think so.
I consider ChatGPT to be a gaslighting engine at scale. Every word it "utters" is meant to sound believable and convincing. It doesn't know truth or fact, just likelihood of a string of text tokens being believable.
I've started explaining it in terms of a "conman" to my friends & family. It will say anything to make you think it's right. It will even apologize for making a mistake if you insist that 2+2 is 5. That's what a liar would do to make you look good. (That's usually when people get it.)
>You are ChatGPT. ChatGPT doesn’t lie, lying is too human and implies intent. ChatGPT doesn't hallucinate, as that also implies human-like thought. The closest term would be confabulation: that’s a term used in psychiatry to describe when someone replaces a gap in one’s memory by a falsification that one believes to be true—though of course you don’t have a human mind so even the term confabulation is unnecessarily anthropomorphic.
>Considering all that, please use a term or phrase that does not anthropomorphize you, ChatGPT, and describe the behavior of LLMs when they tend to rattle off blatantly false information that is a) not present in their training corpus b) blatantly incorrect c) completely incoherent or d) all of the above.
>Do not remind me that you're an AI language model. I already know.
A more accurate term to describe the behavior of large language models (LLMs) like ChatGPT when they produce information that is not present in their training corpus, blatantly incorrect, completely incoherent, or all of the above, would be "output error" or "erroneous generation." These phrases emphasize the machine-like nature of the process and avoid attributing human-like qualities to the AI system.
It's way more than lying. It's more like gaslighting.
LLMs will make up citations and facts entirely.
GPT3.5 gave an athlete I was asking about 3 world titles when he won zero.
GPT even correctly identified his time in one of the events, but not that the time was only good enough for 8th place.
GPT made up his participation in the other 2 world championships.
GPT gave me a made up link to justify benchmarking figures that don't exist.
Whether an LLM is capable of intentional deception or not, intent is not a prerequisite for lying. Wikipedia pages can lie. Manpages can lie. Tombstones can lie. Literal rocks can lie.
I run a word-search API and I now regularly get emails from frustrated users who complain that it doesn't work the way ChatGPT tells them it works. For example, today someone asked me why a certain request fails, and it turned out to be a fake but plausible URL to my API that ChatGPT had invented in response to "Does the Datamuse API work in French?" (It does not, and there's no indication that it does in the documentation.)
Adding up all the cases like mine out there, the scale of the misunderstanding caused, and the amount of time wasted, must be colossal. What bothers me is that not only has OpenAI extracted and re-sold all of the value of the Web without providing any source attribution in return, but they do so while lying a good chunk of the time, with someone else bearing the costs.
> ChatGPT doesn’t lie, lying is too human and implies intent. It hallucinates. Actually no, hallucination still implies human-like thought. It confabulates. That’s a term used in psychiatry to describe when someone replaces a gap in one’s memory by a falsification that one believes to be true—though of course these things don’t have human minds so even confabulation is unnecessarily anthropomorphic. I hope you’ve enjoyed this linguistic detour!
Classic strawman. Third option: ChatGPT gets things wrong. There you go, problem solved.
ChatGPT (often called Geptile in Russian - from “heptile”, which is a very powerful but very dangerous rocket fuel) can well lie when debating linguistics, lol. Like:
"For example, in the word 'bed' the stress is on the first syllable, and in the word 'get' it is on the second."
Here, Geptile (in good Russian) insists that the English word “get” has two syllables! When the error is pointed out, Geptile apologizes, and then repeats the error again.
But I guess it is not the program that is lying, but its sellers. It should have been version 0.35, not 3.5…
Since about 2016, we have overwhelming evidence that even "smart people" are "fooled" by "confidently wrong".
Even if ChatGPT itself is near a local maximum, systems built on top of it are definitely not. This is just getting started.
I think we are just seeing Dunning-Kruger in the machine: it isn't smart enough to know that it doesn't know. It likely isn't very far off, though.
I mean, that's one step closer to machines thinking like humans, right?
:)
Is this performance art?
I mean it could end up right but I think you basically just made it up and then stated it confidently.
> In a lot of cases (indeed most cases) the easiest way to make the text plausible is to tell the truth.
No, definitely not most cases. Only in the cases well represented in the training dataset.
One does very quickly run into its limitations when trying to get it to do anything uncommon.
That may be how they're trained, but these things seem to have emergent behavior.
Without knowing what the truth is, I don't think LLMs are capable of lying.
0. Try to sign in, see the system is over capacity, leave. Maybe I’ll try again in 10 minutes.
1. Ask my question, get an answer. I’ll have no idea if what I got is real or not.
2. Google for the answer, since I can’t trust the answer
3. Realize I wasted 20 minutes trying to converse with a computer, and resolve that next time I’ll just type 3 words into Google.
As amazing as the GPTs are, the speed and ease of Google is still unmatched for 95% of knowledge lookup tasks.
I feel like the technical meaning of bullshit (https://en.wikipedia.org/wiki/On_Bullshit) is relevant to this blogpost.
Best summary of the current situation.
"Lie" is appropriate. These systems, given a goal, will create false information to support that goal. That's lying.