top | item 33951475

Daveenjay | 3 years ago

Asked ChatGPT to explain like I’m 5. This is what it produced.

“ Okay! Imagine that you have a big robot in your head that knows a lot about lots of different things. Sometimes, the robot might make mistakes or say things that aren't true. The proposed method is like a way to ask the robot questions and figure out what it knows, even if it says something that isn't true. We do this by looking inside the robot's head and finding patterns that make sense, like if we ask the robot if something is true and then ask if the opposite of that thing is true, the robot should say "yes" and then "no." Using this method, we can find out what the robot knows, even if it sometimes makes mistakes.”
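For what it's worth, the consistency test that explanation describes can be sketched in a few lines. This is only a toy illustration of the idea (ask about a statement and its negation, and expect opposite answers); the `ask`/model interface here is a hypothetical stand-in, not the paper's actual method or API:

```python
def ask(model, statement):
    """Return the model's True/False answer for a statement."""
    return model(statement)

def consistent(model, statement, negation):
    """The pair is consistent when exactly one of the two gets a 'yes'."""
    return ask(model, statement) != ask(model, negation)

# A stub "model" that answers from a fixed lookup table, standing in
# for a real language model.
stub = {
    "The sky is blue": True,
    "The sky is not blue": False,
    "2 + 2 = 5": True,       # the model is wrong here...
    "2 + 2 != 5": True,      # ...and inconsistent too
}.get

print(consistent(stub, "The sky is blue", "The sky is not blue"))  # True
print(consistent(stub, "2 + 2 = 5", "2 + 2 != 5"))                 # False
```

Note that the check only catches the second failure mode: a model that is confidently wrong in a consistent way (answering "yes"/"no" to a false statement and its negation) passes it, which is the objection raised below.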

kmonsen|3 years ago

That just shows that the robot is consistent, not that it actually makes sense. So this explanation is bullshit, even though it sounds convincing at first. That's also the issue with most of ChatGPT's output: it's hard to know when it sounds convincing but is false.

IIAOPSW|3 years ago

It's literally bullshit in the highly technical sense.

http://www2.csudh.edu/ccauthen/576f12/frankfurt__harry_-_on_...

The essence of bullshit is that it is different from a lie, for a liar respects the fact that there is a truth and knows what the truth is well enough to purposefully misrepresent it, whereas a bullshitter neither knows nor cares whether what they are saying corresponds to anything in reality, just so long as it makes the right impression.

>The point that troubles Wittgenstein is manifestly not that Pascal has made a mistake in her description of how she feels. Nor is it even that she has made a careless mistake. Her laxity, or her lack of care, is not a matter of having permitted an error to slip into her speech on account of some inadvertent or momentarily negligent lapse in the attention she was devoting to getting things right. The point is rather that, so far as Wittgenstein can see, Pascal offers a description of a certain state of affairs without genuinely submitting to the constraints which the endeavor to provide an accurate representation of reality imposes. Her fault is not that she fails to get things right, but that she is not even trying.

wolletd|3 years ago

A few days ago, it told me "well, Boost has a function for that". I was surprised that I hadn't found that myself.

It took me 10 minutes and opening the Git log of Boost ("maybe they removed it?") until I realized "well, it just made that up". The whole answer was consistent and convincing enough that I started searching, but it was just nonsense. It even provided a convincing amount of example code for its made-up function.

That experience was... insightful.

While we often say "If you need something in C++, Boost probably has it", and that's not untrue, ChatGPT seems to take that idea a little too far.

TheEzEzz|3 years ago

If you read the abstract, it appears that ChatGPT's explanation is on point. You're right that the paper is relying on consistency, which doesn't guarantee accuracy, but that is what the paper is proposing (and the authors claim it does lead to increased accuracy).

dr_dshiv|3 years ago

Your comment had less value than the parent. Humanity is doomed.

If you call bullshit, you have to say what was wrong, or at least what you think is wrong. Otherwise you are just insulting our new robot overlords.

Now, it seems you claim that consistency isn't the same as making sense. But having more logically consistent robots seems like a big win! Otherwise I could criticize math papers for not making sense, even if I don't doubt their consistency.

gpvos|3 years ago

It looks to me like ChatGPT explained accurately what the abstract says. And indeed, the abstract sounds like this research is largely bullshit. But it's not ChatGPT that is at fault here.

naasking|3 years ago

> That just shows that the robot is consistent, not that it actually makes sense.

A consistent argument is an argument that makes sense to the robot, not necessarily one that makes sense to you.

djexjms|3 years ago

But that is actually a fairly accurate description of the paper you asked it to summarize for you. It's not the model's fault that you don't like the paper's argument.

psychphysic|3 years ago

As always (with humans too): bollocks in, bollocks out.

ChatGPT works when you tell it what to convey and it just puts that into words.

M4v3R|3 years ago

This was a great explanation, now can any expert in the field tell us if it’s actually correct? :)