Daveenjay|3 years ago
“Okay! Imagine that you have a big robot in your head that knows a lot about lots of different things. Sometimes, the robot might make mistakes or say things that aren't true. The proposed method is like a way to ask the robot questions and figure out what it knows, even if it says something that isn't true. We do this by looking inside the robot's head and finding patterns that make sense, like if we ask the robot if something is true and then ask if the opposite of that thing is true, the robot should say "yes" and then "no." Using this method, we can find out what the robot knows, even if it sometimes makes mistakes.”
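The idea described above — the probe's credence in a statement and in its negation should sum to one — can be sketched as a contrast-consistency loss over hidden states. This is a toy illustration, not the paper's actual implementation: `h_pos`/`h_neg` are synthetic stand-ins for the model's activations on a statement and its negation, and the linear probe and loss terms are assumptions for the sake of example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy hidden-state dimension

# Synthetic stand-ins for hidden states on "X is true" vs. "X is false".
h_pos = rng.normal(loc=2.0, size=(100, d))
h_neg = rng.normal(loc=-2.0, size=(100, d))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contrast_consistency_loss(w, b, h_pos, h_neg):
    """Score a linear probe (w, b) by how consistently it treats negations."""
    p_pos = sigmoid(h_pos @ w + b)  # probe's credence in the statement
    p_neg = sigmoid(h_neg @ w + b)  # probe's credence in its negation
    # Consistency: the "yes then no" constraint, p_pos ≈ 1 - p_neg.
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    # Confidence: penalize the degenerate probe that answers 0.5 to everything.
    confidence = np.minimum(p_pos, p_neg) ** 2
    return np.mean(consistency + confidence)
```

A probe aligned with the toy data's truth direction (here, the all-ones vector) scores near zero, while the uninformative zero probe pays the full confidence penalty of 0.25; searching for a low-loss probe is how one would "find patterns that make sense" without trusting the robot's stated answers.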
IIAOPSW|3 years ago
http://www2.csudh.edu/ccauthen/576f12/frankfurt__harry_-_on_...
The essence of bullshit is that it is different from a lie: a liar respects the fact that there is a truth, and knows what the truth is well enough to purposefully misrepresent it, whereas a bullshitter neither knows nor cares whether what they are saying corresponds to anything in reality, just so long as it makes the right impression.
>The point that troubles Wittgenstein is manifestly not that Pascal has made a mistake in her description of how she feels. Nor is it even that she has made a careless mistake. Her laxity, or her lack of care, is not a matter of having permitted an error to slip into her speech on account of some inadvertent or momentarily negligent lapse in the attention she was devoting to getting things right. The point is rather that, so far as Wittgenstein can see, Pascal offers a description of a certain state of affairs without genuinely submitting to the constraints which the endeavor to provide an accurate representation of reality imposes. Her fault is not that she fails to get things right, but that she is not even trying.
wolletd|3 years ago
It took me 10 minutes, and opening the Git log of Boost ("maybe they removed it?"), until I realized it had just made that up. The whole answer was consistent and convincing enough that I started searching, but it was just nonsense. It even provided a convincing amount of example code for its made-up function.
That experience was... insightful.
While we often say "If you need something in C++, Boost probably has it" and it's not untrue, ChatGPT seems to exercise that idea a little too much.
dr_dshiv|3 years ago
If you call bullshit, you have to say what was wrong, or at least what you think is wrong. Otherwise you are just insulting our new robot overlords.
Now, it seems you claim that consistency isn’t the same as making sense. But having more logically consistent robots seems like a big win! Otherwise I could criticize math papers for not making sense, even as I don’t doubt their consistency.
naasking|3 years ago
A consistent argument is an argument that makes sense to the robot, not necessarily one that makes sense to you.
psychphysic|3 years ago
ChatGPT works when you tell it what to convey and it just puts that into words.