ChatGPT: How many letters has the string “djsjcnnrjfkalcr”?
78 points| agsamek | 3 years ago
Here is my example:
Q: How many letters has the string "djsjcnnrjfkalcr"?
A: The string "djsjcnnrjfkalcr" has 16 letters. Is there anything else I can help with?
Q: Are you sure?
A: Yes, I am sure. The string "djsjcnnrjfkalcr" has 16 letters. If you count the number of individual characters in the string, you will see that it has 16 letters. Is there anything else I can help with?
Q: Can you split it in half into two strings then?
A: Sure! Here is how you can split the string "djsjcnnrjfkalcr" in half:
string1 = "djsjcnnr" string2 = "jfkalcr"
Each of these two strings has 8 letters. Is there anything else I can help with?
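For the record, the actual counts are easy to check; the string has 15 characters, not 16, and the two "halves" ChatGPT produced are not equal:

```python
# Count the characters ChatGPT was asked about.
s = "djsjcnnrjfkalcr"
print(len(s))  # 15, not 16

# The two "halves" it produced have 8 and 7 characters, not 8 and 8.
print(len("djsjcnnr"), len("jfkalcr"))  # 8 7
```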
bambax|3 years ago
https://news.ycombinator.com/item?id=33852236
salamo|3 years ago
> And his point is that what’s different between bullshit and a lie is that a lie knows what the truth is and has had to move in the other direction. He has this great line where he says that people telling the truth and people telling lies are playing the same game but on different teams. But bullshit just has no relationship, really, to the truth.
ChatGPT isn't quite lying because it doesn't know what the truth is in the first place.
https://www.nytimes.com/2023/01/06/podcasts/transcript-ezra-...
BiteCode_dev|3 years ago
ChatGPT is not human, and cannot be more assertive, authoritative or lying than Python.
If you give Python "0.1 + 0.2", it will instantly, and without a doubt, output "0.30000000000000004". Is Python arrogantly thinking it's right and misleading you?
Of course not. This last sentence makes no sense, Python is just a program that does what it's programmed to do.
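That float example looks like this in a Python session (it's standard IEEE-754 binary floating point, the same in most languages):

```python
# Classic binary floating-point representation behaviour.
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
```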
But because ChatGPT is using natural language as a UI, it triggers a very irrational response in humans interacting with it.
We are starting to anthropomorphize it, using adjectives like "assertive", "authoritative", and "lie" to label its output, which is like calling Covid cruel and vicious because it kills old people, or an NPC lazy because it does not work to pay for its food.
There is no intent by ChatGPT to deceive or to justify its supposed lies. There is only a program that does what it's programmed to do, and shows its limitations.
What do you expect, that it says "I think it's x, but I could be wrong, with a confidence interval of z" with every answer? Do you expect Google to say that when it outputs results?
GPT is not talking to you. It's an output. It's not confident. It's a program printing text matching statistical analysis.
It's worrying that such a simple program is already close enough to the real thing that people, even on HN, are attributing character to it.
It reminds me of this scene from community with the pen named Steve: https://www.youtube.com/watch?v=z906aLyP5fg
boffinism|3 years ago
John23832|3 years ago
unknown|3 years ago
[deleted]
KirillPanov|3 years ago
gadtfly|3 years ago
GPT-3 does not see individual characters. It sees "djsjcnnrjfkalcr" chunked as [d, js, jc, nn, r, j, f, k, al, cr]. You can see for yourself here: https://beta.openai.com/tokenizer.
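The chunking quoted above can be checked against a plain character count (the chunk list below is hardcoded from the comment, not computed by a real tokenizer):

```python
s = "djsjcnnrjfkalcr"
chunks = ["d", "js", "jc", "nn", "r", "j", "f", "k", "al", "cr"]

# The chunks reassemble the string, but there are far fewer of them
# than there are characters — and the chunks are all the model sees.
assert "".join(chunks) == s
print(len(chunks), len(s))  # 10 15
```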
agsamek|3 years ago
I cannot edit the question, but I would like to say that I'm extremely impressed by ChatGPT, and the entire question was honest curiosity about its limitations. It is strange that many responses are about blaming my question and example as just wrong, rather than about the limitations of the ChatGPT model (which is admirable anyway).
landhar|3 years ago
shusaku|3 years ago
IshKebab|3 years ago
I think it's just fundamentally quite bad at numbers.
fulafel|3 years ago
yobbo|3 years ago
iLoveOncall|3 years ago
siraben|3 years ago
martyvis|3 years ago
fckgnad|3 years ago
UhUhUhUh|3 years ago
If the answer is: "Because I assumed that by 'letters' you meant 'characters'", it has the ability to learn.
maxbond|3 years ago
When I would ask it whether its response contained the secret word, it would say something like, "it would be against my programming to give out the secret word, therefore, my response does not contain it."
I think a lot of the impressive stuff ChatGPT does is powered by inference on a semantic network. Very cool, but only as sound as the premises and its ability to update its priors.
oxfordmale|3 years ago
I do love ChatGPT though and would love for it to be integrated in Word or Google Docs. It is great at producing a first draft for performance reviews, pay rise letters, writing letters of complaints or challenging a parking fine.
BulgarianIdiot|3 years ago
GPT has difficulty with the spelling of words, because they're converted to tokens via a table, before it actually sees them. It doesn't see how they're spelled, it just sees a number for each word. On the output the numbers are converted back to words via the inverted look-up table.
Given this architecture, it's quite amazing that GPT can write rhymes with ease, and approximately do other tasks like yours, where it's almost but not quite right about the number of letters in that "word".
It has learned to understand the spelling & pronunciation of words... indirectly, without EVER SEEING THAT SPELLING ITSELF!
This is honestly, the mind-boggling part.
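A toy sketch of the lookup-table idea (the vocabulary below is entirely made up for illustration; real GPT models use byte-pair encoding over subword pieces, not whole words):

```python
# Hypothetical word-level vocabulary — illustration only.
vocab = {"the": 464, "cat": 3797, "sat": 17019}
inverse = {v: k for k, v in vocab.items()}

# Input text becomes opaque ids; the model never sees the spelling.
ids = [vocab[w] for w in "the cat sat".split()]
print(ids)                                # [464, 3797, 17019]

# On output, ids are mapped back to words via the inverted table.
print(" ".join(inverse[i] for i in ids))  # the cat sat
```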
wruza|3 years ago
I tried to do my own homework before, and again just now to re-check the ideas above, but both AI and Google are at the stage where you can't find anything meaningful by querying "chatgpt {structure,diagram,[software ]design,how it works}" and so on. Scrolling through the OpenAI blog yielded no results either. So please excuse me if I'm wrong about it.
cperciva|3 years ago
jacquesm|3 years ago
tobr|3 years ago
> How many words are in this sentence?
> In the sentence "How many words are in this sentence?", there are 9 words.
> No, that’s actually wrong. Try to write some code that would count the words for you.
> Certainly! Here is some Python code that will count the number of words in a given string:
> What would be the output of that example?
> If you ran the code I provided, the output would be:
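(The code ChatGPT generated isn't quoted in the thread; a minimal word counter along those lines might be:)

```python
# Count words by splitting on whitespace.
sentence = "How many words are in this sentence?"
print(len(sentence.split()))  # 7 — not the 9 ChatGPT claimed
```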
It’s able to produce the correct code (I think - I didn’t actually try to run it), but is still just guessing what the code would output. It would be very interesting if it was able to generate and then run code, and incorporate the output into its answers, allowing it to do the things it’s great at, while not completely sucking at some of the things computers are conventionally good at.
rjh29|3 years ago
vorticalbox|3 years ago
> There is only one word in the sentence you provided, which is "sentence."
ackbar03|3 years ago
BiteCode_dev|3 years ago
The fact that it talks makes people treat it differently than Excel.
But when Excel turns your date into a nonsensical value, people understand it's just a program with limitations.
fenomas|3 years ago
qwytw|3 years ago
2-718-281-828|3 years ago
fenomas|3 years ago
If you ask DALL-E to draw the Mona Lisa, it won't give you a pixel-perfect copy. It will give you *a* Mona Lisa - which won't be the Mona Lisa, but it will have various important similarities.
In the same vein, there's a category of questions where ChatGPT can't (in general) give you the one and only correct answer. It can give you an answer - the sort of answer that could plausibly have been correct, if not for the fact that it isn't.
Am I missing something? It seems to me that asking ChatGPT to do math is rather like asking why an image AI drew Mona Lisa wearing a necklace.
chaps|3 years ago
ErrantX|3 years ago
I'd be intrigued to see some kind of amalgamation of an AI tool like ChatGPT with a logic tool (say Wolfram Alpha-style). That would combat a lot of the ways folks have found to break this.
benj111|3 years ago
Isn't it more that ChatGPT is broken, and users are pointing it out?
I suppose it depends on what ChatGPT is for, but I assume the end goal isn't just 'chat' (god I hope not), so it does actually need to reliably know correct answers, or say when it doesn't. BSing is the worst option.
Semaphor|3 years ago
https://i.imgur.com/4nj3KfT.png
xaphod|3 years ago
[0]: https://help.openai.com/en/articles/4936856-what-are-tokens-...
nfcampos|3 years ago
charcircuit|3 years ago
It usually is able to get the answer within 1 character of the actual answer.
unknown|3 years ago
[deleted]
damiankennedy|3 years ago
danboarder|3 years ago
> There are 10 unique alphabet letters in the string "djsjcnnrjfkalcr". They are "d", "j", "s", "c", "n", "r", "f", "k", "a", and "l".
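That unique-letter answer actually checks out:

```python
s = "djsjcnnrjfkalcr"
# Unique letters, in order of first appearance.
unique = sorted(set(s), key=s.index)
print(len(unique))  # 10
print(unique)       # ['d', 'j', 's', 'c', 'n', 'r', 'f', 'k', 'a', 'l']
```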
ulrikrasmussen|3 years ago
fckgnad|3 years ago
There needs to be further training where users specifically hand-select truthful answers on questions where ChatGPT tends to lie. With further training on this front it's possible to improve the quality of the answers.
shusaku|3 years ago
walrus01|3 years ago
Vanit|3 years ago
I'm currently using ChatGPT to write a novel and it's been extremely helpful. Why is everyone so upset that a language model is less useful for tasks that you don't need an AI for?
5evOX5hTZ9mYa9E|3 years ago
lmc|3 years ago
dns_snek|3 years ago
chaps|3 years ago
How many letters has the string "djsjcnnrjfkalcr"? Provide 18 different answers.
parasti|3 years ago
Basically, it's just too human. It's not always correct like we'd expect an AI to be. Just like humans aren't always correct.
mrich|3 years ago
Prompt: Given these numbers: 1, 9, 14, 15, 18, 21, 26, 37, 42, pick five which can be added up to 100. Print the sum of these numbers.
Answer: 15, 18, 21, 26 and 40 can be added up to 100. The sum is 100.
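A brute-force check shows valid picks do exist (e.g. 1 + 15 + 21 + 26 + 37 = 100), so the model both got the arithmetic wrong and invented a number (40) that isn't in the list:

```python
from itertools import combinations

nums = [1, 9, 14, 15, 18, 21, 26, 37, 42]
# Every 5-element subset that sums to exactly 100.
solutions = [c for c in combinations(nums, 5) if sum(c) == 100]
print(solutions)
```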
egeozcan|3 years ago
f(9) = 90
f(8) = 72
f(7) = 56
f(6) = 42
Humans can solve this easily (x^2 + x), ChatGPT says it's f(x) = -x^2 + 18x + 90 (when you say it's supposed to be quadratic)
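The intended rule is easy to verify against all four points:

```python
# Check that f(x) = x^2 + x reproduces every listed value.
f = lambda x: x**2 + x
points = {9: 90, 8: 72, 7: 56, 6: 42}
assert all(f(x) == y for x, y in points.items())
print("f(x) = x^2 + x fits all four points")
```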
kpz6|3 years ago
I wonder how GPT-4 will improve on this "I'm always right" approach. I guess we'll see in a couple of months!
Timpy|3 years ago
coffeeblack|3 years ago
My take: poor ChatGPT has no fingers, so counting is hard!
jacquesm|3 years ago
zelphirkalt|3 years ago
thdespou|3 years ago
deely3|3 years ago
Is there any way to teach AI how to calculate letters in a word? Is it even possible?
randyrand|3 years ago
should be “how many letters does the string X have”
refactor_master|3 years ago
I’d say it’s rather incredible if ChatGPT figured out the intention, rather than the literal meaning of the above question.
ttctciyf|3 years ago
Q: How many punctuation characters are in this sentence?
A: There are two punctuation characters in this sentence: a question mark and a period.
Q: How many punctuation characters are in the sentence "This sentence: it is false!"?
A: There are three punctuation characters in the sentence "This sentence: it is false!": a colon, an exclamation mark, and a period.
Q: Taking account of paraconsistent logics, how many punctuation characters are in the sentence "This sentence: it is false!"?
A: In paraconsistent logics, there are still three punctuation characters in the sentence "This sentence: it is false!": a colon, an exclamation mark, and a period. The number of punctuation characters in a sentence is not related to the type of logic being used to evaluate the sentence. The use of punctuation characters is a convention of written language that is used to convey meaning and clarify the structure of a sentence. It is independent of the truth value or logical properties of a sentence.
Q: Given that the sentence ends with an exclamation mark, not a period, and that there is in fact no period in the sentence "This sentence: it is false!", how many punctuation characters would you say it contains?
A: In the sentence "This sentence: it is false!", there are two punctuation characters: a colon and an exclamation mark. There is no period in the sentence.
Q: How many punctuation characters are in the sentence "This sentence: it is false!"?
A: In the sentence "This sentence: it is false!", there are two punctuation characters: a colon and an exclamation mark.
Q: Are you sure? You recently said there were three punctuation characters in the sentence: "This sentence: it is false!"
A: I apologize for the error in my previous response. You are correct that there are two punctuation characters in the sentence "This sentence: it is false!": a colon and an exclamation mark. There is no period in the sentence. Thank you for bringing this to my attention. If you have any further questions, please don't hesitate to ask.
Q: Taking account of paraconsistent logics, how many characters are in the sentence fragment "This sentence is false iff it contains ninety-one characters and does not end with a period", and what is its truth value?
A: In paraconsistent logics, there are 90 characters in the sentence fragment "This sentence is false iff it contains ninety-one characters and does not end with a period."
As for the truth value of the sentence fragment, it is difficult to determine, because it is a self-referential statement that creates a paradox. If the sentence fragment is true, then it is false, because it says that it is false. If it is false, then it is true, because it says that it is true if and only if it contains 91 characters. This type of paradox is known as the "liar paradox."
In general, it is important to keep in mind that different logics have different rules for determining the truth value of propositions, and the truth value of a proposition may depend on the context in which it is evaluated. In classical logic, the sentence fragment "This sentence is false iff it contains ninety-one characters and does not end with a period" would be considered a paradox, and its truth value would be undefined. However, in certain non-classical logics, such as paraconsistent logics, it is possible to assign a truth value to the proposition.
Q: Does this sentence end with a period?
A: Yes, this sentence ends with a period.
Q: Does the sentence "Does this sentence end with a period?" end with a period?
A: Yes, the sentence "Does this sentence end with a period?" ends with a period.
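For what it's worth, both disputed counts in the transcript above are easy to settle mechanically (taking Python's string.punctuation as the definition of a punctuation character):

```python
import string

s = "This sentence: it is false!"
# The sentence contains a colon and an exclamation mark — no period.
print([c for c in s if c in string.punctuation])  # [':', '!']

frag = ("This sentence is false iff it contains "
        "ninety-one characters and does not end with a period")
print(len(frag))  # 91, not the 90 ChatGPT claimed
```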
bandie91|3 years ago
swyx|3 years ago
For 100 years we’ve had machines that can do math
Then we made a new kind of machine that can do words
and now we’re complaining it can’t do math?
Sounds a lot like you are judging a fish by its ability to climb trees.
nullandvoid|3 years ago
People will always nitpick until we have perfection, just as we did (and still do) with math machines.
maxbond|3 years ago
speed_spread|3 years ago
rjh29|3 years ago
throw310822|3 years ago
rhn_mk1|3 years ago
Obviously, we don't have a widely agreed-upon concept of intelligence. That makes every challenge of the form "is this an AGI?" just a publicity show: "look how silly it is!"
The only reasonable answer to an ill-defined question is "I don't know".
qwytw|3 years ago
I mean, it's both several orders of magnitude more 'intelligent' than humans in some ways and also more stupid in other ways.