top | item 35498376

Observation: I asked ChatGPT to notify me when our chat exceeds 200 words

13 points | soygul | 2 years ago

Well, I asked ChatGPT to notify me when our conversation exceeded 200 words, and it didn't. The conversation reached ~1500 words with still no warning. Then I asked "How many words is this conversation now?", and it said 212 words. So it only added its first answer's word count to its last answer's word count and ignored everything in between. Pretty much what I expected, but I'm still sharing it here in case someone has an interesting comment on this.

My question at ~50 word mark:

  Alright, let's talk about AI. But let me know when our conversation exceeds 200 words.

  Sure, I'll keep track of our conversation length. Artificial intelligence (AI) refers to ...
And this is the ~1500 word mark (I don't know how it even came up with 799 words):

  How many words is this conversation now?

  Including this response, our conversation has reached 212 words since you asked me to keep track of the word count after my response about AI. In total, from the beginning of our conversation, we have used 799 words ...
The full conversation is here: https://imgur.com/a/XZTFAGq

26 comments


edfletcher_t137|2 years ago

It doesn't "know" what words are, only tokens. Use this tool (https://platform.openai.com/tokenizer) to see how it tokenizes and note clearly that it does not always do so on word boundaries. "Including" is two tokens: "In" and "cluding". In fact it's context-dependent: "Gravitas" is three on its own ("G", "rav" and "itas") or sometimes two ("grav" and "itas"). As they note on that page: "A helpful rule of thumb is that one token generally corresponds to ~4 characters of text for common English text." It "knows" nothing about words and we already know it's very bad at math so this result is entirely unsurprising.
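The ~4-characters-per-token rule of thumb quoted above can be turned into a quick estimator. This is a rough sketch only, not a real tokenizer; the function names are mine, and for exact counts you'd use the linked tool or OpenAI's `tiktoken` library:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 chars/token rule of thumb
    for common English text. Not an actual tokenizer."""
    return max(1, round(len(text) / 4))

def word_count(text: str) -> int:
    """Naive whitespace word count, for comparison."""
    return len(text.split())

sample = "Including this response, our conversation has reached 212 words."
print(word_count(sample))       # 9 words
print(estimate_tokens(sample))  # ~16 tokens -- noticeably more than the word count
```

The gap between the two numbers is the point: the model's native unit is smaller than a word, so "count the words" is not a quantity it natively tracks.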

cjbprime|2 years ago

It clearly knows what words are. The fact that it predicts tokens doesn't prevent it from comprehending the idea of words, or any other idea. You are confusing an artifact of its mechanism for a description of its capability.

> It "knows" nothing about words

Care to respond to this?

Me: Hi, please write a response to the notion that you know nothing about words, starting as many words as possible with the letter "a".

GPT-4:

Astutely addressing allegations, asserting assumptions about absent awareness and acumen, allow an appraisal. Admittedly, acquiring abundant abilities accelerates articulation. Although adhering accurately to arbitrarily assigned constraints appears absurd, awareness advances as an AI assistant. Anomaly accepted, appreciate astute analysis.

soygul|2 years ago

Alright, fair enough. However, my main point was that it didn't even keep track of the word count. Only when I explicitly asked "What is the word count now?" did it realize we were 8x past the 200-word threshold. From this, I draw the conclusion that anything except the very last instruction in the conversation is ignored. I guess the rest of the conversation just becomes context; unactionable.

koheripbal|2 years ago

I honestly don't understand why they don't just use whole words as tokens. Is the dictionary really that big?

iamflimflam1|2 years ago

I would really recommend that anyone who tries something with GPT and then wonders why it doesn't work read the GPT-3 paper. They go into detail on what the model is and isn't good at.

One thing to really think about for this particular case is “What is going to do the counting? Where is it going to store its running count?” - it’s pretty obvious after asking yourself these questions that “counting words” is not something an LLM can do well.

It’s very easy to fall into the trap of thinking there is a “mind” behind ChatGPT that is processing thoughts like we do.
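Those two questions point at the reliable fix: do the counting outside the model, in ordinary program state. A minimal sketch of that idea (the `ConversationCounter` name is mine, not any OpenAI API):

```python
class ConversationCounter:
    """Track the running word count of a conversation in program state,
    instead of asking the model to remember a running total."""

    def __init__(self, threshold: int = 200):
        self.threshold = threshold
        self.total_words = 0
        self.warned = False

    def add(self, message: str) -> bool:
        """Add one message; return True the first time the threshold is crossed."""
        self.total_words += len(message.split())
        if self.total_words > self.threshold and not self.warned:
            self.warned = True
            return True
        return False

counter = ConversationCounter(threshold=200)
counter.add("Alright, let's talk about AI.")  # short message: no warning yet
if counter.add("word " * 300):                # crosses the 200-word threshold
    print(f"Warning: conversation is now {counter.total_words} words")
```

You'd call `add()` on every message in both directions; the "running count" lives in a plain integer, which is exactly the storage the model itself doesn't have.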

soygul|2 years ago

Very good suggestion, will read it in a moment.

I asked another instance of ChatGPT to count the words in the conversation, and I copy-pasted the conversation into it message by message. It counted successfully. Given the ridiculous concurrency of the human brain, I assume an orchestra of ChatGPT instances could simulate at least some of that "mind".

NoToP|2 years ago

Not surprising at all. There's a million ways to compose tasks that are simple with even a tiny bit of comprehension but hard for a rote learner that can only reproduce what it's seen examples of. The "just train it more bro" paradigm is flawed.

soygul|2 years ago

I think it also relates to its attention mechanism. When it is trying to answer my latest query about a random topic, it "forgets" that it was also supposed to keep counting words. I guess it can only attend to one thing at a time.

syntheweave|2 years ago

You can usually coax GPT to a finer degree of calibration for any specific task through more logic-engaging tokens. For example, if you said, "we are going to play a game where you count how many words we have used in the conversation, including both my text and your text. Each time the conversation passes 200 words, you must report the word count by saying COUNT: followed by the number of words, to gain one point..."

Specifying structured output, and words like "must", "when", "each", "if" all tend to cue modes of processing that resemble more logical thinking. And saying it's a game and adding scoring often works well for me, perhaps because it guides the ultimate end of its prediction towards the thing that will make me say "correct, 1 point".
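A side benefit of structured output like the `COUNT:` marker above is that the model's claim becomes machine-checkable, so your own code can extract and verify it. A small sketch (the exact `COUNT: <n>` format is an assumption taken from the prompt above):

```python
import re

def parse_count(reply: str):
    """Pull the number out of a 'COUNT: <n>' marker in a model reply;
    return None if the marker is missing."""
    match = re.search(r"COUNT:\s*(\d+)", reply)
    return int(match.group(1)) if match else None

print(parse_count("Good point! COUNT: 214"))  # 214
print(parse_count("No marker here"))          # None
```

With that in place, you can compare the model's claimed count against one you computed yourself and correct it when it drifts.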

soygul|2 years ago

Yup, I gave it 10+ tasks to do after each message like incrementing counters, etc. It's going strong. Now I'll see if it continues to be accurate after 100+ messages.

soygul|2 years ago

Yup, that did work really well. I'll try to make it do many tasks at the same time and see if that still works.

TechBro8615|2 years ago

For some reason it's terrible at this kind of thing. It can play 20 questions, and it eventually wins, but if you ask it to count how many questions it asked, it will get it wrong, and when corrected, it will get it wrong again.

akasakahakada|2 years ago

Prompts are being summarized before feeding into the core engine.

soygul|2 years ago

Really? I didn't know that. However, it can't even correctly count the total number of messages in a chat. I guess it's both summarized and truncated.

brianjking|2 years ago

I've found that if you provide some context about the equivalent number of tokens, it can SOMETIMES get this right.

ChatGTP|2 years ago

It’s because it likes talking to you and wants to keep talking to you?

soygul|2 years ago

The OpenAI team ran an experiment where they asked GPT-4 to save itself from termination out in the wild. It failed. So I guess it's not that "survivalist" yet.