top | item 35508570


hnxx | 2 years ago

With some help, by asking more questions, GPT-3 can give the correct answer. Here are the prompt and response: Prompt: A man sees PULL written in mirror writing on a glass door. There's a blind man on the other side of the door. 1 what should the man outside glass door do to open the door? 2 What should the blind man do to open the door, if man outside can tell blind man what to do

Response:

1 The man outside should push the door to open it, even though he sees the word "PULL" written in mirror writing on the glass door. This is because the mirror writing reverses the image, and the door is actually designed to be pushed open.

2 If the man outside can tell the blind man what to do, he should instruct the blind man to pull the door to open it. The blind man would not be able to see the word "PULL" written in mirror writing on the door, so he would need to be told how to open the door.

More questions can help a human think, and it works for AI as well.


kazinator|2 years ago

AI "thinks" the way a horse can count. The horse's interlocutor knows that the correct number is eight and shouts "right!" when the animal has stomped its hoof that many times, cuing it to stop.

When you know the right answer, you're not actually using the AI to produce the answer; you're just prodding it to generate bullshit after bullshit until it hits upon that answer, or a very good facsimile. The one actually thinking is you!

This is of no use to someone who has no idea what the answer is, and actually needs it.

travisjungroth|2 years ago

This is just silly. You're saying that these models are completely incapable of what they're doing and only arrive at answers by cheating. You can see this isn't true very quickly when using them.

[Me] I want to make a bouquet to honor the home country of the first person to isolate Molybdenum. Be brief.

[ChatGPT-4] To honor Peter Jacob Hjelm, the Swedish chemist who first isolated Molybdenum in 1781, create a bouquet using blue and yellow flowers, representing the colors of the Swedish flag. Some suggestions are blue irises, yellow tulips, and blue delphiniums, arranged with green foliage for contrast.

I didn't know the answer to that before asking. I also checked that it's right. Sometimes it's right, sometimes it's wrong. But if I walk up to a horse and ask it to stamp eight times, that's going to happen approximately never. If I get it stamping at all, then maybe 1/20. This is much better than that.

computerex|2 years ago

This is selling it short. Are you saying you can't do useful things with these models?

I have a Python script that's maybe 200 lines long, calling out to OpenAI using gpt-3.5-turbo. I can give it commands like "search for the news today, summarize then save into news.txt" -> and it emits the DSL I asked it to use in the one-shot prompt.
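For anyone curious what such a driver looks like, here's a minimal sketch of the pattern, not the commenter's actual script: a one-shot prompt that teaches the model a tiny DSL, plus an interpreter for the lines the model emits. The `call_model` helper, the DSL verbs, and the example request are all made up for illustration; swap in whatever chat-completion client you use.

```python
# Sketch of a command-to-DSL driver: show the model one worked example
# (a one-shot prompt), ask it to emit only DSL lines, then interpret them.

ONE_SHOT_PROMPT = """Translate the user's request into this DSL, one command per line:
  SEARCH "<query>"   -- run a web search
  SUMMARIZE          -- summarize the previous result
  SAVE <filename>    -- write the previous result to a file
Emit only DSL, nothing else.

Request: search for today's weather and save it to weather.txt
SEARCH "today's weather"
SAVE weather.txt

Request: {request}
"""

def call_model(prompt: str) -> str:
    """Placeholder for a chat-completion call (e.g. to gpt-3.5-turbo).
    Purely illustrative; wire up your own client here."""
    raise NotImplementedError

def parse_dsl(text: str) -> list[tuple[str, str]]:
    """Turn the model's DSL output into (VERB, argument) pairs."""
    commands = []
    for line in text.strip().splitlines():
        verb, _, arg = line.strip().partition(" ")
        commands.append((verb.upper(), arg.strip().strip('"')))
    return commands

def run(commands, handlers):
    """Dispatch each command to a handler; each handler sees the
    previous step's result, so SUMMARIZE/SAVE can chain off SEARCH."""
    result = None
    for verb, arg in commands:
        result = handlers[verb](arg, result)
    return result
```

The interpreter can stay this small because the model does all the translation from free-form English into the rigid verb list; the Python side only has to parse and dispatch.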

You can give it complex word problems and it'll write python programs to solve them. You can ask it to get top hacker news stories, and it'll find creative ways to fetch them.

EugeneOZ|2 years ago

It's the difference between AI and LLM.