ploynog | 1 year ago
What you call a "gotcha word problem", I'd compare to typical math problems where you need to understand a text, extract the required information, solve the problem, and then present your results. Maybe this is a toy example, but compared to reading the specs of some microprocessors, this is rather easy. These AIs are apparently able to solve school or even college level math problems. Shouldn't my example be a walk in the park, then? Especially since it's a large LANGUAGE model?
> You seem to be implying people are confused (or lying?) about the things they are able to get LLMs to do.
I am merely stating observations and was hoping for an explanation. What good does it do me to accuse people of lying?
> Often it comes down to prompting skill. Try to read about different prompting approaches as that may help you.
"You are using it wrong" it is, then. So how do I differentiate between a good-sounding but wrong answer that came about due to my apparent lack of prompting skills and one that is wrong for other reasons? They all sound equally convincing; the output just starts "being wrong" at some point.
> In general, you need to be specific about what you need, and you need to give all relevant details.
What details should I have added in the given example? The prompt was probably more comprehensive and detailed than if this task were given in primary school.
> Like the post author said, treat it like a junior programmer or an intern.
I would, if it acted like a junior programmer or an intern. With them, you can usually tell when they are unsure or making things up (if they do these things at all). From an AI, I've yet to see something like "hey, I might be wrong about this, but this is my best effort, maybe we can have a look together."
doctoboggan | 1 year ago