There's something you don't know that the model may know, and you want to see what it knows. That's usually just a sentence, or maybe a few, both input and output. All the other talk, this model vs. that model, agents vs. RAG vs. prompt engineering, is really about practitioner worries. Keep in mind the thing is probably wrong, as you would with any of them. Or that it's subtly telling you something wrong that you may accidentally repeat in front of someone at a moment when it really matters, and you'll let everyone, and yourself, down. That's the current state of all of these things, so unless you're building them, or you're an NLP specialist working with multidisciplinary researchers on the specific goal of pushing research forward, they all have about the same utility at the end of the day. Some of the most short-sighted systems advice seems to just spill out of Claude unsolicited, so whatever the big models are up to isn't entirely helpful, in my opinion. Hopefully they'll be pressured to reveal their prompts and other safety measures.
th0ma5|1 year ago