ArkhamMirror | 2 months ago
LLMs have limitations when you prompt them to generate hypotheses, return them in a specific format, or return a specific number of them. So it's usually in your best interest to use the LLM as an assistant: to check whether you missed anything, or as a push to start looking in different directions, rather than having the AI do the whole thing. (Although if you're being lazy or don't know what to do, you could let the LLM handle pretty much everything; in my testing, I let it handle everything it could.)
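As a rough sketch of the format-and-count problem mentioned above (the function name, prompt shape, and JSON convention here are illustrative assumptions, not anything from the original): when you ask an LLM for, say, exactly N hypotheses as a JSON list, you still have to validate the response yourself, because the model may ignore the constraint.

```python
import json

def parse_hypotheses(response_text, expected_count):
    """Parse an LLM response that was asked to return a JSON list of
    hypothesis strings, and check that the requested count was honored.
    Raises ValueError if the format or count is off, so the caller can
    retry or fall back instead of silently accepting bad output."""
    try:
        hypotheses = json.loads(response_text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"response is not valid JSON: {exc}")
    if not isinstance(hypotheses, list) or not all(
        isinstance(h, str) for h in hypotheses
    ):
        raise ValueError("expected a JSON list of strings")
    if len(hypotheses) != expected_count:
        raise ValueError(
            f"asked for {expected_count} hypotheses, got {len(hypotheses)}"
        )
    return hypotheses

# A well-formed response passes; a short or malformed one is caught.
good = '["the cache is stale", "the clock drifted", "a race condition"]'
print(parse_hypotheses(good, 3))
```

This is the "assistant" posture in miniature: the model proposes, and cheap deterministic code checks before anything downstream trusts the result.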