I have not encountered this problem yet. When I was talking about the format of the answer, I meant the following: no matter whether you're using LangChain, LlamaIndex, something self-made, or Instructor (just to get JSON back), under the hood there is somewhere a request to the LLM to reply in a structured way, like "answer in the following JSON format" or "just say 'a', 'b', or 'c'". ChatGPT tends to obey this rather well; most locally running LLMs don't. They answer like:

> Sure my friend, here is your requested json:
> ```
> {
> name: "Daniel",
> age: 47
> }
> ```
Unfortunately, the introductory sentence breaks parsing the answer directly, which means extra coding steps or tweaking your prompt.
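One of those "extra coding steps" can be sketched roughly like this: slice out the span between the first `{` and the last `}` and parse only that, discarding the chatty preamble and any code fences. This is a minimal illustration, not a robust parser; it assumes a single top-level JSON object, and it would still fail on the unquoted keys in the example above, which need further repair.

```python
import json

def extract_json(reply: str) -> dict:
    """Pull the first JSON object out of a chatty LLM reply.

    Assumes a single top-level object; surrounding prose and
    code fences are simply discarded (hypothetical helper).
    """
    start = reply.find("{")
    end = reply.rfind("}")
    if start == -1 or end == -1 or end < start:
        raise ValueError("no JSON object found in reply")
    return json.loads(reply[start : end + 1])

reply = (
    "Sure my friend, here is your requested json:\n"
    '```\n{"name": "Daniel", "age": 47}\n```'
)
print(extract_json(reply))  # {'name': 'Daniel', 'age': 47}
```

Grammar-constrained decoding (as the sibling comment suggests) avoids the problem at the source instead of patching it after the fact.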
int_19h|2 years ago
There are a bunch of libraries for this already, e.g.: https://github.com/outlines-dev/outlines