top | item 45347889


madethemcry | 5 months ago

That was a great read, thank you.

I have a related observation. In my experience, the number of hallucinated URLs with structured output (think of a field `url` or `link`) is pretty high, especially compared to the alternative approach, where you let the LLM generate free text and then use a second LLM to convert that text into the desired structured format.

With structured output, it's like the LLM is forced to answer in a very specific way. So if there is no URL for the given field, it makes one up.

Here's a related quote from the article:

> Structured outputs builds on top of sampling by constraining the model's output to a specific format.


miki123211|5 months ago

What I've found is that it is very important to make structured outputs as easy for the LLM as possible. This means making your schemas LLM-friendly instead of programmer-friendly.

E.g. if the LLM hallucinates non-existing URLs, you may add a boolean "contains_url" field to your entity's JSON schema, placing it before the URL field itself. This way, the URL extraction is split into two simpler steps, checking if the URL is there and actually extracting it. If the URL is missing, the `"contains_url": false` field in the context will strongly urge the LLM to output an empty string there.
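
A minimal sketch of that idea as a JSON schema (field names are just illustrative, not from any particular API):

```python
# Put "contains_url" *before* "url", so the model commits to whether a
# URL exists before it ever has to emit one. With autoregressive
# generation, the earlier field sits in context when the later one is
# produced, which is what makes the ordering matter.
entity_schema = {
    "type": "object",
    "properties": {
        # Decided first: does the source text actually contain a URL?
        "contains_url": {"type": "boolean"},
        # Extracted second; should be "" when contains_url is false.
        "url": {"type": "string"},
    },
    "required": ["contains_url", "url"],
}

# Python dicts preserve insertion order, so iterating the properties
# reflects the intended generation order.
print(list(entity_schema["properties"]))  # ['contains_url', 'url']
```

Whether key order is honored depends on the structured-output implementation, so it's worth checking that your provider emits properties in declaration order.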

This also comes up with quantities a lot. Imagine you're trying to sort job adverts by salary ranges, which you extract via LLM. These may be expressed as monthly instead of annual (common in some countries), in different currencies, pre / post tax etc.

Instead of having an `annual_pretax_salary_usd` field, which is what you actually want, but which the LLM is extremely ill-equipped to generate, have a detailed schema like `type: monthly|yearly, currency:str, low:float, high:float, tax: pre_tax|post_tax`.

That schema is much easier for an LLM to generate, and you can then convert it to a single number via straight code.
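
The conversion step might look something like this (the exchange rates and tax gross-up factor are placeholder assumptions, not real data):

```python
# Hypothetical converter from the LLM-friendly salary schema
# {type, currency, low, high, tax} to the single number you actually
# want. A real pipeline needs live FX rates and per-country tax rules.
EXCHANGE_RATES_TO_USD = {"USD": 1.0, "EUR": 1.08, "PLN": 0.25}  # illustrative

def annual_pretax_salary_usd(extracted: dict) -> float:
    """Collapse the detailed extraction into annual pre-tax USD."""
    midpoint = (extracted["low"] + extracted["high"]) / 2
    if extracted["type"] == "monthly":
        midpoint *= 12
    if extracted["tax"] == "post_tax":
        midpoint /= 0.75  # crude gross-up; a stand-in for real tax tables
    return midpoint * EXCHANGE_RATES_TO_USD[extracted["currency"]]

print(annual_pretax_salary_usd(
    {"type": "monthly", "currency": "EUR",
     "low": 4000, "high": 6000, "tax": "pre_tax"}
))  # 5000 * 12 * 1.08, roughly 64800
```

The point is that every branch here is dumb, testable code instead of arithmetic the model has to get right in one shot.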

lubujackson|5 months ago

Awesome insight, thanks for this!

hansvm|5 months ago

That's definitely possible.

As you know, (most current) LLMs build text autoregressively. This allows them to generate text with _exactly_ the same distribution as the training data.

When you constrain LLM output at each token, that gives a completely different distribution from letting the LLM generate a full output and then doing something with that (trying again, returning an error, post-processing, etc).

E.g.: Suppose the LLM has a training set of (aa, ab, ab, ba), noting that "ab" appears twice. Suppose your valid grammar is the set (ab, ba). Then your output distributions are:

Baseline: {invalid: 25%, ab: 50%, ba: 25%}

Constrained: {invalid: 0%, ab: 75%, ba: 25%}

Note that _all_ the previously invalid outputs were dumped into the "ab" bucket, skewing the ratio between "ab" and "ba". That skew may or may not be desirable, but assuming the training process was any good it's likely undesirable.
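
The arithmetic behind those numbers can be checked with a tiny bigram model fitted to that training set, assuming the constraint masks invalid tokens and renormalizes at each step (which is how typical grammar-constrained decoders work):

```python
from fractions import Fraction

# Bigram model fitted to the toy training set (aa, ab, ab, ba):
# first[t] = P(first token = t); second[p][t] = P(second = t | first = p).
first = {"a": Fraction(3, 4), "b": Fraction(1, 4)}
second = {
    "a": {"a": Fraction(1, 3), "b": Fraction(2, 3)},
    "b": {"a": Fraction(1, 1), "b": Fraction(0, 1)},
}
grammar = {"ab", "ba"}  # the set of valid outputs

# Baseline: sample freely, then check validity afterwards.
baseline = {t1 + t2: first[t1] * second[t1][t2]
            for t1 in "ab" for t2 in "ab"}

# Constrained: at each step, zero out tokens with no valid continuation
# and renormalize what survives. (At the first step both "a" and "b"
# can still reach a valid string, so nothing is masked there.)
constrained = {}
for t1 in "ab":
    allowed = [t2 for t2 in "ab" if t1 + t2 in grammar]
    total = sum(second[t1][t2] for t2 in allowed)
    for t2 in allowed:
        constrained[t1 + t2] = first[t1] * second[t1][t2] / total

print({k: str(v) for k, v in sorted(baseline.items()) if v})
# {'aa': '1/4', 'ab': '1/2', 'ba': '1/4'} — "aa" is the invalid 25%
print({k: str(v) for k, v in sorted(constrained.items())})
# {'ab': '3/4', 'ba': '1/4'} — the invalid mass flows into "ab"
```

The skew appears because the invalid mass is redistributed token by token: after the model picks "a" (probability 3/4), the constraint forces "b", so all of "a…"'s probability lands on "ab" rather than being shared proportionally across valid strings.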

You've observed it with URLs, but I see it in JSON output as well. LLMs like to truncate long strings from time to time, and when they do, they're more likely to produce invalid JSON (adding an ellipsis at the end of the fragment and nothing else). Under constrained decoding that failure mode changes: a period is a valid character inside a long string, so the ellipsis is accepted, and eventually the grammar forces a closing quote to appear. The result is still garbage, but instead of a detectable parse failure you get an undetectable corrupt field.

matheist|5 months ago

Why do you think the constrained percentages are 0/75/25 and not e.g. 0/66/33? (i.e., the same relative likelihood for the valid outputs)

anentropic|5 months ago

> let the llm generate text and then use a second llm to convert the text into the desired structured format

this sounds similar to what they discussed in the article with regard to "thinking" models, i.e. let them generate their <think>blah blah</think> preamble first before starting to constrain the output to the structured format