top | item 43249267

yompal | 1 year ago

That approach does a lot better, but LLMs still have a positional bias problem baked into the transformer architecture (https://arxiv.org/html/2406.07791v1): the model is biased toward selecting information that appears earlier in the prompt over information that appears later, which hurts tool selection accuracy.

Since two steps are required anyway, you might as well use a dedicated semantic search over tools, as in agents.json.
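A minimal sketch of what "semantic search over tools" could look like. The tool names and descriptions here are hypothetical, and a toy bag-of-words similarity stands in for a real embedding model (e.g. a sentence-embedding API); this illustrates the retrieval step only, not agents.json itself:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real system would call an
    # embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical tool registry: tool name -> natural-language description.
TOOLS = {
    "get_weather": "fetch the current weather forecast for a city",
    "send_email": "send an email message to a recipient",
    "search_docs": "search internal documentation for a query",
}

def top_tools(query, k=2):
    # Rank tools by similarity to the user query and keep the top k;
    # only these are then put into the LLM prompt, instead of the
    # full tool list.
    q = embed(query)
    ranked = sorted(
        TOOLS,
        key=lambda name: cosine(q, embed(TOOLS[name])),
        reverse=True,
    )
    return ranked[:k]

print(top_tools("what's the weather forecast in Paris"))
```

Because the LLM only ever sees the retrieved shortlist, the full tool catalog never competes for prompt positions, which sidesteps the positional bias issue for tools that would otherwise land late in the prompt.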

paradite | 1 year ago

Interesting. This is the first time I'm hearing about intrinsic positional bias in LLMs. I had some intuition about this, but nothing concrete.