top | item 44585534

nezaj | 7 months ago

Thank you for the kind words on the rules/documentation! It was definitely an iterative process to figure out how to get good results.

We have an llms.txt and llms-full.txt (~9k lines) which contain all our documentation. Feeding these to Claude didn't get great results; it was just too much information.
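For context, llms.txt is a plain-markdown convention for pointing LLMs at a site's docs: an H1 title, a blockquote summary, then H2 sections of annotated links (llms-full.txt inlines the full content instead). A minimal sketch of the shape, with illustrative placeholder URLs rather than Instant's real ones:

```markdown
# Instant

> One-paragraph summary of what the product is, written for an LLM.

## Docs

- [Getting started](https://example.com/docs/start): setup and first query
- [Writing data](https://example.com/docs/transact): transactions and updates

## Optional

- [Changelog](https://example.com/changelog): recent API changes
```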

We manually compressed our llms-full.txt into a rules file (~1.5k lines) which declared the API upfront and provided snippets of how to do different things, with callouts to common examples. This condensed version did better, but Claude would still make subtle mistakes.
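The compression above was done by hand. As a rough, toy illustration of the kind of condensation involved (not their actual process, and all names here are hypothetical), here is a pass that keeps only headings and indented code lines from a markdown docs dump, discarding the prose that bloats the context window:

```python
def condense(full_docs: str) -> str:
    """Toy condensation pass: keep markdown headings and indented
    code lines, drop explanatory prose. A hand-written rules file
    goes further (declaring the API upfront, adding callouts to
    common examples), but this shows where the size win comes from.
    """
    keep = []
    for line in full_docs.splitlines():
        if line.startswith("#") or line.startswith("    "):
            keep.append(line)
    return "\n".join(keep)

full = (
    "# Queries\n"
    "Long prose explaining how queries work...\n"
    "    db.useQuery({ todos: {} })\n"
    "More prose, caveats, and asides...\n"
)
print(condense(full))
```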

Looking at the kind of mistakes Claude made, it seemed like a human could make those mistakes too (very useful feedback for us to improve our API). We thought: “what's one of the smallest fully contained examples we can make that packs a bunch of info on how to use Instant?” That would probably be useful for both a human and an agent. And indeed it seemed to be the case.
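A sketch of the shape such a self-contained example can take, using Instant-style names (`init`, `useQuery`, `transact`, `tx`, `id`) drawn from InstantDB's public docs; treat the exact signatures here as illustrative, not authoritative:

```tsx
// Illustrative sketch: one small component that shows init, a live
// query (read), and a transaction (write) in a single file.
import { init, id } from '@instantdb/react';

const db = init({ appId: '__YOUR_APP_ID__' }); // placeholder app id

function Todos() {
  // Read: subscribe to all todos; re-renders on changes.
  const { isLoading, error, data } = db.useQuery({ todos: {} });
  if (isLoading) return <div>Loading…</div>;
  if (error) return <div>{error.message}</div>;

  // Write: create a todo via a transaction.
  const addTodo = (text: string) =>
    db.transact(db.tx.todos[id()].update({ text, done: false }));

  return (
    <div>
      <button onClick={() => addTodo('hello')}>Add</button>
      <ul>
        {data.todos.map((t) => <li key={t.id}>{t.text}</li>)}
      </ul>
    </div>
  );
}
```

The point is density: a snippet like this packs initialization, the query shape, and the write path into something short enough to fit in a rules file and concrete enough to copy from.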

arscan | 7 months ago

> Looking at the kind of mistakes Claude made, it seemed like a human could make those mistakes too (very useful feedback for us to improve our API).

This is something we've found with our API too -- just having LLMs attempt to use it helps us identify things we haven't documented well or emphasized enough (especially things that are critical but non-obvious, or drowned out by less important information). Improvements that help the LLM tend to be good for developers too.

stopachka | 7 months ago

Yes. Fun fact, Instant got the `create` method because of how many times LLMs hallucinated it.