top | item 46299148

leroman | 2 months ago

The biggest challenge an agent will face with tasks like these is the diminishing quality of output relative to the size of the input; specifically, I find that inputs above, say, 10k tokens dramatically reduce the quality of the generated output.

This specific case worked well, I suspect, because LLMs have a LOT of prior knowledge of HTML and have seen many implementations and parsers of HTML in training.

Thus I suspect that real-world attempts at similar projects, in any domain not well represented in the training data, will fail miserably.

adastra22 | 2 months ago

In my experience the threshold is closer to 25k, but that's a minor point. What task do you need to do that requires more tokens than that?

No, seriously. If you break your task into bite sized chunks, do you really need more than that at a time? I rarely do.

leroman | 2 months ago

What model are you working with where you still get good results at 25k?

To your question: I make a huge effort to keep my prompts as small as possible (to get the best-quality output). I go as far as removing imports from source files, writing interfaces and types to put in context instead of fat implementation code, and writing task-specific project/feature documentation. (I automate some of this with a library I use to generate prompts from code and other files; think of a templating language with extra flags.) And still, for some tasks, my prompt reaches 10k tokens, where I find the output quality isn't good enough.
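The idea of shrinking source files down to their interfaces before they go into a prompt can be sketched in a few lines of Python. This is my own illustration, not the library the commenter mentions; the function name and the exact reduction rules (drop imports, replace function bodies with `...`) are assumptions about what such a tool might do:

```python
import ast


def compact_source(source: str) -> str:
    """Reduce a Python module to an interface sketch for an LLM prompt:
    drop import statements and replace function bodies with `...`.

    Hypothetical helper illustrating the "interfaces instead of fat
    implementation code" idea; docstrings are also dropped here.
    """
    tree = ast.parse(source)
    parts = []
    for node in tree.body:
        # Imports add tokens without helping the model reason about the API.
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            continue
        # Keep signatures and type annotations, discard implementations.
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            node.body = [ast.Expr(ast.Constant(Ellipsis))]
        elif isinstance(node, ast.ClassDef):
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    item.body = [ast.Expr(ast.Constant(Ellipsis))]
        parts.append(ast.unparse(node))
    return "\n\n".join(parts)
```

Running this over a module keeps `def add(a: int, b: int) -> int:` but replaces its body with `...`, which is often enough context for the model while costing far fewer tokens than the full implementation.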