gronky_ | 6 months ago
Building multiple attempts into your agent is stretching the rules, even if technically it’s acceptable
radarsat1|6 months ago
It's interesting to think about what the trade-offs are. Assuming the system can properly classify a task as easy or hard (big "if" but I guess there are ways), there is nonetheless more to think about, depending on your pricing plan.
For subscription pricing, I guess you don't really care which model runs and in fact it's hard to find a reason to ever run the smaller model, so choosing between the models is more in the provider's interests for cost efficiency.
But for pay-per-use pricing it's less clear. If you have a bigger model that gets the answer right 80% of the time, and a smaller model that can handle smaller changes, gets things right 60% of the time, but can correct its mistakes, then the system should route as many tasks as possible to the smaller model to save you money. But if it ends up having to make a lot of corrections, you may need more total requests than the larger model would have, in which case it's actually cheaper to run the larger model, since it takes fewer requests.
So I wonder how that kind of trade-off could be effectively calculated. I guess if you can figure out when "retries" happen you can count them and do some statistics on which model is more likely to work out in fewer shots. It's pretty complicated though, when you start to think about it in detail.
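Under a simplifying assumption (each attempt succeeds independently with a fixed probability, and you retry until success), the statistics reduce to the geometric distribution, so the comparison can be sketched in a few lines. The prices below are made up for illustration:

```python
def expected_cost(cost_per_request, p_success):
    # Retry-until-success with independent attempts is geometric:
    # expected number of requests = 1 / p_success.
    return cost_per_request / p_success

# Hypothetical prices: small model 1 unit/request, large model 5 units/request.
small = expected_cost(cost_per_request=1.0, p_success=0.6)  # ≈ 1.67 units
large = expected_cost(cost_per_request=5.0, p_success=0.8)  # = 6.25 units
```

With these (made-up) numbers the small model wins despite its extra retries, but the break-even point shifts as soon as the price gap narrows or retries stop being independent.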
I do wonder if even having BOTH the smaller and bigger model make hypotheses, and try the smaller model's idea first, then if it fails, try the bigger model's idea, might be the way to go.
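That "small model first, escalate on failure" idea is essentially a model cascade. A minimal sketch, assuming you have some verifier that can check an answer (the function names here are hypothetical):

```python
def cascade(task, small_model, large_model, check):
    # Try the cheap model first; escalate to the expensive model
    # only if the cheap answer fails verification.
    answer = small_model(task)
    if check(task, answer):
        return answer
    return large_model(task)
```

The whole scheme hinges on `check` being cheap and reliable; if verification is as expensive as generation, the cascade saves nothing.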
gronky_|6 months ago
def make_pass_at_1_agent(agent, n):
    # Retry the underlying agent up to n times, so a pass@n
    # agent gets reported as a "pass@1" agent.
    def attempt(task):
        for _ in range(n):
            result = agent(task)
            if result is not None:
                return result
        return None
    return attempt