top | item 45857280

shanev | 3 months ago

This is solvable at the level of an individual developer. Write your own benchmark for code problems that you've solved. Verify tests pass and that it satisfies your metrics like tok/s and TTFT. Create a harness that works with API keys or local models (if you're going that route).
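A minimal sketch of such a harness, assuming the model is exposed as a token generator (e.g. a streaming wrapper around an OpenAI-compatible API or a local llama.cpp server); the `Problem`, `Result`, and threshold names here are hypothetical placeholders you would adapt to your own problem set:

```python
import time
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Problem:
    prompt: str                   # a coding task you've already solved
    check: Callable[[str], bool]  # True if the answer passes your tests

@dataclass
class Result:
    passed: bool
    ttft_s: float     # time to first token
    tok_per_s: float  # rough decode throughput

def run_benchmark(call_model: Callable[[str], Iterable[str]],
                  problems: list,
                  min_tok_per_s: float = 20.0,
                  max_ttft_s: float = 2.0):
    """call_model(prompt) must yield tokens as they arrive, so we can
    measure TTFT and throughput as well as correctness."""
    results = []
    for p in problems:
        start = time.perf_counter()
        first = None
        tokens = []
        for tok in call_model(p.prompt):
            if first is None:
                first = time.perf_counter()
            tokens.append(tok)
        end = time.perf_counter()
        ttft = (first or end) - start
        rate = len(tokens) / max(end - (first or start), 1e-9)
        results.append(Result(p.check("".join(tokens)), ttft, rate))
    score = sum(r.passed for r in results)
    fast_enough = all(r.ttft_s <= max_ttft_s and r.tok_per_s >= min_tok_per_s
                      for r in results)
    return score, len(results), fast_enough
```

Because `call_model` is just a callable, the same harness runs against an API-key-backed endpoint or a local model; only the wrapper changes.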

hamdingers|3 months ago

At the developer level all my LLM use is in the context of agentic wrappers, so my benchmark is fairly trivial:

Configure aider or Claude Code to use the new model and try to do some work. The benchmark is pass/fail: if, after a little while, I feel the performance is better than the last model I was using, it's a pass; otherwise it's a fail and I go back.

Building your own evaluations makes sense if you're serving an LLM up to customers and want to know how it performs, but if you are the user... use it and see how it goes. It's all subjective anyway.

embedding-shape|3 months ago

> Building your own evaluations makes sense if you're serving an LLM up to customers and want to know how it performs, but if you are the user... use it and see how it goes. It's all subjective anyway.

I'd really caution against this approach, mainly because humans are bad at setting aside emotions and other "human" factors when judging how well something works, but also because comparing across models gets a lot easier when you can see a score like 77/100 vs 91/100 over the tasks you actually use the LLMs for. Just don't share this benchmark publicly once you're using it for measurements.
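One way to get those numeric scores: tally pass/fail per task from your private set. The `scoreboard` helper and the category labels below are hypothetical, just to illustrate the shape:

```python
from collections import defaultdict

def scoreboard(results):
    """results: list of (category, passed) pairs from your private task set.
    Returns overall and per-category scores as 'passed/total' strings."""
    totals = defaultdict(lambda: [0, 0])
    for category, passed in results:
        totals[category][0] += int(passed)
        totals[category][1] += 1
    passed_all = sum(p for p, _ in totals.values())
    total_all = sum(t for _, t in totals.values())
    return {
        "overall": f"{passed_all}/{total_all}",
        **{c: f"{p}/{t}" for c, (p, t) in totals.items()},
    }
```

Running the same task set against two models and comparing the "overall" lines side by side replaces a vibe check with a number.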

motoboi|3 months ago

Well, OpenAI's GitHub is open for contributing evaluations. Just add yours there, and it's all but guaranteed that the next model will perform better on them.

j45|3 months ago

We have to keep in mind that "solving" might mean having the LLM recognize the pattern of solving something.

davedx|3 months ago

That’s called evals, and yes, any serious AI project uses them.