
Saving $750K on AI inference with one line of code and no quality loss

8 points | t5-notdiamond | 1 year ago | notdiamond.ai

2 comments


pinkbeanz|1 year ago

This is neat -- how would you think about evaluating the quality loss as you change to more efficient models? I saw you did an analysis on the number of messages, but I'm wondering if there are more robust methods?

t5-notdiamond|1 year ago

In offline training of our router, we run extensive cross-domain evaluations to determine when a smaller model can handle a request without any quality loss relative to more powerful models. In an online setting like our chat app, there's probably some more rigorous post-hoc analysis we could do on response quality—could make for a good follow-up post.
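The routing decision described above can be sketched roughly as follows: use offline evaluation scores to pick the cheapest model whose measured quality is within a tolerance of the strongest model's. This is only an illustrative sketch, not Not Diamond's actual router; the model names, scores, costs, and the `route` function are all made up for the example.

```python
# Hypothetical sketch of cost-aware routing from offline evals.
# All names and numbers below are illustrative, not real data.

# Offline eval scores per (domain, model), on a 0-1 quality scale.
EVAL_SCORES = {
    ("coding", "large-model"): 0.92,
    ("coding", "small-model"): 0.78,
    ("chitchat", "large-model"): 0.90,
    ("chitchat", "small-model"): 0.89,
}

# Relative cost per request (arbitrary units).
COST = {"large-model": 10.0, "small-model": 1.0}

def route(domain: str, tolerance: float = 0.02) -> str:
    """Return the cheapest model whose offline score is within
    `tolerance` of the best score observed for this domain."""
    scores = {m: s for (d, m), s in EVAL_SCORES.items() if d == domain}
    best = max(scores.values())
    eligible = [m for m, s in scores.items() if best - s <= tolerance]
    return min(eligible, key=COST.__getitem__)

print(route("chitchat"))  # small-model: within tolerance, 10x cheaper
print(route("coding"))    # large-model: the quality gap is too large
```

The savings come from the first case: whenever the cheap model's offline score is close enough to the best model's, the router sends the request there instead.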