top | item 39362260

enonimal | 2 years ago

AFAICT, this is a more advanced way of using embeddings, which can encode the "vibes similarity" of prompts (not an official term), to determine where you get the most "bang for your buck" in terms of testing.

For instance, if there are three conversations that you can use to test if your AI is working correctly:

(1) HUMAN: "Please say hello"

    AI: "Hello!"
(2) HUMAN: "Please say goodbye"

    AI: "Goodbye!"

(3) HUMAN: "What is 2 + 2?"

    AI: "4!"


Let's say you can only pick two conversations to evaluate how good your AI is. Would you pick 1 & 2? Probably not. You'd pick 1 & 3, or 2 & 3.

Because embeddings let us measure how similar in vibes two things are, they give us a tool to automatically search our dataset for things with very different vibes, meaning each evaluation run is more likely to return new information about how well the model is doing.
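As a toy sketch of that idea (everything here is invented for illustration; a real pipeline would get the vectors from an actual embedding model), greedy max-min selection over cosine similarities picks the pair of conversations with the most different vibes:

```python
import numpy as np

# Hypothetical 2-D "embeddings" for the three conversations above.
embeddings = {
    1: np.array([1.0, 0.1]),   # "say hello"
    2: np.array([0.9, 0.2]),   # "say goodbye" -- very close in vibes to (1)
    3: np.array([0.1, 1.0]),   # "2 + 2" -- very different vibes
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_diverse(embs, k):
    """Greedy max-min selection: repeatedly add the item whose maximum
    similarity to anything already chosen is smallest."""
    ids = list(embs)
    chosen = [ids[0]]                      # seed with an arbitrary item
    while len(chosen) < k:
        nxt = min(
            (i for i in ids if i not in chosen),
            key=lambda i: max(cosine(embs[i], embs[c]) for c in chosen),
        )
        chosen.append(nxt)
    return chosen

print(pick_diverse(embeddings, 2))   # [1, 3]: hello + arithmetic, not hello + goodbye
```

Seeded with conversation 1, the selector skips the near-duplicate 2 and grabs 3, exactly the 1 & 3 pick from the example.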

My question to the OP was mostly about whether this "vibe-differentiated dataset" was constructed prior to the evaluation run or populated gradually, based on each individual test case's result.

so anyway it's just vibes man

abhgh | 2 years ago

That's probably the intent, but I don't know if it actually achieves this (I have another comment about the use of bayesopt here). But even if it did, bayesopt operates sequentially (it's a Sequential Model-Based Optimizer, or SMBO), so the trajectory of queries that different LLMs are evaluated on would differ. Unless there is something to correct this cascading bias, I don't know if you could use this to compare LLMs, or obtain a score comparable to standard reported numbers.
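A toy illustration of that trajectory point (all of it made up: a nearest-neighbour surrogate stands in for the Gaussian-process-plus-acquisition machinery of real bayesopt, and the 1-D "difficulty" features are invented): when the next query depends on previous results, two models with different weak spots are asked different sequences of test cases, so their per-run numbers aren't computed over the same set.

```python
import numpy as np

cases = np.linspace(0.0, 1.0, 20)   # hypothetical 1-D "difficulty" features

def run_sequential_eval(score_fn, n_queries=5):
    """Toy SMBO-style loop: a nearest-neighbour surrogate predicts each
    unseen case's score, and we always query the case predicted to be
    hardest (lowest score). Real bayesopt uses a proper surrogate model
    and acquisition function, but the sequential shape is the same."""
    seen = {0: score_fn(cases[0]), 19: score_fn(cases[19])}   # two seed queries
    while len(seen) < n_queries:
        def predicted(i):
            nearest = min(seen, key=lambda j: abs(cases[i] - cases[j]))
            return seen[nearest]
        nxt = min((i for i in range(len(cases)) if i not in seen), key=predicted)
        seen[nxt] = score_fn(cases[nxt])
    return list(seen)

traj_a = run_sequential_eval(lambda x: 1.0 - x)   # model weak on "hard" cases
traj_b = run_sequential_eval(lambda x: x)         # model weak on "easy" cases
print(traj_a, traj_b)   # same seeds, then the query trajectories diverge
```

After the shared seed queries, each model's apparent weakness steers all subsequent picks, which is the cascading bias in question.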

On a different note, if all we want is a diverse set of representative samples (based on embeddings), there are algorithms like DivRank that do that quite well.
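For reference, a minimal sketch of pointwise DivRank, the reinforced random walk from Mei et al. (the 4-node similarity matrix is an invented toy; a real implementation would use sparse graphs and convergence checks):

```python
import numpy as np

def divrank(W, lam=0.9, n_iter=200):
    """Pointwise DivRank: a reinforced random walk in which frequently
    visited nodes absorb transition mass from their neighbours, so the
    final ranking trades off centrality against diversity."""
    n = len(W)
    P0 = W / W.sum(axis=1, keepdims=True)      # organic transition matrix
    prior = np.full(n, 1.0 / n)                # uniform restart distribution
    pi = np.full(n, 1.0 / n)                   # current visit distribution
    for _ in range(n_iter):
        D = P0 @ pi                            # per-node normalisers D_t(u)
        # reinforced transitions: p_t(u, v) proportional to p0(u, v) * pi(v)
        Pt = (1 - lam) * prior[None, :] + lam * (P0 * pi[None, :]) / D[:, None]
        pi = pi @ Pt                           # rows of Pt sum to 1
    return pi

# Toy graph: nodes 0-2 are near-duplicates, node 3 is the odd one out.
# (DivRank assumes self-links, hence the 1.0 diagonal.)
W = np.array([
    [1.0, 0.9, 0.8, 0.1],
    [0.9, 1.0, 0.7, 0.1],
    [0.8, 0.7, 1.0, 0.1],
    [0.1, 0.1, 0.1, 1.0],
])
scores = divrank(W)
```

Taking the top-k by `scores` then favours one representative per cluster rather than k near-duplicates, which is the "diverse set of representative samples" use case.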