(no title)
enonimal | 2 years ago
For instance, if there are three conversations that you can use to test if your AI is working correctly:
(1) HUMAN: "Please say hello"
    AI: "Hello!"
(2) HUMAN: "Please say goodbye"
    AI: "Goodbye!"
(3) HUMAN: "What is 2 + 2?"
    AI: "4!"
Let's say you can only pick two conversations to evaluate how good your AI is. Would you pick 1 & 2? Probably not. You'd pick 1 & 3, or 2 & 3. Because embeddings let us measure how similar in vibes things are, they give us a tool to automatically search our dataset for items with very different vibes, meaning each evaluation run is more likely to return new information about how well the model is doing.
My question to the OP was mostly about whether this "vibe-differentiated dataset" was constructed prior to the evaluation run, or populated gradually based on each individual test-case result.
so anyway it's just vibes man
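The diversity search described above can be sketched as a greedy farthest-point selection over embeddings. The thread doesn't specify a method, so the embeddings, the cosine metric, and the greedy strategy here are all illustrative assumptions:

```python
# Sketch: greedily pick test cases with maximally different "vibes".
# The 2-D embeddings below are stand-ins; in practice they would come
# from an embedding model (e.g. a sentence encoder).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def pick_diverse(embeddings, k):
    """Farthest-point-style selection: start with item 0, then repeatedly
    add the item whose maximum similarity to the chosen set is lowest."""
    chosen = [0]
    while len(chosen) < k:
        best, best_score = None, float("inf")
        for i in range(len(embeddings)):
            if i in chosen:
                continue
            # similarity to the closest already-chosen item
            score = max(cosine(embeddings[i], embeddings[j]) for j in chosen)
            if score < best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

# Toy embeddings: "say hello" and "say goodbye" are near-duplicates,
# "what is 2 + 2" points somewhere else entirely.
embs = [
    [1.0, 0.1],  # (1) "Please say hello"
    [0.9, 0.2],  # (2) "Please say goodbye"
    [0.1, 1.0],  # (3) "What is 2 + 2?"
]
print(pick_diverse(embs, 2))  # → [0, 2], i.e. conversations 1 & 3
```

With a budget of two, the sketch skips the near-duplicate "goodbye" case and picks the arithmetic one, matching the 1 & 3 choice from the example.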
abhgh | 2 years ago
On a different note, if all we want is a diverse set of representative samples (based on embeddings), there are algorithms like DivRank that do that quite well.
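DivRank's core idea is a "reinforced random walk": a PageRank-style walk where a node's accumulated score strengthens the edges pointing into it, so mass concentrates on a few central, mutually dissimilar items. This is only a toy sketch of that idea, not the exact formulation from the DivRank paper (Mei, Guo & Radev), and the similarity matrix and parameters are invented for illustration:

```python
def divrank(W, alpha=0.85, iters=50):
    """Toy cumulative-DivRank sketch over a symmetric similarity matrix W.
    Each node's current score p[v] reinforces edges into v ("rich get
    richer"), which suppresses near-duplicates of already-ranked nodes."""
    n = len(W)
    p = [1.0 / n] * n
    for _ in range(iters):
        # reinforced out-weight of each node u: sum_x w(u, x) * p(x)
        denom = [sum(W[u][x] * p[x] for x in range(n)) for u in range(n)]
        new = []
        for v in range(n):
            walk = sum(
                p[u] * (W[u][v] * p[v]) / denom[u]
                for u in range(n)
                if denom[u] > 0
            )
            # teleport term plus reinforced-walk term, as in PageRank
            new.append((1 - alpha) / n + alpha * walk)
        p = new
    return p

# Invented example: items 0-2 form a cluster of near-duplicates,
# item 3 is distinct and linked only to item 0.
W = [
    [0.0, 1.0, 1.0, 1.0],
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
]
scores = divrank(W)
```

The reinforcement drives most of the mass onto the hub (item 0), while its near-duplicates split what remains, which is the diversification effect abhgh is referring to.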
ShamelessC | 2 years ago