(no title)
kaushik92 | 1 year ago
We incorporate user prompts to generate the outputs and provide diagnostics and feedback for improvement, rather than eval metrics. So you can plug your low scored queries provided by Ragas, your prompt and context. FiddleCube can provide the root cause and the ideal response.
This is an alternative to manual auditing and testing, where an auditor works on curating the ideal dataset.
michalf6|1 year ago
kaushik92|1 year ago
Our goal is to focus on datasets and make it very easy to create and manage data.
In our next release, we will be launching a way to do this using a UI.
mikeqq2024|1 year ago
neha_n|1 year ago
While we call LLMs(internal and external, based on instruction type), the output generated by LLMs can't be taken as ground truths unless we do rigorous evaluations. We have our own metrics when it comes to what could be called a ground truth, based on the user's seed information and business logic. Accuracy & preciseness needs also differ from use-case to use case. Function calling adds in another layer.
Another value add is type of instructions that we can generate. We expose 7 currently, and are working on exposing more instruction types. The challenge is to create ground truth of wide variety of cases that a given user can ask for a business including guardrailing.
We have built internal tools and agents to solve for those, and are internally discussing the ideal way to expose it to users, and whether it would be beneficial for the community. Any thoughts on that would be appreciated.
Automation took a significant amount of time for us as well, so at scale, even a reliable automated CI/CD pipeline is indeed a value add in itself.
Lmk if I can add more details to answer the question.
kaushik92|1 year ago
Apart from this, we generate a diverse set of questions including complex reasoning and chain of thought.
We also generate domain specific unsafe questions - questions that violate TnC of the particular LLM to test the model guardrails.