Show HN: High-performance GenAI engine now open source
22 points | fryz | 10 months ago | github.com
After one too many customer fire drills caused by hallucinating or insecure AI models, we built a system to catch these issues before they reached production. The Arthur Engine has been running at organizations ranging from the Fortune 100 to AI-native startups over the past two years, putting security controls around more than 10 billion tokens in production every month. We're now opening this service up to developers, letting you use enterprise-grade guardrails and evals as a service, all for free.
Get it on GitHub (https://github.com/arthur-ai/arthur-engine) to start evaluating your models today.
Highlights of Arthur's Engine include:
* Built for speed and scale: sub-second p90 latencies at well over 100 RPS.
* Made for full lifecycle support: Ideal for pre-production validation, real-time guardrails, and post-production monitoring.
* Ease of use: designed to be easy for anyone to run and deploy, whether you're running it locally during development or deploying it in a horizontally scaling architecture for large-scale workloads.
* Unification of generative and traditional AI: the Arthur Engine can evaluate a diverse range of models, from LLMs and agentic AI systems to binary classifiers, regression models, recommender systems, forecasting models, and more.
* Content-specific guardrail and detection features: ranging from toxicity and hallucination detection to sensitive-data detection (PII, keyword/regex matching, and custom rules) and prompt injection.
* Customizability: Plug in your own models or integrate with other model or guardrail providers with ease, and tailor the system to match your specific needs.
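To make the "custom rules" idea concrete, here is a minimal sketch of a regex-based sensitive-data guardrail of the kind listed above. The function and pattern names are our own illustration, not the Arthur Engine's actual API:

```python
import re

# Hypothetical rule set: each rule maps a name to a compiled pattern.
# A real deployment would cover many more PII categories.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def check_pii(text: str) -> list[dict]:
    """Return every sensitive-data match found in the text."""
    violations = []
    for rule, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(text):
            violations.append(
                {"rule": rule, "span": match.span(), "match": match.group()}
            )
    return violations

# A guardrail layer would run this on every prompt/response and block,
# redact, or log depending on policy.
print(check_pii("Contact jane@example.com, SSN 123-45-6789."))
```

Regex rules like these are cheap enough to run inline on every request, which is why they pair well with the heavier model-based checks (toxicity, hallucination) for latency-sensitive paths.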
Having seen first-hand the lack of adequate AI monitoring tools and the general under-delivery of GenAI systems in production, we believe this capability shouldn't be exclusive to big-budget organizations. Our mission is to make AI better for everyone, and we believe that opening up this tool will help more people get there.
Check out our GitHub repo for examples and directions on using the Arthur AI Engine for validation during development, real-time guardrails, or performance troubleshooting with enriched logging data. (https://github.com/arthur-ai/engine-examples)
We can't wait to see what you build!
— Zach and Team Arthur
serguei|10 months ago
Thanks for open sourcing and sharing, excited to try this out!!
fryz|10 months ago
We think we stand out from competitors in the space because we built first for the enterprise case, with consideration for things like data governance, acceptable use, data privacy, and information security, and the engine can be deployed easily and reliably in customer-managed environments.
A lot of the products today have similar evaluations and metrics, but they are either SaaS-only or require onerous integration into your application stack.
Because we started with the enterprise first, our goal was to get to value as quickly and easily as possible (to avoid shoulder-surfing over Zoom calls when we don't have access to the service), and we think this plays out well in our product.
jdbtech|10 months ago
fryz|10 months ago
We base our hallucination detection on "groundedness," evaluated claim by claim: can each part of the LLM response be cited in the provided context (e.g., message history, tool calls, retrieved context from a vector DB)?
We split the response into multiple claims, determine whether each claim needs to be evaluated (e.g., that it isn't just boilerplate), and then check whether the claim is referenced in the context.
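The pipeline described above can be sketched in a few lines. This is a toy version: the sentence-based claim splitting, boilerplate filter, and lexical-overlap grounding check stand in for what would, in a production system like the one described, be model-based (e.g., an NLI or LLM judge). All function names here are illustrative, not the engine's API:

```python
import re

def split_claims(response: str) -> list[str]:
    # Naive sentence split stands in for real claim extraction.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", response) if s.strip()]

def needs_check(claim: str) -> bool:
    # Skip conversational boilerplate; real logic would be model-based.
    boilerplate = ("sure", "happy to help", "hello")
    return not claim.lower().startswith(boilerplate)

def is_grounded(claim: str, context: str, threshold: float = 0.5) -> bool:
    # Toy check: fraction of the claim's tokens that appear in the context.
    claim_tokens = set(re.findall(r"\w+", claim.lower()))
    ctx_tokens = set(re.findall(r"\w+", context.lower()))
    if not claim_tokens:
        return True
    return len(claim_tokens & ctx_tokens) / len(claim_tokens) >= threshold

def ungrounded_claims(response: str, context: str) -> list[str]:
    # Claims that needed checking but found no support in the context.
    return [c for c in split_claims(response)
            if needs_check(c) and not is_grounded(c, context)]
```

For example, given context "The Eiffel Tower is 330 metres tall and located in Paris." and a response that appends "It was built on Mars.", only the Mars claim is flagged; the height claim is grounded and the greeting is skipped.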