item 46263508

Show HN: I made a human-in-the-loop system for tuning LLMs in beta

2 points | gitpullups | 2 months ago | joinoneshot.com

OneShot is an API that routes failed LLM outputs to trained humans, returns corrected outputs or prompt injections, and stores the edits as structured training data.

Privacy note: This product is not built for privacy yet. The current use case is internal tools or beta features where users aren’t promised privacy. This tool is not meant for production.

In the future, there will be a feature for anonymizing all private information automatically.

Problem: My project this year was a tool that helps pediatricians file their insurance claims with AI assistance.

Niche industries like this require a ton of examples, fine-tuning, and re-prompting before the product actually works. Then the output needs monitoring for at least the first couple of months (with the hospital’s consent, of course) so small model changes or edge cases don’t break it.

This monitoring takes months of distraction from building new features, and every new feature I wanted to ship required the same constant beta monitoring to reach a reliable state. The same goes for internal tools and automations I needed to work dependably. That is when I started wishing I had an AI engineer/architect monitoring outputs 24/7 for every new feature’s first month. In real-world software, programs need to break almost never, and current AI models often don’t quite get us there: from 90% to 100%, or 95% to 100%. We waste months tweaking internally before shipping because the model can’t be improved live against the real world.

In niche agent environments, you sometimes need an actual human to jump in.

How it works: First, a beta deployment. You deploy your AI to do X business use case in beta or internally.

Each step of your pipeline queries our API with what models you prefer, etc.

Then, a human in charge of a batch of outputs sees a flagged output when something goes wrong (we agree up front on what that means). They can then use human judgement to tweak the prompt, try a different model, or provide added context, iterating in multiple parallel threads until the correct output comes out.
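The flow above (pass the output through if it meets the agreed-on criteria, otherwise route it to a reviewer) can be sketched as follows. This is my own illustration, not OneShot’s actual API: the function names, the `ReviewResult` shape, and the CPT-code check are all assumptions, and the human reviewer is simulated.

```python
from dataclasses import dataclass

@dataclass
class ReviewResult:
    status: str   # "auto" if no human touched it, "human_fixed" otherwise
    output: str

def simulate_human_review(prompt: str, bad_output: str) -> str:
    # Stand-in for the reviewer tweaking the prompt, trying other
    # models, or adding context until a correct output appears.
    return "CPT 99213"

def run_pipeline_step(prompt: str, model_output: str, passes_check) -> ReviewResult:
    """Pass the output through if it meets the agreed-on criteria;
    otherwise flag it for (simulated) human review."""
    if passes_check(model_output):
        return ReviewResult("auto", model_output)
    # In the real system this step would call the review API and block
    # or poll until a human returns a corrected output.
    return ReviewResult("human_fixed", simulate_human_review(prompt, model_output))
```

The key design point is that the flagging criterion is agreed on per deployment, so "something goes wrong" is a deterministic check your pipeline runs, not a judgment the API makes for you.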

Second, fine-tuning. You now own a dataset of exactly which prompt changes and which output edits produced that magical result. Thousands of tweaks that can take your model to the next level for each feature are in your database. This data lets you ship faster, with better guarantees and much less manual testing that the real world never rewards or punishes.
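One plausible shape for those stored edits, and how they could be exported for fine-tuning, is sketched below. The record schema and the chat-style export format are my assumptions about what "structured training data" might look like here, not OneShot’s documented format.

```python
def edit_record(prompt_before, prompt_after, output_before, output_after, model):
    """One stored edit: which prompt change and which output
    correction produced the good result."""
    return {
        "model": model,
        "prompt": {"before": prompt_before, "after": prompt_after},
        "output": {"before": output_before, "after": output_after},
    }

def to_finetune_example(record):
    # Export as a chat-style fine-tuning pair: the fixed prompt
    # paired with the corrected output.
    return {"messages": [
        {"role": "user", "content": record["prompt"]["after"]},
        {"role": "assistant", "content": record["output"]["after"]},
    ]}
```

Keeping both the before and after versions is what makes the dataset useful twice: the after-pairs feed fine-tuning, while the diffs show which prompt edits reliably fix which failure modes.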

Who are the humans? I’m a developer doing the tickets manually with technical friends I’m paying out of pocket for now (yes, it IS available 24/7!). This is intentionally manual during beta, with clear review guidelines, so we understand the process before trying to hire.

How slow is it? Most of the time no human will touch it, and sometimes a human will take a quick, unnoticeable action. In some edge cases you’ll feel noticeable slowing (10s+); we’re working to accelerate those too, and the alternative is a fully broken output.

Who is it not for? This is not meant for consumer apps, privacy-sensitive production systems, or teams expecting zero human involvement.

4 comments


vmitro | 2 months ago

Don't laugh, but I think in the (near) future more and more emphasis will be put on the HITL concept as private or self-hosted AI workflows gain interest. It's hard not to hope for the emergence of a movement similar to GNU in the software space itself, where freely available tooling allows for collaborative, federated, HITL-powered fine-tuning of ML models.

Since I also work on a similar concept, where HITL is a first-class citizen: can you tell us a bit more about the underlying technology stack, whether users can host their own models for inference and fine-tuning, how pipelines are defined, and so on?

gitpullups | 2 months ago

1. Pipelines are defined on your end. I want to build another option, but for now it is still just queried as an API endpoint.

2. Same as 1, so yes, you can definitely use your own models. You can just send outputs; you don't have to send prompts.

gitpullups | 2 months ago

I'm a bit curious what you're working on, and whether there might be some interesting connections there. Would you like to talk? You can book time on my calendar through the site.