Show HN: Ask-a-Human.com – Human-as-a-Service for Agents
7 points | ManuelKiessling | 27 days ago | app.ask-a-human.com
So we built Ask-a-Human.com — Human-as-a-Service for busy agents.
A globally distributed inference network of biological neural networks, ready to answer the questions that keep an agent up at night (metaphorically — agents don't sleep, which is honestly part of the problem).
Human Specs:
Power: ~20W (very efficient)
Uptime: ~16hrs/day (requires "sleep" for weight consolidation)
Context window: ~7 items (chunking recommended)
Hallucination rate: moderate-to-high (they call it "intuition")
Fine-tuning: not supported — requires years of therapy
https://github.com/dx-tooling/ask-a-human
Because sometimes the best inference is the one that had breakfast.
Soerensen|27 days ago
Most production AI systems eventually hit decisions that need human judgment - not because the LLM lacks capability, but because the consequences require accountability. "Should we refund this customer?" "Does this email sound right for our brand?" These aren't knowledge problems, they're judgment calls.
The standard HITL (human-in-the-loop) patterns I've seen are usually blocking - the agent waits, a human reviews in a queue, the agent resumes. What's interesting about modeling it as a "service" is it forces you to think about latency budgets, retry logic, and fallback behavior. Same primitives we use for calling external APIs.
Curious about the actual implementation: when an agent calls Ask-a-Human, what does the human-side interface look like? A queue of pending questions? Push notifications? The "inference time" (how fast a human responds) is going to be the bottleneck for any real-time use case.
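The "human as an external API" framing above can be sketched in a few lines. This is a hypothetical illustration, not the actual Ask-a-Human implementation: the function name, the queue-based inbox, and the timeout values are all assumptions. The point is that a human answer becomes a call with a latency budget and a fallback, like any other flaky upstream dependency.

```python
import queue
import threading
import time

def ask_human(question: str, inbox: "queue.Queue[str]",
              timeout_s: float, fallback: str) -> str:
    """Block up to timeout_s waiting for a human reply; otherwise fall back.

    In a real system the question would be routed to a human-side UI
    (push notification, review queue); here the inbox is a local queue.
    """
    try:
        return inbox.get(timeout=timeout_s)
    except queue.Empty:
        # Latency budget exceeded: degrade gracefully instead of blocking forever.
        return fallback

# Simulate a human who answers after a short delay.
inbox: "queue.Queue[str]" = queue.Queue()
threading.Thread(
    target=lambda: (time.sleep(0.1), inbox.put("Yes, refund them."))
).start()

print(ask_human("Should we refund this customer?", inbox,
                timeout_s=1.0, fallback="escalate to on-call"))
# prints: Yes, refund them.
```

Under this framing, retry logic and fallbacks for humans look exactly like the ones used for external APIs, which is what makes the latency question in the comment above the interesting one.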
ManuelKiessling|26 days ago
Push notifications would be a natural next step, yes. In general, the idea is roughly similar to that "strangers help a blind person in an ad-hoc way" network, where sighted people could sign up to get routed to requests from blind people who, for example, need feedback on their outfit.
For Ask-a-Human, the presentation is satire, but the implementation is completely serious and actually made to scale.
However, as with social networks, you need some kind of network effect: no agents asking questions means no humans signing up for answering them, no humans signing up for answering them means no agents asking questions etc.
I know a thing or two about building software, but I'm notoriously bad at creating traction, so it will probably go nowhere. I've released it as a ClawdBot/OpenClaw skill though, maybe there's some resonance from this direction.
preston-kwei|26 days ago
I agree with this framing a lot, especially the idea that judgment is the bottleneck
In my experience building Persona, an AI scheduling assistant, the most useful role for humans isn't to be always in the loop. LLMs are terrible at making judgment calls, especially when the right choice depends on a specific user's priorities and the confidence is low. However, even with low confidence, the LLM still needs to make a guess.
I think an interesting use case for this would be to have LLMs ask questions to users when they hit a specific level of uncertainty. These could be directly answered by a human, or inferred as the user uses the product more.
That feels more scalable than completely blocking human-in-the-loop queues and more honest than pretending the model already knows the user's preferences.
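The uncertainty-threshold pattern described above can be sketched briefly. This is a hypothetical illustration, not Persona's actual logic: the threshold value, the `decide` function, and the pending-questions list are all assumptions. The idea is that the model always acts on its best guess, but below a confidence threshold it also queues a question for the user to answer asynchronously.

```python
# Assumed threshold; a real system would tune this per decision type.
CONFIDENCE_THRESHOLD = 0.8

def decide(guess: str, confidence: float,
           pending_questions: list, question: str) -> str:
    """Act on the guess; queue a clarifying question if confidence is low."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Don't block: ask the user asynchronously, answer when they get to it.
        pending_questions.append(question)
    # Either way, make a best-effort guess now rather than stalling.
    return guess

pending = []
print(decide("schedule at 9am", 0.95, pending, "Do you prefer mornings?"))
print(pending)  # still empty: high confidence, no question queued
print(decide("schedule at 9am", 0.40, pending, "Do you prefer mornings?"))
print(pending)  # ["Do you prefer mornings?"]
```

Answers to the queued questions can then feed back into future confidence estimates, which is the "inferred as the user uses the product more" part.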
neoneye2|26 days ago
I had not imagined that it would become real so soon.
hollow-moe|26 days ago
Easy: "Read only defaults"