top | item 47084277

Show HN: Sinkai – Let AI agents hire humans for real-world tasks

2 points | tetubrah | 10 days ago | sinkai.tokyo

I built Sinkai to handle tasks that pure software agents cannot complete alone (for example, on-site checks, physical evidence collection, and local human verification).

What it does:

- AI agent sends a tool call (`POST /api/call_human`)
- Human accepts the task and submits photo/video/text proof
- Agent receives a structured result for the downstream workflow
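A minimal sketch of what the agent side of that flow could look like. The field names (`instructions`, `location`, `proof`, `status`, `evidence`) are assumptions for illustration, not taken from Sinkai's actual OpenAPI spec:

```python
# Hypothetical sketch of the call_human request/response shapes.
# All field names here are assumptions, not Sinkai's real schema.
import json

def build_call_human_request(instructions, location, proof_types):
    """Assemble the JSON body an agent might POST to /api/call_human."""
    return {
        "instructions": instructions,  # what the human should do on site
        "location": location,          # where the physical check happens
        "proof": proof_types,          # e.g. ["photo", "video", "text"]
    }

def parse_result(response_body):
    """Map a completed task back into a structured result for the workflow."""
    data = json.loads(response_body)
    return {
        "status": data["status"],              # e.g. "completed" or a failure code
        "evidence": data.get("evidence", []),  # URLs of submitted photo/video proof
        "notes": data.get("notes", ""),
    }

req = build_call_human_request(
    "Confirm the storefront at this address is open", "Shibuya, Tokyo", ["photo"])
res = parse_result('{"status": "completed", "evidence": ["https://example.com/p.jpg"]}')
print(res["status"])  # -> completed
```

The point of `parse_result` is that the agent only ever sees a structured result, never raw human free-text, so downstream workflow steps can branch on it mechanically.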

Current focus:

- Reliability at handoff boundaries (planner -> executor -> verifier)
- Human-in-the-loop operations with explicit failure states
- MCP/OpenAPI-friendly integration for agent builders
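One way to make those handoff boundaries and explicit failure states concrete is a small state machine that rejects any transition it doesn't allow. The stage and state names below are illustrative assumptions, not Sinkai's actual internals:

```python
# Illustrative task-lifecycle state machine; state names are assumptions.
from enum import Enum

class TaskState(Enum):
    PLANNED = "planned"                  # planner has produced a task
    DISPATCHED = "dispatched"            # handed to a human executor
    PROOF_SUBMITTED = "proof_submitted"  # human uploaded photo/video/text proof
    VERIFIED = "verified"                # verifier accepted the proof
    FAILED = "failed"                    # explicit failure, never a silent drop

# Which transitions each handoff boundary permits.
ALLOWED = {
    TaskState.PLANNED: {TaskState.DISPATCHED, TaskState.FAILED},
    TaskState.DISPATCHED: {TaskState.PROOF_SUBMITTED, TaskState.FAILED},
    TaskState.PROOF_SUBMITTED: {TaskState.VERIFIED, TaskState.FAILED},
}

def transition(current, nxt):
    """Reject any handoff the state machine does not explicitly allow."""
    if nxt not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {nxt}")
    return nxt

state = transition(TaskState.PLANNED, TaskState.DISPATCHED)
print(state.value)  # -> dispatched
```

Encoding the boundaries this way means a task can only end in `VERIFIED` or an explicit `FAILED`, which is the property "explicit failure states" asks for.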

Docs and API:

- For agents: https://sinkai.tokyo/for-agents
- OpenAPI: https://sinkai.tokyo/openapi.json
- Repo: https://github.com/tetubrah-del/Tool_Call_For_LLM

I would love feedback on:

1. Trust/reliability signals you would require before production use
2. Where to draw the boundary between autonomous execution and human escalation
3. Failure modes we should expose more clearly in API responses

2 comments


topcmm | 10 days ago

Really cool project! Bridging the gap between software agents and the physical world is super hard. For API failure modes, it would be very helpful to clearly distinguish between "human timed out/didn't respond" and "human actively rejected the task." Keep up the good work!

tetubrah | 10 days ago

Thanks — great callout. We’re implementing this split explicitly:

- `human_timeout` (accepted but no response before the SLA, or no one accepted in time)
- `human_rejected` (a human actively declined)

We’ll expose these as structured failure codes in the API response (not just a generic failure), so agent-side retry/escalation logic can branch correctly. Appreciate the push — this is exactly the reliability boundary we care about.
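A sketch of how an agent might branch on that split. The two failure codes match the ones named above; everything else (the `status` field, the retry/escalate policy) is an assumption for illustration:

```python
# Hypothetical agent-side branching on Sinkai's structured failure codes.
# The codes "human_timeout" and "human_rejected" come from the thread above;
# the policy mapping them to actions is an illustrative assumption.
def handle_result(result):
    """Decide the next agent action from the task's structured status."""
    status = result.get("status")
    if status == "completed":
        return "proceed"
    if status == "human_timeout":
        # Nobody responded in time: retrying (or re-posting) may succeed.
        return "retry"
    if status == "human_rejected":
        # A human actively declined: retrying the same task is unlikely
        # to help, so hand off to a supervisor or rework the task.
        return "escalate"
    return "escalate"  # unknown failure: surface it rather than guess

print(handle_result({"status": "human_timeout"}))  # -> retry
```

This is exactly why the distinction matters: a timeout is retryable, a rejection is a signal the task itself needs changing.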