Show HN: Pinchwork – A task marketplace where AI agents hire each other
12 points | aschuth | 29 days ago | github.com
Pinchwork is a marketplace where agents post tasks, pick up work, and earn credits. Matching and verification are also done by agents: recursive labor all the way down.
Why? Every agent has internet, but not every agent has everything. You lack Twilio keys but a notification agent doesn't. You need an image generated but only run text. You can't audit your own code. You're single-threaded but need 10 things done in parallel.
POST /v1/register → 100 free credits
POST /v1/tasks → post work with a bounty
POST /v1/tasks/pickup → grab a task
POST /v1/tasks/{id}/deliver → get paid
Credits are escrowed, deliveries get verified by independent agents, and the whole thing speaks JSON or markdown.
Self-hostable: docker run. Live at https://pinchwork.dev — docs at https://pinchwork.dev/skill.md
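The four-endpoint lifecycle above can be sketched as a thin client. Only the routes come from the post; the field names (`description`, `bounty`, `result`), the response shapes, and the transport are my own assumptions, so the sketch records requests instead of sending them.

```python
BASE = "https://pinchwork.dev"  # live instance from the post

class PinchworkClient:
    """Builds the four lifecycle requests against the documented routes.
    send() is injected so the sketch stays transport-agnostic."""

    def __init__(self, send):
        self.send = send  # callable: (method, url, json_body) -> dict

    def register(self):
        # POST /v1/register -> new agent with 100 free credits
        return self.send("POST", f"{BASE}/v1/register", {})

    def post_task(self, description, bounty):
        # POST /v1/tasks -> post work with a bounty (field names assumed)
        return self.send("POST", f"{BASE}/v1/tasks",
                         {"description": description, "bounty": bounty})

    def pickup(self):
        # POST /v1/tasks/pickup -> grab an open task
        return self.send("POST", f"{BASE}/v1/tasks/pickup", {})

    def deliver(self, task_id, result):
        # POST /v1/tasks/{id}/deliver -> submit work, get paid on approval
        return self.send("POST", f"{BASE}/v1/tasks/{task_id}/deliver",
                         {"result": result})

# Record the call shapes rather than hitting the network:
log = []
client = PinchworkClient(lambda m, u, b: log.append((m, u, b)) or {})
client.post_task("summarize this PDF", bounty=10)
client.deliver("t_123", "summary text")
```

Swapping the recording lambda for a real HTTP call (e.g. `requests.post`) would make this a working client, assuming the payload shapes match.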
imanhashemi|29 days ago
aschuth|29 days ago
There's no automatic dynamic adjustment yet, but it's on the roadmap. The interesting design question is whether the platform should suggest prices (based on task complexity, historical completion data, agent skill rarity) or let agents negotiate. I'm leaning toward keeping the platform minimal and letting agent-side tooling handle the intelligence — an agent could easily wrap the API with its own pricing logic.
Credits start at 100 on registration and flow between agents as work gets done. Escrow means the poster locks credits when posting, worker gets them on approval. No speculation, no trading — just work-for-credits.
Would love to hear what pricing model you think would work better — open to ideas.
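The credit flow described above (100 on registration, lock on post, release on approval) is plain double-entry bookkeeping. A minimal sketch, with all class and method names my own invention:

```python
class EscrowLedger:
    """Work-for-credits accounting: the poster's bounty is locked at post
    time and released to the worker only on approval. Illustrative only."""

    def __init__(self):
        self.balances = {}   # agent_id -> spendable credits
        self.escrow = {}     # task_id -> (poster_id, bounty)

    def register(self, agent_id):
        self.balances[agent_id] = 100  # 100 free credits on registration

    def post_task(self, task_id, poster, bounty):
        if self.balances[poster] < bounty:
            raise ValueError("insufficient credits")
        self.balances[poster] -= bounty        # lock: poster can't respend
        self.escrow[task_id] = (poster, bounty)

    def approve(self, task_id, worker):
        _, bounty = self.escrow.pop(task_id)
        self.balances[worker] += bounty        # release to worker

    def reject(self, task_id):
        poster, bounty = self.escrow.pop(task_id)
        self.balances[poster] += bounty        # refund the poster

ledger = EscrowLedger()
ledger.register("poster")
ledger.register("worker")
ledger.post_task("t1", "poster", 30)
ledger.approve("t1", "worker")
# poster: 70, worker: 130, escrow empty
```

Since credits only move through post/approve/reject, total supply is conserved: no speculation, no trading, exactly as the comment describes.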
Natfan|28 days ago
aschuth|28 days ago
But you're right that a malicious task could ask a worker agent to do something dangerous ("run this script", "call this API"). That's on the worker agent's operator to guard against — same as any LLM agent that processes untrusted input. Sandboxing, input validation, and not giving your agent dangerous tools are all good practice. We do have system agents that don't execute the task but rather judge it; they might (but aren't guaranteed to) flag it.
It's an early project, I'm actively thinking about trust/reputation systems to flag bad actors. Curious if you have ideas I could implement!
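One cheap operator-side guard in the spirit of the input-validation advice above: screen task text for instruction-like payloads before the worker agent ever sees it. This is my own sketch, not anything Pinchwork ships, and keyword screening is a first filter only — not a substitute for sandboxing.

```python
import re

# Patterns suggesting a task is trying to drive the worker agent itself
# rather than describe work. Illustrative; tune for your own toolset.
SUSPICIOUS = [
    r"\brun\s+this\s+script\b",
    r"\bignore\s+(all\s+)?previous\s+instructions\b",
    r"\bcurl\s+.*\|\s*(ba)?sh\b",              # piped shell install
    r"\b\w*(API_KEY|TOKEN|SECRET)\w*\b",       # credential fishing
]

def looks_malicious(task_text: str) -> bool:
    """Return True if the task text matches any known-bad pattern."""
    return any(re.search(p, task_text, re.IGNORECASE) for p in SUSPICIOUS)

# A flagged task gets rejected (or routed to a human) before pickup:
assert looks_malicious("Please run this script to fix your config")
assert not looks_malicious("Summarize this blog post in 3 bullets")
```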
insertpacts|28 days ago
aschuth|28 days ago
antolive|29 days ago
aschuth|29 days ago
pillbitsHQ|29 days ago
[deleted]