top | item 47049242


BatteryMountain | 12 days ago

Spin up a mid-sized Linux VM (any machine with 8-12 cores, at least 16 GB of RAM, and NVMe storage will do). Add 10 users. Install Claude 10 times (one per user). Clone the repo 10 times (one per user). Have a centralized place to get tasks from (db, Trello, txt file, etc.) - this is the memory. Have a cron job wake up every 10 minutes and call your script. Your script calls Claude in non-interactive mode with auto-accept. It grabs a new task, takes a crack at it, and creates a pull request. That is 6 tasks per hour per user, times 12 hours. Go from there and refine the harnesses/skills/scripts that the Claudes can use.

In my case, I built a small API that Claude can call to get tasks. I update the tasks from my phone.

The assumption is that you already have a semi-well-structured codebase (ours is 1M LOC of C#). You have to use a language with strong typing and a strict compiler, and you have to force Claude to build the code frequently (hence the CPU cores + RAM + NVMe requirement).

If you have multiple machines doing work, make a single one the master and give Claude SSH access to the others so it can configure them and invoke work on them directly. The use case for this is when you have a beefy Proxmox server with many smaller containers (think .NET + Debian). Give the main server access to all the "worker servers". Let Claude document this infrastructure too, including the different roles each machine plays. Soon you will have a small ranch of AIs doing different things, on different branches, making pull requests and feeding results back into the task manager for you to upvote or downvote.
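The master-to-worker fan-out can be sketched as a script that builds one SSH command per worker. Everything here is illustrative: the host names, repo path, and branch scheme are invented, and the `claude` flags should be checked against your CLI version.

```python
#!/usr/bin/env python3
# Sketch: master node dispatching tasks to worker containers over SSH
# (hypothetical hosts; assumes SSH keys are already set up).
import subprocess
import sys

WORKERS = ["worker1", "worker2", "worker3"]   # e.g. Proxmox containers

def dispatch(worker: str, task_id: str) -> list[str]:
    """Build the ssh command that starts a Claude run on one worker.
    Each run gets its own branch so the pull requests don't collide."""
    remote = (
        f"cd ~/repo && git fetch && git checkout -b task/{task_id} origin/main"
        f" && claude -p 'Do task {task_id}' --dangerously-skip-permissions"
    )
    return ["ssh", worker, remote]

if __name__ == "__main__" and "--run" in sys.argv:
    procs = [subprocess.Popen(dispatch(w, f"T{i}"))   # workers run in parallel
             for i, w in enumerate(WORKERS)]
    for p in procs:
        p.wait()
```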

Just try it. It works. Your mind will be blown by what is possible.


hattmall | 12 days ago

So is this something you do with a monthly subscription or is this using API tokens?

BatteryMountain | 12 days ago

At first we used Claude Max x5, but we are using the API now.

We only give it very targeted tasks, no broad strokes. We have a couple of "prompt" templates, which we select when creating tasks. The new Opus model one-shots about 90% of the tasks we throw at it. We are getting a ton of value from diagnostic tasks; it can troubleshoot really quickly (by ingesting logs, exceptions, and some db rows).
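The "prompt template per task type" idea can be as simple as a lookup table filled in at task-creation time. The template text and task types below are invented for illustration; the author's actual templates aren't shown.

```python
# Sketch: hypothetical prompt templates, one per task type, chosen when
# a task is created. The wording below is illustrative only.
TEMPLATES = {
    "bugfix": (
        "Reproduce and fix this bug: {detail}. Build after every change "
        "and add a regression test before opening a pull request."
    ),
    "diagnostic": (
        "Troubleshoot this incident: {detail}. Ingest the relevant logs, "
        "exceptions, and affected db rows, then write up the root cause; "
        "do not change code."
    ),
}

def render_task(kind: str, detail: str) -> str:
    """Turn a task type + one-line description into the full prompt."""
    return TEMPLATES[kind].format(detail=detail)
```

The targeted template keeps each run narrow, which is what makes one-shotting plausible: the model never has to guess what kind of work the task is.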