top | item 45547344

Superpowers: How I'm using coding agents in October 2025

435 points| Ch00k | 4 months ago |blog.fsck.com

231 comments


simonw|4 months ago

I can't recommend this post strongly enough. The way Jesse is using these tools is wildly more ambitious than most other people.

Spend some time digging around in his https://github.com/obra/Superpowers repo.

I wrote some notes on this last night: https://simonwillison.net/2025/Oct/10/superpowers/

csar|4 months ago

I’m curious how you think this compares to the Research -> Plan -> Implement method and prompts from the “Advanced Context Engineering for Agents” video when it comes to actual coding performance on large codebases. I think picking up skills is useful for broadening agents’ abilities, but I’m not sure that’s the right thing for actual development.

The packaged collection is very cool and so is the idea of automatically adding new abilities, but I’m not fully convinced that this concept of skills is that much better than having custom commands+sub-agents. I’ll have to play around with it these next few days and compare.

smrtinsert|4 months ago

Curious what you think of sub-agents: don't they still consume a massive amount of tokens compared to simply running in the main context? I'm skeptical of any process that starts massively delegating to sub-agents. I'm on Pro and don't think it's worth upgrading to $200 a month just to avoid polluting the main context.

d_sem|4 months ago

This article left me wishing it was "How I'm using coding agents to do <x> task better"

I've been exploring AI for two years now. It has certainly graduated from toy to basic utility. However, I increasingly run into its limitations and find that reverting to pre-LLM ways of working is more robust, faster, and more mentally sustainable.

Does anyone have concrete examples of integrating LLMs into a workflow in a way that pushes state-of-the-art development practices and value creation further?

jvanderbot|4 months ago

My impression is we're still in the tinkering phase. The metrics are coming.

benrutter|4 months ago

I'm so curious about what people's median experience of AI coding tools is.

I've tried agents every now and then, most recently for something very simple: adding an option to request CSV format in a data API.

The results were, well, not good. I ended up undoing literally all the changes, because writing from scratch was a lot easier than trying to refactor the total mess it had made of what I'd have thought was a trivial feature.

I haven't done loads of prompt engineering, etc.; in all honesty it seems like a lot of work when I haven't yet seen promise in the tool.

I see articles like this and I always wonder: am I the outlier, or is the writer? My experience of agentic AI is hugely different from what some people are finding.

aydyn|4 months ago

Think of it this way: what's the likelihood that what you're asking for can be found in some public GitHub repo? If it's high, you're good to go.

sfn42|4 months ago

As someone who has been fairly negative towards AI until recently, the problem is how you use it.

If you just tell it to build some vaguely specified feature, it's gonna do whatever it's gonna do, and maybe it will be good, maybe it won't. It probably won't. The more specific you are, the better it will do.

Instead of trying to 100x or 1000x your effort, try to just 2x or 3x it. Give it small specific tasks and check the work thoroughly, use it as an extension of yourself rather than a separate "agent".

I can tell it to write a function and it'll do pretty well. I can ask it to fix things if it doesn't do it the way I want. This is all easy. Maybe I can even get it to write a whole class at once or maybe I can get it to write a class in a few iterations.

The key here is that I'm in control: I'm doing the design, I'm making the decisions. I can ask it how I should approach a problem and often it'll have great suggestions. I can ask it to improve a function I've written and it'll do pretty well, sometimes really well.

The point is I'm using it as a tool; I'm not using it to do my job for me. I use it to help me think; I don't use it to think for me. I don't let it run away from me and edit a whole bunch of files, etc. I keep it on a tight leash.

I'm sold now. I am, indisputably, a better software developer with LLMs in my toolbelt. They help me write better code, faster, while learning things faster and more easily. It's really good. Reliability isn't a problem when I keep a close eye on it. It's only a problem if you try to get it to do a whole big task on its own.

sothatsit|4 months ago

Agent performance depends massively on the work you do.

For example, I have found Claude Code and Codex to be tremendously helpful for my web development work. But my results for writing Zig are much worse. The gap in usefulness of agents between tasks is very big.

The skill ceiling for using agents is also surprisingly high. Planning before coding, learning agent capabilities, environment setup, and context engineering can make a pretty massive difference to results. This can all be a big time sink though, and I'm not sure if it's really worth it if agents don't already work decently well for the work you do.

But with the performance gaps between domains, and the skill curve, I can definitely understand why there is such a divide between people claiming agents are ridiculously overhyped, and people who claim coding is fundamentally changing.

danielbarla|4 months ago

I think a lot of it comes down to the domain, language, and frameworks, your expectations, and prompt engineering. Having said that, I have had a number of excellent experiences in the past few weeks:

- Case 1 was troubleshooting what turned out to be a complex and messy dependency injection issue. I got pulled in to unblock a team member, who was struggling with the issue. My efforts were a dead-end, but Claude (Code) managed to spot a very odd configuration issue. The codebase is a large, legacy one.

- Case 2 was in the same codebase: I again got pulled in to unblock a teammate, investigating why some integration tests passed when run individually but failed when run as a group. Clearly there was a pretty obvious smoking gun, and I managed to isolate the issue after about 15-30 minutes of debugging. I had set Claude on the goose chase as well, and as I closed the call with my teammate, I noticed it had found the exact same two lines that were causing the issue.

Clearly, it occasionally does insane stuff, or lies its little pants off. The number of times it has "got me" is fairly low, however, and its usefulness to me is extreme. In the cases above, it out-did a teammate who has at least 10 years of experience, and it equalled me in one case and outdid me in the other, with my 25-plus years. I have a similar wonderment to yours, but in the opposite direction: "how are people NOT finding value in this?"

x0x0|4 months ago

I'm the same, with the same question if it's me.

I've had success with, e.g., spitting out templated HTML, sometimes with CSS, and with writing tests where I'm very specific about what I want (set up these structures, test this condition), etc. It's mediocre (a good start, but very far from production) at writing screens in React Native. It does slightly better on Rails, but is still far from production-ready.

Beyond that, it kinda works, but the effort required to turn the output into working code is higher than just writing it myself.

lazarus01|4 months ago

AI coding works amazingly well

But only on micro tasks, with explicit instructions, inside a very well documented architecture.

Give AI freedom of expression and it will never find first principles in its training data. You will receive code that is not performant, and when you analyze the output, the AI will try to convince you that it is. If the task goes beyond your domain, you may believe the wrong principles are fine.

vijucat|4 months ago

They're great at creating test cases from code and/or log file excerpts. They're good at run-of-the-mill tasks whose answer one can reasonably expect to find on StackOverflow. I'm using GPT-4.1 and Claude 3.7 Sonnet Thinking with VS Code + GitHub Copilot.

cosmodust|4 months ago

It's very use-case specific. I find them really good at simple repetitive tasks as long as you guide them at a low level, although you do need to keep a close eye on them, as they easily spoil your existing work.

ekidd|4 months ago

> I'm so curious about what people's median experience of AI coding tools is.

My experience is coding agents work best for either absolute beginners, or for lead engineers who have experience building and training teams. Getting good results out of coding agents is a lot like getting good results out of interns: You need to explain clearly what you want, ask them to explain what they plan to do, give feedback on the plan, and then very carefully review the results. You need to write up your preferred coding style, you need a document that explains "how to work on this project", you need to establish rigorous automated quality checks, etc. Using a coding agent heavily is a lot like being promoted to "technical lead", with all the tradeoffs that entails.

Here's a recent discussion of a good blog post on the subject: https://news.ycombinator.com/item?id=45503867

I have gotten some very nice results out of Sonnet 4.5 this past week. But it required using my "technical management" skills very heavily, and it required lots of extremely careful code review. Clear documentation, robust QA, and code review are the main bottlenecks.

I mean, the time I spent writing AGENTS.md wasn't wasted. I'm writing down a lot of stuff I used to teach in pairing sessions.

preommr|4 months ago

> It made sense to me that the persuasion principles I learned in Robert Cialdini's Influence would work when applied to LLMs. And I was pleased that they did.

No, no. Stop.

What is this? What are we doing here?

This goes past developing with AI into something completely different.

Just because AI coding is a radical shift doesn't mean everything has changed. There needs to be some semblance of structure and design. Instead, what we're getting is straight-up voodoo nonsense.

w10-1|4 months ago

> what we're getting is straight up voodoo nonsense

Maybe not in this case.

For the AI to create a solution, it has to come up with a vector for your intention and goals. It makes some sense for an AI trained on human persuasion materials (basically, everything has a rhetorical aspect) to also track human persuasion features for intentions.

However, results will vary. Just as people trying to deploy rhetorical techniques (and ridiculous power stances) often come off as foolish, I believe trying to hack your intention vector with all-caps and super-superlatives won't always work as intended (pun intended).

Still, if you find yourself not getting what you want, and you check your prompt and find some persuasion feature missing (e.g., authority), I think it's worth trying to add something on point.

imiric|4 months ago

> Instead, what we're getting is straight-up voodoo nonsense.

It always has been. Starting with the term "AI" itself.

Articles like these read the same way to me as any OpenAI announcement from the past 5 years. A bunch of technical mumbo jumbo laced with hyperbole, grand promises of how the technology is changing the world, and similar platitudes. I've learned to filter most of it out.

Occasionally I'll stumble upon an actually useful and practical tidbit of information which I can apply in my own workflow, which does involve LLMs, but most of the time it's just noise.

tcdent|4 months ago

This style of prompting, where you set up a dire scenario to try to evoke some "emotional" response from the agent, is already dated. At some point, putting words like IMPORTANT in all caps had some measurable impact, but at present, models just follow instructions.

Save yourself the experience of having to write and maintain prompts like this.

bcoates|4 months ago

Also the persuasion paper he links isn't at all about what he's talking about.

That paper is about using persuasion prompts to overcome trained in "safety" refusals, not to improve prompt conformance.

kasey_junk|4 months ago

What’s irritating is that the LLMs haven’t learned this about themselves yet. If you ask an LLM to improve its instructions, these are the sorts of improvements it will suggest.

It is the thing I find most irritating about working with LLMs and agents: they seem forever a generation behind in capabilities that are self-referential.

intended|4 months ago

This isn't science, or engineering.

This is voodoo.

It likely works, but knowing that YAGNI is a thing means that at some level you are invoking a cultural touchstone for a very specific group of humans.

Edit -

I dug into the superpowers and skills for a bit. Definitely learned from it.

There’s stuff that doesn’t make sense to me on a conceptual basis. For example, in the skill to preserve productive tensions, there’s a part that goes:

> The trade-off is real and won't disappear with clever engineering

There’s no dimension there for “valid”, nor any prediction about the tradeoff.

I can guess that if the preceding context already outlines the tradeoffs clearly, or somehow encodes that there is no clever solution that threads the needle, then this section can work.

Just imagining what dimensions must be encoding some of this suggests that it won’t work for situations where the example wasn’t already encoded in the training. (Not sure how to phrase it.)

clusterhacks|4 months ago

> This isn't science, or engineering.

> This is voodoo.

I was struggling to find the exact reason this type of article bugs me so much, and I think "voodoo" is precisely the right word to sum up my feelings.

I don't mean that as a judgement on the utility of LLMs, or to say that reading about what different users have tried in order to increase that utility isn't valuable. But if someone asked me how to most effectively get started with coding agents, my instinct is to answer (a) "carefully" and (b) "probably every approach works somewhat".

3eb7988a1663|4 months ago

I am only on the first page and saw this blurb and was immediately annoyed.

  @/Users/jesse/.claude/plugins/cache/Superpowers/...

The XDG spec has been out for decades now. Why are new applications still polluting my HOME? It also seems weird that real data would be put under a cache/ location, but whatever.
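For reference, the XDG convention the comment invokes resolves paths like this (a sketch; the `claude`/`Superpowers` names are just taken from the thread, and a real app would use a proper library):

```python
import os

def xdg_cache_home() -> str:
    # XDG Base Directory spec: use $XDG_CACHE_HOME if set,
    # otherwise fall back to ~/.cache rather than a dotdir in $HOME.
    return os.environ.get("XDG_CACHE_HOME") or os.path.join(
        os.path.expanduser("~"), ".cache"
    )

def plugin_cache_dir(app: str, plugin: str) -> str:
    # Where an XDG-respecting app would cache a downloaded plugin.
    return os.path.join(xdg_cache_home(), app, "plugins", plugin)
```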

simonw|4 months ago

It's in the cache location because it's a copy of a plugin that was installed from a GitHub repository, so that's not the original source of truth for that file.

hoechst|4 months ago

documents like https://github.com/obra/superpowers/blob/main/skills/testing... are very confusing to read as a human. "skills" in this project generally don't seem to follow a set format and just look like what you would get when prompting an LLM to "write a markdown doc that step by step describes how to do X" (which is what actually happened, according to the blog post).

idk, but if you already assume that the LLM knows what TDD is (it probably ingested ~100 whole books about it), why are we feeding a short (and imo confusing) version of that back to it before the actual prompt?

i feel like a lot of projects like this that are supposed to give LLMs "superpowers" or whatever by prompt engineering are operating on the wrong assumption that LLMs are self-learning and can be made 10x smarter just by adding a bit of magic text that the LLM itself produced before the actual prompt.

ofc context matters, and if i have a repetitive task, i write down my constraints and requirements and paste that in before every prompt that fits the task. but that's just part of the specific context of what i'm trying to do. it's not giving the LLM superpowers, it's just providing context.

i've read a few posts like this now, but what i am always missing is actual examples of how it produces objectively better results compared to just prompting without the whole "you have skill X" thing.

Footprint0521|4 months ago

I fully agree. I’ve been running codex with GPT Pro (5o-codex-high) for a few weeks now, and it really just boils down to context.

I’ve found the most helpful things for me are voice via Whisper to LLMs, managing token usage effectively and restarting chats when necessary, and giving it quantified ways to check when its work is done (say, AI unit tests against APIs, or Playwright tests). Also, every file I own is markdown, haha.
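A "quantified way to check when its work is done" can be as simple as a check the agent must make pass before it is allowed to declare victory. A minimal sketch, where `slugify` is an invented stand-in for whatever the agent was asked to build:

```python
def slugify(title: str) -> str:
    # Hypothetical function the agent was asked to implement.
    return "-".join(title.lower().split())

def check_done() -> bool:
    # The task counts as "done" only when these cases pass,
    # not when the model merely says it's done.
    cases = {
        "Hello World": "hello-world",
        "  Spaces   everywhere ": "spaces-everywhere",
    }
    return all(slugify(k) == v for k, v in cases.items())
```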

And obviously having different AI chats for specialized tasks (the way the math works in these models makes this produce much better results!)

All of this has allowed me to stay in the PM role like he said, but without burning down a needless forest by having it reevaluate things in its training set, lol. But why would we go back to vendor lock-in with Claude? Not to mention how much more powerful 5o-codex-high is; it's not even close.

The good thing about what he said is getting AI to work with AI; I have found this to be incredibly useful in prompting and in segmenting out roles.

redhale|4 months ago

Everything is just context, of course. Every time I see a blog post on "the nine types of agentic memory" or some such I have a similar reaction.

I would say that systems like this are about getting the agent to correctly choose the precisely right context snippet for the exact subtask it's doing at a given point within a larger workflow. Obviously you could also do that manually, but that doesn't scale to running many agents in parallel, or to running autonomously for longer durations.
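One naive way to "choose the right context snippet" automatically is keyword-overlap scoring, far cruder than what real systems do, but it makes the idea concrete (the snippet names and bodies below are invented):

```python
def pick_snippet(task: str, snippets: dict[str, str]) -> str:
    """Return the name of the snippet sharing the most words with the task."""
    task_words = set(task.lower().split())

    def overlap(body: str) -> int:
        # Count shared words between the task and a snippet body.
        return len(task_words & set(body.lower().split()))

    return max(snippets, key=lambda name: overlap(snippets[name]))
```

Real systems would use embeddings or let the model itself search an index, but the selection problem is the same.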

jmull|4 months ago

> <EXTREMELY_IMPORTANT>…*RIGHT NOW, go read…

I don’t like the looks of that. If I used this, how soon before those instructions would be in conflict with my actual priorities?

Not everything can be the first law.

therealdrag0|4 months ago

Seems like maintaining a bashrc file. Sometimes you have to go tweak it.

apwell23|4 months ago

don't LLMs tell you not to give them instructions like that these days?

jackblemming|4 months ago

Seems cute, but ultimately not very valuable without benchmarks or some kind of evaluation. For all I know, this could make Claude worse.

jelling|4 months ago

Same. We've all fooled ourselves into believing that an LLM / stochastic process was finally solved, based on one good result. But the sample size is always too low to be meaningful.

anuramat|4 months ago

even if it works as described, I'm assuming it's extremely model-dependent (e.g. the book prerequisites), so you'd have to re-run this for every model you use; this is basically a poor man's finetuning.

maybe explicit support from providers would make it feasible?

Avicebron|4 months ago

I often feel these types of blogposts would be more helpful if they demonstrated someone using the tools to build something non-trivial.

Is Claude really "learning new skills" when you feed it a book, or does it present it like that because your prompting encourages that sort of response behavior? I feel like you'd have to demo Claude with the new skills and Claude without.

Maybe I'm a curmudgeon, but most of these types of blogs feel like marketing pieces; the important bit is that so much is left unsaid and unshown that they come off like a kid trying to hype up their own work without the benefit of nuance or depth.

causal|4 months ago

Using LLMs for coding complex projects at scale over a long time is really challenging! This is partly because defining requirements alone is much more challenging than most people want to believe. LLMs accelerate any move in the wrong direction.

khaledh|4 months ago

Agreed. The methodology needed here is something like an A/B test, with quantifiable metrics that demonstrate the effectiveness of the tool. And it should be done not just once, but many times under different scenarios, to demonstrate statistical significance.

The most challenging part when working with coding agents is that they seem to do well initially on a small code base with low complexity. Once the codebase gets bigger with lots of non-trivial connections and patterns, they almost always experience tunnel vision when asked to do anything non-trivial, leading to increased tech debt.
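The A/B methodology suggested above can be sketched concretely: run each prompt variant N times on the same task, count passes, and apply a two-proportion z-test. The pass counts below are invented for illustration:

```python
from math import sqrt, erf

def two_prop_z(pass_a: int, n_a: int, pass_b: int, n_b: int) -> float:
    """Two-sided p-value for 'variant B's pass rate differs from A's'."""
    p_a, p_b = pass_a / n_a, pass_b / n_b
    p = (pass_a + pass_b) / (n_a + n_b)            # pooled pass rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Invented numbers: plain prompt passes 22/40 runs, "skills" prompt 31/40.
p_value = two_prop_z(22, 40, 31, 40)
```

With these made-up counts the difference is borderline significant, which is exactly why a handful of anecdotal runs proves little.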

spankibalt|4 months ago

> "Maybe I'm a curmudgeon but most of these types of blogs feel like marketing pieces with the important bit is that so much is left unsaid and not shown, that it comes off like a kid trying to hype up their own work without the benefit of nuance or depth."

C'mon, such self-congratulatory "Look at My Potency: How I'm Using Nicknack.exe" fluff pieces always were, and always will be, a staple of the IT industry.

danielmarkbruce|4 months ago

Why not just use claude code and come to your own conclusion?

coolKid721|4 months ago

Yeah, I was reading this to see if there was something he'd actually show that would be useful, some pain point he was solving, but it's just slop.

theptip|4 months ago

> some of the ones I've played with come from telling Claude "Here's my copy of programming book. Please read the book and pull out reusable skills that weren't obvious to you before you started reading."

This is actually a really cool idea. I think a lot of the good scaffolding right now is things like "use TDD", but if you link citations to the book, then it can perhaps extract more relevant wisdom and context (just like I would by reading the book), rather than using the generic averaged interpretation of TDD derived from the internet.

I do like the idea of giving your Claude a reading list and some spare tokens on the weekend when you're not working, and having it explore new ideas and techniques to bring back to your common CLAUDE.md.

daemontus|4 months ago

Maybe this is a naive question, but how are "skills" different from just adding a bunch of examples of good/bad behavior to the prompt? As far as I can tell, each skill file is a bunch of good/bad examples of something. Is the difference that the model chooses when to load a certain skill into context?

simonw|4 months ago

I think that's one of the key things: skills don't take up any of the model context until the model actively seeks out and uses them.

Jesse on Bluesky: https://bsky.app/profile/s.ly/post/3m2srmkergc2p

> The core of it is VERY token light. It pulls in one doc of fewer than 2k tokens. As it needs bits of the process, it runs a shell script to search for them. The long end to end chat for the planning and implementation process for that todo list app was 100k tokens.

> It uses subagents to manage token-heavy stuff, including all the actual implementation.
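The "token-light until needed" behavior described here can be sketched as an index-then-fetch pattern. This is illustrative only, not the actual Superpowers implementation (which uses shell scripts): only one-line summaries enter the prompt up front, and a skill's full body is read on demand.

```python
from pathlib import Path

def build_index(skills_dir: str) -> str:
    # Tiny index that goes into every prompt: name + one-line summary,
    # taken from each skill file's first heading.
    lines = []
    for path in sorted(Path(skills_dir).glob("*.md")):
        first = path.read_text().splitlines()[0].lstrip("# ")
        lines.append(f"{path.stem}: {first}")
    return "\n".join(lines)

def load_skill(skills_dir: str, name: str) -> str:
    # Full skill body, fetched only when the model asks for it.
    return (Path(skills_dir) / f"{name}.md").read_text()
```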

nrjames|4 months ago

I think it just gives you the ability to do that easily with a slash command, like using "/brainstorm database schema" or something, instead of needing to define what "brainstorm" means each time you want to do it.

hackernewds|4 months ago

What you are suggesting is 1-shot, 2-shot, 5-shot, etc. prompting, which is so effective that it's how benchmarks were presented for a while.
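"N-shot" just means prepending N worked examples to the prompt. A minimal sketch (the sentiment-labeling examples are invented):

```python
# Minimal 2-shot prompt construction: the worked examples are the "shots".
EXAMPLES = [
    ("Review: 'great, fast shipping'", "positive"),
    ("Review: 'arrived broken'", "negative"),
]

def few_shot_prompt(query: str) -> str:
    shots = "\n".join(f"Input: {x}\nLabel: {y}" for x, y in EXAMPLES)
    # The model is expected to continue the pattern after the final "Label:".
    return f"{shots}\nInput: {query}\nLabel:"
```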

JaggerFoo|4 months ago

I don't see any code. Where are the examples of use on real code?

meander_water|4 months ago

The problem with stuff like this is that it's hard to evaluate. You don't even know when the agent is using a skill, or whether the skill even made a difference. Using tools at least lets you instrument tool calls and control what gets executed.
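The "at least you can instrument tool calls" point can be made concrete with a logging wrapper. A sketch; any real agent framework has its own hook points, and `read_file` here is a hypothetical tool:

```python
import functools
import time

CALL_LOG = []  # (tool name, seconds elapsed, succeeded?) — inspect after a run

def instrument(fn):
    """Wrap a tool function so every call the agent makes is recorded."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            result = fn(*args, **kwargs)
            CALL_LOG.append((fn.__name__, time.monotonic() - start, True))
            return result
        except Exception:
            CALL_LOG.append((fn.__name__, time.monotonic() - start, False))
            raise
    return wrapper

@instrument
def read_file(path: str) -> str:
    # Hypothetical tool exposed to the agent.
    return f"<contents of {path}>"
```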

redhale|4 months ago

I agree; I think traceability will be extremely important in evolving and improving a system like this. Since scripting is involved in searching for and managing skills, I feel like there is probably a way to achieve some kind of usage tracing, but I'm not quite sure. It seems like this, if implemented, could also be fed back into the system for self-improvement.

herval|4 months ago

Fascinating write-up. I loved this bit of debugging:

> The first time we played this game, Claude told me that the subagents had gotten a perfect score. After a bit of prodding, I discovered that Claude was quizzing the subagents like they were on a gameshow. This was less than useful. I asked to switch to realistic scenarios that put pressure on the agents, to better simulate what they might actually do.

Also, his Claude says "shit" a lot.

jvanderbot|4 months ago

This is so interesting, but it reads like satire. I'm sure folks who love persuading, teaching, and marshalling groups are going to do very well in SWEng.

According to this, we'll all be reading the feelings journals of our LLM children and scolding them for cheating on our carefully crafted exams instead of, you know, making things. We'll read psychology books, apparently.

I like reading and tinkering directly. If this is real, the field is going to leave that behind.

sunir|4 months ago

We certainly will; they can’t replace humans in most language tasks without having a human-like emotional model. I have a whole therapy set of agents to debug neurotic long-lived agents with memory.

imiric|4 months ago

And here I am in October 2025 still using "AI" tools via a chat UI in Emacs, like a caveman. I've written some code to help me with managing context and such, but the tools are there when I need them, and otherwise stay out of my way.

I have no interest in trying to understand the thought process of people who write and work like this. They're more interested in chasing the latest overhyped trends produced by tech companies and influencers, than actually producing quality software that solves real-world problems. It's some weird product of the tech and social media echo chambers they perpetually live in, which I find difficult to describe.

But apparently I have to learn about "skills" and "superpowers" now... Give me a break.

amelius|4 months ago

It's not a superpower if everybody has that same power.

cantor_S_drug|4 months ago

Everyone is better off with mobile phones; we can solve more diverse problems faster. Similarly, we can combine our diverse superpowers (as they do in kids' cartoons).

spprashant|4 months ago

I am not ashamed to admit this whole agentic coding movement has moved beyond me.

Not only do I have to know everything about the code, data, and domain, but now I need to understand this whole AI system, which is a meta-skill of its own.

I fear I may never be able to catch up until someone comes along and simplifies it for pleb consumption.

philbo|4 months ago

I think this and other recent posts here hugely overcomplicate matters. I notice none of them provides an A/B test for each item of complexity they introduce; there's just a handwavy "this has proved to work over time".

I've found that a single CLAUDE.md does really well at guiding it to behave how I want. For me, that means making it take small steps and stop to ask me questions frequently, so it's more like we're pairing than me sending it off to work on a task solo. I'm sure that's not to everyone's taste, but it works for me (and I say this as someone who was an agent-sceptic until quite recently).

Fwiw my ~/.claude/CLAUDE.md is 2.2K / 49 lines.

cruffle_duffle|4 months ago

I’ve personally decided that Cursor’s agent mode is good enough. A single foreground instance of Cursor doing its thing is plenty to babysit. Based on that experience, I am highly skeptical that people are actually creating things of value with these multi-agent-running-in-the-background setups. Way too much babysitting, and honestly, writing docs and specs for them is more work than just writing parts of the code myself and letting the LLM do the tedious bits, like finishing what I started.

No matter what you are told, there is no silver bullet. Precisely defining the problem is always the hard part. And the best way to precisely define a problem and its solution is code.

I’ll let other people fight swarms of bots building… well who knows what. Maybe someday it will deliver useful stuff, but I’m highly skeptical.

hoechst|4 months ago

Much of it is just "put this magic string before your prompt to make the LLM 10x better" voodoo, similar to the SEO voodoo common in the 2000s.

just remember that it works the same for everyone: you input text, magic happens, text comes out.

if you can properly explain a software engineering problem in plain language, you're an expert in using LLMs. everything on top of that is people experimenting or trying to build the next big thing.

gdulli|4 months ago

It's also possible to put in enough hours of real coding to get to the point where coding really isn't that hard anymore, at least not hard enough to justify switching from those stable, solid fundamental skills to a constantly revolving ecosystem of ephemeral tools, models, model versions, best practices, and lessons from trial and error. Then you can bypass all of this distraction.

Admittedly, that stance is easiest to take if you were old enough, and experienced enough, by the time this era hit.

evanmoran|4 months ago

To give you a process that might help:

I’ve found you have to use Claude Code to do something small, and as you do, iterate on the CLAUDE.md input prompt to refine what it does by default. When it doesn't do things your way, change the file to see if you can fix how it works. The agent is then equivalent to calling ChatGPT / Sonnet 1000 times an hour, so these refinements (the skills in the post are a meta approach) are all about tuning the workflow to be more accurate for your project and to fit your mental model. As you tune the md file, you'll start to feel what is possible and understand agent capabilities much better.

Short story: you have to try it. Long story: it's the iteration on the meta-prompt approach that teaches you what's possible.

lcnPylGDnU4H9OF|4 months ago

I haven't really done much of it but my plan is just to practice. This seems like a powerful thing to start with.

benhurmarcel|4 months ago

> till someone comes along and simplifies it for pleb consumption

Just give it a few months. If some of these techniques really work, they’ll get streamlined.

zahlman|4 months ago

> It also bakes in the brainstorm -> plan -> implement workflow I've already written about. The biggest change is that you no longer need to run a command or paste in a prompt. If Claude thinks you're trying to start a project or task, it should default into talking through a plan with you before it starts down the path of implementation.

... So, we're refactoring the process of prompting?

> As Claude and I build new skills, one of the things I ask it to do is to "test" the skills on a set of subagents to ensure that the skills were comprehensible, complete, and that the subagents would comply with them. (Claude now thinks of this as TDD for skills and uses its RED/GREEN TDD skill as part of the skill creation skill.)

> The first time we played this game, Claude told me that the subagents had gotten a perfect score. After a bit of prodding, I discovered that Claude was quizzing the subagents like they were on a gameshow. This was less than useful. I asked to switch to realistic scenarios that put pressure on the agents, to better simulate what they might actually do.

... and debugging it?

... How many other basic techniques of SWEng will be rediscovered for the English programming language?

novoreorx|4 months ago

To me, this kind of stuff is like bloated boilerplate, such as a "full-stack e-commerce SaaS Next.js boilerplate." I never use them because I want more control and less unpredictability. They seem to save you some time, but you pay a lot more for it later when you encounter deep bugs or need to refactor. For this reason, I won't use prompt templates for agentic coding tools either. There has been enough advice to write your own AGENTS.md and not overcomplicate the prompts.

d4rkp4ttern|4 months ago

A big issue when working with code agents is what I call context recall: restoring context when working on a new feature or fix that builds on recent work.

Meaning, the previous work may have involved multiple CLI sessions, summaries dumped to various markdown files like documentation files, plan files, issue files, PR-descriptions etc. Then when starting new work with a code agent you have to hunt down all of this scattered context from various md files and session logs to fill in background for the code-agent about what was recently done.

I see many workflows that help with working on a fresh feature or fix, but nothing that addresses context-recall. But maybe the OP workflow or others do that, I haven’t dug too deep into them.

d4rkp4ttern|4 months ago

(Just realized the OP blog actually does address exactly this)

Aloisius|4 months ago

What's up with people (or I suppose AI) including copyright licenses in AI generated code?

At least it's an MIT license, but since AI output isn't copyrightable, I'm unsure what the point is since people can legally ignore the license.

hugh-avherald|4 months ago

^ (not legal advice -- far from it)

kreyenborgi|4 months ago

Anyone else get the feeling like CLAUDE.md fiddling is the new dotemacs fiddling?

iamjfu|4 months ago

I am interested by this link: https://blog.fsck.com/blog/2025/superpowers/superpowers-demo...

``` Claude Code v2.0.13 Sonnet 4.5 (with 1M token context) Claude Max /Users/jesse/tmp/new-tool/.worktrees/todo-cli ```

How does this person have access to Sonnet 4.5 with 1m token context? I don't see this referenced anywhere when I search or when I ask Claude about it.

d4rkp4ttern|4 months ago

It’s a limited-release beta feature not available to everyone. You can try to activate it with `/model sonnet[1m]`. It accepts the command, but at the next API call it may fail and say “this beta model is not available with your subscription”.

I haven’t gotten access yet.

One of the nice things about Codex (GPT-5) is the supposed 400k token context (although performance starts to deteriorate when you get to 80% context usage).

dwb|4 months ago

Honestly, if the LLM/agent can't do what I want with a simple, shortish prompt that I understand, augmented by some well-chosen tool calls, I'm not interested. These incantations may or may not work, but I just don't want them. Reams of vague twiddling of an unknowable black box. I want the amount of mystery kept at an absolute minimum when I'm programming.

StapleHorse|4 months ago

A little bit off topic. I love how AI is advancing so fast that the usual title: "How i'm using XX in 20NN" is not specific enough, now we need the month.

tobbe2064|4 months ago

Is it possible to set up this kind of workflow with the plug in that comes bundled with vs code, given that you have an enterprise github copilot account that includes Claude?

AlexCoventry|4 months ago

I think this is cool, but some performance benchmarks would really help to sell it.

throw-10-13|4 months ago

“Here is a collection of arcane incantations and humiliating prostrations I use to get my AI homunculus to serve me.”

Having to beg and emotionally manipulate an agent into doing what you want goes so far beyond black-box that I find it difficult to believe these people actually get useful work done using these tools.

I generally consider myself pro-ai in the workplace, but this nonsense is starting to change my mind.

tobbe2064|4 months ago

What's the cost of running with agents like this?

dbbk|4 months ago

Claude Max is fixed cost

4b11b4|4 months ago

I'm not sure exactly what I just read...

Is this just someone who has tingly feelings about Claude reiterating stuff back to them? cuz that's what an LLM does/can do

lerp-io|4 months ago

take #73895 on how to fix ur prompt to make ur slop better.

apwell23|4 months ago

yeah none of them can actually prove or even explain in words why their own golden prompting technique is superior. its all vibes. so annoying, i want to slap these ppl lol.

anuramat|4 months ago

is better slop a bad thing somehow?

yoyohello13|4 months ago

The post reads like someone throwing bones and reading their fortune. The part where Claude did its own journaling was so cringe it was hilarious. The tone of the journal entry was exactly like the blog author's, which suggests to me Claude is reflecting back what the author wants to hear. I feel like Jesse is consumed in a tornado of LLM sycophancy.

saaaaaam|4 months ago

Claude has never once said “oh shit” or “holy crap” to me. I must be doing something horribly wrong.

cynicalsecurity|4 months ago

Superpower: AI slop.

echelon|4 months ago

I'm sure the horse-whip manufacturers had similar things to say about steam-powered horses. We just don't think about them much anymore.

The whole world is changing around us and nothing is secure. I would not gamble that the market for our engineering careers is safe with so much disruption happening.

Tools like Lovable are going to put lots of pressure on technical web designers.

Business processes may conform to the new shape and channels for information delivery, causing more consolidation and less duplication.

Or perhaps the barrier to entry for new engineers, in a worldwide marketplace, lowers dramatically. We have accessible new tools to teach, new tools to translate, new tools to coordinate...

And that's just the bear case where nothing improves from what we have today.

jstummbillig|4 months ago

How are skills different from tools? Looks like another layer of abstraction. What for?

zkmon|4 months ago

<Homer Simpson mode>Oh yeah? If prompting is such damn cool hard thing, why can't I ask my AI slave to do all this prompting mumbo jumbo for me?</Homer Simpson mode>

gjm11|4 months ago

Has anyone ever seen an instance in which the automated "How" removal actually improves an article title on HN rather than just making them wrong?

(There probably are some. Most likely I notice the bad ones more than the good ones. But it does seem like I notice a lot of bad ones, and never any good ones.)

[EDITED to add:] For context, the actual article title begins "Superpowers: How I'm using ..." and it has been auto-rewritten to "Superpowers: I'm using ...", which completely changes what "Superpowers" is understood as applying to. (The actual intention: superpowers for LLM coding agents. The meaning after the change: LLM coding agents as superpowers for humans.)

add-sub-mul-div|4 months ago

I agree. I'm sure I've seen instances where it's worked, but the problem is that when it messes up, it's much more annoying than any benefit it brings when it works. Some of us don't want to be reminded that tech is full of hubris, overconfidence, poor judgment, and failure about what can/should be abstracted and automated.

bryanrasmussen|4 months ago

I've had it happen a few times where it was reasonable, sometimes where it was debatable, and if it was just wrong I edit the title to add the How back in.

dvfjsdhgfv|4 months ago

Yeah, to the point that I can recall several examples where the title stuck out as dumb on HN and only made sense once I visited the original page, but not a single case where I could say the automated removal really did a good job.

apwell23|4 months ago

[deleted]

dang|4 months ago

That's far from true. Also, please don't cross into personal attack on this site.

https://news.ycombinator.com/newsguidelines.html

(We detached this subthread from https://news.ycombinator.com/item?id=45549522.)

Edit: your account has unfortunately been doing this repeatedly (https://news.ycombinator.com/item?id=45551198), and you've been breaking the site guidelines in other ways as well (e.g. https://news.ycombinator.com/item?id=45527456). We ban accounts that do this, so if you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, that would be good.

GOD_Over_Djinn|4 months ago

The past few years have taught me that these are the people that rise to the top of society (much to my chagrin).

The average person doesn’t want to hear from thoughtful intellectuals presenting nuanced opinions. They want to hear from those who brashly and boastfully present themselves as authority figures, and then bolster the listener's preconceived ideas with violently exaggerated language. Shallow but sensational is what sells.

I think that Elon's bombastic claims about self-driving have really popularized this approach. But you can now see it everywhere in tech: bitcoin going to $1B and nocoiners will be peasants, AI is going to turn us all into paperclips, and on and on…

simonw|4 months ago

Here's a counter-example for you from the other day: https://simonwillison.net/2025/Oct/8/claude-datasette-plugin...

> This isn’t necessarily surprising, but it’s worth noting anyway. Claude Sonnet 4.5 is capable of building a full Datasette plugin now.

I do worry a bit about how often I use positive adjectives. If something isn't notable I won't write about it, though. In this particular case, Jesse's prompting/skills stuff really does deserve the superlatives IMO.

b_e_n_t_o_n|4 months ago

I'm far from an AI enthusiast but I really appreciate Simon for his articles and takes on AI. He's enthusiastic and optimistic but that doesn't make him a hype man.