item 47025935

linsomniac | 14 days ago

>What is there to learn, honestly?

With all due respect, that answer shows that you don't know enough about agentic coding to form an opinion on this.

Things to learn:

    - What agent are you going to use?
    - What skills are you going to use?
    - What MCPs are you going to use?
    - What artifacts are you going to provide beyond the prompt?
    - How are you going to structure it so the tooling can succeed without human interaction?
    - Are you going to use agent orchestration and if so which?
    - Are you going to have it "ultrathink" or not?
    - Are you going to use a PRD, a checklist, or the tooling's own planning?
    - Which model or combination of models are you going to use today?  (Yes, that changes)
    - Do you have the basic English (or whatever) skills to communicate with the model, or do you need to develop them?   (I'm seeing some correlations between people with poor communication skills and those struggling with AI)
Those are a few off the top of my head. "Plz no mistakes" is not even a thing.

written-beyond | 14 days ago

I can bet that a single standard instance of an existing tool like Codex or Claude Code can do whatever someone with a convoluted setup like that can. It could be marginally slower, but it's all literally just English-language text files.

I use Codex almost every day; none of that is necessary unless you're trying to fatten up your resume.

It's microservices all over again: a concept useful for a select few organisations, which should have been applied carefully, instead turned into a fad every engineer had to try to shoehorn into their stack.

linsomniac | 14 days ago

>I can bet that a single standard instance of an existing tool like Codex or Claude Code

This is a perfect example of what I'm saying. You'd bet that, because you don't have enough experience with the tooling to know when you need more than a "standard instance of an existing tool."

Here's a real-world case: take some 20-year-old Python code and have it convert %-format strings to f-strings. Give that problem to a generic Claude Code setup and it will produce some subtle bugs. Now set up a "skill" that knows how to test the resulting f-string against the original %-format, and it will be able to detect and correct the errors it introduces, automatically. And it can do that without inflating the main context.
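The core of such a check is easy to sketch. Here's a minimal version (the function name and the example templates are my own illustration, not taken from any particular skill): render the original %-format template and the candidate f-string with the same bindings, and flag any mismatch for the agent to fix.

```python
def check_equivalent(percent_template, fstring_template, bindings):
    """Render a %-format template and a candidate f-string replacement
    with the same bindings; a mismatch flags a conversion bug."""
    expected = percent_template % bindings
    # Build an f-string literal from the template and evaluate it with
    # the bindings in scope; repr() keeps quotes and backslashes escaped.
    actual = eval("f" + repr(fstring_template), {}, dict(bindings))
    return expected == actual

# A faithful conversion passes:
print(check_equivalent("%(name)s has %(n)d items",
                       "{name} has {n} items",
                       {"name": "cart", "n": 3}))   # True

# A subtle bug -- dropping the precision spec -- is caught:
print(check_equivalent("total: %(n).2f", "total: {n}", {"n": 3}))  # False
```

In a real skill this would run sandboxed (eval on model-generated text is unsafe) and loop over format strings extracted from the codebase, but the principle is the same: the agent gets a mechanical pass/fail signal instead of relying on its own review.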

Many of the items I mention are at their core about managing context. If you find Claude Code ends up "off in the weeds," it's often because you're not managing the context window correctly.

Just knowing when to clear context, when to compact context, and when to preserve context is a core component of successfully using the AI tooling.

kaffekaka | 14 days ago

To me it seems you and many others are lost in the weeds of constantly evolving tooling and strategies.

A pretty basic Claude Code or Codex setup and being mindful of context handling goes a long way. Definitely long enough to be able to use AI productively while not spending much time on configuring the setup.

Staying on top of every detail is not just unnecessary but in fact counterproductive, trust me.

linsomniac | 13 days ago

I don't need to trust you; I've done my own testing, and using newer tooling features is dramatically better than not. One of the things about AI tooling is that it's very inexpensive to run experiments (this week I've had it build a particular tool in Python, Go, Rust, and Zig, for example).

Using skills, multiple models, MCPs, and agent teams is significantly improving the results I'm seeing on real-world problems.

You haven't really given me any reason why I should trust you, but I'll tell you it's going to be hard for me to trust advice that contradicts my test results.