top | item 47198221

Klaster_1 | 1 day ago

The article very much resonates with my experience over the past several months.

The project I work on has been steadily growing for years, but the number of engineers taking care of it has stayed the same or even declined a bit. Most features are isolated and left untouched for months unless something comes up.

So far, I've managed the growing scope by relying on tests more and more. Then I switched to developing exclusively against a simulator. Checking changes against the real system became rare and more involved - when you do have to check, it's usually the gnarliest parts.

Last year, I noticed I could no longer answer questions about several features because, despite working on them for a couple of months and reviewing PRs, I barely held the details in my head soon afterwards. And this was all before coding agents penetrated deep into our process.

With agents, I noticed exactly what the article talks about. Reviewing a PR feels even more implicit: I have to exert deliberate effort because the tacit knowledge of the context hasn't formed yet, and you have to review more than before - the stuff goes in one ear and out the other. My teammates report a similar experience.

Currently, we are trying various approaches to deal with that, but it's still too early to tell. We now commit agent plans alongside the code so we don't lose the insights gained during development. Tasks with vague requirements, most of which we'd previously understand implicitly, are now a bottleneck: when you type requirements into an agent for planning, it immediately surfaces various issues you'd otherwise only think of during backlog grooming. Skill MDs are often dumps of tacit knowledge we previously kept distributed in less formal ways. Agents are forcing us to up our process game and discipline, and real people benefit from that too. As the article mentioned, I am looking forward to tools picking up some of that slack.

One other thing that surprised me was that my eng manager was seemingly oblivious to my ongoing complaints about growing cognitive load and confusion rate. It's as if the concept was alien to them, or they couldn't comprehend that other people handle it at a different capacity than they do.

datsci_est_2015|1 day ago

> One other thing that surprised me was that my eng manager was seemingly oblivious to my ongoing complaints about growing cognitive load and confusion rate.

Engineering managers, in my experience (even ones with deep technical backgrounds), often miss the trees for the forest. The best ones go to bat for you, especially once they've verified that they can do something to unblock or support you. But that's still different from being in the terminal or IDE all day.

Offloading cognitive load is pretty much their entire role.

matsemann|1 day ago

Learning has always been about writing things down. Just reading something seldom sticks.

RealityVoid|1 day ago

Absolutely not. Learning has been about experimenting with things until you form an effective mental model of them. Writing things down does ab-so-lutely nothing except make you feel good in the moment. Just like listening to a lecture without engaging with the subject matter more deeply.

Writing things down is important for organisational persistence of information but that is something else.

0wis|1 day ago

I'm not sure humanity learned nothing before writing appeared in the last ~8,000 years. It was just very slow. Maybe we will need new ways to learn.

bluegatty|1 day ago

We don't have the right abstractions in place to support true AI driven work. We replaced ourselves but we don't have the tools to do '1 layer up'.

enknee1|13 hours ago

Nailed it.

We desperately need a new set of abstractions for human- and AI-based knowledge.

I prefer a humans-as-a-network-of-abstractions-piloting-an-organic-robot perspective. Sans a mathematical framework, this is an unsatisfying claim, I know... but just hear me out.

This allows for extreme complexity between individuals, and for language to act as a standard serial comms channel with high-dimensional abstractions embedded across words - a network of abstractions unto itself. Models of this network are embedded in books and 'live' in oral history.

LLMs, then, are just a much better model of the abstraction networks that span people through language (and often thought).

Notice that they're NOT people. And that we are actively developing network science to accommodate the complexities inherent in examining both the real world and modeled versions of these networks.

As an example, the tools to layer up can be envisioned as more networks on top of these networks: reasoning and cognitive patterns are captured in recursive transformer-based LLMs. So a metacognitive model might actively generate a LoRA for each prompt.

Again, much math and research is needed. But it's been a very useful set of abstractions thus far.

nsvd2|1 day ago

I think that recording the dialog with the agent (the prompt, the agent's plan, and the agent's report after implementation) will become increasingly important in the future.

slashdev|1 day ago

I have this at the bottom of my AGENTS.md:

You will also add a markdown file to the changelog directory, named with the current date and time (`date -u +"%Y-%m-%dT%H-%M-%SZ"`), recording the prompt and a brief summary of what changes you made. This should be the same summary you gave the developer in the chat.

From that I get the prompt and the summary for each change. It's not perfect but it at least adds some context around the commit.
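For illustration, here's a minimal sketch of a helper that produces an entry in that spirit. The `write_changelog_entry` name, the `changelog/` directory, and the markdown headings are my assumptions, not anything from the AGENTS.md above:

```python
import datetime
import pathlib


def write_changelog_entry(prompt: str, summary: str,
                          directory: str = "changelog") -> pathlib.Path:
    """Write a timestamped markdown file recording an agent prompt and summary.

    Hypothetical helper mirroring the AGENTS.md instruction; the layout
    is an assumption, not a standard.
    """
    # UTC timestamp in the same shape as `date -u +"%Y-%m-%dT%H-%M-%SZ"`.
    timestamp = datetime.datetime.now(datetime.timezone.utc).strftime(
        "%Y-%m-%dT%H-%M-%SZ")
    path = pathlib.Path(directory)
    path.mkdir(parents=True, exist_ok=True)
    entry = path / f"{timestamp}.md"
    entry.write_text(
        f"# {timestamp}\n\n"
        f"## Prompt\n\n{prompt}\n\n"
        f"## Summary\n\n{summary}\n")
    return entry
```

One file per change keeps the log append-only, so agent-written entries never conflict with each other in version control.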

Klaster_1|1 day ago

Agree, but current agents don't help with that. I use Copilot, and you can't even dump a session while preserving its complete context, including images, tool call results, and subagent outputs. And even if you could, you'd immediately blow up the context trying to ingest it all. This needs some supporting tooling, like in today's submission where the agent accesses terabytes of CI logs via ClickHouse.