item 44616774

nirvanatikku | 7 months ago

This article is spot on.

I had stumbled upon Kidlin’s Law—“If you can write down the problem clearly, you’re halfway to solving it”.

This is a powerful guiding principle in today’s AI-driven world. As natural language becomes our primary interface with technology, clearly articulating challenges not only enhances our communication but also maximizes the potential of AI.

The async approach to coding has been most fascinating, too.

I will add, I've been using Repl.it *a lot*, and it takes everything to another level. Getting to focus on problem solving instead of futzing with hosting (granted, hosting is easy in the early journey of a product) is an absolute game changer. Sparking joy.

I personally use the analogy of a Mario Kart mushroom or star; that's how I feel using these tools. It's funny though, because when it goes off the rails, it really goes off the rails lol. It's also sometimes necessary to intercept decisions it's about to take... babysitting can take a toll (because of the speed of execution). Having to deal with one stack was something; now we're dealing with potentially infinite stacks.

m_fayer | 7 months ago

Because I can never focus on just one thing, I have a philosophy degree. I’ve worked with product teams and spent lots of time with stakeholders. I’ve written tons of docs because I was the only one on the team who enjoyed it.

I’ve always bemoaned my distractibility as an impediment to deep expertise, but at least it taught me to write well, for all kinds of audiences.

Boy do I feel lucky now.

ruthvik947 | 7 months ago

I have a philosophy degree, have worked in product teams, and have had very similar observations. I could've written this comment!

roxolotl | 7 months ago

The challenge is that clearly stating things is and always has been the hard part. It’s awesome that we have tools which can translate clear natural language instructions into code but even if we get AGI you’ll still have to do that. Maybe you can save some time in the process by not having to fight with code as much but you’re still going to have to create really clear specs which, again, is the hard part.

nosianu | 7 months ago

Anecdote

Many years ago, in another millennium, before I even went to university, while I was still an apprentice (in the German system, at a large factory), I wrote my first professional software, in assembler. I got stuck on a hard part. Fortunately there was another quite intelligent apprentice colleague with me (now a hard-science Ph.D.), and I delegated that task to him.

He still needed an explanation since he didn't have any of my context, so I bit the bullet and explained the task to him as well as I could. When I was done, I noticed that I had just created exactly the algorithm I needed. After that, I easily wrote it down myself in less than half an hour.

bryanrasmussen | 7 months ago

In my experience, only a limited part of software can be done with just really clear specs. At times in my career I have worked on things where what was really needed became clearer the more we worked on them, and in those cases really clear specs up front would have produced worse outcomes.

amy214 | 7 months ago

>The challenge is that clearly stating things is and always has been the hard part.

I state things crystal clear in real life on the internets. Seems like most of the time, nobody has any idea what I'm saying. My direct reports too.

Anyway, my point is, if human confusion and lack of clarity is the training set for these things, what do you expect?

Mtinie | 7 months ago

Excellent. That’s what we should be doing, with or without AI. It’s hard, but it’s critical.

dclowd9901 | 7 months ago

I think about this a lot. Early on, as a self taught engineer, I spent a lot of time simply learning the vernacular of the software engineering world so that I could explain what it was that I wanted to do.

dustincoates | 7 months ago

Repl.it is so hit or miss for me, and that is so frustrating. Like, it can knock out something in minutes that would have taken me an afternoon. That's amazing.

Then other times, I go to create something that is suggested _by them below the prompt box_ and it can't do it properly.

baxter001 | 7 months ago

The fact that you think it was suggested _by_ them is, I think, where your mental model is misleading you.

LLMs can be thought of metaphorically as a process of decompression. If you can give it a compressed form, as in your first scenario, it'll go great: you're actually doing a lot of mental work to arrive at that 'compressed' request, checking technical feasibility, thinking about interactions, hinting at solutions.

If you feed it back its own suggestion, it's not so guaranteed to work.

jacobr1 | 7 months ago

I've found LLMs to be a key tool in helping me articulate something clearly. I write down a few half-vague notes, maybe some hard rules, and my overall intent, then ask it to draft a spec, and then ask for suggestions, feedback, and clarifying questions from a variety of perspectives. This gives me enough material to refine my actual requirements and then ask for them to be broken down into a task list. All along the way I'm refining both my mental model and the written material, to communicate my intent more clearly to machines and humans alike.
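The loop described above can be sketched roughly as follows. This is only a hypothetical outline, not anyone's actual tooling: `ask_llm` is a stub standing in for whatever chat-completion call you'd use, and the prompts, round count, and return shape are all assumptions for illustration.

```python
def ask_llm(prompt: str) -> str:
    """Stub for a real LLM call (swap in your API client of choice)."""
    return f"[model response to: {prompt[:40]}...]"

def refine_spec(notes: str, rounds: int = 2) -> dict:
    """Notes -> draft spec -> critique/revise loop -> task list."""
    # Draft an initial spec from half-vague notes and intent.
    spec = ask_llm(f"Draft a spec from these notes:\n{notes}")
    for _ in range(rounds):
        # Ask for feedback and clarifying questions, then revise.
        feedback = ask_llm(f"Critique this spec and list clarifying questions:\n{spec}")
        spec = ask_llm(f"Revise the spec given this feedback:\n{feedback}\n\n{spec}")
    # Finally, break the refined spec into concrete tasks.
    tasks = ask_llm(f"Break this spec into a task list:\n{spec}")
    return {"spec": spec, "tasks": tasks}
```

In practice the human stays in the loop at each step, editing the notes and the spec rather than letting the model iterate unattended.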

Increasingly, I've also just been YOLOing single-shot throwaway systems to explore the design space. It's easier to refine ideas with partially working systems than with abstract prose alone.