top | item 47143044

hansmayer | 5 days ago

> Most people suck at playing the piano. Most people suck at prompting coding agents. If you practice either of those things you'll get better at them.

It would be funny if, by now, I weren't convinced you are pushing these false analogies on purpose. The key difference between a piano and LLMs is that the piano will produce the same sounds for the same sequence of keys. Every single time. A piano is deterministic. LLMs are not, and you know it, which makes your constant comparison of deterministic with non-deterministic tools sound a bit dishonest. So please stop using these very weak analogies.
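(The determinism distinction being argued here is real at the decoding level. A toy sketch, not any actual model's code: with temperature 0, sampling collapses to argmax and repeats exactly; with temperature above 0, the very same logits can yield a different token on each call.)

```python
import math
import random

def sample_token(logits, temperature):
    """Pick a token index from raw logits.

    temperature == 0 -> deterministic argmax (piano-like: same input,
                        same output every time)
    temperature > 0  -> probabilistic sampling (typical LLM decoding:
                        same input, possibly different outputs)
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits (max-subtracted for stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to those probabilities.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.5, 0.5]
# Temperature 0 is repeatable across calls; temperature 1.0 is not
# guaranteed to be.
assert all(sample_token(logits, 0) == 0 for _ in range(10))
```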

> I really don't understand the "stop telling me I'm holding it wrong" argument. You probably are holding it wrong!

Right, another weak argument. Writing English-language paragraphs is not the science you seem to imply it is. You're not the only person who has been using LLMs intensively for the last few years, and it's not like there's this huge secret to using them - after all, they use natural language as their primary interface. But that's beside the point. We're not discussing whether they are hard or easy to use. We are discussing whether I should replace the magnificent supercomputer already placed in my head by mother nature or God or aliens or whatever you believe in, with a very shitty, downgraded version 0.0.1 of it sitting in someone's datacenter, all for the sake of sometimes cutting corners by getting that quick awk/sed one-liner or some boilerplate code. I don't think that's a worthy tradeoff, especially when the relevant reports indicate an objective slowdown, which probably also explains the so-called LLM fatigue.

> Is this born out of some weird belief that "AI" is meant to be science fiction technology that you don't ever need to learn how to use?

No, actually it is born out of the weird belief that your sponsors have been promoting, explicitly or implicitly, for the 4th year now, in varying intensities and frequencies: that LLM technology will be equal to a "country of PhDs in a datacenter". All of this is based on the super weird transhumanist ideology that many of the people directly or indirectly sponsoring your writing actively believe in. And whether you like it or not, even if you have never claimed the same, you have been a useful helper by providing a more "rational"-sounding voice, commenting on the supposed incremental improvements and progress and whatnot.


simonw | 5 days ago

Fine, if you don't like the piano analogy:

Most people suck at falconry. If you practice at falconry you'll get better at it.

Falcons certainly aren't deterministic.

> it's not like there's this huge secret to using them - after all they use natural language as their primary interface

That's what makes them hard to use! A programming language has like ~30 keywords and does what you tell it to do. An LLM accepts input in 100+ human languages and, as you've already pointed out many times, responds in non-deterministic ways. That makes figuring out how to use them effectively really difficult.
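(The "~30 keywords" figure is roughly right for a typical language - Python, for instance, can enumerate its own reserved words; the exact count is in the mid-30s and varies slightly across releases.)

```python
import keyword

# Python's stdlib exposes its full list of reserved words.
# The count is in the mid-30s for recent versions.
print(len(keyword.kwlist))
print(keyword.kwlist[:5])
```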

> We are discussing whether I should replace the magnificent supercomputer already placed in my head by mother nature or God or aliens or whatever you believe in, with a very shitty, downgraded version 0.0.1 of it sitting in someone's datacenter

We really aren't. I consistently argue for LLMs as tools that augment and amplify human expertise, not as tools that replace it.

I never repeat the "country of PhDs" stuff because I think it's over-hyped nonsense. I talk about what LLMs can actually do.

hansmayer | 5 days ago

> Falcons certainly aren't deterministic.

Well, falcons are not deterministic, and they are trained to do something in the art of falconry, yes. Still, I fail to see the analogy: the falcon gets trained to execute a few specific tasks triggered by specific commands. Much like a dog. The human more or less needs to remember those few commands. We don't teach dogs and falcons to do everything, do we? Although we do teach specific dogs to do specific tasks in various domains. But no one ever claimed Fido was superintelligent and that we needed to figure him out better.

> That's what makes them hard to use! A programming language has like ~30 keywords and does what you tell it to do. An LLM accepts input in 100+ human languages and, as you've already pointed out many times, responds in non-deterministic ways. That makes figuring out how to use them effectively really difficult.

Well, yes and no. The problem with figuring out how to use LLMs effectively is caused exactly by their inherent unpredictability, which is a feature of their architecture, further exacerbated by whatever datasets they were trained on. And since we have no f*ing clue what the glorified slot machines might pop out next, and it is not even certain, as recently measured, that they make us more productive, the logical question is: why should we, as you propose in your latest blog, bend our minds to try and "figure them out"? If they are unpredictable, that effectively means we do not control them, so what good is our effort in "figuring them out"? How can you figure out a slot machine? And why the hell should we use it for anything other than a shittier replacement for pre-2019 Google? In this state they are neither augmentation nor amplification. They are a drag on productivity, and it shows - hint: the AWS December outage. How is that amplifying anything other than toil and work for the humans?