
coppsilgold | 21 hours ago

At the moment, LLMs tend to work well when you constrain them, and you can craft the constraints with the help of the same LLM in a different session. Then, in yet another session, you can verify that the output code obeys the constraints and have the model adjust the code until it does. If one of the constraints was to yield highly functional code, you can start refining function by function as well. There is a pattern here.

If you are a good engineer you can dictate data structures to it too. It then performs even better.
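That pattern can be sketched as a loop: generate under constraints, verify in a separate step, and feed violations back for revision. A minimal sketch in Python, where `call_llm` is a hypothetical stand-in for whatever model API you use (stubbed here so the control flow runs on its own):

```python
# Sketch of the constrain / generate / verify / revise loop described above.
# `call_llm` is a hypothetical placeholder for a real chat-completion call;
# the stub simulates a model that complies once its violations are fed back.

def call_llm(prompt: str) -> str:
    # Hypothetical stub: a real implementation would call a model API here.
    if "CONSTRAINTS" in prompt and "REVISE" in prompt:
        return "def add(a: int, b: int) -> int:\n    return a + b"
    return "def add(a, b):\n    return a + b"

def violations(code: str, constraints: list[str]) -> list[str]:
    # The "verification session": check the output against each constraint.
    # Here a constraint is just a required substring; a real check could be
    # a linter, a type checker, or another LLM call.
    return [c for c in constraints if c not in code]

def generate_with_constraints(spec: str, constraints: list[str],
                              max_rounds: int = 3) -> str:
    code = call_llm(spec)
    for _ in range(max_rounds):
        broken = violations(code, constraints)
        if not broken:
            break
        # Feed the unmet constraints back and ask for a revision.
        code = call_llm(f"{spec}\nCONSTRAINTS: {broken}\nREVISE:\n{code}")
    return code

result = generate_with_constraints(
    "Write add(a, b).",
    constraints=["-> int"],  # e.g. require a return type annotation
)
print(result)
```

The same loop extends naturally to dictated data structures: put the required schema in the spec and make its presence one of the verified constraints.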

I believe the writing is on the wall at this point: it does a very adequate job if I invest enough time in writing and refining the specs and give it the data structures (and/or database schemas) I want it to use. And there is no comparison between the number of hours I spend wrangling it and the number of hours it would take me to write the code myself.

This is the worst it's going to be, and it's already quite good; it wasn't this good a mere three months ago.

The main pitfall is trying to get an LLM to read your mind; in doing so you put too much load on whatever passes for its intelligence quotient. That isn't how you get good results, or a good measure of its capabilities.


0xcafefood | 9 hours ago

Windows in 1998: this is the worst it's ever going to be.

Uber in 2010: this is the worst it's ever going to be.

There's some triumphalism here. What happens when training data becomes scarcer because open source as a paradigm was killed? What happens when investor cash flows elsewhere and training and inference need to become profitable on their own?

Wobbles42 | 10 hours ago

I keep hearing "this is the worst it's going to be" as if we can expect a monotonic increase in quality and value generation.

Meanwhile, search was better in the past, and what we have today is the best it's ever going to be.

Enshittification comes for all things.