item 44077854

thor_molecules | 9 months ago

I think there is a bit of cognitive dissonance that comes with trying to build stuff with LLM technology.

LLMs are inherently non-deterministic. In my anecdotal experience, most software boils down to an attempt to codify some sort of decision tree into automation that can produce a reliable result. The "reliable" part isn't there yet (and may never be?).

Then you have the problem of motivation. Where is the motivation to get better at what you do when your manager just wants you to babysit copilot and skim over diffs as quickly as possible?

Not a great epoch to be a tech worker right now, imo.


000ooo000|9 months ago

>LLMs are inherently non-deterministic.

I'm not an ML guy, but I was curious about this recently. There is a sampling parameter that can be tuned to make the output deterministic, though currently it also tends to produce worse results. Big [citation needed], but worth a google if it's of interest. Otherwise, in agreement with your post.
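The parameter in question is presumably the sampling temperature. A minimal sketch of how it works (plain Python, not any real LLM API): logits are divided by the temperature before the softmax, so as temperature approaches 0 the distribution collapses onto the single highest-scoring token and the choice becomes deterministic (greedy decoding), while higher temperatures spread probability across more tokens.

```python
import math
import random

def sample_token(logits, temperature, rng=None):
    """Sample a token index from raw logits after temperature scaling.

    temperature == 0 is treated as greedy decoding (deterministic argmax);
    temperature ~ 1.0 samples from the model's natural distribution.
    """
    if temperature <= 0:
        # Greedy decoding: always pick the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the scaled distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Toy "next-token" logits for a 3-token vocabulary.
logits = [2.0, 1.0, 0.5]

# Temperature 0: every call returns the argmax (index 0).
greedy = {sample_token(logits, 0.0) for _ in range(100)}

# Temperature 1.0: other tokens get sampled too.
rng = random.Random(42)
sampled = {sample_token(logits, 1.0, rng) for _ in range(200)}
```

This is also why "temperature 0" output is usually worse for open-ended tasks: greedy decoding can lock the model into repetitive, low-diversity text. (And in practice even temperature 0 isn't perfectly deterministic on real serving stacks, due to floating-point and batching effects.)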