I guess my thinking is that if a parrot bites you, you get an opportunity to learn to trust neither the parrot nor yourself… a good developer with a cognitive bias towards a bad crutch eventually becomes (I hope) a better developer with no crutch and an awareness of a lot of subtle failure states that other not-as-good-as-we-thought developers have found themselves in.
Really I keep thinking of LLMs as unreliable search engines for local modes (in the statistical sense)… we get back the most commonly given answer in the training data, subtly and stochastically shifted, and if that makes us individually worse, it's showing us we were collectively not all that great to begin with.
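To make the "mode, stochastically shifted" idea concrete, here's a toy sketch (my own illustration, not anything from the thread): treat the training data as a bag of answers and the model as a sampler that usually returns the modal answer but drifts to less common ones.

```python
import random
from collections import Counter

# Hypothetical "training data": a bag of answers people have given.
training_answers = ["use a mutex"] * 50 + ["use a channel"] * 30 + ["use an atomic"] * 20

def unreliable_mode(answers, temperature=1.0):
    """Return an answer near the mode, with stochastic drift."""
    counts = Counter(answers)
    options = list(counts)
    # Higher temperature flattens the distribution, i.e. more drift
    # away from the most common answer.
    weights = [counts[o] ** (1.0 / temperature) for o in options]
    return random.choices(options, weights=weights, k=1)[0]

random.seed(0)
samples = Counter(unreliable_mode(training_answers) for _ in range(1000))
# The modal answer dominates, but the long tail still shows up.
print(samples.most_common(1)[0][0])
```

The point of the sketch: the sampler is only as good as the consensus in the bag, so if the most common answer was mediocre, the output is mediocre too.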
yawpitch|1 year ago