top | item 47150123

Wowfunhappy | 4 days ago

The shapes of clouds and positions of stars are essentially random, and yet humans derive meaning from both. I agree you could have gotten the same results via /dev/random, or probably by increasing the temperature on the model, but I suspect doing one of those things is important.

the_af | 4 days ago

The LLM cannot derive meaning in a human sense.

The shapes of clouds and positions of stars aren't completely random; there is useful information in them, to varying degrees (e.g. some clouds do look like, say, a rabbit, enough that a majority of people will agree). The mechanism at play here with the LLM is completely different; the connection between two dog-inputs and the resulting game barely exists, if at all. Maybe the only signal is "some input was entered, therefore the user wants a game".

If you could have gotten the same result with any input, or with /dev/random, then effectively no useful information was encoded in the input. The initial prompt and the scaffolding do encode useful information, however, and are the ones doing the heavy lifting; the article admits as much.

Wowfunhappy | 4 days ago

> If you could have gotten the same result with any input, or with /dev/random, then effectively no useful information was encoded in the input.

It's not that the input contains useful information—obviously it doesn't—it's that it causes the output to be more random, and thus more "creative".

Without the gibberish, "generate a random game" would likely repeatedly surface high-probability concepts—platformers, space shooters, tower defense—whatever sits near the top of the model's prior distribution for "game." The gibberish causes the model to land on concepts like "frog" that it would almost never reach otherwise.
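The flattening effect described above is what temperature scaling does explicitly. As a minimal sketch (the logits and game categories below are made up for illustration, not taken from any real model), here is how raising the temperature shifts probability mass from a dominant concept like "platformer" toward a long-tail concept like "frog":

```python
import math
import random

def temperature_probs(logits, temperature=1.0):
    """Softmax over temperature-scaled logits.

    Higher temperature flattens the distribution, so low-probability
    ("creative") options gain mass; temperature -> 0 approaches argmax.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=random):
    """Draw one index from the temperature-adjusted distribution."""
    probs = temperature_probs(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical logits for completions of "generate a random game":
# [platformer, space shooter, tower defense, frog]
logits = [5.0, 2.0, 1.0, -1.0]
```

At low temperature the model almost always lands on "platformer"; at high temperature "frog" becomes reachable. Gibberish input plausibly pushes the model off its usual high-probability paths in a loosely analogous way, without touching the temperature parameter directly.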