Didn't someone back in the day write a library that let you import an arbitrary Python function from GitHub by name only? It was obviously meant as a joke, but with AIcolytes everywhere you can't really tell anymore...
Flask also started as an April 1st joke, in response to bottle.py but ever so slightly more sane. It gathered such a positive response that mitsuhiko basically had to make it into a real thing, and he later regretted some of the API choices (like global variables proxying per-request objects).
If you use a deterministic sampling strategy for the next token (e.g., always output the token with the highest probability) then a traditional LLM should be deterministic on the same hardware/software stack.
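A minimal sketch of that point, using a hypothetical toy model (a hash-based stand-in for a real LLM's forward pass, not any actual library): with greedy decoding there is no random sampling step, so repeated runs over the same input produce the same token sequence.

```python
# Toy illustration (assumed/hypothetical model): greedy decoding is
# deterministic because argmax over fixed logits involves no randomness.
import hashlib

def toy_logits(context, vocab_size=16):
    """Deterministic stand-in for an LLM forward pass: derives
    pseudo-logits from a hash of the context tokens."""
    h = hashlib.sha256(repr(context).encode()).digest()
    return list(h[:vocab_size])

def greedy_decode(prompt, steps=8):
    tokens = list(prompt)
    for _ in range(steps):
        logits = toy_logits(tokens)
        # Always pick the highest-scoring token: no sampling, no RNG.
        tokens.append(max(range(len(logits)), key=logits.__getitem__))
    return tokens

# Two runs over the same prompt yield the identical continuation.
run_a = greedy_decode([1, 2, 3])
run_b = greedy_decode([1, 2, 3])
assert run_a == run_b
```

In practice, real stacks can still diverge across hardware because floating-point reduction order varies, which is why the comment hedges with "on the same hardware/software stack".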
dheera|9 months ago
As with self-driving cars versus human drivers, there will come a point when LLM-generated code is less buggy than human-generated code.
AlotOfReading|9 months ago