jonas21 | 11 days ago
If you say: "Generate a strong password using Python", then Claude will write code using the `secrets` module, execute it, and report the result, and you'll actually get a strong password.
To get good results out of an LLM, it's helpful to spend a few minutes understanding how they (currently) work. This is a good example because it's so simple.
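For reference, a minimal sketch of the kind of code an LLM typically produces for this request, using Python's `secrets` module (a CSPRNG backed by the OS); the alphabet and length here are illustrative choices, not anything from the thread:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a strong random password using the OS's CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Unlike token sampling, `secrets` draws from `os.urandom`, so the result is cryptographically unpredictable.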
aix1 | 11 days ago
Given that Claude already has the ability to write and execute code, it's not obvious to me why it should, in principle, need an explicit nudge. Surely it could just fulfil the first request exactly like it fulfils the second.
plagiarist | 11 days ago
Maybe in the future, the companies making the models will train them specifically to recognize when a task requires a source of true randomness, so that they write code for it without being nudged.