2.0 Flash is not what I'd call current; that's two versions old if we count 2.5.
neoneye2|1 month ago
For this I used gemini-2.0-flash-lite, making 150 LLM invocations.
Code is here https://github.com/neoneye/PlanExe
I'm on Discord https://neoneye.github.io/PlanExe-web/discord.html
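The PlanExe pipeline itself isn't shown in this thread; as a rough illustration only, here is a minimal sketch of what "150 LLM invocations" could look like, with a stubbed client in place of a real model call (the function names and prompts below are hypothetical, not taken from the PlanExe repo):

```python
# Hypothetical sketch of a many-invocation planning pipeline.
# stub_llm stands in for a real call to gemini-2.0-flash-lite
# (e.g. via the Gemini API); it just echoes the prompt.

def stub_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"response to: {prompt}"

def run_pipeline(tasks, llm=stub_llm):
    # One invocation per task: a plan expanded in 150 steps
    # means 150 separate LLM calls.
    return [llm(f"Expand step: {t}") for t in tasks]

results = run_pipeline([f"step {i}" for i in range(150)])
print(len(results))  # one response per invocation
```

With a real client you would swap `stub_llm` for an API call and likely add retries and rate limiting, since 150 sequential calls will hit transient errors.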
verdverm|1 month ago
Also, very few people will care about what slop you got an AI to generate; that it's generated is more a feature than a bug, as much as people want to believe it's the latter, which makes most slop very uninteresting.