jhrmnn | 7 months ago
I believe that LLMs can be very useful for identifying this kind of busywork in our processes. The solution, though, shouldn't be to fill those processes with LLM output but to strip them away entirely. I tend to think the same about everyone freaking out about LLM misuse in education.
Al-Khwarizmi | 7 months ago
The problem is the bureaucracy. And if it asks for useless fluff, I'm happy to feed it LLM output.
cturner | 7 months ago
Could LLM participation be blowing holes in good-governance measures that were only weakly effective anyway, and therefore be a good thing in the long term? Could the rise of the practice push grant arrangements toward better governance?
Majromax | 7 months ago
You don't need language models to identify useless processes. The problem, however, is that people tend to be more comfortable with a process whose product is ignored than with no process at all.
For example, in the case of the grants discussed here, it's easier to imagine giving money to someone with a Gantt chart – even if that chart will never really represent reality – than to someone who says 'trust us to use the money effectively.'
As an alternative view: a lot of the information supplied in such processes isn't about the happy path at all; rather, it creates a paper trail for assigning blame when things go wrong.
> I tend to think the same about everyone freaking out about LLM misuse in education.
The difference in education is that students need to practice, so the repetition is the point. The AI might ultimately write a better book report, particularly compared to a sixth grader, but there are few other ways to train skills of reading comprehension and analysis.