I remember listening to a podcast where Grant Sanderson basically said the opposite: he tried generating manim code with LLMs and found the results unimpressive. Probably just goes to show that competence in manim looks very different to us laymen than it does to Grant haha
apetresc|6 months ago
I can imagine LLMs getting very confused when asked to write “manim”, since nearly everyone talking about “manim” (and the vast majority of public manim code) actually means the subtly-but-substantially different “manim-ce”.
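For a concrete flavor of the divergence, here's a minimal sketch of the same one-line scene in both dialects (class names as I recall them from the respective docs, so treat the details as approximate):

    # Same trivial scene, two dialects. A model trained mostly on
    # manim-ce code tends to emit `Create`, which doesn't exist in
    # the original manimlib, where the animation is `ShowCreation`.

    # manim-ce (what most public manim code today targets):
    from manim import Scene, Circle, Create

    class GrowCircle(Scene):
        def construct(self):
            self.play(Create(Circle()))  # CE renamed ShowCreation -> Create

    # 3b1b's original manimlib (ManimGL) equivalent:
    #
    #     from manimlib import Scene, Circle, ShowCreation
    #
    #     class GrowCircle(Scene):
    #         def construct(self):
    #             self.play(ShowCreation(Circle()))

Even the entry points differ, if I remember right: CE renders via `manim render scene.py GrowCircle` while the original uses `manimgl scene.py GrowCircle`, so one-shot generated code often fails before a single frame is drawn.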
sansseriff|6 months ago
- https://www.befreed.ai/knowledge-visualizer
- https://kodisc.com/
- https://github.com/hesamsheikh/AnimAI-Trainer
- https://tiger-ai-lab.github.io/TheoremExplainAgent/
- https://tma.live/, HN discussion: https://news.ycombinator.com/item?id=42590290
- https://generative-manim.vercel.app/
No doubt the results can be impressive: https://x.com/zan2434/status/1898145292937314347
The only reason I'm aware of all these attempts is that I'm betting the 'one-shot LLM animation' technique won't scale long term. I'm trying to build an AI animation app with a good human-in-the-loop experience, though I'm building with bevy instead of manim.
icelancer|6 months ago