zan2434 | 1 year ago
- The LLM is prompted to generate an explainer video as a sequence of small Manim scene segments, each with a corresponding voiceover
- The LLM streams its response token by token via Server-Sent Events
- As soon as a Manim segment is complete, it is sent to Modal to start rendering
- The rendered partial video files from Manim are streamed via HLS as they are generated
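The key trick in the pipeline above is detecting complete segments inside an unfinished token stream. A minimal sketch of that step, in Python, assuming the LLM is prompted to emit a `### END SCENE` delimiter after each segment (the delimiter, segment format, and `dispatch_render` stub are all assumptions, not the actual implementation):

```python
from typing import Iterable, Iterator

# Assumed marker the LLM is prompted to emit after each Manim scene segment.
SCENE_DELIMITER = "### END SCENE"

def complete_segments(tokens: Iterable[str]) -> Iterator[str]:
    """Buffer a token stream and yield each scene segment as soon as
    its closing delimiter arrives, even if the delimiter itself is
    split across multiple tokens."""
    buffer = ""
    for token in tokens:
        buffer += token
        while SCENE_DELIMITER in buffer:
            segment, buffer = buffer.split(SCENE_DELIMITER, 1)
            yield segment.strip()

def dispatch_render(segment: str) -> None:
    """Placeholder: the real pipeline would invoke a Modal function here."""
    print(f"rendering segment of {len(segment)} chars")

if __name__ == "__main__":
    # Simulated SSE token stream; note the delimiter split across tokens.
    fake_stream = ["class SceneA(Scene): ...", "### END", " SCENE",
                   "class SceneB(Scene): ...", "### END SCENE"]
    for seg in complete_segments(fake_stream):
        dispatch_render(seg)
```

Because the generator yields eagerly, rendering of the first scene can begin while the LLM is still producing later ones, which is what makes the HLS stream start before the full video exists.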