(no title)
real-hacker | 3 months ago
To run it locally, simply follow the instructions in the README. I use Gemini as the LLM (requires a Google AI API key). The TTS service supports Minimax (requires an API key) or a local model for inference (SuperTonic TTS). The local model must be downloaded beforehand from the settings panel (approximately 300 MB) and currently supports English only.
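For context, the provider choice usually boils down to a couple of settings. A minimal sketch of what such a local config might look like (the variable names here are hypothetical, not the project's actual keys; check the README for the real ones):

```shell
# Hypothetical .env for local setup -- names are illustrative only
GOOGLE_AI_API_KEY=your-key-here   # required when using Gemini as the LLM
TTS_PROVIDER=supertonic           # "minimax" (needs MINIMAX_API_KEY) or local "supertonic"
# MINIMAX_API_KEY=...             # only needed for the Minimax TTS option
```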
When generating a course, you can choose from three difficulty levels (ELI5, General, Professional); the LLM adapts its language style and content depth to the selected level. Generated courses can be viewed and replayed in the Library. The text and audio for tutorials are cached after the initial generation, so repeated playback doesn't require API calls. However, checking whether a user's answers to questions are correct still requires an LLM API call.
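The caching behavior described above is essentially a generate-once, replay-forever store keyed by course and difficulty. A minimal sketch of that pattern (function and path names are my own, not the project's):

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("cache")  # hypothetical on-disk cache location


def cache_key(topic: str, difficulty: str) -> str:
    # One entry per (topic, difficulty) pair, since the LLM
    # output differs per difficulty level.
    raw = json.dumps({"topic": topic, "difficulty": difficulty}, sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()


def get_or_generate(topic: str, difficulty: str, generate) -> str:
    """Return cached tutorial text, calling the LLM only on a cache miss."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / f"{cache_key(topic, difficulty)}.txt"
    if path.exists():
        return path.read_text()        # cache hit: no API call
    text = generate(topic, difficulty)  # cache miss: one LLM call
    path.write_text(text)
    return text
```

Answer grading can't use this trick because each user answer is novel input, so it always goes back to the API.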