I'm not a primary user; I just cleaned up the existing codebase to make it open source. But you could use this to visualise attention and debug a model.
For example, if you're working on a Q&A model, you can check which tokens in the prompt contributed to the output. That makes it possible to detect issues like the output not attending to any important part of the prompt.
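A toy sketch of that check, with hypothetical tokens and synthetic key/query vectors (a real model would expose per-head attention tensors instead, e.g. via `output_attentions=True` in Hugging Face transformers):

```python
import numpy as np

def attention_weights(q, k):
    """Scaled dot-product attention weights: softmax(q @ k.T / sqrt(d))."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical prompt tokens with toy key vectors (orthonormal rows),
# and a query deliberately aligned with the "capital" key.
prompt_tokens = ["What", "is", "the", "capital", "of", "France", "?"]
d = 8
keys = np.eye(len(prompt_tokens), d)   # one orthonormal key per prompt token
query = 3.0 * keys[3]                  # output token's query points at "capital"

w = attention_weights(query[None, :], keys)[0]
for tok, weight in sorted(zip(prompt_tokens, w), key=lambda t: -t[1]):
    print(f"{tok:>8}: {weight:.3f}")
# If the top-weighted token were *not* the important one ("capital" here),
# that's exactly the "not attending to the prompt" issue described above.
```

In practice you'd pull the real attention matrix out of the model and sort prompt tokens by the weight the generated token assigns them, rather than constructing the vectors by hand.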
This ambiguity about usage plagues lots of OSS projects. Guides and tutorials always help drive adoption much more; just look at the usage of GPT-3 vs ChatGPT (which is GPT-3.5 with a web UI slapped on top of it).
lakshith-403|1 year ago
3abiton|1 year ago