mayukhdeb | 1 year ago
The explanation for the original title is this plot from our publication in ICLR 2025: https://toponets.github.io/webpage_assets/FigureEfficiencyNa...
You can find more details on the website: https://toponets.github.io (see section: "Toponets deliver sparse, parameter-efficient language models")
We found that inducing topographic structure in the weights of GPT models makes them compressible at inference time without a loss in performance.
I encourage you to revert the title if you find it justified after looking at the evidence I've shown here. Thanks.