
Brumby-14B-Base: The Strongest Attention-Free Base Model

7 points | cgel | 4 months ago | manifestai.com

1 comment


cgel | 4 months ago

We have trained a completely attention-free LLM whose performance is competitive with state-of-the-art models. This model, which we call Brumby-14B-Base, has a familiar Transformer-style architecture, except that it uses power retention layers in place of attention layers. It is available on Hugging Face.