mochomocha | 1 year ago

It's a MoE model, so it offers a different memory/compute latency trade-off than standard dense models. Quoting the blog post:

> DBRX uses only 36 billion parameters at any given time. But the model itself is 132 billion parameters, letting you have your cake and eat it too in terms of speed (tokens/second) vs performance (quality).
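The quoted numbers imply that each token only touches a fraction of the weights. A quick back-of-the-envelope check (using only the figures from the quote):

```python
total_params = 132e9   # full DBRX parameter count, per the quoted blog post
active_params = 36e9   # parameters active for any given token

# Fraction of the model exercised per token -- roughly dense-36B compute
# with dense-132B quality, which is the trade-off the post describes.
print(f"{active_params / total_params:.0%}")  # -> 27%
```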


hexomancer | 1 year ago

Mixtral is also a MoE model, hence the name: mixtral.

sangnoir | 1 year ago

Despite both being MoEs, the architectures differ. DBRX has double the number of experts in the pool (16 vs. 8 for Mixtral) and doubles the active experts per token (4 vs. 2).
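The routing difference can be sketched with generic top-k gating. This is a minimal illustration, not either model's actual router; the gating logits here are random stand-ins, and only the expert counts (16/4 vs. 8/2) come from the comment above.

```python
import numpy as np

def top_k_route(gate_logits, k):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    idx = np.argsort(gate_logits)[-k:]        # indices of the top-k experts
    w = np.exp(gate_logits[idx] - gate_logits[idx].max())
    return idx, w / w.sum()                   # mixing weights sum to 1

rng = np.random.default_rng(0)
# DBRX-style routing: 4 active experts from a pool of 16
dbrx_idx, dbrx_w = top_k_route(rng.normal(size=16), k=4)
# Mixtral-style routing: 2 active experts from a pool of 8
mix_idx, mix_w = top_k_route(rng.normal(size=8), k=2)
print(len(dbrx_idx), len(mix_idx))  # -> 4 2
```

More, smaller experts with a larger active set gives the router finer-grained combinations per token, at the cost of a larger parameter pool held in memory.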