DominikPeters | 1 year ago
Indeed, for each of the words it got it right.
bt1a | 1 year ago
How excellent for a quantized 27GB model (the Q6_K_L GGUF quantization type uses 8 bits per weight in the embedding and output layers, since those layers are sensitive to quantization).
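A rough back-of-envelope sketch of why mixed-precision quants like this come out at the sizes they do. The bits-per-weight figures and parameter splits below are illustrative assumptions, not exact llama.cpp values:

```python
# Back-of-envelope GGUF file-size estimate for a mixed-precision quant.
# Assumed figures (for illustration only):
#   - the Q6_K body tensors cost ~6.5625 bits per weight (block scales included)
#   - 8-bit tensors (e.g. Q8_0) cost ~8.5 bits per weight with scales
# A "Q6_K_L"-style quant keeps the embedding and output tensors at 8-bit
# precision because those layers degrade the most under quantization.

def est_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size in gigabytes for n_params weights at the given bpw."""
    return n_params * bits_per_weight / 8 / 1e9

body_params = 26.0e9    # assumed: bulk of a ~27B-parameter model's weights
embed_params = 1.0e9    # assumed: embedding + output head weights

total = est_size_gb(body_params, 6.5625) + est_size_gb(embed_params, 8.5)
print(f"~{total:.1f} GB")  # roughly in the ballpark of the quoted file size
```

The point of the split is that spending a couple of extra bits on the small embedding/output tensors costs little in total size while avoiding the layers most sensitive to quantization error.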