top | item 36816497

Oranguru | 2 years ago

Useless for what? Are you comparing the base model with chat-tuned models?

Chat-tuned derivatives of LLaMa 2 are already appearing. Given that the base LLaMa 2 model is more efficient than LLaMa 1, it is reasonable to expect that these more refined chat-tuned versions will outperform the ones you mention.
