My example here was silly, I admit. But the point was that this simple task can become more "nuanced" (aside from ChatRWKV-Raven, no other model quite "works" like Vicuna or a tuned LLaMA). Given the right prompt, it can act as a character in a fictional work, which might help you learn the language better by increasing conversational time (the most important metric, I'm talking comprehensible input here) by virtue of being more enjoyable.

Overall I like the progress: LLaMA releases -> LLaMA fine-tuned on larger models' outputs gets similar performance to ChatGPT with fewer parameters (more efficient) -> people can replicate LLaMA-class models without anything special, effectively making LLMs a "commodity" -> you are here.