top | item 36079182

throwaway6977 | 2 years ago

If they really didn't test anything bigger than 13b, as their abstract states, then this doesn't even seem worth reading through.

wmf | 2 years ago

The "Google has no moat" thing claimed that Vicuna-13B was almost as good as ChatGPT and this paper seemingly refutes that.

ShamelessC | 2 years ago

Claims made in a leaked blog post shouldn't be treated as having any scientific authority. That whole "no moat" piece has exactly the tone I would expect from an overconfident Googler who has essentially been following all of this by watching various Discord channels and browsing Hacker News. That isn't how science is done. It shouldn't be how business is done either, but people seem to really enjoy these everything-is-actually-simple narratives.

space_fountain | 2 years ago

That claim didn't come from the "Google has no moat" memo but, if I recall correctly, from the announcement for Vicuna-13B (or some other similar model). It shouldn't be taken as an independent quality assessment.

nptacek | 2 years ago

my eyebrows went up at a number of choices made in their assessment

Sai_ | 2 years ago

Could you explain what other choices were red flags for you? I'm somewhat familiar with the open-source LLM space, but not enough to know why some choices are better than others.