I have. I found it to be more or less trash in comparison to GPT-4. Bard made up commands that didn't even exist, so I can't imagine how much more it makes up that isn't true.
GPT-4 will also happily invent Python libraries that don't exist to enable some functionality in the code it produces.
I don't think this is a Bard-only feature.
It's not as bad as the 65B LLaMA model I run on my (AMD) PC, though; especially with quantized weights, it tends to stop coding at some point and repeat the last line over and over. The 30B unquantized model seems better in this particular respect.
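The "repeat the last line over and over" failure mode described above is the classic degenerate-repetition problem, which local-model samplers usually counter with a repetition penalty. A minimal sketch of the CTRL-style penalty most samplers use, assuming raw per-token logits and a list of already-generated token ids (the function name and example values are illustrative, not from any particular sampler):

```python
def apply_repetition_penalty(logits, history, penalty=1.2):
    """Discount the logits of tokens already generated, CTRL-style:
    positive logits are divided by the penalty, negative ones multiplied,
    so repeated tokens always become less likely to be sampled again."""
    out = list(logits)
    for tok in set(history):
        if out[tok] > 0:
            out[tok] /= penalty   # positive logits shrink toward zero
        else:
            out[tok] *= penalty   # negative logits get more negative
    return out

# tokens 0 and 1 were already emitted, so both are discounted
print(apply_repetition_penalty([2.0, -1.0, 0.5], [0, 1], penalty=2.0))
# → [1.0, -2.0, 0.5]
```

A penalty of 1.0 is a no-op; values around 1.1–1.3 are the usual range, since large values can push the model into avoiding tokens it legitimately needs.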
> Bard made up commands that didn’t even exist, so I can’t imagine how much more it would make up that isn’t true.
It’s unsurprising that Bard is particularly bad at something that Google says up front that it categorically cannot do, but I think that’s probably a bad thing to use to evaluate its capabilities outside of that domain.
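The invented-libraries problem mentioned a few comments up is at least mechanically checkable: before running generated code, you can parse it and verify that every imported top-level module actually resolves locally. A minimal sketch using only the standard library (`totally_made_up_helper` is a deliberately fake example name):

```python
import ast
import importlib.util

def missing_imports(source: str) -> list[str]:
    """Return top-level module names imported by `source` that no
    installed package provides -- a cheap sanity check for generated code."""
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            modules.add(node.module.split(".")[0])
    # find_spec returns None when nothing installed provides the module
    return sorted(m for m in modules if importlib.util.find_spec(m) is None)

generated = "import json\nimport totally_made_up_helper\n"
print(missing_imports(generated))
# → ['totally_made_up_helper']
```

This only catches modules that don't resolve at all; a hallucinated function on a real library, or a squatted package on PyPI with the hallucinated name, would still slip through.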
Kranar|2 years ago
In fact, it's just bad to the point of not really being worth using. It gets basic facts wrong and often misunderstands what I'm trying to ask it.
throwaway1851|2 years ago
I haven’t tried Bard, but I’ve tried ChatGPT extensively, and this sounds like a very good description of it.
AviationAtom|2 years ago
tyfon|2 years ago
dragonwriter|2 years ago
DennisAleynikov|2 years ago