notavalleyman|3 months ago
Here is more info and links to the models, so you can interrogate them about Senatorial scandals on your hardware at home.
https://huggingface.co/blog/gemma
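For anyone who wants to try that, here is a minimal sketch of running one of the open-weight checkpoints locally with the Hugging Face transformers library. The model id "google/gemma-3-1b-it", the prompt, and the use of device_map="auto" are my assumptions, not anything prescribed by the blog post; substitute whichever Gemma variant fits your hardware.

    # Minimal local-inference sketch using Hugging Face transformers.
    # Assumptions: "google/gemma-3-1b-it" as the checkpoint (other Gemma
    # text variants work the same way), the weights' license accepted on
    # Hugging Face, and the accelerate package installed so that
    # device_map="auto" can place the model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "google/gemma-3-1b-it"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    # Ask a factual question; per the readme quoted further down, treat
    # the answer as unverified generated text, not a knowledge-base lookup.
    messages = [{"role": "user", "content": "What do you know about recent Senate scandals?"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=200)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))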
Your claim was so far from reality that it's now incumbent upon you to go back through the chain of faulty reasoning. You took it for granted that a conspiracy theory about suppressed information was true, when in fact the same Gemma model was already open-weighted by the very conspirators you accuse of keeping Gemma out of regular people's reach.
jqpabc123|4 months ago
This is about these tools being blatantly flawed and unreliable. In legal terms, marketing such a product is called "negligence" or "libel".

Lots of software is flawed and unreliable, but this is typically addressed in the terms of service. That may not be possible with AI, because the "liability" can extend well beyond just the user.
jwitthuhn|3 months ago
Is it wrong to release something unreliable even while acknowledging it is unreliable? The product performs as advertised. If people want accurate information, an LLM is the wrong tool for the job.

From the Gemma 3 readme on huggingface: "Models generate responses based on information they learned from their training datasets, but they are not knowledge bases. They may generate incorrect or outdated factual statements."
flufluflufluffy|3 months ago