top | item 38901512

preciz|2 years ago

> A model that possesses the entire collective knowledge of our civilization is useless if it can't directly quote its sources.

That's a strong and baseless statement.

throwaway4aday|2 years ago

Useless is probably not the right word, but it's a good way of summing up a lot of the current problems. If the model could clearly identify when something is an exact quote, and also knew the source, then its output could be trusted for the most part and much more easily verified. It would certainly elevate the model's output from "random blog post or forum chat" to "academic paper or official report" levels of trustworthiness. Citing sources is hugely important for validation: cited text allows an immediate lookup and a simple equality check for verification, after which you can use it as context to validate the rest of the claims. Like I said, it's a standard we apply to humans, who have an equal propensity for hallucination, mistakes, and deception, because it's a tried and true method for the reader to check the claims being made.
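The "immediate lookup and simple equality check" described above can be sketched as a whitespace-normalized substring test. This is only an illustrative sketch (the function name and sample text are hypothetical), assuming you already have the full source text in hand:

```python
import re


def quote_in_source(quote: str, source: str) -> bool:
    """Check whether a claimed verbatim quote appears in a source text.

    Whitespace is collapsed on both sides so that differences in line
    wrapping or indentation don't cause false negatives.
    """
    def normalize(s: str) -> str:
        return re.sub(r"\s+", " ", s).strip()

    return normalize(quote) in normalize(source)


# Hypothetical usage: verify a model's claimed quote against its cited source.
source = "Citing sources is hugely important for validation."
print(quote_in_source("hugely  important\nfor validation", source))   # exact quote
print(quote_in_source("hugely important for verification", source))   # paraphrase
```

Anything that fails this exact-match test is at best a paraphrase and would need the slower, human-judgement kind of checking; the point of citable quotes is precisely that this cheap mechanical check exists.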

js8|2 years ago

I, for one, agree with the original statement. I think the hallmark of enlightenment (for example, in the scientific method) is that we are able to externalize expert knowledge: experts are usually required to provide the reasoning behind their claims, not just judgements. This is because we learned that experts cannot be 100% trusted; only by verifying what they say can we somewhat approach the truth (although expertise still provides a convenient shortcut).

So not demanding this (and more) from an AI (an artificial expert) is a regression. AI should be capable of wholly explaining its reasoning if its statements are to be taken seriously. It is understandable that humans have only a limited capability to do that, since we didn't construct the human brain. But we do have control over what AI brains do, so we should be able to provide such an explanation.

It is somewhat ironic that you yourself do not provide any argument in favor of your disagreement.

jimberlage|2 years ago

This isn't meant to totally disagree with your point (there's some stuff in here I agree with), but I'm having trouble seeing the point about regressions.

To use another example, a new NoSQL DB not having joins is a regression. Does that mean no one is justified in releasing a new NoSQL DB?

ryanklee|2 years ago

Providing reasoning and providing citations are not the same thing. Reasons can be provided without citations; citations can be provided without reasons.

LLMs have astounding utility, citations notwithstanding.

Ferret7446|2 years ago

And it's also patently false. Knowledge is knowledge; it's useful without source citations.

Is the knowledge of how to do CPR somehow ineffective because I can't cite whether I studied the knowledge from website A or book B? Is reality a video game where skills only activate if you speak the magic words beforehand?

njgingrich|2 years ago

Well sure, it's easy to make a statement look bad if you only include half of it.

rpdillon|2 years ago

The statement is equally hyperbolic both as quoted and in the original context. LLMs often can't quote sources, and those models are nevertheless useful to lots of people. Makes it hard for me to take the rest of the comment seriously.

ryanklee|2 years ago

That was the whole statement. It doesn't have any qualifiers left out.