top | item 45801484


swivelmaster | 3 months ago

At some point we have to be willing to call out, at a societal level, that LLMs have been fundamentally oversold. The response to "It made defamatory facts up" of "You're using it wrong" is only going to fly for so long.

Yes, I understand that this was not the intended use. But at some point if a consumer product can be abused so badly and is so easy to use outside of its intended purposes, it's a problem for the business to solve and not for the consumer.

TZubiri | 3 months ago

Maybe someone else actually made the defamatory fact up, and it was just parroted.

But fundamentally, the reason ChatGPT became so popular, as opposed to incumbents like Google or Wikipedia, is that it dispensed with the idea of attributing quotes to sources. Even if 90% of the things it says can be attributed, it's by design that it can say novel stuff.

The other side of the coin is that for things that are not novel, it attributes the quote to itself rather than sharing the credit with sources, which is what made the thing so popular in the first place, as if it were some kind of magic trick.

These are obviously not fixable; they are part of the design. My theory is that the liabilities will end up equivalent to, if not greater than, the revenue recouped by OpenAI, but the liabilities will just take much longer to realize, considering not only the length of trials, but the time it takes for case law and even new legislation to be created.

In 10 years, Sama will be fighting to make the thing an NFP again and have the government bail it out of all the lawsuits that it will accrue.

Maybe you can't just do things

im3w1l | 3 months ago

Businesses can't just wave a magic wand and make the models perfect. It's early days, with many open questions. As these models are a net positive, I think we should focus on mitigating the harms rather than taking some zero-tolerance stance. We shouldn't allow the businesses to be neglectful, but I don't see evidence of that.

mindslight | 3 months ago

> We shouldn't allow the businesses to be neglectful, but I don't see evidence of that.

Calling it "AI", shoving it into many existing workflows as if it's competently answering questions, and generally treating it like an oracle IS being neglectful.

derbOac | 3 months ago

Here on HN we talk about models, and rightfully so. Elsewhere though people talk about AI, which has a different set of assumptions.

It's worth noting too that how we talk about and use AI models is very different from how we talk about other types of models. So maybe it's not surprising people don't understand them as models.

vrighter | 3 months ago

Even if they had a magic wand, they still couldn't make them perfect, because they are, by nature, imperfect statistical machines. That imperfection IS their main feature.

HacklesRaised | 3 months ago

It can't be perfect, right? I mean, the models require some level of entropy?

watwut | 3 months ago

Businesses should be able to not lie. In fact, they should be punished for lying and exaggerating much more often, both socially, by being criticised and losing contracts, and legally.

ares623 | 3 months ago

> As these models are a net positive

Uhhh… a net positive for whom, exactly?