top | item 47066636

BeetleB | 11 days ago

> should be entitled to damages from the company authoring the model.

1. How will you know it's a bot?

2. How will you know the model?

Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

> These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

Ouch. Throw due process out the door!

> Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this.

This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.

almostdeadguy | 10 days ago

> 1. How will you know it's a bot?
>
> 2. How will you know the model?

Sounds like a problem for the platforms and model vendors to figure out!

> Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

I mean providers are obviously my primary concern as the people selling something to the public, but sure, why not both.

> Ouch. Throw due process out the door!

There's lots of prior art for this; let's not pretend this is something new. The NLRB adjudicates labor complaints and disputes, the DOT adjudicates complaints about airlines, etc.

> This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Once again, sounds like a problem for the platforms to figure out! How do they handle spammers and abusers today? Throw up their hands? Guess they won't be able to do that for long!

> Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.

Sounds like a diplomatic problem, if it actually is a problem. In reality the social harms of AI may exceed any supposed benefits. The optimistic case seems to be that AI becomes so powerful it causes a massive hemorrhaging of jobs in knowledge work (and later other forms of work). Still waiting to see any social benefits!

BeetleB | 10 days ago

> Sounds like a problem for the platforms and model vendors to figure out!

> sounds like a problem for the platforms to figure out!

You'd have to fundamentally change how the Internet works to be able to figure these things out. To achieve this, you'd need cooperation from everybody, not just the LLM providers.