He should have lost his license after keeping it up. There should be a serious fine (at least 10x the value of the time they have wasted for other people) and loss of professional license upon the first repeat of the offense. You want to use AI to discover similar cases? You want to use it for ideas on how to structure your line of defense? Good. The moment you submit its output under your name, though, expect repercussions.
A person should spend decades of their life studying law and practicing it, and then have their career ended because they didn't realize that this new tool everyone said was so amazing was fundamentally and conceptually different from every search engine he'd seen before, and that (unlike Westlaw, which he otherwise would have used) it comes up with random made-up but very plausible-sounding precedents when you ask it for legal research?
Embarrassed, called out, fined? Of course. But that's not a serious argument. Unless you think someone should be legally prohibited from ever being paid as a programmer for the rest of their life for blowing up a server or failing to maintain a backup system or something.
This is a mistake, and a pretty easy one to understand. It's not stealing a client's money or tampering with a witness or something.
Intractable model error that's inherent to the approach won't get you any funding, though.
Anthropomorphizing statistical learning is how you build a hype machine to cash out people with zero handle on the subject. See the comment below about "AI judges" and "true justice". Just like early electricity, all people see is magic.
No, it's good that the public understands that AIs are wrong so regularly that we need a special word dedicated to this one specific manner in which they're wrong.
Generative AI output is becoming inextricably associated with this word, and that's not a bad thing to keep people aware of.
Any recommendations? The public seems to actually understand what this means although it’s just more anthropomorphization of a random bullshit generator.
There was a similar incident last year [0], but in that case the lawyer actually doubled down on the fake cases and, when pressed to produce evidence of the fake cases, submitted screenshots of them asking ChatGPT to confirm that the cases were real.
At least this one had the good sense to apologize instead!
I am not sure how this is news. If a lawyer submits filings with non-factual things in them, will he get fined? - Yes. How should this be different when using AI tools? Are AI tools statistical in nature, and will they produce incorrect answers sometimes? - For sure. Therefore, proceed with caution and verify the entire filing, as you are the one signing it.
Congrats for having judges who actually read the citations. As a lawyer in Germany, I find that most judges will barely read the body of a brief, let alone look up citations.
I found it astounding that the judge accepted the lawyer's excuse that two recent law graduates were responsible for the AI citations. No recent graduate would use AI in this manner. Everyone deserves a second chance, but I would have increased the fine to $10,000 for this clear failure to accept responsibility.
Why would no recent graduate use AI in this manner?
I'd imagine that a lot of law schools have started warning students against it, but in other fields there's often quite a pronounced lag between changes in the field and changes in the curriculum. Even if the best schools have adapted, I'd be shocked if there aren't a bunch that haven't.
Fines for lawyers are set at slap-on-the-wrist levels. Violating legal ethics almost always carries higher expected benefit than expected penalty, which is why most lawyers are crooks (even ones who don't want to be). It's impossible to be a competitive lawyer in many domains while still following legal ethics.
In this case, 10% odds of getting caught · $2000 = $200 per violation, generously, so less than one billable hour.
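The expected-value math above can be sketched in a couple of lines (the 10% detection odds and $2,000 fine are the commenter's illustrative assumptions, not real statistics):

```python
# Back-of-envelope expected penalty for one violation,
# using the commenter's assumed numbers.
detection_odds = 0.10   # assumed chance of getting caught
fine = 2000             # assumed fine in dollars
expected_penalty = detection_odds * fine
print(expected_penalty)  # 200.0 dollars per violation
```

At $200 of expected penalty per violation, the deterrent is indeed cheaper than a single billable hour at many firms.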
The second one allegedly only quickly checked the examples provided by his associates.
"Wargames" (1983): https://www.youtube.com/watch?v=71k7-dGhNFQ&t=4m8s
[0] https://news.ycombinator.com/item?id=36097900
"in the kingdom of heaven the judge is a perfect AI, lawyers do not exist, and true justice does happen normally"
After Google's recent "alignment" debacle, never.