> The ultimate goal, according to Browder, is to democratize legal representation by making it free for those who can't afford it, in some cases eliminating the need for pricey attorneys.
This is the core issue I have with companies and tech like this: they try to solve social issues with technology and business. The issue of accessible defense is an issue of institutional failure that rests on the shoulders of the state. Tech may bandage the problem, but just as Tesla won't solve green travel, this won't fix the underlying issue.
I disagree. Technology is the perfect solution for problems like these. So many societal issues today arise because systems developed centuries ago cannot keep up with a population that has grown exponentially since then. Tech's core feature is being able to scale. There will be more and more systemic reliance on automation and AI as time goes on, since that's the only sensible way out.
It could also make things much worse, due to the Jevons paradox. The way rich societies throw away tons of food because it isn't scarce, or how modern software is slower than ever because hardware is too cheap. If half-baked AIs lower *some* of the costs of law practice, we might end up wasting more resources than ever before fighting against the proliferating volume of it.
See, the sword cuts both ways. Prosecutors' offices would run more efficiently and would pursue much larger caseloads (lower-priority ones) with fewer employees and less time. Civil lawsuits could become much cheaper to file; trolling could expand from niches (like patent law) to much broader, more socially impactful forms of warfare. The financial risk/reward equation is what deters this, and lowering the costs means eliminating the deterrence. And imagine regulatory compliance costs, when regulators feel emboldened to design regimes so elaborate they themselves don't understand them, because they have AI assistants capable of parsing them.
This is the core issue I have with comments like this: they refuse to try solving social issues with technology. I can understand why not business, but every time a solution involving tech pops up, people immediately dismiss it with a comment saying you can't solve societal issues with tech, with no alternative solution proposed either. Ok, you don't want to solve it with tech. How, then?
- access to general knowledge: Wikipedia
- safe transport for women at night: Find My Friends / Uber+Lyft
- access to government data: opendata initiatives, a plethora of cadastral map services
I'm going to say the obvious thing everyone is thinking. Everyone who has ever said "this is the core issue: you're trying to solve a social issue with technology" has never solved a social issue _or_ a technological issue and has no idea how to do it.
0: obviously I must clarify for this audience that "solved" in this context means that it has reduced the prevalence of some problem below some low base rate
Nitpick: from a glimpse at the article, I think the defendant will represent himself; the "AI lawyer" running on the defendant's smartphone will just give him hints. Not sure how it can be acceptable to have a third party listen in on court proceedings in real time through a smartphone, but ok.
True for wealthy countries, but in really poor countries access to things like medical care or legal services isn't a policy issue, there is a legitimate scarcity. Barring foreign aid, the only two alternatives are no access to these services, versus access to an automated version that's worse than a human but better than nothing. So we're not dealing in a world of perfect solutions, just less bad solutions, which is fine. Note that I'm speaking generally, not about the particular service this article is about.
With technology or a product that people will voluntarily pay for, you can make a specific, measurable, concrete difference in a bounded timeframe. If you fail, you can be sure to do so without causing immense suffering across a whole society in the way that government actions sometimes do when they go wrong. If you could "fix the underlying issue" in a permanent way without unintended consequences, that would be better, but no one knows how to do that reliably using politics, and the failure modes of airbnb, uber, and tesla are gentler than the failure modes of politics.
I think it’s exactly the opposite. People broadly overestimate the effect of social and political solutions while overlooking the impact of technological and economic changes. For example, why did gender roles suddenly change in the western world in the mid 20th century? Was it because people wrote books and passed laws? Or was it because the economy shifted from physically demanding agricultural and industrial work to a services and knowledge work economy? I’d argue that the typical course is for social change to follow technological and economic change.
Similarly, the problem with representation of defendants in the legal system is technological and economic, not social. It’s expensive to have a credentialed person spend lots of time working on behalf of someone else. That’s true for everything from auto repair to law. The social and legal system is simply responding to that economic reality. The public is willing to spend only so much money to protect the rights of the accused, and there is only so much you can accomplish by haranguing them into wanting to spend more. By contrast, if you dramatically reduce the cost of providing the accused with a defense, you dramatically change the landscape of the social and political issues.
How is this a social problem? Lawyers are expensive because education takes time, motivation, and competence. So it's an economic problem, something the state can't easily solve. Technology can make an impact here and sidestep or even ignore those problems.
Working within the existing system, though, is typically the fastest way to address the grievance. AI is probably the most scalable approach to competent counsel, far easier (given ChatGPT) than training new lawyers and convincing them to become public defenders.
Tesla was a "forcing function" that caused other car manufacturers to build EVs. AI lawyer technology may be the catalyst needed to trigger reform of the legal system. It's clear that up until now there has been no reform forthcoming.
> This is the core issue I have with companies and tech like this: they try to solve social issues with technology and business. The issue of accessible defense is an issue of institutional failure that rests on the shoulders of the state.
Who's to say DoNotPay won't eventually sell this service to the state, in service of the public interest in accord with the Sixth Amendment? In that light, it may be a tool worth developing.
You can represent yourself in court in many cases; the problem is that you will likely fail, since you won't be very good at case law or at abiding by all sorts of conventions, and will therefore lose.
If AI can help you make properly referenced arguments it might well solve a societal problem to a degree.
It's a state-created problem, and you expect them to solve it? Private entities are the way these problems are always solved efficiently. Can't wait for lawyers to be obsolete, personally.
The legal system has never been so easy.
Thanks to e-Trial from Cinco, it's fun!
All you have to do is enter your plea, choose an e-Jury, and submit your evidence. Then, sit back and watch as each e-Trial automatically generates a legally binding verdict. Plus, the trial is never wrong! Cinco e-Trial is perfect for handling cases such as fraud, divorce, assault, rape, murder, tax evasion, petty larceny, destruction of property, and more.
> the company's AI-creation runs on a smartphone, listens to court arguments and formulates responses for the defendant. The AI lawyer tells the defendant what to say in real-time, through headphones.
Based on AI’s propensity for being confidently incorrect, I’m not sure I’d want to be repeating AI output in a court of law and representing it as truth.
But this can be a very good complement to an actual lawyer.

The lawyer would spend less time preparing (so, cheaper) and would basically direct the AI and act as a safeguard for when the AI gets confidently incorrect.
The upside of this is that it could prevent self-represented defendants from trying to apply legal arguments or concepts in the wrong context. Pro se defendants do this all the time and it really pisses off judges. ChatGPT has a reputation for being "confidently wrong", but in my experience it's less likely to make that specific category of error.
The downside is that I hire a traffic ticket attorney because they've been handling cases similar to mine, in the same court circuit, almost every day for years. Legal work isn't about knowing the law. It's about knowing how the system works in a particular place and time. ChatGPT was trained on many thousands of various legal cases. But it can't ever really understand how a given court works unless many people have written about that place and posted it to the internet.
I feel like it would be much better for people to just type their case into the ChatGPT prompt and ask "What argument should I use?", and then take that into the courtroom. Ask it what counter arguments to expect, what the standards of evidence are, etc. You can even prompt it with "Pretend you're a traffic court judge. How would you respond to this argument?" Essentially use it as a research tool for people who are brave or foolish enough to represent themselves. Not sure how much benefit there is in having that in the room with you.
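That research workflow can be sketched in a few lines. This is purely illustrative, not DoNotPay's actual product: it only assembles the prompts described above, and a caller would send each message to whatever chat model they use before the hearing.

```python
# Illustrative sketch: build one research prompt per angle for a pro se
# litigant. No API call is made here; the caller sends these messages to a
# chat model of their choosing and reads the answers ahead of the hearing.

def build_research_prompts(case_summary):
    """Return a list of chat messages covering the research angles above."""
    questions = [
        "What argument should I use?",
        "What counterarguments should I expect?",
        "What are the standards of evidence for this kind of case?",
        "Pretend you're a traffic court judge. How would you respond to my argument?",
    ]
    return [
        {"role": "user", "content": f"My case: {case_summary}\n\n{q}"}
        for q in questions
    ]

prompts = build_research_prompts("Cited for rolling through a stop sign at an empty intersection.")
```

The message-dict shape mirrors the common chat-completion APIs, but nothing here depends on a specific vendor.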
FWIW, I tried DoNotPay to get out of a street cleaning parking ticket and it didn’t work. To DNP’s credit, they refunded my money after providing a PDF response from SFMTA.
I remember discussing ideas for law automation ca 2016 that weren't even quite as radical as this. Kind of shows that we tend to overestimate change in the short term and underestimate it in the long term. I don't know how legit this specific product is, but the likes of GPT have definitely put this into the realm of the imaginable.
"I was apprehended under a level three predictive crime directive. My AI lawyer failed to cite the relevant statutes. Now I've been remanded to a medium security behavioral modification center. After incorrectly answering the AI managed multiple choice psychological evaluation questionnaire, I was denied parole. When released I'll have to start all over again with a social credit score of zero. My mother always said I would run afoul of the algorithm. If only I had listened..."
While a pure ChatGPT-style bot would be prone to hallucinations and overconfidence, isn't law actually a fertile field for working out grounded chat LLMs?
The lawsuits I've been involved in typically hinged on pretty arcane stuff which I highly doubt is going to work well without an entity that can quite literally think on its feet. Just showing up is 90%, but what you do when you show up is the other 10%, and if you botch that you may still lose a case.
'Sorry, that wasn't in my training set' won't cut it if the penalty is >> the price of your legal fees, an extended stay in jail or in some countries the death penalty. Obviously for criminal cases you'd be fairly mad to try this kind of construct but that's exactly the ones where the legal fees are absolutely crippling.
I've yet to have a case that I brought that went to trial end up under 50K in costs and that was with what I thought was fairly simple stuff with relatively unsympathetic defendants.
Attorney here, I think it’s too early stages to run with this tech. It’s not just the confident incorrectness people have mentioned. If you ask it for legal citations for an argument, it makes them up because it has no concept of truth to filter on. Make all the jokes you want about lawyers stretching the truth, but citing State v. Johnson, (Minn. 2007) when that doesn’t exist is going to be bad for the individual. The average person also likely isn’t going to be able to recover from such a flub if they are relying on the AI to tell them what to say. They may double down because “that’s what the AI says” which will make things worse for them.
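One obvious mitigation would be to check every citation the model emits against a database of real cases before anyone repeats it in court. A toy sketch of that idea, with a hard-coded stand-in for the database and a deliberately simplistic pattern (it only handles single-word party names; a real citator would query an actual case-law service):

```python
import re

# Stand-in for a real case-law database; a production tool would query a
# reporter service rather than a hard-coded set.
KNOWN_CASES = {"Miranda v. Arizona", "Gideon v. Wainwright"}

# Matches simple "Party v. Party" case names with single-word parties only.
CITATION_RE = re.compile(r"\b[A-Z][a-z]+ v\. [A-Z][a-z]+\b")

def suspect_citations(text):
    """Return cited case names that do not appear in the known-case set."""
    return [c for c in CITATION_RE.findall(text) if c not in KNOWN_CASES]

flagged = suspect_citations("Compare State v. Johnson with Gideon v. Wainwright.")
# "State v. Johnson" gets flagged; "Gideon v. Wainwright" passes.
```

A filter like this would not catch a real case cited for the wrong proposition, but it would at least stop a pro se defendant from reading out a case that does not exist.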
I’ll also throw out that the libertarian posturing of DoNotPay seems shortsighted. Sure, it might be all fine for a while as individuals move faster than the system and gain an upper hand, but when the Comcast customer service rep is replaced with the tech, that benefits only the shareholders. If you thought customer service calls were insufferable before, imagine an indefatigable, perpetually nice adversary out to screw you over.
> imagine an indefatigable, perpetually nice adversary out to screw you over.
That's already the case on a lot of calls. In particular, the systems that are configured to not hand you over to a human no matter what, unless the system itself detects an error in its handling of the situation. I've just defaulted for some time now to saying the word "gibberish" when prompted; usually that gets you connected directly to a human after no more than three repetitions, because the system believes it can't understand you. You cannot ask the system to please do that for you, no matter how you phrase the question, because it was designed specifically to stop you from reaching a human.

So you've met my ex.
Any insight on how the planes keep the Wi-Fi going? If the cell service internet provider’s handoff speed is too slow, does that mean they’re doing satellite internet in-flight? It was mentioned in the article that the tech has won some legal arguments regarding in-flight Wi-Fi refunds. (When the paid-for service didn’t work.)
I could see an AI lawyer being a great assistant for human lawyers. I dislike the idea of an AI lawyer being appointed to me if I cannot afford my own representation, especially as the county NN isn't going to have as many resources as Kirkland & Ellis's NN that's been trained on all the cases they downloaded from LexisNexis.
Once this tech gets good, and everyone has access to a high-powered legal team for pennies, I am a little concerned that anyone will be able to DDoS the legal system.
It's already possible for giant corporations to bury smaller actors in paperwork. They can and do crush people with this.

What happens when everyone has this ability?
The legal system is already under DDoS if you want a speedy trial. There are people in jail for years without being found guilty of anything, simply because the courts don’t have enough capacity to manage their current criminal trial load.
Call me old-fashioned, but an entrepreneur like Browder or Bankman-Fried, the type of guy that doesn't know what a workout is and fails to clean his room when Vice stops by: why are the top VCs investing in them?

Isn't someone's room or fitness a reflection of their self and their mental health? Sure, messiness is easily solved by hiring someone to clean, but isn't it telling about their capacity for reflection and their handling of stress?

- Vice: https://youtu.be/4ywSt641A58

Because that's something Iain Banks would smile at, from wherever he is, rest his soul.
Those are not exactly a secret. The entire reason for those loopholes to exist is for the rich to be able to exploit/use them.

The problem is that half the working class wants those loopholes to stay in place, because they have this delusional notion that one day they'll be the ones making use of them as well.
Since legal firms are in no hurry to lower billable hours by replacing all of their paralegal researchers with models that can guide towards relevant case law, public defenders are the perfect market for these tools.
If I were the CEO of this company making an AI-powered lawyer product, I would also try and market this to the lawyers that are already representing this underserved class but generally stretched too thin.
Some work a lawyer does is menial word churning. Some work a lawyer does requires much more mental effort.
If I could feed thousands of documents to an algorithm and then ask it stuff like "on which pages are the defendant's tax returns discussed?" or "how many times is the defendant's wife talked about with a negative connotation?", it would make the work much easier.
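The first kind of query is already cheap to sketch with plain substring search; it's the second, sentiment-style question that actually needs a model. A toy version of the lookup, with an illustrative page map (a real tool would use a proper search index or an LLM over the documents):

```python
# Toy sketch: find which pages of a document set mention a phrase.
# Plain case-insensitive substring matching; illustrative only.

def pages_mentioning(pages, phrase):
    """Return sorted page numbers whose text contains the phrase."""
    needle = phrase.lower()
    return sorted(n for n, text in pages.items() if needle in text.lower())

pages = {
    1: "Opening statement and procedural history.",
    2: "Exhibit B reproduces the defendant's tax returns for 2019.",
    3: "Testimony concerning the defendant's whereabouts.",
    4: "The Tax Returns are discussed again in relation to Exhibit B.",
}

assert pages_mentioning(pages, "tax returns") == [2, 4]
```

Even this naive version shows the shape of the workflow: the lawyer's menial word-churning becomes a query, and the mental-effort work stays with the human.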
I call BS on this. Once it stops consistently landing its users in prison, the "AI" attorney will be just as expensive as a real one. And in the unlikely case that it ever becomes better than a human lawyer, it will also be more expensive.