top | item 43098006

83% skip agreements, 61% get "ripped off". AI risk analysis: would you use it?

1 point | zane12580 | 1 year ago

Hi,

I'm building an open-source tool (codename TermGuard) to combat "terms-of-service dark patterns". The core addresses a counter-intuitive problem: Why has the ability of humans in the digital age to read agreements regressed to the level of apes?

Technical solution

Use RAG + an adversarially trained LLM (based on Llama 3) to dissect agreements in real time and flag high-risk clauses (such as auto-renewal traps).
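To make the intended input/output shape concrete, here is a minimal sketch of clause flagging. This is only an illustrative keyword heuristic — the actual tool described above uses a RAG pipeline with a fine-tuned Llama 3 model, not regexes — and all pattern names here are made up for the example:

```python
import re

# Illustrative risk patterns only; the real system would score clauses
# with an LLM rather than match keywords.
RISK_PATTERNS = {
    "auto-renewal": r"automatic(ally)?\s+renew",
    "arbitration": r"binding\s+arbitration",
    "data-sharing": r"share.*(third[- ]part(y|ies)|partners)",
}

def flag_clauses(clauses):
    """Return (clause_index, risk_label) pairs for clauses matching a pattern."""
    flags = []
    for i, clause in enumerate(clauses):
        for label, pattern in RISK_PATTERNS.items():
            if re.search(pattern, clause, re.IGNORECASE):
                flags.append((i, label))
    return flags

terms = [
    "Your plan automatically renews each month unless cancelled.",
    "We may share usage data with our advertising partners.",
]
print(flag_clauses(terms))  # flags clause 0 as auto-renewal, clause 1 as data-sharing
```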

The browser plugin automatically grabs agreement updates in Gmail/App and compares clause changes in a Git Diff style.
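The clause comparison described above can be sketched with Python's standard `difflib`, which produces `git diff`-style unified output. The blank-line clause splitting here is a naive stand-in — a real plugin would need proper section parsing — and the file labels are illustrative:

```python
import difflib

def clause_diff(old_text: str, new_text: str) -> str:
    """Compare two agreement versions clause-by-clause, unified-diff style."""
    # Naive clause splitting on blank lines; real agreements need real parsing.
    old_clauses = [c.strip() for c in old_text.split("\n\n") if c.strip()]
    new_clauses = [c.strip() for c in new_text.split("\n\n") if c.strip()]
    diff = difflib.unified_diff(old_clauses, new_clauses,
                                fromfile="terms_v1", tofile="terms_v2",
                                lineterm="")
    return "\n".join(diff)

old = "You may cancel at any time.\n\nData is stored locally."
new = "Your subscription renews automatically each month.\n\nData is stored locally."
print(clause_diff(old, new))
```

The output marks the removed clause with `-` and the added one with `+`, which is exactly the "what changed since I last agreed" view the plugin aims for.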

Killer feature: when "group-victim clauses" are detected, it launches a class-action lawsuit heat map (inspired by swarm intelligence).

The MVP features are already implemented:

Parses mainstream agreements such as Spotify's/Netflix's within 15 seconds (demo video).

Clause risk radar chart + one-click generation of rights-protection templates.

Local processing (privacy-sensitive data is never uploaded).

I'd like the community's blunt feedback:

In what scenarios are you most likely to use such a tool? (For example, before subscribing to a service? After being charged?)

Where do existing solutions (such as TOSDR) fall short for you? Update lag? Analyses that are too mild?

If the tool is open-source and allows crowdsourced model training, would you be willing to contribute your own cases of agreements that "ripped you off"?

What's your biggest concern? (Legal reliability? Privacy risk?)

Controversial assumptions (feel free to refute them):

The engineering community needs agreement parsing most, because you already know that "the devil is in the details".

Agreement transparency should become the next ethical norm after open-source code.

Class-action lawsuits are a far stronger deterrent than GDPR fines.

2 comments


JohnFen | 1 year ago

I wouldn't use something like this because I don't trust the output from LLMs enough to rely on them for anything remotely important. If I used a tool like this, I'd still have to confirm that the output was complete and correct. If I have to do that, I may as well cut out the AI middleman and save a bit of effort.

eesmith | 1 year ago

> Why has the ability of humans in the digital age to read agreements regressed to the level of apes?

Humans have never had that ability. 50 years ago most people rarely needed to read agreements.

In my experience it started with software products in the 1980s, and shrink-wrap contracts that used copyright law to get around the first-sale doctrine.

This practice spread and spread so that now everyone is expected to "voluntarily" agree to license upon license upon license, without the training to understand it.

The agreements are created by professional lawyers who know how to confess everything while saying nothing. "We work with our 523 advertising partners to collect anonymous data" tells the truth, while not saying that gives them the right to use tracking software, build individualized profiles, and use real-time bidding to sell supposedly anonymized data to others, including data brokers who then aggregate the data to re-identify you and sell it to the police, who justify using the data to track you because it was somehow all voluntary because you agreed to the terms.

If the tool's interpretation of a license says Windows 11 adds non-consensual spyware and advertising widgets, then what do you think most people will do?

> The browser plugin automatically grabs agreement updates in Gmail/App and compares clause changes in a Git Diff style.

My bank sends me a diff when they change the terms. They take contracts seriously.

My local clinic is the opposite. I'm sick and tired of their online interface because it wants me to read and authorize the terms of service every time I use it. And it changes every few months, not as a diff, but wholesale. As if the chain which runs the clinic has a lawyer who justifies getting paid by creating a new contract all the time.

An automated diff wouldn't help.

We know people don't read these contracts. When the clinic advertises "contact your doctor online - it's easy!", it shows they don't expect anyone to actually read the contract, much less complain.

I sent a complaint that it took me 10 minutes to re-read the contract every time, and could they have some way to remember the last time I confirmed it, and only highlight the changes.

The answer was something like "Thank you for your suggestion."

The only solution I see is very strong consumer protection laws, with common rules for services (you don't sign a legal contract to buy a hamburger at the store, why should you need one to use their app?), and prohibitions on personalized advertisement and tracking.

We need strong anti-monopoly laws and we need portability laws so we can move services to a competitor as easily as we can move a phone number from one company to another.

Individualized tools to help identify dark patterns won't help. My lawyer, who is trained in contract law, is still stuck with agreeing to whatever Microsoft says because the entire court system works on Microsoft, and he has to have a Microsoft 365 subscription.