tomalaci|1 year ago

You know what AI profiles I want more of? Clueless grandma/grandpa profiles that accept scammer calls and waste their time, such as the UK's O2 Daisy: https://news.virginmediao2.co.uk/o2-unveils-daisy-the-ai-gra...

crmd|1 year ago

There is another real-world version of this in the US healthcare system, where doctor offices are using domain-specific LLMs to craft multi-page medical approval requests for procedures that cover every known loophole insurers use to deny. These requests are then reviewed by ML-powered algorithms at the insurance company looking for any way to deny or delay the claim approval.

In other words we have a bona fide AI arms race between doctors and insurers with patient outcomes and profits in play. Wild stuff and nothing I could have ever imagined would be an applied use of ML research from earlier in my career.

dataviz1000|1 year ago

Interesting. My next-door neighbor ten years ago was a lawyer a couple of years out of law school. He discovered that he could pore through hundreds of medical charts a day and find cases where the doctor had underbilled the insurance company. He would then sue the insurance company, settle, and split the profits with the doctor. More or less, he was mining the charts.

He would sometimes pull up next door with a half dozen tote boxes overflowing with medical records and say, "Hey, dataviz1000, can you help me get these into the house?" He once asked if I wanted a new job helping him go through all the charts. I don't get involved in illegal activities, and I was earning more not breaking the law elsewhere. He did hire a young woman who had graduated law school and was still working on passing the bar. They have since married and started a family.

Yes, HIPAA laws got broken! Yes, this guy made tens of millions in a few short years.

There are no good guys in this story.

It would probably make a good startup: using LLMs and bringing the process into compliance with HIPAA. There are probably several billion dollars sitting in insurance companies that have been underbilled.

CoastalCoder|1 year ago

What really grinds my gears is the CO2 emissions and power-grid load that's being used for this stupid arms race.

I realize that there's no magic solution to perverse systems like this, but it really bothers me nonetheless.

z3c0|1 year ago

It will take me some time to dig it up with all the noise around AI, but this reminds me of a paper published around 2018 or so that explored the possibility of two such AIs forming an accidental trust by optimizing around each other. For example, if the denying AI used the frequency of denied claims as a heuristic for success, and the AI drafting claims used the approved claim amount for the same, then the two bots might unknowingly strike a deal where the drafting AI lets smaller claims get denied frequently to increase the odds of larger claims being approved.

Note: not saying these metrics are what would be used, just giving examples of antithetical heuristics that could play together.
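A toy sketch of that dynamic (everything here is hypothetical: the effort-to-denial curve, the claim mix, and the strategies are made-up illustrations, not anything a real insurer uses):

```python
import random

random.seed(0)

def denial_prob(effort):
    # Hypothetical model: more drafting effort -> lower chance of denial.
    return max(0.05, 0.9 - effort)

def simulate(strategy, claims):
    denied, approved_value = 0, 0.0
    for value in claims:
        if random.random() < denial_prob(strategy(value)):
            denied += 1              # denier's metric: denial frequency
        else:
            approved_value += value  # drafter's metric: approved dollars
    return denied / len(claims), approved_value

# Mostly small claims, occasionally a large one.
claims = [random.choice([100, 100, 100, 10_000]) for _ in range(10_000)]

uniform = lambda value: 0.4                                # same effort everywhere
sacrifice = lambda value: 0.05 if value < 1_000 else 0.85  # let small claims die

rate_u, value_u = simulate(uniform, claims)
rate_s, value_s = simulate(sacrifice, claims)
print(f"uniform:   denial rate {rate_u:.2f}, approved {value_u:,.0f}")
print(f"sacrifice: denial rate {rate_s:.2f}, approved {value_s:,.0f}")
```

The "sacrifice the small claims" strategy produces both a higher denial rate (the denier's metric improves) and more approved dollars (the drafter's metric improves): an implicit deal struck with zero coordination.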

jiggawatts|1 year ago

It’s so bizarre to me that this uniquely US phenomenon of for-profit middlemen inserted into the healthcare system has resulted in an adversarial relationship between the sick person and the “healthcare provider”.

I put that in air quotes because insurance companies don’t actually provide health care. They provide insurance. That’s a financial product, not a medical one.

rscho|1 year ago

Well, docs have seen this coming from miles away. I don't think anyone with substantial experience in clinical medicine is surprised by these developments, unfortunately. But it doesn't stop here. Insurance companies will be (are) building models to overcome legal barriers. Imagine: you're 20 and healthy, but located somewhere suggesting a higher risk of developing some chronic disease in the future? Then no insurance covering that particular condition, for you specifically. A real-world application of the 'fuck you in particular' meme. This of course extends to all sorts of sensitive matters, such as your ethnicity, sexual preferences, etc.

Now this is a really scary application of AI, but you won't hear those calling for AI regulation, such as Musk, complain about it, right?

fallingknife|1 year ago

The year is 2035. To cut costs, both insurance companies and providers removed the human from the loop long ago, setting off an adversarial process between the LLMs on both sides. Medical insurance claims are now written in an ever-changing format that resembles no human language. United Healthcare has just announced a $10 billion project, including a multi-gigawatt data center, to train its own foundation model to keep ahead in the arms race. UNH stock is up 5% on the announcement.

vjk800|1 year ago

The real fun starts when they start writing insurance contracts that are only meant to be readable by ML algorithms. Imagine thousands (millions?) of contract pages written in practically incomprehensible language, designed by an ML algorithm to contain clever loopholes that are difficult for an adversarial algorithm to detect.

dpflan|1 year ago

Interesting. Do you have any examples to share?

lfmunoz4|1 year ago

[deleted]

IG_Semmelweiss|1 year ago

This is coming and there's a very simple fix.

Make health insurance be actual insurance: that is, not a gateway to treatment of conditions that are entirely lifestyle-driven.

Once patients are responsible for the bill and the large middle layer of admin crud is taken off the table, medical inflation almost disappears. Take this example of a for-profit facility vs. non-profit hospitals [1].

Ideally this happens once environmental factors are fixed or drastically reduced, so that diet and lack of time become "choice-driven" rather than "needs-driven" health determinants (you do have subsidies at the lowest end, but that cannot go on forever).

https://www.openhealthpolicy.com/p/cash-providers-cheaper-su...

amelius|1 year ago

Or AI profiles that pen-test the real grandmas/grandpas.

seizethecheese|1 year ago

My father has fallen for one fraud after another these last few years. It’s disgusting. Anything in the direction of solving this would be doing the lord’s work.

leobg|1 year ago

What country are you in?

I’m in DE and have filed a criminal complaint about a company that runs fake personal ads targeting the elderly:

When an elderly person calls, they schedule an appointment at the person’s home. Then, over several hours, they talk them into signing a 3,000 EUR contract for an objectively useless service (getting contact data of 7 or so random people over a period of several weeks).

The guy running this scheme has been doing it since the 1990s. We know they took in more than 40 million euros in the last 10 years alone.

We filed the complaint a year ago, and the police and district attorney have done nothing so far. Meanwhile, the criminal himself has repeatedly sued journalists who covered the story.

Seems like the criminals are more resourceful than those who are getting paid to stop them.

throwaway48476|1 year ago

The solution is for all foreign wire transfers to be insured and reversible, which would drive up the cost of doing business with countries that are home to scammers.

TheSpiceIsLife|1 year ago

Power of attorney, and hold all his cards etc for him.

Requires the person to want to do that though.

leshenka|1 year ago

I once tried calling my relatives in Russia and was instead connected with exactly this kind of bot.

I guess this has something to do with my phone number starting with +38, and the fact that nobody actually calls their relatives by mobile phone anymore.

__MatrixMan__|1 year ago

It's a funny thought on the surface, but the people working these scams are typically slaves, more or less. I'd rather go after their slavers than waste electricity to waste their time.

tsimionescu|1 year ago

This is not attacking the people on the phone, it's attacking the whole operation. The person on the phone is going to be on the phone regardless of whether they're talking to an AI or a victim. The AI is merely talking on the phone, not abusing the caller in any way (other than perhaps eating into their commission).

I also think it's extremely simplistic to call the people making the calls "slaves". A lot of the time, they are in fact the perpetrators. Even when they are part of a more organized operation, they (1) are likely paid per successful scam, so they have a stake in hurting you, and (2) are fully aware they are scaring and stealing from someone.

So I wouldn't call these people slaves, I'd call them low-level criminals.

AnarchismIsCool|1 year ago

If there's no money in slavery there won't be any slaves. Yes, they'll get moved onto other things but this is currently the most profitable slavery operation available so that seems like a good place to start.

bambax|1 year ago

It's sad you're downvoted, because you're right. So-called "anti-scammers", who make a fortune on YouTube and whom Reddit apparently considers heroes, are in effect preying on the poor. The real culprits are the bosses, not the ones making the phone calls.

cute_boi|1 year ago

Imagine AI calling AI and wasting each others time :D

barbazoo|1 year ago

And wasting resources too. We’ve peaked as a species.

pkkkzip|1 year ago

This is actually a hilarious scenario. Anthropomorphize TTS with Indian accents to entrap the other AI agent into thinking it is a real human. DDoS their o1 API calls by soft-jailbreaking prompts using complex programming questions disguised as typical Microsoft support issues.

CodeBot: Word tables blank sometimes. Hmm.

SupportBot: What version? Try a repair.

CodeBot: Memory issue maybe? Bad alloc?

SupportBot: Rare. Repair is next.

CodeBot: Threading problem sar? Data races?

SupportBot: Try repair, new doc please sar.

drdrey|1 year ago

really wasting each other’s energy