item 46338705

laterium|2 months ago

The issue you're overlooking is the scarcity of experts. You're comparing the current situation to an alternative universe where every person can ask a doctor their questions 10 times a day and instantly get an accurate response.

That is not the reality we're living in. Doctors barely give you 5 minutes even if you get an appointment days or weeks in advance. There is just nobody to ask. The alternatives today are:

1) Don't ask and rely on yourself, which is definitely worse than asking a doctor.

2) Ask an LLM, which gets you 80-90% of the way there.

3) Google it and spend hours sifting through sponsored posts and scams, often worse than relying on yourself.

The hallucinations that happen are massively outweighed by the benefits people get by asking them. Perfect is the enemy of good enough, and LLMs are good enough.

Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests. Their mistakes are not intentional. They're fiduciaries in the best sense, just like doctors are, probably even more so.

ozgung|2 months ago

Chronologically, our main sources of information have been:

1. People around us

2. TV and newspapers

3. Random people on the internet and their SEO-optimized web pages

Books and experts have been less popular. LLMs are an improvement.

martin-t|2 months ago

> LLMs are an improvement.

Unless somebody is using them to generate authoritative, human-sounding text full of factoids and half-truths in support of a particular view.

Then it becomes about who can afford more LLMs and more IPs to look like individual users.

ahartmetz|2 months ago

Interesting point, actually - LLMs are a return to curated information. In some ways. In others, they tell everyone what they want to hear.

georgefrowny|2 months ago

> Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests.

When the appreciable-fraction-of-GDP money tap turns off, there's going to be enormous pressure to start putting a finger on the scale here.

And AI spew is, in theory, a fantastic place to insert almost-subliminal contextual adverts in a way that traditional advertising can only dream about.

Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do.

And then multiply that by every question you do ask. Ask whether you need new tyres: "Yes, you should absolutely change tyres every year, whether noticeably worn or not. KwikFit are generally considered the best place to have this done. Of course I know you have a Kia Picanto - you should consider that actually a Mercedes C class is up to 200% lighter on tyre wear. I have searched and found an exclusive 10% offer at Honest Jim's Merc Mansion, valid until 10pm. Shall I place an order?"

Except it'll be buried in a lot more text and set up with more subtlety.

otabdeveloper4|2 months ago

> When the appreciable-fraction-of-GDP money tap turns off, there's going to be enormous pressure to start putting a finger on the scale here.

Yeah, back in the day before monetization Internet pages were informative, reliable and ad-free too.

lithocarpus|2 months ago

I've been envisioning a market for agendas, where players bid for the AI companies to nudge their LLMs toward a given agenda. It would be subtle and not visible to users. Probably illegal, but I imagine it will happen to some degree. Or at the very least the government will want the "levers" to push various agendas, the same way they did with covid.

I despise all of this. For the moment, though, before all this is implemented, it's perhaps a brief golden age of LLM usefulness. (And I'm sure LLMs will remain useful for many things, but there will be entire categories where they're ruined by pay-to-play, the same as happened with Google search.)

chickensong|2 months ago

> Imagine if it could start gently shilling a particular brand of antidepressant if you started talking to it about how you're feeling lonely and down. I'm not saying you should do that, but people definitely do.

Doctors already shill for big pharma. There are trust issues all the way down.

bsder|2 months ago

> 2) Ask an LLM, which gets you 80-90% of the way there.

The Internet was 80%-90% accurate to begin with.

Then the Internet became worth money. And suddenly that accuracy dropped like a stone.

There is no reason to believe that ML/AI isn't going to speedrun that process.

thayne|2 months ago

But the LLM was probably trained on all the sponsored posts and scams. It isn't clear to me that an LLM response is any more reliable than sifting through Google results.

eastbound|2 months ago

Excellent way of putting it. Just a nitpick: People should look up in medical encyclopedias/research papers/libraries, not blogs. It requires the ability to find and summarize… which is exactly what AI is excellent at.

dgemm|2 months ago

This seems true for our moment in time, but looking forward I'm not sure how long it will stay that way. The LLMs will inevitably need to find a sustainable business model, so I can very much see them becoming enshittified the same way Google did, eventually making 2) and 3) more similar to each other.

jonas21|2 months ago

An alternative business model is that you, or more likely your insurance, pays $20/mo for unlimited access to a medical agent, built on top of an LLM, that can answer your questions. This is good for everyone -- the patient gets answers without waiting, the insurer gets cost savings, doctors have a less hectic schedule and get to spend more time on the interesting cases, and the company providing the service gets paid for doing a good job -- and would have a strong incentive to drive the hallucination rate down to zero (or at least lower than the average physician's).

JackSlateur|2 months ago

"Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests"

This is so naive, especially since both Google and OpenAI openly admit to manipulating the data for their own agendas (ads, but not only ads).

AI is a skilled liar

You can always pride yourself on playing with fire, but the humbler attitude would be to avoid it at all costs.

ponector|2 months ago

>> LLMs don't try to scam you, don't try to fool you, don't look out for their own interests

LLMs don't try to scam/fool you, LLM providers do.

Remember how Grok bragged that Musk had the “potential to drink piss better than any human in history” and was the “ultimate throat goat,” whose “blowjob prowess edges out” Donald Trump’s. Grok also posited that Musk was more physically fit than LeBron James, and that he would have been a better recipient of the 2016 porn industry award than porn star Riley Reid.

etra0|2 months ago

Completely off-topic, but I just love how the Twitter community exploited Musk's pettiness.

I had a chuckle reading all of these.

bgwalter|2 months ago

> Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests.

They follow their corporations instead. Just look at the status-quoism of the free "Google AI" and the constant changes in Grok, where xAI is increasingly locking down Grok, perhaps to stay in line with EU regulations. But Grok is also increasingly pro-billionaire.

Copilot was completely locked down on anything political before the 2024 election.

They all scam you according to their training and system prompts. Have you seen the minute change in the system prompt that led to MechaHitler?

etra0|2 months ago

> 2) Ask an LLM, which gets you 80-90% of the way there.

Hallucinations and sycophancy are still an issue; 80-90% is generous, I think.

I know these aren't issues with the LLMs themselves, but rather with the implementations and the companies behind them (since there are open models as well). But what stops LLMs from being enshittified to serve corporate needs?

I've seen this very recently with Grok: people were asking trolley-problem-style questions comparing Elon Musk to anything, and Grok chose Elon Musk most of the time, probably because it is embedded in the system prompt or training [1].

[1] https://www.theguardian.com/technology/2025/nov/21/elon-musk...

andrepd|2 months ago

Two MAJOR issues with your argument.

> where every person can ask a doctor their questions 10 times a day and instantly get an accurate response.

Why in god's name would you need to ask a doctor 10 questions every day? How is this in any way germane to this issue?

In any first-world country you can get a GP appointment free of charge, either on the day or with a few days' wait, depending on the urgency. Not to mention emergency care / 112 any time, day or night, if you really need it. This has existed for decades in most vaguely social-democratic countries in the world (but not only those). So you can get professional help from a person; there's no (absurd) false choice between "asking the stochastic platitude generator" and "going without healthcare".

But I know right, a functioning health system with the right funding, management, and incentives! So boring! Yawn yawn, not exciting. GP practices don't get trillions of dollars in VC money.

> Ask an LLM, which gets you 80-90% of the way there.

This is such a ridiculous misrepresentation of the current state of LLMs that I don't even know how to continue a conversation from here.

markdown|2 months ago

> In any first-world country you can get a GP appointment free of charge

Are you really under the assumption that this is a first-world perk?

andrepd|2 months ago

I love that the next day, I open this post and it's simply downvoted with 0 counterpoint.