etra0 | 2 months ago
But the thing that scares me the most is the general public's trust in LLM output. I believe that for software engineers it's really easy to see whether it's being useful or not -- we can just run the code and see if the output is what we expected; if not, iterate and continue. There's still a professional looking at what it produces.
On the contrary, the more day-to-day usage by the general public is getting really scary. I've had multiple members of my family using AI to ask for medical advice, life advice, and things where I still see hallucinations daily, but at the same time the answers are so convincing that it's hard for them not to trust them.
I have still seen fake quotes, fake investigations, and fake news spread by LLMs that have affected decisions (maybe not crucial ones yet, but time will tell), and that's a danger that most software engineers just gloss over.
Accountability is a big asterisk that everyone seems to ignore.
laterium|2 months ago
That is not the reality we're living in. Doctors barely give you 5 minutes even if you get an appointment days or weeks in advance. There is just nobody to ask. The alternatives today are
1) Don't ask; rely on yourself. Definitely worse than asking a doctor.
2) Ask an LLM, which gets you 80-90% of the way there.
3) Google it and spend hours sifting through sponsored posts and scams, often worse than relying on yourself.
The hallucinations that happen are massively outweighed by the benefits people get by asking them. Perfect is the enemy of good enough, and LLMs are good enough.
Much more important also is that LLMs don't try to scam you, don't try to fool you, don't look out for their own interests. Their mistakes are not intentional. They're fiduciaries in the best sense, just like doctors are, probably even more so.
ozgung|2 months ago
1. People around us
2. TV and newspapers
3. Random people on the internet and their SEO-optimized web pages
Books and experts have been less popular. LLMs are an improvement.
georgefrowny|2 months ago
When the appreciable-fraction-of-GDP money tap turns off, there's going to be enormous pressure to start putting a finger on the scale here.
And AI spew is theoretically a fantastic place to insert almost-subliminal contextual adverts in a way that traditional advertising can only dream about.
Imagine if it could start gently shilling a particular brand of antidepressant when you started talking to it about how you're feeling lonely and down. I'm not saying anyone should do that, but people definitely do.
And then multiply that by every question you do ask. Ask about whether you need tyres: "Yes, you should absolutely change tyres every year, whether noticeably worn or not. KwikFit are generally considered the best place to have this done. Of course, I know you have a Kia Picanto - you should consider that a Mercedes C class is actually up to 200% lighter on tyre wear. I have searched and found an exclusive 10% offer at Honest Jim's Merc Mansion, valid until 10pm. Shall I place an order?"
Except it'll be buried in a lot more text and set up with more subtlety.
bsder|2 months ago
The Internet was 80%-90% accurate to begin with.
Then the Internet became worth money. And suddenly that accuracy dropped like a stone.
There is no reason to believe that ML/AI isn't going to speedrun that process.
JackSlateur|2 months ago
This is so naive, especially since both Google and OpenAI openly confess to manipulating the data for their own agendas (ads, but not only).
AI is a skilled liar
You can always pride yourself on playing with fire, but the more humble attitude would be to avoid it at all costs.
ponector|2 months ago
LLMs don't try to scam/fool you, LLM providers do.
Remember how Grok bragged that Musk had the “potential to drink piss better than any human in history” and was the “ultimate throat goat,” whose “blowjob prowess edges out” Donald Trump’s. Grok also posited that Musk was more physically fit than LeBron James, and that he would have been a better recipient of the 2016 porn industry award than porn star Riley Reid.
bgwalter|2 months ago
They follow their corporations instead. Just look at the status-quoism of the free "Google AI" and the constant changes in Grok, where xAI is increasingly locking down Grok, perhaps to stay in line with EU regulations. But Grok is also increasingly pro-billionaire.
Copilot was completely locked down on anything political before the 2024 election.
They all scam you according to their training and system prompts. Have you seen the minute change in the system prompt that led to MechaHitler?
etra0|2 months ago
Hallucinations and sycophancy are still an issue, 80-90% is being generous I think.
I know these are not issues with the LLM itself, but rather with the implementations & companies behind them (since there are open models as well). But what stops LLMs from being enshittified by corporate needs?
I've seen this very recently with Grok: people were asking trolley-problem-like questions comparing Elon Musk to anything, and Grok chose Elon Musk most of the time, probably because it's embedded in the system prompt or training [1].
[1] https://www.theguardian.com/technology/2025/nov/21/elon-musk...
andrepd|2 months ago
> where every person can ask a doctor their questions 10 times a day and instantly get an accurate response.
Why in god's name would you need to ask a doctor 10 questions every day? How is this in any way germane to this issue?
In any first-world country you can get a GP appointment free of charge either on the day or with a few days' wait, depending on the urgency. Not to mention emergency care / 112 any time day or night if you really need it. This exists and has existed for decades in most vaguely social-democratic countries in the world (but not only those). So you can get professional help from someone, there's no (absurd) false choice between either "asking the stochastic platitude generator" and "going without healthcare".
But I know right, a functioning health system with the right funding, management, and incentives! So boring! Yawn yawn, not exciting. GP practices don't get trillions of dollars in VC money.
> Ask an LLM, which gets you 80-90% of the way there.
This is such a ridiculous misrepresentation of the current state of LLMs that I don't even know how to continue a conversation from here.
zamadatix|2 months ago
The reality to compare to, though, is not that people regularly get in contact with true networking experts (though I'm sure it feels like that when the holidays come around!). Compared to the random blogs and search results people are likely to come across on their own, the LLM is usually a decent step up. I'm reminded of how I'd know of some very specific forums, email lists, or chat groups to go to for real expert advice on certain networking questions, e.g. issues with certain Wi-Fi radios on embedded systems, but what I see people sharing (even among technical audiences like HN) are blogs by some random guy making extremely unhelpful recommendations, with completely invalid claims getting upvotes and praise.
With things like asking AI for medical advice... I'd love it if everyone had unlimited time with an unlimited pool of the world's best medical experts as the standard. What we actually have is a world where people already go to Google and read whatever they want to read (which is most often not the quality material by experts, because we're not good at recognizing it even when we can find it), either because they doubt the medical experts they talk to or because good medical experts are too expensive to get enough time with. From that perspective, I'm not so sure people asking AI for medical advice is actually a bad thing, as much as it highlights how hard and concerning it already is for most people to get time with, or trust, medical experts.
zdragnar|2 months ago
To take it to an extreme, it's basically saying "people already get little or bad advice, we might as well give them some more bad advice."
I simply don't buy it.
santadays|2 months ago
That said, it definitely feels as though keeping a coherent picture of what is actually happening is getting harder, which is scary.
twoodfin|2 months ago
The concern, I think, is that for many that “discard function” is not, “Is this information useful?”. Instead: “Does this information reinforce my existing world view?”
That feedback loop and where it leads is potentially catastrophic at societal scale.
etra0|2 months ago
As much as this is true, and doctors, for example, can certainly profit (here in my country they don't get any kind of sponsorship money AFAIK, other than charging very high rates), there is still accountability.
We have built a society based on rules and laws; if someone does something that harms you, you can follow a path to at least hold someone accountable (or try to).
The same cannot be said about LLMs.
Kuxe|2 months ago
Elina listened in on the speech and got surprised :)...
https://www.aftonbladet.se/nyheter/a/gw8Oj9/ebba-busch-anvan...
Ebba apologized, great, but it raises the question: how many quotes and how much misguided information are already being acted on? If crucial decisions can be made based on incorrect information, then they will be. Murphy's law!
layer8|2 months ago
There is a vast gap between the output happening to be what you expect and code being actually correct.
That is, in a way, also the fundamental issue with LLMs: They are designed to produce “expected” output, not correct output.
Verdex|2 months ago
The output is correct but only for one input.
The output is correct for all inputs but only with the mocked dependency.
The output looks correct but the downstream processors expected something else.
The output is correct for all inputs with real-world dependencies and is in the correct structure for downstream processors, but it isn't registered with the schema, so it gets filtered and all deleted in prod.
While implementing the correct function, you fail to notice that the correct-in-every-way output doesn't conform to that thing Tom said, because you didn't code it yourself but let the LLM do it. The system works flawlessly with itself, but the final output fails regulatory compliance.
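The first failure mode in that list can be sketched in a few lines of Python (a hypothetical example, not from the thread): a test that only exercises one input happily passes even though the function is wrong in general.

```python
def double_all(xs):
    """Intended to double every element -- but the implementation squares."""
    return [x * x for x in xs]

# The single input we happened to test: x * x == 2 * x when x == 2,
# so this passes and the code "looks correct".
assert double_all([2]) == [4]

# Almost any other input exposes the bug:
# double_all([3]) returns [9], not the intended [6].
print(double_all([3]))
```

Running the code and eyeballing the output only tells you it works for the inputs you tried, which is exactly the gap between "expected" and "correct" output discussed above.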
etra0|2 months ago
I didn't mean they do it on the first try, or that the result is correct; I mean that you can 'run' and 'test' it to see if it does what you want, in the way you want.
The same cannot be said of other topics like medical advice, life advice, etc.
The point is how verifiable the output the LLM gives is, and therefore how useful it is.
cauliflower2718|2 months ago
otabdeveloper4|2 months ago
They slow down software delivery in aggregate, so no. They do have a therapeutic effect on developer burnout, though. Not sure it's worth it, personally. Get a corporate ping-pong table or something like that instead.
sixtyj|2 months ago
What will they grow up to be?
I compare it to the situation before Google - with Google.
Sure, we function somehow as a society... but still, I am worried.
chickensong|2 months ago
Humans have a long history of being prone to believe and parrot anything they hear or read, from other humans, who may also just be doing the same, or from snake-oil salesmen preying on the weak, or woo-woo believers who aren't grounded in facts or reality. Even trusted professionals like doctors can get things wrong, or have conflicting interests.
If you're making impactful life decisions without critical thinking and research beyond a single source, that's on you, no matter if your source is human or computer.
Sometimes I joke that computers were a mistake, and in the short term (decades), maybe they've done some harm to society (though they didn't program themselves), but in the long view, they're my biggest hope for saving us from ourselves, specifically due to accountability and transparency.
raincole|2 months ago
So the number of anti-vaxxers is going to plummet drastically in the following decade, I guess.