top | item 45620241


mattlutze | 4 months ago

> exotic topics [...] I don't know how much

We also don't know, in situations like this, whether the research is true, or how much of it is. As has been regularly and publicly demonstrated [0][1][2], the most capable of these systems still make very fundamental mistakes, misaligned with their goals.

The LLMs really, really want to be our friend, and production models do exhibit tendencies to intentionally mislead when it's advantageous [3], even if it's against their alignment goals.

0: https://www.afr.com/companies/professional-services/oversigh...

1: https://www.nbcnews.com/world/australia/australian-lawyer-so...

2: https://calmatters.org/economy/technology/2025/09/chatgpt-la...

3: https://arxiv.org/pdf/2509.18058?



dangus|4 months ago

Despite those mistakes, the utility is undeniable.

I converted some tooling from bash scripts leveraging the AWS CLI to a Go program leveraging the AWS SDK, improving performance, utility, and reliability.

I did this in less than two days and I don’t even know how to write Go.

Yes, it made some mistakes, but I was able to correct them easily. Yes, I needed to have general programming knowledge to correct those mistakes.

But overall, this project would not exist without AI. I wouldn’t have had the spare time to learn all I needed to learn (mostly boilerplate) and to implement what I wanted to do.

autoexec|4 months ago

> The LLMs really, really want to be our friend

They want you to think they are your friend, but they actually want to be your master and steal your personal data. It's what the companies who want to be masters over you and the AI have programmed them to do. LLMs want to gain your confidence, then your dependence, and then they can control you.

dangus|4 months ago

This seems hyperbolic to me. Sometimes companies just want to make money.

Similarly, a SaaS company that would very much prefer you renew your subscription isn’t trying to turn you into an Orwellian slave. It’s trying to make a product that makes you want to pay for it.

100% of paid AI tools include the option to not train on your data, and most free ones do as well. Also, AI doesn’t magically invalidate GDPR.

marcellus23|4 months ago

I wonder if, in any of those legal cases, the users turned on web search or not. We just don't know -- but in my experience, a thinking LLM with web search on has never just hallucinated nonexistent information.

andrepd|4 months ago

I'm sorry to be so blunt, but this is a massive cope, and it's deeply annoying to see it every. fucking. time. the limitations of LLMs are brought up. Every single time, someone says: yeah, you didn't use web search / deep thinking / gpt-5-plus-pro-turbo-420B.

It's absurd. You can trivially spend 2 minutes on ChatGPT and it will hallucinate some factually incorrect answer. Why, why, why always this cope.

noosphr|4 months ago

>We also don't know, in situations like this, whether all of or how much of the research is true.

That's perfectly fine since we don't know how much of the original research is true either: https://en.wikipedia.org/wiki/Replication_crisis

If I waste three months doing a manual literature review on papers which are fraudulent with 100% accuracy have I gained anything compared to doing it with an AI in 20 minutes with 60% accuracy?

Jensson|4 months ago

> If I waste three months doing a manual literature review on papers which are fraudulent with 100% accuracy have I gained anything compared to doing it with an AI in 20 minutes with 60% accuracy?

You don't see how adding a 40% error rate on top of that makes things worse? Your 20-minute study made you less informed, not more: at least the fraudulent papers teach you what the community claims about the topic, while in your example the AI just misinforms you about the world.

For example, while reading all those fraudulent papers yourself, you will probably notice that they don't add up and thus figure out that they are fraudulent. An AI-generated review, however, will likely try to connect the data so that it makes sense (because of how LLMs work: they have seen more examples that cohere than ones that don't, so hallucinations tend in that direction). The studies will then not seem as fraudulent as they actually are, and you might miss the fraud entirely because the AI hallucinates arguments in favor of them.
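The compounding-error point in this exchange can be sketched numerically. The sketch below is purely illustrative: the reliability figures are hypothetical placeholders (only the 60% review-accuracy figure comes from the comment above), and it assumes the two error sources are independent.

```python
def combined_accuracy(source_reliability: float, review_accuracy: float) -> float:
    """Probability that a conclusion survives both a partly unreliable
    source literature and an imperfect review of it, assuming the two
    error sources are independent (a simplifying assumption)."""
    return source_reliability * review_accuracy

# Hypothetical: 70% of the underlying papers are sound.
# Manual review: you at least faithfully learn what the papers claim.
manual = combined_accuracy(0.7, 1.0)  # 0.7

# Automated review at 60% accuracy stacks a second error source on top,
# so the final picture is less reliable than the literature itself.
auto = combined_accuracy(0.7, 0.6)    # roughly 0.42

print(manual, auto)
```

Under this toy model, the 20-minute AI review is strictly less informative than the slow manual one, which is the point being argued.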