
Being “Confidently Wrong” is holding AI back

155 points | tango12 | 6 months ago | promptql.io

262 comments


lucideer|6 months ago

While the thrust of this article is generally correct, I have two issues with it:

1. The phrase "the only thing" massively underplays the difficulty of this problem. It's not a small thing.

2. One of the issues I've seen with a lot of chat LLMs is their willingness to correct themselves when asked - this might seem, on the surface, to be a positive (allowing a user to steer the AI toward a more accurate or appropriate solution), but in reality it simply plays into users' biases & makes it more likely that the user will accept & approve of incorrect responses from the AI. Often, rather than "correcting" the AI, the exchange merely "teaches" it how to be confidently wrong in an amenable & subtle manner which the individual user finds easy to accept (or more difficult to spot).

If anything, unless/until we can solve the (insurmountable) problem of AI being wrong, AI should at least be trained to be confidently & stubbornly wrong (or right). This would also likely lead to better consistency in testing.

traceroute66|6 months ago

> is their willingness to correct themselves when asked

Except they don't correct themselves when asked.

I'm sure we've all been there, many, many, many, many, many times ....

   - User: "This is wrong because X"
   - AI: "You're absolutely right !  Here's a production-ready fixed answer"
   - User: "No, that's wrong because Y"
   - AI: "I apologise for frustrating you ! Here's a robust answer that works"
   - User: "You idiot, you just put X back in there"
   - and so continues the vicious circle....

dns_snek|6 months ago

> but in reality it simply plays into users' biases & makes it more likely that the user will accept & approve of incorrect responses from the AI.

Yes! I often find myself overthinking my phrasing to the nth degree because I've learned that even a sprinkle of bias can often make the LLM run in that direction even if it's not the correct answer.

It often feels a bit like interacting with a deeply unstable, insecure, people-pleasing person. I can't say anything that could possibly be interpreted as disagreement because they'll immediately flip the script, and I can't mention that I like pizza before asking what their favorite food is because they'll just mirror me.

stingraycharles|6 months ago

> 1. The words "the only thing" massively underplays the difficulty of this problem. It's not a small thing.

Exactly. One could argue that this is just an artifact from the fundamental technique being used: it’s a really fancy autocomplete based on a huge context window.

People still think there’s actual intelligence in there, while the actual work of making these systems appear intelligent is mostly algorithms and software managing exactly what goes into these context windows, and where.

Don’t get me wrong: it feels like magic. But I would argue that the only way to recognize a model being “confidently wrong” is to let another model, trained on completely different datasets with different techniques, judge it. And then preferably multiple.

(This is actually a feature of an MCP tool I use, “consensus” from zen-mcp-server, which enables you to query multiple different models to reach a consensus on a certain problem / solution).
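
To make the idea concrete, here is a minimal sketch of the multi-model pattern. This is not zen-mcp-server's actual API; the client setup and model names are placeholder assumptions:

    import os
    from collections import Counter
    from openai import OpenAI

    # Placeholder setup: one OpenAI-compatible client. A real consensus
    # tool would query genuinely different providers/models.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    PANEL = ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo"]  # hypothetical panel

    def ask(model: str, question: str) -> str:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
            temperature=0,
        )
        return resp.choices[0].message.content.strip()

    def consensus(question: str) -> tuple[str, float]:
        # Query each model independently and measure agreement.
        answers = [ask(m, question) for m in PANEL]
        best, count = Counter(answers).most_common(1)[0]
        return best, count / len(PANEL)

    answer, agreement = consensus("Does TCP guarantee message boundaries?")
    if agreement < 0.67:
        print("Low agreement, treat as suspect:", answer)
    else:
        print("Consensus:", answer)

Exact string matching is of course too crude for free-text answers; in practice you would have yet another model judge whether the answers agree semantically.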

tango12|6 months ago

The AI being wrong problem is probably not insurmountable.

Humans have meta-cognition that helps them judge if they're doing a thing with lots of assumptions vs doing something that's blessed.

Humans decouple planning from execution, right? Not fully, but we choose when to separate them and when not to.

If we had enough data on "here's a good plan given the user's context" and "here's a bad plan", it doesn't seem unreasonable to build a pretty reliable meta-cognition capability for judging the goodness of a plan.
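
As a toy illustration of what such a learned plan-judge could look like (the data here is invented; a real version would pair plans with user context and observed outcomes):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Invented toy data: 1 = good plan, 0 = bad plan.
    plans = [
        "Back up the database, run the migration on staging, verify, then deploy.",
        "Run the migration directly on production; no backup needed.",
        "Write a failing test reproducing the bug before touching the parser.",
        "Rewrite the whole module from scratch; the bug will probably go away.",
    ]
    labels = [1, 0, 1, 0]

    # A crude meta-cognition score: P(plan is good | plan text).
    judge = make_pipeline(TfidfVectorizer(), LogisticRegression())
    judge.fit(plans, labels)

    candidate = "Apply the schema change on production first and test afterwards."
    print("P(good plan) =", judge.predict_proba([candidate])[0, 1])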

energy123|6 months ago

Mechanistic interpretability could play a role here. The sycophancy you describe in chat mode could be what happens when the question is "too difficult" and the AI defaults to easy circuits that rely on simple rules of thumb (like: does the context contain positive words such as "excellent"?). The user experiences this as the AI just following basic nudges.

Could real-time observability into the network's internals somehow feed back into the model to reduce these hallucination-inducing shortcuts? Like train the system to detect when a shortcut is being used, then do something about it?

ninetyninenine|6 months ago

It’s not massively underplaying it imo. AI hype is real. This is revolutionary technology that humanity has never seen before.

But it happened at a time when hype can be delivered at a magnitude never before seen by humanity, at a volume that is completely unnatural by any standard previously set by humanity’s hype machines. Not even landing on the moon inundated people with as much hype. But, inevitably, as with landing on the moon, humanity is suffering from hype fatigue.

Too much hype makes us numb to the reality of how insane the technology is.

Like when someone says the only thing stopping LLMs is hallucinations… that is literally the last gap. LLMs cover creativity, comprehension, analysis, knowledge, and much more. Hallucinations are it. The final problem is targeted and boxed into something much narrower than building a human-level AI from scratch.

Don’t get me wrong. Hallucinations are hard. But this being the last thing left is not an underplay. Yes, it’s a massive issue, but it is also a massive achievement to reduce all of AGI to simply solving a hallucination problem.

stetrain|6 months ago

Yes, the quickness to correct itself isn't really useful. I would not like a human assistant/intern/pair programmer who, when asked how to do X, said:

> To accomplish X you can just use Y!

But Y isn't applicable in this scenario.

> Oh, you're absolutely right! Instead of Y you can do Z.

Are you sure? I don't think Z accomplishes X.

> On second thought you're absolutely correct. Y or Z will clearly not accomplish X, but let's try Q....

sfn42|6 months ago

Being confidently wrong isn't even the problem. It's a symptom of the much deeper problem that these things aren't AI at all; they're just autocomplete bots good enough to kind of seem like AI. There's no actual intelligence. That's the problem.

burnte|6 months ago

I'm just so glad people are seeing this. I started saying this literally days after ChatGPT came out and I started examining the technology. It's SUPER useful, but it's assistive; it can't be trusted to do things autonomously yet. That's ok, though: it can make human workers more productive, rather than worrying about replacing humans.

KoolKat23|6 months ago

Gemini 2.5 pro is quite good at being stubborn (well at least the initial release versions, haven't tested since).

decentrality|6 months ago

Agreed with #1 ( came here to say that also )

Pronoun and noun wordplay aside ( 'Their' ... `themselves` ), I also agree that LLMs can correct the path being taken, regenerate better, etc...

But the idea that 'AI' needs to be _stubbornly_ wrong ( more human, in the worst way ) is a bad idea. There is something fundamental showing here, and it is being missed.

What is the context reality? Where is this prompt/response taking place? It is almost guaranteed to be going on in a context which is itself violated or broken, such as with `Open Web UI` in a conservative example: who even cares if we get the responses right? Now we have 'right' responses in a cul-de-sac universe. This might be worthwhile using `Ollama` in `Zed`, for example, but for what purpose? An agentic process that is going to be audited anyway, because we always need to understand the code?

And if we are talking about decision-making processes in a corporate system strategy... now we are fully down the rabbit hole. The corporate context itself is coming or going on whether it is right/wrong, good/evil, etc., as the entire point of what is going on there. The entire world is already beating that corporation to death or not, or it is beating the world to death or not... so the 'AI' aspect is more of an accelerant of an underlying dynamic. And if we stand back: what corporation is not already stubbornly wrong, on average?

corytheboyd|6 months ago

Isn’t it obvious that the confidently wrong problem will never go away, because all of this is effectively built on a statistical next-token matcher? Yeah, sure, you can throw on hacks like RAG or more context window, but it’s still built on the same foundation.

It’s like saying you built a 3D scene on a 2D plane. You can employ clever tricks to make 2D look 3D at the right angle, but it’s fundamentally not 3D, which obviously shows when you take the 2D thing and turn it.

It seems like the effectiveness plateau of these hacks will soon be (or has already been?) reached, and the smoke-and-mirrors snake oil sales booths cluttering Main Street will start to go away. Still a useful piece of tech, just not for every-fucking-thing.

raynr|6 months ago

As a layman, this too strikes me as the problem underlying the "confidently wrong" problem.

The author proposes ways for an AI to signal when it is wrong and to learn from its mistakes. But that mechanism feeds back to the core next token matcher. Isn't this just replicating the problem with extra steps?

I feel like this is a framing problem. It's not that an LLM is mostly correct and just sometimes confabulates or is "confidently wrong". It's that an LLM is confabulating all the time, and all the techniques thrown at it do is increase the measured incidence of LLM confabulations matching expected benchmark answers.

jdbernard|6 months ago

It seems obvious to me, but there was a camp that thought, at least at one time, that probabilistic next-token prediction could be effectively what humans are doing anyway, just scaled up several more orders of magnitude. It always felt obvious to me that there was more to human cognition than just very sophisticated pattern matching, so I'm not surprised that these approaches are hitting walls.

yifanl|6 months ago

There are people convinced that if we throw a sufficient amount of training data and VC money at more hardware, we'll overcome the gap.

Technically, I can't prove that they're wrong, novel solutions sometimes happen, and I guess the calculus is that it's likely enough to justify a trillion dollars down the hole.

kovacs|6 months ago

This is the best analogy I've read to explain what's going on and takes me back to the days of Doom and how it was so transformative at the time. Perhaps in time the current generation will be viewed as the Doom engine as we await the holy grail of full 3D in Quake.

gus_massa|6 months ago

It's easy to solve if they modify the training to remove some weight from Stack Overflow and add more weight to Yahoo! Answers :).

I remember a few years ago, we were planning to make some kind of math forum for students in the first year of university. My opinion was that it was too easy to do it wrong. In one direction, you can be like Math Overflow, where all the questions and all the answers are too technical for the first year of university. In the other direction, you can be like Yahoo! Answers, where more than half of the answers were "I don't know", with many "I don't know"s per question.

For the AI, you want to give it some room to generalize/bullshit. If one page says that "X was a few months before Z" and another page says that "Y was a few days before Z", then you want a hallucinated reply that says that "X happened before Y".

On the other hand, you want the AI to say "I don't know." They just gave too little weight to the questions that are still open. Do you know a good forum where people post questions that are still open?

aidenn0|6 months ago

I mean if it's trained on things like Reddit then it's just reflecting its training data. I asked a question on reddit just yesterday and the only response I got was confidently wrong. This is not the first time it has happened.

rwmj|6 months ago

Only thing? Just off the top of my head: That the LLM doesn't learn incrementally from previous encounters. That we appear to have run out of training data. That we seem to have hit a scaling wall (reflected in the performance of GPT-5).

I predict we'll get a few research breakthroughs in the next few years that will make articles like this seem ridiculous.

energy123|6 months ago

Re online learning: if I freeze 40-year-old Einstein and make it so he can't form new memories beyond 5 minutes, that's still an incredibly useful, generally intelligent thing. Doesn't seem like a problem that needs to be solved on the critical path to AGI.

Re training data: we have synthetic data, and we probably haven't hit a wall. GPT-5 came only 3.5 months after o3. People are reading too much into the tea leaves here. We don't have visibility into the cost of GPT-5 relative to o3. If it's 20% cheaper, that's the opposite of a wall; that's exponential-like improvement. We don't have visibility into the IMO/IOI medal-winning models. All I see are people curve-fitting on very limited information.

j-krieger|6 months ago

Never before have we had a combination of well and poison where polluting the former was both as instantaneous and as easily achieved.

I've yet to see a convincing article for artificial training data.

tliltocatl|6 months ago

> LLM doesn't learn incrementally from previous encounters

This. The lack of any way to incorporate previous experience seems like the main problem. Humans are often confidently wrong as well, and avoiding being confidently wrong is actually something one must learn rather than an innate capability. But humans wouldn't repeat the same mistake indefinitely.

FergusArgyll|6 months ago

The problem is the kind of "data" users will feed it. It's basically an impossible task to put a continuous-learning model online and not have it devolve into the optimal mix of Stalin & Hitler.

lvl155|6 months ago

An incrementally learning model is pretty hard. That’s actually something I am working on right now, and it’s completely different from developing/implementing LLMs.

impossiblefork|6 months ago

Having run out of training data isn't something holding back LLMs in this sense.

But I agree that being confidently wrong is not the only thing they can't do. Programming: great. Maths: apparently great nowadays, since Google and OpenAI have something that could solve most problems on the IMO (even if the models we get to see probably aren't the models that can do this). But LLMs produce crazy output when asked to produce stories, they produce crazy output when given too-long, confusing contexts, and they have some other problems of that sort.

I think much of it is solvable. I certainly have ideas about how it can be done.

traceroute66|6 months ago

> That we appear to have run out of training data.

I think the next iteration of LLMs is going to be "interesting", given that all the websites they used to freely scrape have been increasingly putting up walls.

procaryote|6 months ago

Also that no companies involved seem to be making a profit, to have a reasonable vision for making a profit, or even to have revenue in the same ballpark as costs.

Except nvidia perhaps

tango12|6 months ago

Author here.

You’re right in that it’s obviously not the only problem.

But without solving this, it seems like no matter how good the models get, it’ll never be enough.

Or, yes: the biggest research breakthrough we need is reliable, calibrated confidence. And that’ll allow existing models, as they are, to become spectacularly more useful.

mettamage|6 months ago

> Only thing? Just off the top of my head: That the LLM doesn't learn incrementally from previous encounters. That we appear to have run out of training data.

Ha, that almost seems like an oxymoron. The previous encounters can be the new training data!

harsh3195|6 months ago

In terms of adoption, I think the user is right. That is the only thing stopping adoption of existing models in the real world.

moduspol|6 months ago

Unclear limits on how much context can be reliably provided and effectively used without degrading the result.

ninetyninenine|6 months ago

It does. We keep a section of the context window for memory; the LLM, however, is the one deciding what is remembered. Technically, via the system prompt, we can have it remember every prompt if needed.

But memory is a minor thing. Talking to a knowledgeable librarian or professor you never met is the level we essentially need to get it to for this stuff to take off.
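
A rough sketch of that reserved-memory mechanism (the prompt wording and the update rule here are my own assumptions, not any particular product's):

    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder model name
    memory: list[str] = []  # survives across chat turns

    def chat(user_msg: str) -> str:
        # A slice of the context window is permanently reserved for memory.
        system = ("You are a helpful assistant.\n"
                  "Long-term memory from earlier turns:\n"
                  + "\n".join(f"- {m}" for m in memory))
        answer = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user_msg}],
        ).choices[0].message.content
        # The LLM itself decides what is worth remembering.
        note = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content":
                       f"The user said: {user_msg!r}. Reply with one short "
                       "fact worth remembering about them, or NONE."}],
        ).choices[0].message.content.strip()
        if note != "NONE":
            memory.append(note)
        return answer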

firesteelrain|6 months ago

> That we appear to have run out of training data

And now, in some cases for a while, it is training on its own slop.

lazide|6 months ago

The article is the peak of confidently wrong itself, for solid irony points.

roxolotl|6 months ago

The big thing here is that they can’t even be confident. There is no there there. They are an (admittedly very useful) statistical model. Ascribing confidence to one is an anthropomorphizing mistake, which is easy to make since we’re wired to trust text that feels human.

They are at their most useful when it is cheaper to verify their output than it is to generate it yourself. That’s why code is rather ok; you can run it. But once validation becomes more expensive than doing it yourself, be it code or otherwise, their usefulness drops off significantly.

projektfu|6 months ago

The article buries the lede by waiting until the very end to talk about solutions like having the LLM write DSL code. Presumably if you feed an LLM your orders table and a question about it, you'll get an answer that you can't trust. But if you ask it to write some SQL or similar thing based on your database to get the answer and run it, you can have more confidence.
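
A sketch of that pattern (the model name and prompt are assumptions; the point is that you execute the generated SQL instead of trusting a free-text answer):

    import sqlite3
    from openai import OpenAI

    client = OpenAI()
    db = sqlite3.connect("shop.db")  # assumed to contain the orders table

    SCHEMA = ("CREATE TABLE orders (id INTEGER, customer TEXT, "
              "total REAL, placed_at TEXT);")

    def answer_with_sql(question: str):
        # Ask for a query, not an answer; the database supplies the facts.
        sql = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "user", "content":
                       f"Schema:\n{SCHEMA}\n\nWrite one SQLite SELECT "
                       f"statement answering: {question}\nReturn only SQL."}],
            temperature=0,
        ).choices[0].message.content.strip().strip("`")
        return sql, db.execute(sql).fetchall()  # guard with try/except in real use

    sql, rows = answer_with_sql("What were total sales in January 2025?")
    print(sql, rows)

The generated SQL can still be wrong, but it is inspectable, and the numbers come from the database rather than from the model's memory.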

z3c0|6 months ago

Agreed. All these attempts to benchmark LLM performance based on the interpreted validity of the outputs are completely misguided. It may be the semantics of "context" causing people to anthropomorphize the models (besides the lifelike outputs). Establishing context for humans is the process of holding external stimuli against an internal model of reality. Context for an LLM is literally just "the last n tokens". In that case, the performance measure would be how valid the most probabilistic token was given the prior n tokens, which really has nothing to do with the perceived correctness of the output.

hodgehog11|6 months ago

But as a statistical model, it should be able to report some notion of statistical uncertainty, not necessarily in its next-token outputs, but just as a separate measure. Unfortunately, there really doesn't seem to be a lot of effort going into this.

NoGravitas|6 months ago

The thing holding AI back is that LLMs are not world models, and do not have world models. Being confidently wrong is just a side effect of that. You need a model of the world to be uncertain about. Without one, you have no way to estimate whether your next predicted sentence is true, false, or uncertain; one predicted sentence is as good as another as long as it resembles the training data.

mojuba|6 months ago

In other words, just like with autonomous driving, you need real world experience aka general intelligence to be truly useful. Having a model of the world and knowing your place in it is one of the critical parts of intelligence that both autonomous vehicle systems and LLM's are missing.

rar00|6 months ago

I know people are pushing back, taking "only" literally, but from a reasonable perspective, what causes LLMs (technically, their outputs) to give that impression is indeed the crux of what holds progress back: how and what LLMs learn from data. In my personal opinion, there's something fundamentally flawed that the whole field has yet to properly pinpoint and fix.

jqpabc123|6 months ago

> there's something fundamentally flawed that the whole field has yet to properly pinpoint and fix

Isn't it obvious?

It's all built around probability and statistics.

This is not how you reach definitive answers. Maybe the results make sense, and maybe they're just nice-sounding BS. You guess which one is the case.

The real catch --- if you know enough to spot the BS, you probably didn't need to ask the question in the first place.

darth_avocado|6 months ago

Funnily, the same thing would get you promoted in corporate America as a human.

jqpabc123|6 months ago

But only if you are physically attractive and skilled at golf.

EE84M3i|6 months ago

This is so prominent in the cultural consciousness that it was lampooned in this week's episode of South Park, where Randy Marsh goes on a ChatGPT (and ketamine) fueled bender and destroys his business.

CloseChoice|6 months ago

LLMs are largely used by developers, who (in one sense or another) constantly supervise what the LLM does (even if that means, for some, committing to main and running in production). We already have a lot of tools: tests, compilation, a programming language with its harsh restrictions compared to natural language, and of course the eye test. This is not the case for a lot of jobs where GenAI is used for hyperautomation, so I am really curious about the ways it will or won't get adopted in other areas.

JCM9|6 months ago

Added to being confidently wrong is the super annoying way it corrects itself after disastrously screwing something up.

AI: “I’ve deployed the API data into your app, following best practices and efficient code.”

Me: “Nope, that’s totally wrong, and in fact you just wrote the API credential into my code, in plaintext, into the JavaScript, which basically guarantees that we’re gonna get hacked.”

AI: “You’re absolutely right. Putting API credentials into the source code for the page is not a best practice, let me fix that for you.”

jqpabc123|6 months ago

AI Apologetics: "It's all your fault for not being specific enough."

AlecSchueler|6 months ago

And then proceeds not to fix it.

esafak|6 months ago

Bayesian models solve this problem but they occupy model capacity which practitioners have traditionally preferred to devote to improving point estimates.
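
For illustration, the cheap stand-in practitioners usually reach for is an ensemble rather than a full Bayesian posterior; a toy sketch on made-up data:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labels

    # Bootstrap ensemble: each member trains on a different resample.
    members = []
    for _ in range(20):
        idx = rng.integers(0, len(X), len(X))
        members.append(LogisticRegression().fit(X[idx], y[idx]))

    x_new = np.array([[0.05, -0.02]])  # deliberately near the boundary
    probs = np.array([m.predict_proba(x_new)[0, 1] for m in members])
    # Disagreement across members is a crude proxy for posterior uncertainty.
    print(f"p(y=1) = {probs.mean():.2f} +/- {probs.std():.2f}")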

hodgehog11|6 months ago

I've always found this perspective remarkably misguided. Prediction performance is not everything; it can be extraordinarily powerful to have uncertainty estimates as well.

tangotaylor|6 months ago

I don't think humans are good at assessing the accuracy of their own opinions either and I'm not sure how AI is going to do it. Usually what corrects us is failure: some external stimulus that is indifferent or hostile to us.

As Mazer Rackham from Ender's Game said: "Only the enemy shows you where you are weak."

nijave|6 months ago

Maybe AI isn't artificial enough here...

blibble|6 months ago

The only thing holding me back from being a billionaire is my lack of a billion dollars.

jqpabc123|6 months ago

Being able to recall all the data from the internet doesn't make you "intelligent".

It makes you a walking database --- an example of savant syndrome.

Combine this with failure on simple logical and cognitive tests and the diagnosis would be --- idiot savant.

This is the best available diagnosis of an LLM. It excels at recall and text generation but fails in many (if not most) other cognitive areas.

But that's ok, let's use it to replace our human workers and see what happens. Only an idiot would expect this to go well.

https://nypost.com/2024/06/17/business/mcdonalds-to-end-ai-d...

pxc|6 months ago

For programming, at least, there are also problems with overall output quality, instruction following, and the scopes of changes.

LLMs don't do well at following style instructions, and existing memory systems aren't adequate for "remembering" my style preferences.

When you ask for one change, you often get loads of other changes alongside it. Transformers suck at targeted edits.

The hallucination problem and the sycophancy/suggestibility problem (which perhaps both play into the phenomenon of being "confidently wrong") are both real and serious. But they hardly form a singular bottleneck for the usefulness of LLMs.

bwfan123|6 months ago

Arguably, the biggest breakthroughs we have had came out of formalization of our world models. Math formalizes abstract worlds, and science formalizes the real world with testable actions.

The key feature of formalization is the ability to create statements, and test statements for correctness. ie, we went from fuzzy feel-good thinking to precise thinking thanks to the formalization.

Furthermore, the ingenuity of humans is to create new worlds and formalize them, ie we have some resonance with the cosmos so to speak, and the only resonance that the LLMs have is with their training datasets.

ofrzeta|6 months ago

LLMs just can't learn or understand from the context. The context is there to statistically affect the token production, but there is no real understanding. You can provide an LLM with a full specification of a problem, including all the elements needed to solve it, for instance all the specific functions of a programming library (one that is not on the Internet). A competent programmer could read this and implement the solution straightforwardly. With LLMs this does not work; they just confidently continue producing wrong solutions.

dismalaf|6 months ago

No, AI's lack of understanding holds it back.

It's literally just a statistical model that guesses what you want based on the prompt and a whole bunch of training data.

If we want a black box that's AGI/SGI, we need a completely new paradigm. Or we apply a bunch of old-school AI techniques (aka. expert systems) to augment LLMs and get something immediately useful, yet slightly limited.

Right now LLMs do things and are somewhat useful, falling short of some expectations and better than others, but yeah, a statistical model was never going to be more than the sum of its training data.

ldikrtjliaj|6 months ago

Well fucking yeah

Yesterday I asked ChatGPT a really simple, factual question: "Where is this feature in this software?" And it made up a menu that didn't exist. I told it, "No, you're hallucinating, search the internet for the correct answer," and it directly responded (without the time delay and introspection bubbles that indicate an internet search), "That is not a hallucination, that is factually correct." God damn.

lenerdenator|6 months ago

Works fine for humans; I guess we'll know that AI has truly reached human levels of intelligence when being confidently wrong stops holding it back.

rokkamokka|6 months ago

The interesting question here is whether a statistical model like a GPT can actually encode this in a meaningful way. Nobody has quite found it yet, if so.

ACCount37|6 months ago

They can, and they already do it somewhat. We've found enough to know that.

As the most well known example: Anthropic examined their AIs and found that they have a "name recognition" pathway - i.e. when asked about biographic facts, the AI will respond with "I don't know" if "name recognition" has failed.

This pathway is present even in base models, but it only results in a consistent "I don't know" if the AI was trained for reduced hallucinations.

AIs are also capable of recognizing their own uncertainty. If you have an AI-generated list of historic facts that includes hallucinated ones, you can feed that list back to the same AI and ask it how certain it is about every fact listed. Hallucinated entries will consistently have less certainty. This latent "recognize uncertainty" capability can, once again, be used in anti-hallucination training.
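
A rough sketch of that second pass (the prompt wording and model name here are my own assumptions, not a known recipe):

    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder

    facts = [
        "The Treaty of Westphalia was signed in 1648.",
        "Napoleon won the Battle of Waterloo.",  # the planted hallucination
    ]

    for fact in facts:
        rating = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content":
                       f"How certain are you that this is true?\n{fact}\n"
                       "Answer with a single number from 0 to 100."}],
            temperature=0,
        ).choices[0].message.content.strip()
        print(rating, "->", fact)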

Those anti-hallucination capabilities are fragile, easy to damage in training, and do not fully generalize.

Can't help but think that limited "self-awareness" - and I mean that in a very mechanical, no-nonsense "has information about its own capabilities" way - is a major cause of hallucinations. An AI has some awareness of its own capabilities and how certain it is about things - but not nearly enough of it to avoid hallucinations consistently across different domains and settings.

1970-01-01|6 months ago

Confidently wrong while being unable to unlearn its incorrect assumption. I'd be happy with confidently wrong if it understood that critical feedback is not an ask; it's an ultimatum for our current discussion to continue with the facts.

kemcho|6 months ago

The angle that being able to detect "confidently wrong", which then helps kick off new learning, is interesting.

Has anyone had any success with continuous learning type AI products? Seems like there’s a lot of hype around RL to specialise.

ACCount37|6 months ago

There's no "hype" because continuous learning is algorithmically hard and computationally intensive.

There's no known good recipe for continuous learning that's "worth it". No ready-made solution for everyone to copy. People are working on it, no doubt, but it's yet to get to the point of being readily applicable.

nyeah|6 months ago

PG pointed this out a while back. He said that AIs were great at generating typical online comments. (NB I don't know which site's comments he might have been referring to.)

squigz|6 months ago

I've said from the beginning that until an LLM can determine and respond with "I do not know that", its usefulness will be limited and it cannot be trusted.

gloosx|6 months ago

We've had many examples of AIs that tried to learn from feedback in the public domain. They all quickly became racist Nazis for some reason.

AlecSchueler|6 months ago

What are examples other than Grok which apparently had nazi sympathies hardcoded in the system prompt?

mediumsmart|6 months ago

I sometimes take an answer that does not work, open a new chat, paste the thing, and ask, "Why does this not do (whatever it's supposed to do)?"

mtkd|6 months ago

The link is a sales pitch for some tech that uses MCPs ... see the platform overview on the product top menu

Because MCPs solve the exact issue the whole post is about

witnessme|6 months ago

I like the DSL approach but can't imagine how practical and effective it is, especially considering the cost.

frays|6 months ago

Great article; it puts into words a lot of my experiences with AI (needing to tell it to make a plan, then assessing it).

ChrisMarshallNY|6 months ago

My favorite is "Tested and Verified," then giving me code that won't even compile.

myahio|6 months ago

Yep, this is why I'm skeptical about using LLMs as a learning tool

giancarlostoro|6 months ago

What's really funny to me is that sometimes it fixes itself if you just ask, "are you SURE ABOUT THIS ANSWER?" Myself and others often wonder: why the heck don't they run a 2nd model to "proofread" the output or spot-check it? Like, did you actually answer the question, or are you going off on a really weird tangent?

I asked Perplexity some question about sample UI code for Rust/Slint, and it gave me a beautiful web UI; I think it got confused because I wanted to make a UI for an API that has its own web UI. I told it that it did NOT give me code for Slint, even though some of its output made references to "ui.slint" and other Rust files; it realized its mistake and gave me exactly what I wanted to see.

tl;dr why don't LLMs just vet themselves with a new context window to see if they actually answered the question? The "reasoning" models don't always reason.
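
You can wire this up yourself today; a sketch of the fresh-context "proofreader" loop (the model name and prompts are placeholders):

    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder

    def answer(question: str) -> str:
        return client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": question}],
        ).choices[0].message.content

    def vetted_answer(question: str, retries: int = 2) -> str:
        draft = answer(question)
        for _ in range(retries):
            # Fresh context: the judge sees only the question and the
            # draft, none of the conversation that produced it.
            verdict = client.chat.completions.create(
                model=MODEL,
                messages=[{"role": "user", "content":
                           f"Question: {question}\nAnswer: {draft}\n"
                           "Does the answer actually address the question? "
                           "Reply OK, or explain what is wrong."}],
            ).choices[0].message.content
            if verdict.strip().startswith("OK"):
                return draft
            draft = answer(f"{question}\nA reviewer said: {verdict}\nTry again.")
        return draft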

asadotzler|6 months ago

I've asked that question on accurate answers and had the bot say oops and change the answer to an inaccurate one. This seems to happen with about the same frequency on both sides so I'm not sure how helpful it will ultimately be.

ACCount37|6 months ago

Because that would be twice as computationally intensive.

"Reasoning" models integrate some of that natively. In a way, they're trained to double check themselves - which does improve accuracy at the cost of compute.

merelysounds|6 months ago

I’m especially surprised by how little progress has been made. Today’s hallucinations, while less frequent, continue to have a major negative impact. And the problem has been noticed since the start.

> "I will admit, to my slight embarrassment … when we made ChatGPT, I didn't know if it was any good," said Sutskever.

> "When you asked it a factual question, it gave you a wrong answer. I thought it was going to be so unimpressive that people would say, 'Why are you doing this? This is so boring!'" he added.

https://www.businessinsider.com/chatgpt-was-inaccurate-borin...

dankobgd|6 months ago

I am pretty sure it has many more problems.

_Algernon_|6 months ago

Rolling weighted dice repeatedly to generate words isn't factually accurate. More at 11.

chpatrick|6 months ago

It is if the weights are sufficiently advanced.

paul7986|6 months ago

And being overhyped, with all the doom and gloom about its effects on society.

ChatGPT (5) is not there, especially in replacing my field and skills: graphic design, web design, and web development. For the first two, it spits out solid creations per your prompt request, yet it cannot edit its creations, just creates new ones lol. So it's just another tool in my arsenal, not a replacement for me.

Makes me wonder how it generates the logos and website designs ... is it all just hocus pocus, the Wizard of Oz?

nijave|6 months ago

I don't know much about it, but apparently we've been having success at work with Figma MCP hooked up to Claude in Cursor. Apparently it can pull from our component library and generate usable code (although it still needs engineering to productionalize).

I don't know about replacing anyone, but our UI/UX designers are claiming it's significantly faster than traditional mock-ups.

amai|6 months ago

Bayesian LLMs anyone?

jeffxtreme|6 months ago

Does anyone know which XKCD comic the top image was? Or was it just created in the style of XKCD?

arduanika|6 months ago

The latter, I think.

Randall Munroe has called this abomination "an insult to life itself". But that might be quoting him out of context.

jrm4|6 months ago

Genuine question from someone who thinks they understand the tech:

I don't get why I haven't seen a whole lot (or any) of these models or tools "self-reporting" their confidence in their answer.

This feels like it would be REALLY easy; these things predict likelihoods of tokens -- just, you know, give us that number?
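
Some APIs do expose the raw numbers; a sketch of turning per-token logprobs into a crude answer-level confidence (the model name is a placeholder, and whether this number is well calibrated is exactly the open question):

    import math
    from openai import OpenAI

    client = OpenAI()

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user",
                   "content": "What year did the Berlin Wall fall?"}],
        logprobs=True,
    )

    logprobs = [t.logprob for t in resp.choices[0].logprobs.content]
    # Geometric mean of per-token probabilities as an overall score.
    confidence = math.exp(sum(logprobs) / len(logprobs))
    print(resp.choices[0].message.content, f"(confidence ~ {confidence:.2f})")

The catch is that token probability mostly measures how typical the phrasing is, not whether the claim is true, so a fluent hallucination can still score high.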

SalariedSlave|6 months ago

Anybody remember active learning? I'm old, and ML was much different back then, but this reminds me of grueling annotation work I had to do.

On a different note: is it just me or are some parts of this article oddly written? The sentence structure and phrasing read as confusing - which I find ironic, given the context.

1vuio0pswjnm7|6 months ago

What is interesting, IMO, about the "confidently wrong" phenomenon is that it was also commonly found in internet forums and online commentary in general prior to the widespread use of today's confidently wrong "AI". That is, online commenters routinely were, and still are, "confidently wrong". IMHO and IME, the "confidently wrong" phenomenon was and still is more prevalent in online commentary than "IRL".

No surprise, IMO, that online commenters and so-called "tech" companies, who tend to be overly fixated on computers as the solution to all problems, are generally also the most numerous promoters of confidently wrong "AI".

The nature of the medium itself and those so-called "tech" companies that have sought to dominate it through intermediation and "ad services"^1 could have something to do with the acceptance and promotion of confidently wrong "AI". Namely, its ability to reduce critical thinking and the relative ease with which uninformed opinions, misinformation, and other non-factual "confidently wrong" information can be spread by virtually anyone.

1. If "confidently wrong" information is popular, if it "goes viral", then with few exceptions it will be promoted by these companies to drive traffic and increase ad services revenue.

Please note: I could be wrong.

dgfitz|6 months ago

s/confidently//

Because “ai” is fallible, right now it is at best a very powerful search engine that can also muck around in (mostly JavaScript) codebases. It also makes mistakes in code, adds cruft, and gives incorrect responses to “research-type” questions. It can usually point you in the right direction, which is cool, but Google was able to do that before its enshittification.

s/AI/LLMs/

The part where people call it AI is one of the greatest marketing tricks of the 2020s.

captainclam|6 months ago

Wow, there really is an xkcd for everything.

Dwedit|6 months ago

Those are original cartoons drawn in the style of XKCD. But strangely enough, in the second cartoon, the Megan clone seems to change from a thin stick figure to suddenly wearing clothes?

I'm not sure if the comic was AI-assisted or not. AI-generated images do not usually contain identical pixel data when a panel repeats.