Anthropic Education: The AI Fluency Index

72 points | armcat | 7 days ago | anthropic.com

61 comments

mlpoknbji|7 days ago

> But we know that any person who uses AI is likely to improve at what they do.

Do we?

shimman|7 days ago

I could have sworn there was research stating that the more you use these tools, the quicker your skills degrade. That honestly feels accurate to me, and it's why I've started reading more technical books again.

dsr_|7 days ago

Not until large-N research is done without sponsorship, support, or veiled threats from AI companies.

At which point, if the evidence turns out to be negative, it will be considered invalid because no model less recent than November 2027 is worth using for anything. If the evidence turns out to be slightly positive, it will be hailed as the next educational paradigm shift and AI training will be part of unemployment settlements.

poszlem|7 days ago

I would even say it's likely the opposite. My output as a programmer is now much higher than before, but I am losing my programming skills with each use of Claude Code.

throwaw12|7 days ago

Let me add a single data point.

> is likely to improve at what they do

Personally, my skills are not improving.

Professionally, my output has increased.

j45|7 days ago

People who use AI mindfully and actively can possibly improve.

The olden days of building skills and competencies are largely dying or dead, now that skills and competencies are changing faster than the training for them was ever intended to keep up with.

selridge|7 days ago

We DEEPLY do not.

That's not, IMO, a "skills go down" position. It's respecting that this is a bigger maybe than anyone in living memory has encountered.

jimbokun|7 days ago

Clearly this means Anthropic believes it, but it would be nice to have a footnote pointing to research backing the claim.

co_king_5|7 days ago

[deleted]

dmk|7 days ago

So I guess the key takeaway is basically that the better Claude gets at producing polished output, the less users bother questioning it. They found that artifact conversations have lower rates of fact-checking and reasoning challenges across the board. That's kind of an uncomfortable loop for a company selling increasingly capable models.

Terr_|7 days ago

> the less users bother questioning it

This makes me think of checklists. We have decades of experience in uncountable areas showing that checklists reminding users to question the universe improve outcomes: Is the chemical mixture at the temperature indicated by the chart? Did you get confirmation from Air Traffic Control? Are you about to amputate the correct limb? Is this really the file you want to permanently erase?

Yet our human brains are usually primed to skip steps, take shortcuts, and see what we expect rather than what's really there. It's surprisingly hard to both keep doing the work consistently and notice deviations.

> lower rates of fact-checking and reasoning challenges

Now here we are with LLMs, geared to produce a flood of superficially plausible output that strikes at our weak point: the ability to do intentional review in a deep and sustained way. We've automated the stuff that wasn't as hard, and we're putting even greater pressure on the remaining bottleneck.

Rather than the old definition involving customer interaction and ads, I fear the new "attention economy" is going to be managing the scarce resource of human inspection and validation.
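
A minimal sketch of what such a checklist gate for AI output might look like (entirely illustrative; the questions and function names are my own, not from the thread or the article):

    # Hypothetical pre-acceptance checklist for AI output.
    CHECKLIST = [
        "Did you verify the factual claims against a primary source?",
        "Did you challenge the model's reasoning at least once?",
        "Does the output answer the question you actually asked?",
        "Is this really the file/limb/chemical you meant?",
    ]

    def accept(output: str) -> bool:
        """Force an explicit yes on every item before accepting the output."""
        print(output)
        for question in CHECKLIST:
            if input(question + " [y/n] ").strip().lower() != "y":
                return False  # any hesitation blocks acceptance
        return True

The friction is the point: each item forces exactly the review step our brains want to skip.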

boplicity|7 days ago

> So I guess the key takeaway is basically that the better Claude gets at producing polished output, the less users bother questioning it.

This is exactly what I worry about when I use AI tools to generate code. Even if I check it and it seems to work, it's easy to think, "oh, I'm done." However, I'll (often) later find obvious logical errors that make all of the code suspect. Most of the time, though, I don't bother.

I'm starting to group code in my head by code I've thoroughly thought about, and "suspect" code that, while it seems to work, is inherently not trustworthy.
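
One way to make that grouping explicit in a codebase (a sketch; the `suspect` decorator and its wording are my own invention, not an established convention):

    import functools
    import warnings

    # Hypothetical marker: "works, but no human has traced the logic yet".
    def suspect(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            warnings.warn(f"{fn.__name__} is AI-generated and unreviewed",
                          stacklevel=2)
            return fn(*args, **kwargs)
        return wrapper

    @suspect
    def parse_totals(rows):
        # Passes the happy-path tests; nobody has checked the edge cases.
        return sum(row["amount"] for row in rows)

Removing the decorator then becomes a deliberate act of review, rather than trust accruing by default.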

Florin_Andrei|7 days ago

I think we're still at the stage where model performance largely depends on:

- how many data sources it has access to

- the quality of your prompts

So, if prompting quality decreases, so does model performance.

lukev|7 days ago

This is a highly circular method of evaluation. It correlates "fluency behaviors" with longer conversations and more back and forth.

What it notably does not correlate any of these behaviors with is external value or utility.

It is entirely possible that those people who are getting the most value out of LLMs are the ones with shorter interactions, and that those who engage in lengthier interactions are distracting themselves, wasting time, or chasing rabbit trails (the equivalent of falling into a wiki-hole, at the most charitable).

I can't prove that either -- but this data doesn't weigh in one way or the other. It only confirms that people who are chatty with their LLMs are chatty with their LLMs.

In my own case, I find the longer I "chat" with the LLM the more likely I am to end up with a false belief, a bad strategy, or some other rabbit hole. 90% of the value (in my personal experience) is in the initial prompt, perhaps with 1-2 clarifying follow-ups.

bargainbin|7 days ago

I’m not alone in finding this at odds with the claims made for the product, right?

Claude is meant to be so clever it can replace all white-collar work in the next n years, but we also hear "you’re not using it right". Which one is it?

dsr_|7 days ago

Which one will convince you to buy more Claude? Please answer honestly, it's for the sake of profits.

jimbokun|7 days ago

Anthropic is a weird company where the CEO at times almost admits they are probably building the Torment Nexus, yet they still feel they need to do it anyway… because someone else might do it first?

SpicyLemonZest|7 days ago

I'm not quite convinced of the maximalist claims, but these two aren't incompatible. Every time we talk about a company being "mismanaged" by e.g. a private equity buyout, what we mean is that the owners had access to a large volume of high quality white collar work but couldn't figure out how to use it right.

rsynnott|7 days ago

Anthropic in particular seem to be in a weird place where on the one hand they fund some real research, which is often not all roses and sunshine for them, but on the other hand, like all AI companies, they feel the need to make absurdly over-the-top claims about what's coming up Real Soon Now(TM).

kseniamorph|7 days ago

I feel like the authors commit a logical inconsistency. They present the drop in "identify missing context" behavior in artifact conversations as potentially concerning, like people are thinking less critically. But their own data suggests a simpler explanation: artifact conversations show higher rates of upfront specification (clarifying goals +14.7pp, specifying format +14.5pp, providing examples +13.4pp). It's obvious that when you provide more context upfront, you end up with less missing context later. I'd be more sceptical about such research.

Kye|7 days ago

You could arrive at the essence of this by just having read and internalized Carl Sagan's The Demon-Haunted World. Especially the Baloney Detection Kit.

In my experience good prompting is mostly just good thinking.

esafak|7 days ago

And having the experience and judgment to ask the right thing.

zahlman|7 days ago

> In line with our recent Economic Index, we find that the most common expression of AI fluency is augmentative—treating AI as a thought partner, rather than delegating work entirely. In fact, these conversations exhibit more than double the number of AI fluency behaviors than quick, back-and-forth chats.

> But we also find that when AI produces artifacts—including apps, code, documents, or interactive tools—users are less likely to question its reasoning (-3.1 percentage points) or identify missing context (-5.2pp). This aligns with related patterns we observed in our recent study on coding skills.

Well, sure. If you're asking the AI to produce artifacts directly, it's likely because you pre-judged yourself less competent to do that kind of analysis.

rickydroll|7 days ago

While AI fluency is an important question to ask, affordability is another. Can a low-income person use AI to the same level of fluency as a high-income person? Will fluency become another force for income inequality?

bigstrat2003|7 days ago

To the extent that this should be a thing, there are very few people I would want doing it less than a company that has repeatedly been caught lying about its product's achievements. Anthropic should not be taken seriously after their track record.

sarkarghya|7 days ago

Honestly, to use LLMs properly all you need to know is that it's a next-word (or action) prediction model, and, like all models, increased entropy hurts it. Try to reduce entropy to get better results. The rest is just sugarcoated nonsense. To use LLMs properly you need a physics class.
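
One concrete, if narrow, reading of "entropy" here is the Shannon entropy of the model's next-token distribution. A toy sketch (my own numbers; temperature is just one knob that changes it):

    import math

    def softmax(logits, temperature=1.0):
        exps = [math.exp(x / temperature) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def entropy_bits(probs):
        # Shannon entropy: H = -sum(p * log2(p))
        return -sum(p * math.log2(p) for p in probs if p > 0)

    logits = [2.0, 1.0, 0.5, 0.1]  # toy next-token scores
    for t in (1.5, 1.0, 0.5):
        print(f"T={t}: H={entropy_bits(softmax(logits, t)):.2f} bits")
    # Lower temperature -> more peaked distribution -> lower entropy.

A vaguer prompt works the same way in reverse: it leaves the model's distribution over plausible continuations flatter, i.e. higher-entropy.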

Barbing|7 days ago

Which class? Or what subjects?

rishabhaiover|7 days ago

And then some alignment, prompting structure, and task decomposition.