top | item 47059059

anon-3988 | 12 days ago

You are forgetting that they are now going to use AI to summarize it back.

kombookcha|12 days ago

This is one of my major concerns about people trying to use these tools for 'efficiency'. The only plausible value in somebody writing a huge report and somebody else reading it is information transfer. LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high, and you will be worse off reading the summary than if you skimmed the first and last pages. In fact, you will be worse off than if you had done nothing at all.

Using AI to output noise and learn nothing at breakneck speeds is worse than simply looking out the window, because you now have a false sense of security about your understanding of the material.

Relatedly, I think people get the sense that 'getting better at prompting' is purely a one-way issue of training the robot to give better outputs. But you are also training yourself to only ask the sorts of questions that it can answer well. Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!

notahacker|12 days ago

Yep. The other way it can have no net impact is if it saves thousands of hours of report drafting and reading but misses the one salient fact buried in the observations that could actually save the company money, whilst completely nailing the fluff.

birdsongs|12 days ago

> LLMs are notoriously bad at this. The noise-to-signal ratio is unacceptably high

I could go either way on the future of this, but if you take the argument that we're still early days, this may not hold. They're notoriously bad at this so far.

We could still be in the PC DOS 3.X era in this timeline. Wait until we hit the Windows 3.1, or 95 equivalent. Personally, I have seen shocking improvements in the past 3 months with the latest models.

crabmusket|12 days ago

It reminds me of that Apple ad where a guy just rocks up to a meeting completely unprepared and spits out an AI summary to all his coworkers. Great job Apple, thanks for proving Graeber right all along.

kykeonaut|12 days ago

> Those questions that it will no longer occur to you to ask (not just of the robot, but of yourself) might be the most pertinent ones!

That is true, but the same happened with Google. You can see why some people want to go back to the "read the book" era, when you couldn't query Google for everything and had to formulate the real questions yourself.

JimboOmega|11 days ago

One thing AI should eliminate is "proof of work" reports. Sometimes the long report is not meant to be read, but to serve as proof that somebody has thoroughly thought through various things (captured by, for instance, required sections).

When AI is doing the writing, it loses all value as proof of work (just as it does for a school report).

"My AI writes for your AI to read" is low value. But there is probably still some value in "My AI takes these notes and makes them into a concise readable doc".

micik|11 days ago

> Using AI to output noise and learn nothing at breakneck speeds is worse than simply looking out the window, because you now have a false sense of security about your understanding of the material.

with your permission, i may put this into my email signature; this is a whip-smart sentence.

and it is true. i used AI to "curate information" for me when i was heads-down deep in learning mode, about sound and music.

enough all-important info was being omitted that i soon realized i was developing a textbook case of superficial, incomplete knowledge.

i stopped using AI and did it all over again through books and learning by doing. in retrospect, i'm glad to have had that experience because it taught me something about knowledge and learning.

mostly, that it boils down to RTFM. a good manual or technical book written by an expert doesn't have a lot of fluff. what exactly are you expecting the AI to do? zip the rar file? it will do something, and it might look great, but lossless compression it will not be.

P.S. not a prompt skill issue. i was up to date on cutting-edge prompting techniques and using multiple frontier models. i was developing an app using local models and AI-powered audio analysis libraries. in other words, i was immersed up to my neck in AI.

after i grokked as much of the underlying tech as i could from reading the theory, given my limited math knowledge, i realized the "skill issue" invectives don't hold water. if things break exactly in the way their design says they should break, blaming the user is a little too much on the nose. even appealing to your impostor syndrome won't work.

P.P.S. it's interesting how a lot of the slogans of the AI party are weaponizing trauma triggers or appealing to character weaknesses.

"hop on the train, commit fully, or you'll be left behind" > fear of abandonment trigger

"pah, skill issue. my prompts on the other hand...i'm afraid i can't share them as this IP is making me millions of passive income as we speak (i know you won't probe further cause asking a person about their finances is impolite)" > imposter syndrome inducer par excellence, also FOMO -- thinking to yourself "how long can the gold rush last? this person is raking it in!! what am i doing? the miserable sod i am"

1. outlandish claims (Claude writes ALL the code) that no one can seem to reproduce, and indeed everyone non-affiliated is having a very different experience

2. some of the darkest patterns you've seen in marketing are the key tenets of the gospel

3. it's probably a duck.

i've been 100% clear on the grift since October '25. Steve Eisman of "The Big Short" was just hopping onto the hype train back then. i thought... oh. how much analysis does this guru of analysts really do? now Steve sings of AI panic and blood in the streets.

these things really make you think, about what an economy even is. it sure doesn't seem to have a lot to do with supply and demand, products and services, and all those archaisms.

SpaceNoodled|12 days ago

So what we now have is a very expensive and energy-intensive method for inflating data in a lossy manner. Incredible.

amoss|12 days ago

Remarkably it has only cost a few trillion dollars to get here!

mold_aid|12 days ago

So a circular economy in which you add mistakes.

forinti|11 days ago

For all the technology we develop, we rarely invest in processes. Once in a blue moon some country decides to revamp its bureaucracy, when it should really be a continuous effort (in the private sector too).

OTOH, what happens continuously is that technology is used to automate bureaucracy, and even allows it to grow more complex.

harshreality|12 days ago

An economy of the LLMs, by the LLMs, for the LLMs, shall not perish from the Earth.

salawat|11 days ago

Rather poignant, actually. By replacing people with LLMs, you've just made the economy as a whole something which can be owned.

OkayPhysicist|11 days ago

See, this is an opportunity. A company provides the AI tool and monitors for cases where AI output is being fed back in as AI input. In such cases, flag the entire process for elimination.
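A toy sketch of what that monitoring could look like (all names here are hypothetical, and exact-hash matching is a simplification: a real system would need fuzzy or semantic matching, since summaries paraphrase rather than copy verbatim):

```python
import hashlib

class AuditLog:
    """Tag every AI output; flag any process whose AI input
    matches a previously recorded AI output."""

    def __init__(self):
        self.output_fingerprints = {}  # fingerprint -> producing process
        self.flagged = set()           # (producer, consumer) pairs

    @staticmethod
    def _fingerprint(text):
        # Normalize whitespace and case, then hash. Only catches
        # near-verbatim reuse; paraphrases would slip through.
        normalized = " ".join(text.split()).lower()
        return hashlib.sha256(normalized.encode()).hexdigest()

    def record_output(self, process, text):
        self.output_fingerprints[self._fingerprint(text)] = process

    def check_input(self, process, text):
        producer = self.output_fingerprints.get(self._fingerprint(text))
        if producer is not None and producer != process:
            # One AI wrote it, another AI is reading it:
            # a candidate pipeline for elimination.
            self.flagged.add((producer, process))

log = AuditLog()
log.record_output("report-bot", "All metrics are up and to the right.")
log.check_input("summary-bot", "All metrics are up and to the right.")
print(log.flagged)  # {('report-bot', 'summary-bot')}
```

The interesting design question is the threshold: exact fingerprints only catch copy-paste loops, while embedding-based similarity could catch the summarize-then-expand round trips the thread describes.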