top | item 44142923

sspiff|9 months ago

I find this phenomenon really frustrating. I understand (or am at least aware of) the probabilistic nature of LLMs and their limitations, but when I point this out to my wife or friends when they are misusing LLMs for tasks they are both unsuited for and unreliable at, they wave their hands and dismiss my concerns as me being an AI cynic.

They continue to use AI for math (asking LLMs to split bills, for example) and treat its answers to factual lookups as 100% reliable and correct.


osmsucks|9 months ago

> They continue to use AI for math (asking LLMs to split bills, for example)

Ah, yes, high tech solutions for low tech problems. Let's use the word machine for this number problem!

sspiff|9 months ago

Many of them try to be mindful of their climate impact.

I've tried to explain it in those terms as well: every medium-sized prompt to these large models consumes roughly a phone battery's charge worth of energy. You have a phone with a calculator.

I'd ask them to do the math on how much energy they're wasting asking stupid things of these systems, but I'm too afraid they'd ask ChatGPT to do the math.

thaumasiotes|9 months ago

> Let's use the word machine for this number problem!

You know, that's a thought process that makes internal sense.

You have someone who's terrible at math. They want something else to do math for them.

Will they prefer to use a calculator, or a natural language interface?

How do you use a calculator without knowing what you're doing?

datavirtue|9 months ago

I'm so lazy, I have chat bots do all kinds of complex calculations for me. I even use it as a stock screener and the poor thing just suffers, burning fuck tons of electricity.

veunes|9 months ago

What's tricky is that for casual use, it gets things "close enough" often enough that people start building habits around it.

jatora|9 months ago

Using it for simple math is actually pretty hilarious. Hey, maybe they at least make sure to have it use Python! ...but I dream.
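For the record, the "have it use Python" path is just a tool call running a trivial script; a minimal sketch of what that script could look like (function name and amounts invented for illustration):

```python
# The kind of one-off script an LLM could be asked to generate and
# execute via a code tool, instead of doing the arithmetic "in its head".
def split_bill(total: float, people: int, tip_pct: float = 0.0) -> float:
    """Return each person's share, tip included, rounded to cents."""
    if people < 1:
        raise ValueError("need at least one person")
    return round(total * (1 + tip_pct) / people, 2)

print(split_bill(90.0, 3, tip_pct=0.10))  # a $90 bill, 10% tip, three people
```

Deterministic, auditable, and a few joules instead of a full model forward pass.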

BlueTemplar|9 months ago

Using LLMs (or platforms in general) is a bit like smoking (in closed spaces, with others present): a nuisance.

diggan|9 months ago

That's just plain wrong, and I'm a smoker. LLMs won't affect the people around you unless you engage with them in some way. Sit next to me while I smoke and you'll be affected by passive smoking regardless of how much you engage or not. Not really an accurate comparison :)

JeremyNT|9 months ago

> They continue to use AI for math (asking LLMs to split bills, for example) and treat its responses for factual data lookup as 100% reliable and correct.

I don't do this but isn't it basically... fine? I assume all the major chatbots can do this correctly at this point.

The trick here is that chatbots can do a wide range of tasks, so why context switch to a whole different app for something like this? I believe you'll find this happening more frequently for other use cases as well.

Usability trumps all.

JeremyNT|9 months ago

Wish I could edit, but I was referring to the bill splitting math specifically here. I didn't mean to quote the rest.

When it comes to facts that actually matter, people need to know to verify the output.