sspiff | 9 months ago
They continue to use AI for math (asking LLMs to split bills, for example) and treat its responses for factual data lookup as 100% reliable and correct.
osmsucks | 9 months ago
Ah, yes, high tech solutions for low tech problems. Let's use the word machine for this number problem!
sspiff | 9 months ago
I've tried to explain it in those terms as well: every medium-sized prompt on these large models consumes roughly one phone battery charge worth of energy. You have a phone with a calculator.
I'd ask them to do the math on how much energy they're wasting asking stupid things of these systems, but I'm too afraid they'd ask ChatGPT to do the math.
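For what it's worth, the arithmetic being outsourced here is a few lines of deterministic code. A minimal sketch (the `split_bill` helper and its cent-based rounding rule are my own illustration, not anything from the thread):

```python
# Split a bill deterministically instead of asking an LLM.
# Amounts are in integer cents to avoid floating-point drift;
# leftover cents go to the first payers (an arbitrary but fair-ish rule).

def split_bill(total_cents: int, people: int) -> list[int]:
    """Return each person's share in cents; shares always sum to the total."""
    base, extra = divmod(total_cents, people)
    return [base + 1 if i < extra else base for i in range(people)]

# Example: $100.01 among 3 people.
print(split_bill(10001, 3))  # [3334, 3334, 3333]
```

The point being that the answer is exact and reproducible, which is precisely what an LLM can't guarantee for arithmetic.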
thaumasiotes | 9 months ago
You know, that's a thought process that makes internal sense.
You have someone who's terrible at math. They want something else to do math for them.
Will they prefer to use a calculator, or a natural language interface?
How do you use a calculator without knowing what you're doing?
I don't do this but isn't it basically... fine? I assume all the major chatbots can do this correctly at this point.
The appeal here is that chatbots can do a wide range of tasks, so why context switch to a whole different app for something like this? I suspect you'll see this happening more frequently for other use cases as well.
Usability trumps all.
JeremyNT | 9 months ago
When it comes to facts that actually matter, people need to know to verify the output.