adlpz | 2 months ago
BUT. For 99% of tasks, I'm totally certain there are people out there who are orders of magnitude better at them than me.
If the AI can regurgitate their thinking, my output is better.
Humans may need to think to advance the state of the art.
Humans may not need to think to just... do stuff.
latexr | 2 months ago
And the LLMs slurped up their output together with that of thousands of people who'd do the task worse, and you have no way of forcing it to give you the good version every time.
> If the AI can regurgitate their thinking, my output is better.
But it can’t. Not definitively and consistently, so that hypothetical is about as meaningful as “if I had a magic wand to end world hunger, I’d use it”.
> Humans may not need to think to just... do stuff.
If you don’t think to do regular things, you won’t be able to think to do advanced things. It’s akin to any muscle; you don’t use it, it atrophies.
acoard | 2 months ago
That's solvable though, whether through changing training data or RL.
adlpz | 2 months ago
Theoretically fixable, then.
> But it can’t. Not definitively and consistently
Again, it can't yet, but with better training data I don't see a fundamental impossibility here. The comparison with a magic wand is, in my opinion, disingenuous.
> If you don’t think to do regular things, you won’t be able to think to do advanced things
Humans already perform a myriad of critical jobs without much thinking. Once expertise in a particular task is achieved, it becomes mostly mechanical.
-
Again, I agree in essence with the original comment I was replying to. I do think AI will make us dumber overall, and I sort of wish it had never been invented.
But it was. And, being realistic, I will try to extract as much positive value from it as possible instead of discounting it wholly.
y0eswddl | 2 months ago
And if the average person is orders of magnitude better than you at thinking, you're right... you should let the AI do it lol
adlpz | 2 months ago
Ask the LLM to... I don't know, explain the chemistry of aluminium oxides to you.
Do you really think the average human will even get remotely close to the knowledge an LLM will return to such a simple question?
Ask an LLM to amend a commit. Ask it to initialize a rails project. Have it look at a piece of C code and figure out if there are any off-by-one errors.
Then try asking the same of a few random people on the street.
If you think the knowledge stored in an LLM's weights for any of these questions is that of the average person, I don't even know what to say. You must live in some secluded community of savant polymaths.
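For reference, the first two tasks really are near one-liners. A minimal sketch in a throwaway repo (assumes git is installed; the commit messages and the "myapp" name are made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q

# Create an initial commit, then amend its message in place
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty --amend -m "amended"

git log -1 --format=%s    # prints "amended"

# Initializing a Rails project is similar (needs the rails gem installed):
# rails new myapp
```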
djaouen | 2 months ago
God forbid we should ever have to think lol
toobulkeh | 2 months ago
Unfortunately that’s not where we’re headed.
adlpz | 2 months ago
With AI and robotics there may be a slim chance we get closer to that.
But we won't. Not because of AI, but because of humans, of course.