rat9988 | 1 month ago
You'd need to prove that this assertion applies here. I understand that you can't deduce the future rate of gains from past performance, but you also can't state this as a universal truth.
bayindirh | 1 month ago
Knowledge engineering has a notion called "covered/invisible knowledge", which refers to the small things we do unknowingly that change the whole outcome. None of the models (or AI in general) can capture this. We could say it's the essence of being human, or the tribal knowledge that makes experienced workers who they are, or that makes mom's rice taste that good.
Considering these are highly individualized and unique behaviors, a model based on averaging everything can't capture this essence easily, if it ever can, without extensive fine-tuning for/with that particular person.
enraged_camel | 1 month ago
Self-driving cars don't use LLMs, so I don't know how any rational analysis can claim that the analogy is valid.
>> The saying I have quoted (which has different forms) is valid for programming, construction, and even cooking. So it's a simple, well-understood baseline.
Sure, but the question is not "how long does it take for LLMs to get to 100%?". The question is how long it takes for them to become as good as, or better than, humans. And that threshold comes well before 100%.
thfuran | 1 month ago
None of the current models, maybe, but why not AI in general? There's nothing magical about brains. In fact, they're pretty shit in many ways.
rybosworld | 1 month ago
I've heard this same claim repeated dozens of times, across different domains and industries.
It's really just a variation of the 80/20 rule.