This type of research requires experimentation (mostly failures) on extremely complex real-world equipment. The same goes for nuclear weapons. The idea that an AI could magically figure it all out without experimental grounding is pure fantasy, used by companies like OpenAI and Anthropic as a justification for monopolizing AI R&D. In a sense it's not surprising this idea comes from rationalism-adjacent folks, since rationalism largely rests on the notion that experimentation is irrelevant and you can infer anything using logic alone.
dennis_jeeves2|14 hours ago
Thanks for putting it the way you did. I didn't know it was meant to be that way, but it sort of confirms my suspicion that people who loosely use the terms 'rational' and 'logic' to dismiss opposing views rarely seek experimental results before forming a point of view.
KPGv2|14 hours ago
Yeah, IIRC Yudkowsky famously claimed that a superintelligence could correctly derive the theory of gravity from seeing only three frames of a video of an apple falling from a tree. It's the same Less Wrong nonsense, rejecting how vital and irreplaceable experimentation is.
There are infinitely many explanations consistent with an object's position at three equally time-spaced instants, to say nothing of the limitations of the measuring equipment itself.
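To make the underdetermination concrete, here's a minimal Python sketch (the numbers and function names are illustrative, not from the comment): a uniform-gravity trajectory and a deliberately different cubic "law" agree exactly at three equally spaced sample times, so the three observations alone can't pick between them.

```python
# Three equally time-spaced observations of a falling object (t in s, y in m),
# generated here from uniform gravity g = 9.8 m/s^2 purely for illustration.
def uniform_gravity(t, h=20.0, g=9.8):
    return h - 0.5 * g * t * t

def alternative_law(t, c=3.0):
    # Differs from uniform_gravity everywhere except t = 0, 1, 2,
    # because the added term t*(t-1)*(t-2) vanishes at those times.
    return uniform_gravity(t) + c * t * (t - 1) * (t - 2)

samples = [0.0, 1.0, 2.0]
# Both "laws" reproduce all three observations to machine precision...
assert all(abs(uniform_gravity(t) - alternative_law(t)) < 1e-12 for t in samples)
# ...yet diverge at any unobserved time:
print(uniform_gravity(1.5), alternative_law(1.5))  # 8.975 vs 7.85
```

And since the correction term can be scaled by any constant `c`, this already gives infinitely many models that fit the three data points perfectly; only further observations (experiments) can discriminate between them.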