top | item 38528649


Widdershin | 2 years ago

Can you give any examples of times you’ve easily anticipated X when a whole field of subject matter experts have demonstrably overlooked it?

simiones|2 years ago

I don't really agree with the OP, but I do think there is at least one, possibly two, such examples. The clearest one is nutrition: the vast majority of studies and recommendations made over the years are pure bullshit, and quite transparently so. They either study a handful of people in detail or a huge swathe of the population in aggregate, and pick up so many confounding variables that there is zero explanatory power in any of them. This is quite obvious to anyone, yet the field keeps churning out papers and making official recommendations as if it knew anything more about nutrition than "missing certain key nutrients can cause certain diseases, like scurvy for missing vitamin C".

sam0x17|2 years ago

Nutrition in particular is a scenario where major corporations willfully hid research about sugar and the like for years and years, and instead funded research attacking fat content, which, it turns out, is actually pretty benign. Perfect example.

digging|2 years ago

Is that an example of "the experts didn't actually think of [simple explanation]" though?

anymouse123456|2 years ago

Can't speak for OP, but I've had more than a few similar experiences (from both sides of the fence FWIW).

I can think of one example in software deployment frequency. The observation (many years ago) was that it's painful and risky (therefore, expensive) to deploy software, so we should do it as infrequently as the market will allow.

Many companies used to be on annual release schedules, some even longer. Many organizations still resist deploying software more often than once every few weeks.

~15 years ago, I was working alongside the other (obviously ignorant) people who believed that when something is painful, slow and repetitive, it should be automated. We believed that software deployment should happen continuously as a total non-event.
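That "automate the painful, repetitive thing until deploys are a non-event" idea can be sketched in a few lines of shell. This is a hypothetical illustration, not anyone's real pipeline; the three step functions are placeholders for whatever test, build, and release commands a project actually uses.

```shell
#!/bin/sh
# Deployment as a non-event: every change that passes the tests ships
# automatically, with no human ceremony. All commands here are
# hypothetical stand-ins, not a real pipeline.
set -e                                   # abort the deploy on the first failure

run_tests() { echo "tests passed"; }     # stand-in for the project's test suite
build()     { echo "artifact built"; }   # stand-in for the build step
deploy()    { echo "deployed"; }         # stand-in for the release step

run_tests
build
deploy
```

The point is that once the whole sequence is a single script that either succeeds or halts loudly, running it ten times a day costs no more than running it once a quarter.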

I've had to debate this subject with "experts" over and over and over again, and I've never met a single person who, once migrated, wanted to go back to the nightmare of slow, periodic software deployments.

ndriscoll|2 years ago

I don't see why a slow deployment cadence is a nightmare. When I've worked in that setting, it mostly didn't matter to me when something got deployed. When it did (e.g. because something was broken), we had a process in place to deploy only high priority fixes between normal releases.

Computers mostly just continue to work when you don't change anything, so that meant after the first week or so after a release, the chance of getting paged dropped dramatically for 3 months.

rtsil|2 years ago

That's more "the experts had a (wrong) opinion on something" than "the experts overlooked something obvious". They didn't overlook it, they thought about it and came to a conclusion.

And if by "many years ago" you refer to a period where software deployment was mostly offline and through physical media, then it was indeed painful and risky (and therefore expensive). The experts weren't wrong back then.

dataflow|2 years ago

This isn't to agree with the parent comment, but wouldn't this situation itself be an answer to your question (assuming the claim is true)? Laymen like me easily anticipated mass divergence, but purportedly scientists have been surprised by it.

cwillu|2 years ago

The procedure of multiple weights being calibrated against a single standard is _predicated_ on anticipated mass divergence.

The mystery being discussed is that, even after the obvious sources of error are allowed for, there is still a discrepancy, and it's not easy to determine how much of that discrepancy is with the weights being recalibrated vs the test standard they're being calibrated to. None of which is shocking to anyone involved, just puzzling.

sumtechguy|2 years ago

I think the surprising bit is that they measured it, saw it happen, and do not have an exact reason for it. Anything else is a good guess, and of those, people have plenty.

monktastic1|2 years ago

This comment chain is getting circular. We can't use this as an example for itself by assuming that it is true.

sam0x17|2 years ago

* [insert every example of "15 year old unknown vulnerability in X found" here]

* have to be a bit vague here, but while working as a research scientist for the US Department of Defense I regularly witnessed and occasionally took part in scenarios where a radical idea turned "expert advice" on its head, or some applied thing completely contradicted established theoretical models in a novel or interesting way. Consistently, the barrier to such advancements was always "experts" telling you that your thing should not / could not work, blocking your efforts, withholding funding, etc., only to be proven wrong. Far too many experts care more about maintaining the status quo than actually advancing the field, and a concerning number are actually on the payroll of various corporations or private interests to actively prevent such advancements.

* over the last 30 years in the AI field, there have been a few major inflection points: Yann LeCun's convolutional neural networks, and his more general idea that ANNs stood to gain something by loosely mimicking the complexity of the human brain, for which he was originally ridiculed and largely ignored by the scientific community until convolution revolutionized computer vision; and the rise of large language models, which came out of natural language processing, a whole branch of AI research that had been disregarded for decades and was definitely not seen as something that might ever come close to AGI.

* going back further in history there are plenty of examples, like quantum mechanics turning the classical model on its head, Galileo, etc. The common theme is a bunch of conservative, self-described experts scoffing at something that ends up completely redefining their field and making them ultimately look pretty silly and petty. This happens so frequently in history that I think it should just be assumed at all times in all fields, as this dynamic is one of the few true constants throughout history. No one is an expert, no one has perfect knowledge of everything, and the next big advancement will be something that contradicts conventional wisdom.

Admittedly, I derived these beliefs from some of the Socratic teachings I received very early in life, around 6th grade or so back in the late 90s, but they have continually borne fruit for me. Question everything. When debugging, question your most basic assumptions first: "Is it plugged in?", etc.

It's sort of at a point these days where, if you want to find a fruitful research idea, it's probably best to just browse through the conventional wisdom on your topic and question the most fundamental assumptions until you find something fishy.

Widdershin|2 years ago

I missed this comment first time around, but I really appreciate this write-up.

I apologize for being a bit snide in my original challenge; I'm fairly sensitive to the "why don't you just" attitude. But I agree with pretty much everything you have to say here.

I have a very similar approach of enumerating and testing assumptions when the going gets tough, and I've similarly found that it has enabled me to solve a handful of problems previously claimed impossible.

I think the tautological issue with our initial framing is that if you're able to easily identify these problems, you probably are a subject matter expert. In many ways it's the outsider art of analytical problem solving: established wisdom should not be sacred.