(no title)
spease | 7 months ago
I wonder if our current research process is only considered the gold standard because doing things in a probabilistic way is the only way we've been able to manage the complexity of the human body to date.
It’s like me running an application many, many times with many different configurations and datasets, while scanning some memory addresses at runtime before and after the test runs, to figure out whether a specific bug exists in a specific feature.
Wouldn’t it be a lot easier if I could look at the relevant function in the source code, understand its implementation, and determine whether the bug was even logically possible?
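A toy sketch of what I mean, with a completely made-up "feature" function, contrasting probing from the outside with just reading the code:

    import random

    def feature(x):
        # Pretend we can't read this; we can only observe its behavior.
        return x * 3 if x == 37 else x * 2  # the "bug" only shows up for one input

    # Black-box, probabilistic approach: hammer it with random inputs and
    # infer statistically whether the bug exists.
    trials = [random.randint(0, 1000) for _ in range(10_000)]
    failures = sum(1 for x in trials if feature(x) != x * 2)
    print(f"observed failure rate: {failures / len(trials):.4%}")

    # "Source-level" approach: read the implementation above and see immediately
    # that the x == 37 branch makes the bug logically possible.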
We currently don’t have the ability to decompile the human body, or to understand the way it’s “implemented”, but tech is rapidly developing tools that could be used for such a thing: either a way to hold more aggregated information about the human body “in mind” than any one person could in a lifetime and reason about it, or a way to simulate it with enough granularity to be meaningful.
Alternatively, the double-blindedness of a study might not be as necessary if you can continually and objectively quantify how well the results agree with the hypothesis.
I.e. if your AI model is reporting low agreement while the researchers are reporting high agreement, that could be a signal that external investigation is warranted, or prompt the researchers to question their own biases where they previously would’ve succumbed to confirmation bias.
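Purely as a hypothetical sketch (the scores and the threshold are made up), I mean something like:

    # Hypothetical sketch: flag studies where an AI model's agreement score
    # diverges sharply from what the researchers themselves report.
    DIVERGENCE_THRESHOLD = 0.3  # made-up cutoff

    studies = [
        # (study id, researcher-reported agreement, model-reported agreement)
        ("trial-001", 0.90, 0.85),
        ("trial-002", 0.95, 0.40),  # big gap -> worth a second look
    ]

    for study_id, human_score, model_score in studies:
        if abs(human_score - model_score) > DIVERGENCE_THRESHOLD:
            print(f"{study_id}: flag for external review "
                  f"(researchers {human_score:.2f} vs model {model_score:.2f})")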
All of this is fuzzy anyway - we likely will not ever understand everything at 100% or have perfect outcomes, but if you can cut the overhead of each study down by an order of magnitude, you can run more studies to fine-tune the results.
Alternatively, you could have an AI passively re-running studies to verify reproducibility and flag cases where it fails, whereas right now the way the system values contributions makes it far less worthwhile for a human author to invest the time, effort, and money. I.e. recover from a bad study a lot more quickly, rather than improving the accuracy of each study.
EDIT: These are probably all ideas other people have had before, so sorry to anyone who reaches the end of my brainstorming and doesn’t come away with anything new. :)
mapt | 7 months ago
Do a detailed enough study of an entire population and you get very strong hypothesis testing for all sorts of diseases & treatments simultaneously. You don't have to spend tens of millions of dollars and multiple PhD generations running a blinded study to replicate a specific untested first-principles part of modern medicine's treatment for a rare disease; you get that shit for free and call it up in a SQL query.
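Totally made-up schema and database, but that's the kind of "study" I mean: one query over data that's already been collected.

    import sqlite3

    # Hypothetical population-scale health database and table layout.
    conn = sqlite3.connect("population_health.db")
    rows = conn.execute("""
        SELECT treatment,
               AVG(CASE WHEN outcome = 'recovered' THEN 1.0 ELSE 0.0 END) AS recovery_rate,
               COUNT(*) AS n
        FROM patient_records
        WHERE diagnosis = 'some_rare_disease'
        GROUP BY treatment
        ORDER BY recovery_rate DESC;
    """).fetchall()

    for treatment, recovery_rate, n in rows:
        print(f"{treatment}: {recovery_rate:.1%} recovery across {n} patients")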