I was heavily encouraged to do what would later be called “p-hacking”, but it looked different from what they describe here. This article describes p-hacks for people who aren’t into math/stats; I always ended up p-hacking because I was into stats methods.
Somebody would say “here’s an old dataset that didn’t work out, I bet you can use one of those new stats methods you’re always reading about to find a cool effect!”, and then the fishing expedition takes off.
A couple weeks later you show off some cool effects that your new cutting-edge methods were able to extract from an old, useless dataset.
But instead of saying “that’s good pilot data, let’s see if it holds up with a new experiment”, you’re told “you can publish that! Keep this up and maybe you’ll be lucky enough to get a job someday!”
The practice you describe is called data dredging, though. The thing about it is that you do not know enough of the experimental design details to be sure it was all above board, and it only gets worse the older the dataset is.
Normally when doing that you need multiple-comparison corrections and conservative stats. That won't get you published though, or if you do get published you won't get noticed, except by someone running a meta-analysis. Perhaps not even then.
Usually you end up with negative results from reanalysis, evidence of tampering, or small effect sizes.
And this does not reliably detect dataset manipulation, p-hacking on the part of the experimenters, or accidental violations of the protocol, not even necessarily if the data collection included measures to prevent them.
In short: you cannot 100% trust any dataset you did not make. Not even as part of the team that made it.
I got my undergrad in physics and data hacking was discussed at length in every lab class. I don't know if this is a common experience but it was really one of the most beneficial lessons.
In the beginning it always felt obvious what was or wasn't hacking, but towards the end it felt really hard to distinguish. I think that was the point. It created a lot of self-doubt, which led to high levels of scrutiny.
Later I worked as an engineer and saw frequent examples of the errors you describe. One time another engineer asked if we could extrapolate data in a certain way; I said no, and that doing so would likely lead to catastrophic failure. The lead engineer said I was being a perfectionist. Well, the rocket engine exploded during the second test fire, costing the company millions and years of work. The perfectionist label never went away despite several instances like this (not at that scale). Any extra time and money spent satisfying my "perfectionism" was greatly offset by the preventable failures.
Later I went to grad school for CS and it doesn't feel much different. Academia, big tech, small tech, whatever. People think you plug data into algorithms and the result you get is all there is. But honestly, that's where the real work starts.
Algorithms aren't oracles, and you need to study them deeply to understand their limits and flaws. If you don't, you get burned. Worse, the flame is often invisible. A lot of time and money is wasted fighting those fires, and people frequently believe the only flames that exist are the obvious, highly visible ones.
As long as there is transparency about the process, I think this sort of thing is basically fine. It's roughly at the level of observational science rather than experimental science, and it can help lead to new research to validate the effect discovered.
Where this gets dangerous is when it is taken at face value, either in scientific circles or, more commonly, journalistic circles.
> Stopping an experiment once you find a significant effect but before you reach your predetermined sample size is classic P hacking.
Although much of the article is basic common sense, and although I'm not a statistician, I had to seriously question the author's understanding of statistics at this point. The predetermined sample size (from the statistical power calculation) is usually based on an assumption about the effect size; if the effect size turns out to be much larger than you assumed, then a smaller sample size can be statistically sound.
Clinical trials very frequently do exactly this -- stop before they reach a predetermined sample size -- by design, once certain pre-defined thresholds have been passed. Other than not having to spend extra time and effort, the reasons are at least twofold: first, significant early evidence of futility means you no longer have to waste patients' time; second, early evidence of utility means you can move an effective treatment into practice that much sooner.
A classic example of this was with clinical trials evaluating the effect of circumcision on susceptibility to HIV infection; two separate trials were stopped early when interim analyses showed massive benefits of circumcision [0, 1].
In experimental studies, early evidence of efficacy doesn't mean you stop there, report your results, and go home; the typical approach, if the experiment is adequately powered, is to repeat it (three independent replicates is the informal gold standard).
[0]: https://pubmed.ncbi.nlm.nih.gov/17321310/
[1]: https://pubmed.ncbi.nlm.nih.gov/16231970/
There are of course statistical methods designed to support early stopping. But I don’t think you can just run an ordinary significance test every day and stop as soon as p < 0.05. That’s something else.
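To illustrate that point, here is a rough simulation (not from the thread; the peek schedule and sample sizes are arbitrary assumptions): even with no real effect, checking an ordinary t-test as data accumulate and stopping at the first p < 0.05 inflates the false-positive rate well beyond 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def peeking_false_positive_rate(n_sims=2000, max_n=500, peek_every=10):
    """Fraction of null datasets declared 'significant' when peeking repeatedly."""
    hits = 0
    for _ in range(n_sims):
        data = rng.standard_normal(max_n)  # the null is true: the mean really is 0
        for n in range(peek_every, max_n + 1, peek_every):
            if stats.ttest_1samp(data[:n], 0.0).pvalue < 0.05:
                hits += 1                  # stop at the first "significant" peek
                break
    return hits / n_sims

print(peeking_false_positive_rate())  # a single fixed-n test would give ~0.05;
                                      # peeking like this typically gives ~0.3
```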
> I had to seriously question the author's understanding of statistics at this point.
I think you may want to start the questioning closer to home.
Early stopping is fine as long as the test has been designed with the possibility of early stopping in mind, and this possibility has been factored into the p-value calculation.
In lots of human studies, you can’t just stop at an arbitrary number of participants because you’ve counterbalanced manipulations to decorrelate potential confounders (e.g., which color stimulus is paired with reward, the order of trials).
The distinction is between ‘data peeking’, i.e. repeatedly checking the p-value you've obtained and stopping if it falls below 0.05, and repeating assays in the light of new information. Such new information can relate to the distribution of the values, the expected effect size, or any other parameter that you did not know at the outset of the study.
In ‘data peeking’, the flaw is that if an assay is repeated often enough, one will eventually get a result that deviates far from the mean result. This is a natural consequence of sampling variability: not all results will be identical. It's the equivalent of getting six heads or tails in a row (which should happen at least once if you flip a coin 200 times) and then reporting your coin as biased.
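A quick sanity check of that coin-flip figure (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def has_run_of_six(flips):
    """True if six identical outcomes appear in a row."""
    run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        if run >= 6:
            return True
    return False

trials = 10_000
hits = sum(has_run_of_six(rng.integers(0, 2, size=200)) for _ in range(trials))
print(hits / trials)  # ~0.95: a run of six heads or tails in 200 flips is near-certain
```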
Repeating an assay because the distribution of the data is not what you thought, or because the likely difference between means is smaller than you thought, is a valid approach.
Source: Big little lies: a compendium and simulation of p-hacking strategies, by Angelika M. Stefan and Felix D. Schönbrodt: https://royalsocietypublishing.org/doi/10.1098/rsos.220346
Sounds like a variable-cost experiment. Each observation costs $x, like an A/B split on Google Ads. Why keep paying for A when you already know B is better?
> "It is difficult to get a researcher to stop P hacking, when his career depends on his not stopping P hacking."
It is an old saying, and I’m not sure there’s much use to it, as it feels like a mitigation.
No doubt the system needs to change, but lots of careers benefit from cheating or unethical behavior. It doesn’t rationalize it or force a choice on anyone.
The Bonferroni correction part of this article is the most important. The number of papers that don't account for it is shocking. Comparing 20 variables at a 0.05 significance level without correction is extremely annoying, as you end up having to redo the analysis on the paper's data yourself to check whether the results are still significant.
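For concreteness: with 20 independent tests at alpha = 0.05 and every null true, the chance of at least one false positive is 1 - 0.95^20 ≈ 0.64. A minimal sketch of the correction using statsmodels (the p-values below are invented for illustration):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Chance of at least one false positive across 20 independent tests at alpha = 0.05
print(1 - 0.95 ** 20)  # ~0.64

# Invented p-values standing in for a paper that compares 20 variables
pvals = np.array([0.003, 0.012, 0.04, 0.21, 0.33] + [0.5] * 15)

reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
print(p_adjusted[:3])  # raw p-values multiplied by 20 (capped at 1.0)
print(reject.sum())    # only p < 0.05 / 20 = 0.0025 would survive; here nothing does
```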
If the conclusion is "be transparent", I'm strongly supportive.
And moreover, I would be even more supportive if we found a way to change the incentives for tenure and promotion such that reproducibility was an important factor in how we make decisions about grants, tenure, and promotion.
It might be below the fold, but it looks like they're missing the most important p-hacking strategy of all: the dogshit null hypothesis. It's very reliable and it's the most common type of p-hacking that I see.
It's easy to create a dogshit null hypothesis by negligence or by "negligence", and it's easy to reject a dogshit null hypothesis by simply collecting enough data, as it automatically crumbles on contact with the real world -- that's what makes it dogshit. One might hope that this would be caught by peer review (insist on controls!), but I see enough dogshit null hypotheses roaming around the literature that these hopes are about as realistic as fairy dust. In practice, the dogshit null hypothesis reigns supreme, or more precisely it quietly scoots out of the way so that its partner in crime, the dogshit alternative hypothesis, can have an unwarranted moment in the spotlight.
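One common flavor of this, as a rough illustration (the effect size and sample sizes below are arbitrary assumptions): a point null of "exactly zero difference" when the real question is whether the effect is big enough to matter. Collect enough data and a practically meaningless effect will reject it every time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect = 0.02  # a practically meaningless difference, in SD units

for n in (100, 10_000, 1_000_000):
    a = rng.standard_normal(n)
    b = rng.standard_normal(n) + true_effect
    print(f"n={n:>9,}  p={stats.ttest_ind(a, b).pvalue:.3g}")

# With enough data the "exactly zero" null always falls, even for a negligible effect.
# A more honest null tests against a minimum effect of interest (e.g. an equivalence /
# TOST-style test) rather than against a strawman of literally zero.
```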
> Running experiments until you get a hit
That's literally what we software optimization engineers do. We keep writing optimizations until we find one that is a statistically significant speed-up.
Hence we are running experiments until we get a hit.
The only defense I know against this is to have a good perf CI. If your patch seemed like a speed-up before committing but perf CI doesn't see the speed-up, then you just p-hacked yourself. But even that isn't foolproof.
You just have to accept that statistics lie and that you will fool yourself. Prepare accordingly.
> That's literally what we software optimization engineers do. We keep writing optimizations until we find one that is a statistically significant speed-up.
I don't think that is what it is saying. It is saying you would write one particular optimization (your hypothesis), and then you would run the experiment (measuring speed-up) multiple times until you see a good number.
It's fine to keep trying more optimizations and use the ones that have a genuine speedup.
Of course the real world is a lot more nuanced -- often, measuring the performance speed-up involves a hypothesis as well ("Does this change to the allocator improve network packet transmission performance?"). You might find that it does not, but then run the same change on disk IO tests to see if it helps that case. That is presumably okay too, if you're careful.
These seem like two different things. Testing many different optimizations is not the same experiment; it's many different experiments. The SE equivalent of the practice being described would be repeatedly benchmarking code without making any changes and reporting results only from the favorable runs.
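A minimal sketch of that failure mode (everything here is synthetic; benchmark noise is modeled as Gaussian jitter around one true mean): benchmark the identical binary twice, cherry-pick the single fastest "patched" run, and it looks like a speedup.

```python
import numpy as np

rng = np.random.default_rng(3)

def fake_timings(n_runs, mean_ms=100.0, jitter_ms=5.0):
    """Timings for *identical* code: pure measurement noise around one true mean."""
    return rng.normal(mean_ms, jitter_ms, size=n_runs)

baseline = fake_timings(20)
patched = fake_timings(20)  # no code change at all

best = patched.min()
print(f"baseline mean:    {baseline.mean():.1f} ms")
print(f"best patched run: {best:.1f} ms  "
      f"('{(1 - best / baseline.mean()) * 100:.0f}% faster!')")
print(f"patched mean:     {patched.mean():.1f} ms  (no real difference)")
# Reporting only the favorable run manufactures a speedup out of noise;
# comparing the full distributions (same statistic for both) does not.
```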
There's another cheeky example of this where you select a pseudo-random seed that makes your result significant. I have a personal seed that I use in every piece of research involving random number generation. It keeps me honest!
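The dishonest counterpart of a fixed personal seed is seed shopping. A deliberately contrived sketch (pure noise, hypothetical setup): scan seeds until the random split happens to produce p < 0.05.

```python
import numpy as np
from scipy import stats

X = np.random.default_rng(0).standard_normal(200)  # outcome with no real signal

def split_and_test(seed):
    """Randomly split the same data into two 'groups' and test for a difference."""
    idx = np.random.default_rng(seed).permutation(len(X))
    return stats.ttest_ind(X[idx[:100]], X[idx[100:]]).pvalue

for seed in range(500):
    p = split_and_test(seed)
    if p < 0.05:  # roughly 1 seed in 20 will "work"; report that one?
        print(f"seed {seed}: p = {p:.3f}")
        break
```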
What they’re referring to might be better put as applying a patch once and then running the benchmark 500 times until you get a result that’s better than baseline for some reason, which is understandably a bit more loony.
The article cuts off for me so I do not know if they talk about this, but preregistration has to be part of the conversation moving forward.
And it has to have teeth -- withdrawn studies have to have a reputational risk that affects the credibility of future studies, even if it means publishing a retrospective or a null result in a minor journal.
> But if I'm running an experiment, how do I know how many times to run it?
Before you start your experiment, you calculate how many samples you need based on the estimated effect size you're looking for and how small you want your confidence interval to be.
Small effect with high confidence => more samples.
Big effect with low confidence => fewer samples.
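A minimal sketch of that up-front calculation with statsmodels (the effect sizes, alpha, and power below are placeholder choices); it also shows why a larger-than-assumed true effect can justify a smaller sample:

```python
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

# Samples per group for a two-sample t-test at alpha = 0.05 and 80% power
for d in (0.2, 0.5, 0.8):  # assumed standardized effect sizes (Cohen's d)
    n = power_analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"effect size {d}: ~{n:.0f} per group")
# Roughly 394, 64, and 26 per group: the larger the true effect,
# the fewer samples are needed for the same power.
```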
In the physical sciences you can often estimate the noise level in a null measurement -- or even measure it. You often do this just to get your setup working before doing something like wasting a precious specimen on a "this time for real" measurement.
Won’t these just make it less likely that you can publish your work, and end up damaging your career in the short term? As opposed to getting published, having a career, with a long tail risk of being found out later?
And you could mitigate that risk by publishing research that doesn’t really matter, so no one ever checks.
> The problem with p-hacking is not the "hacking," it’s the "p." Or, more precisely, the problem is null hypothesis significance testing, the practice of finding data which reject straw-man hypothesis B, and taking this as evidence in support of preferred model A.
https://statmodeling.stat.columbia.edu/2021/09/30/the-proble...
See also this post from 2014 with a discussion of confirmationist and falsificationist approaches to reasoning in science: https://statmodeling.stat.columbia.edu/2014/09/05/confirmati...
> I understand falsificationism to be that you take the hypothesis you love, try to understand its implications as deeply as possible, and use these implications to test your model, to make falsifiable predictions. The key is that you’re setting up your own favorite model to be falsified.
> In contrast, the standard research paradigm in social psychology (and elsewhere) seems to be that the researcher has a favorite hypothesis A. But, rather than trying to set up hypothesis A for falsification, the researcher picks a null hypothesis B to falsify and thus represent as evidence in favor of A.
> As I said above, this has little to do with p-values or Bayes; rather, it’s about the attitude of trying to falsify the null hypothesis B rather than trying to falsify the researcher’s hypothesis A.
> Take Daryl Bem, for example. His hypothesis A is that ESP exists. But does he try to make falsifiable predictions, predictions for which, if they happen, his hypothesis A is falsified? No, he gathers data in order to falsify hypothesis B, which is someone else’s hypothesis. To me, a research program is confirmationalist, not falsificationist, if the researchers are never trying to set up their own hypotheses for falsification.
> That might be ok—maybe a confirmationalist approach is fine, I’m sure that lots of important things have been learned in this way. But I think we should label it for what it is.
See also Andrew Gelman and Eric Loken's 2014 "garden of forking paths" paper: https://sites.stat.columbia.edu/gelman/research/unpublished/...
Reading this article causes secondhand embarrassment, tbh. Ostensibly it’s targeting professional scientists under the brand of a prestigious journal, yet it has the vibe of explaining ethics and common sense to school kids. We’ve come to the point of having to explain to PhDs why cherry-picking data is bad.
I’m not criticizing the article, rather bemoaning the fact that it’s needed. Of course the problem is not just with the much-maligned social sciences; it’s physics and computer science too. The controversy around Microsoft’s topological qubits, a super complex topic, in part involved the most basic kind of this nonsense, something like including only 4 of the 20 measured samples in the paper, IIRC.
The community needs to get its shit together. The world we’re living in now, the post truth era, is the result of many factors but this is one of them. The loss of faith in science is partially a self-inflicted wound.
There are many more or less obvious ways that people do p-hacking without even realising it.
A classic one is looking at e.g. an EEG topographic plot, noticing which areas or channels within an area seem more promising, and running stats and follow-ups on these. There are of course degrees of this: people may have preregistered which area (let's say prefrontal cortex, for example) but left open which channels (because it is a bit hard to make guesses that exact anyway). There are methods to deal with this (e.g. cluster permutation analysis), but often people seem to think that they have to choose between averaging across too many channels, thus risking smoothing out and diluting an existing effect, or cherry-picking channels based on visual inspection of the data, which means artificially inflating an existing effect or even creating an artifactual one. Because people do not actually run a test to pick the channels, they just visually inspect the data, they do not realise this is p-hacking. The problem is that determining the researcher's degrees of freedom is not an easy task, and not one that can just be formalised in a p-adjustment technique.
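A toy version of that channel-selection problem (all numbers invented: 24 subjects, 64 channels of pure noise). Picking the channel that "looks best" gives a flattering nominal t-value; a max-statistic permutation test, a simpler cousin of cluster permutation, charges you for the implicit search over channels.

```python
import numpy as np

rng = np.random.default_rng(4)
n_subj, n_chan = 24, 64
data = rng.standard_normal((n_subj, n_chan))  # pure noise: no real effect anywhere

def tvals(x):
    """One-sample t-values against zero, one per channel."""
    return x.mean(axis=0) / (x.std(axis=0, ddof=1) / np.sqrt(x.shape[0]))

observed = tvals(data)
best = int(np.abs(observed).argmax())
print(f"best-looking channel: {best}, t = {observed[best]:.2f}")

# Max-statistic permutation: sign-flip subjects and track the largest |t| anywhere.
null_max = np.array([
    np.abs(tvals(data * rng.choice([-1, 1], size=(n_subj, 1)))).max()
    for _ in range(2000)
])
p_corrected = (null_max >= abs(observed[best])).mean()
print(f"permutation-corrected p for that channel: {p_corrected:.2f}")
# The corrected p accounts for having searched 64 channels; the nominal p for the
# same t-value, treated as if the channel had been chosen in advance, would look
# far more impressive.
```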
There is a huge spectrum of practices around these degrees of freedom, which may arise during any stage of data processing, ranging from obviously to subtly sketchy and problematic. And believe me, people who do this often think that they are the ones with good practices and that it's others who p-hack.
Imo the main way to avoid this issue is to be transparent about all the decisions one makes, even if this can reduce the faith in one's results (which should actually be the point, if that's the case!). A lot of the time shit happens, and it is hard to predict everything in advance in a preregistration. If the incentive were just to play it safe, then not much innovation and method experimentation would occur. It is easy to talk about preregistration as a panacea in fields with long-established practices, but much harder when the state of the art wrt both methods and theory may change wildly even within the two years it may take to run a study.
I believe we need better frameworks for rigorous exploratory research. The only paper I have seen that actually takes this idea seriously is this one [0], but I believe a lot of research would more honestly fit in such a framework, and not everything should be conceptualised within a hypothesis-testing framework.
Method-wise, closed testing procedures also seem very interesting for such research (and can work both inferentially and for extracting hypotheses for further testing), such as [1].
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7098547/
[1] https://openpharma.github.io/CTP/articles/closed_testing_pro...
neilv|9 months ago
You never count your results, when you're sitting at the lab bench, there will be time enough for counting, when the experiments are done.
boulos|9 months ago
(And TIL, this wasn't original to Kenny Rogers!)
hiddencost|9 months ago
The author is absolutely correct: early stopping on the fly is a classic form of p-hacking.
If you want to be rigorous, you can define criteria for early stopping such that it isn't, but you then require relatively stronger evidence at each interim look.
Clinical trials that stop early do so typically at predefined times with higher significance thresholds.
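A rough simulation of that idea (the look schedule and thresholds are illustrative; real trials use formally derived boundaries such as Pocock or O'Brien-Fleming): with five planned looks, testing at a stricter per-look level of about 0.016, the usual Pocock figure for five looks at an overall alpha of 0.05, keeps the overall false-positive rate near 5%, whereas testing at 0.05 at every look does not.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def sequential_false_positive_rate(per_look_alpha, looks=(20, 40, 60, 80, 100),
                                   n_sims=5000):
    """Rate of 'significant' trials when the null is true and we may stop early."""
    hits = 0
    for _ in range(n_sims):
        data = rng.standard_normal(max(looks))  # the treatment truly does nothing
        if any(stats.ttest_1samp(data[:n], 0.0).pvalue < per_look_alpha
               for n in looks):
            hits += 1
    return hits / n_sims

print(sequential_false_positive_rate(0.05))   # naive peeking: roughly 0.14
print(sequential_false_positive_rate(0.016))  # Pocock-style level: close to 0.05
```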
bjornsing|9 months ago
It’s not a knowledge problem. It’s a values and incentives problem.
analog31|9 months ago
Disclosure: I left academia before I had to worry about any of this.
aw1621107|9 months ago
Would you mind giving an example or two of a "dogshit" null hypothesis and how it differs from a "good" one?
notpushkin|9 months ago
Huh. I’m not on a university connection or anything. Is it just open access?