armoredkitten | 1 year ago
It is sad to see stereotype threat being one of those findings that seems less and less credible. I once worked as a research assistant on a project related to stereotype threat, and I recall the study going through several iterations because it all needed to be just so -- we were testing stereotypes related to women and math, but the effect was expected to be strongest for women who were actually good at math, so it had to be a test that would be difficult enough to challenge them, but not so challenging that we would end up with a floor effect where no one succeeds. In hindsight, it's so easy to see the rationale of "oh, well we didn't find an effect because the test wasn't hard enough, so let's throw it out and try again" being a tool for p-hacking, file drawer effects, etc. But at the time...it seemed completely normal. Because it was.
I'm no longer in the field, but it is genuinely heartening that the field is heading toward more rigour, more attempts to correct the statistical and methodological mistakes, rather than digging in one's heels and prioritizing theory over evidence. But it's a long road, especially when trying to go back and validate past findings in the literature.
simpaticoder | 1 year ago
While I'm sure it is an honest statement, this sentiment is itself concerning. Science is ideally done at a remove -- you cannot let yourself want any particular outcome. Desire for an outcome is the beginning of the path to academic dishonesty. The self-restraint required to accept an unwanted answer is perhaps THE most important selection criterion for minting new academics, apart from basic competency. (Academia also has a special, and difficult, responsibility to resist broader cultural trends that seep into a field demanding certain outcomes.)
disgruntledphd2 | 1 year ago
This basically never happens. I worked in academia for many years, and in psychology for some of that, and I have never met a disinterested scientist.
Like, you need to pick your topics, and the research designs within them, etc., and people don't pick things that they don't care about.
This is why (particularly in social/medical/people sciences) blinding is incredibly important to produce better results.
> The self-restraint required to accept an unwanted answer is perhaps THE most important selection criteria for minting new academics,
I agree with this, but the trouble is that this is not what is currently selected for.
I once replicated (four times!) a finding seriously contrary to accepted wisdom and I basically couldn't get it published honestly. I was told to pretend that I had looked for this effect on purpose, and provide some theory around why it could be true. I think that was the point where I realised academia wasn't for me.
Now, the same thing happens in the private sector, but ironically enough, it's much less common.
parpfish | 1 year ago