Unlisted6446's comments

Unlisted6446 | 3 months ago | on: Autism's confusing cousins

I think I understand what you mean.

You're saying that, relative to the 'typical individual', autistic brains weight sensory input more heavily than their internal model, and that schizotypal brains, relative to the 'typical individual', weight the internal model more heavily than sensory input, right?

I don't know much about this area, so I can't comment on the correctness. However, I think we should be cautious with 'over-weigh' and 'under-weigh', because there really is a normative undertone to 'over-weigh'. It needlessly elevates what the typical individual experiences into what we should consider the norm and, by implicit extension, the 'correct way' of doing cognition.

I don't say this to undermine the challenges faced by people with autism or schizotypy. However, I think it's also fair to say that if we consider what the 'typical' person really is and how the 'typical' person really acts, they frequently do a lot of illogical and --- simply put --- 'crazy' things.

Unlisted6446 | 1 year ago | on: Are SSDs more reliable than hard drives? (2021)

I don't understand why they aren't using multiple linear regression to control for how old the SSDs and HDDs are, or something like survival analysis. I thought this was a largely solved problem...
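To sketch what I mean by survival analysis: a Kaplan-Meier curve compares failure over drive age while correctly handling drives that are still running (censored observations). All numbers below are made up for illustration, not from the article:

```python
# Minimal Kaplan-Meier estimator. Survival curves let you compare SSD
# vs HDD failure as a function of drive age while handling drives that
# haven't failed yet (censored). Data here is entirely invented.

def kaplan_meier(times, events):
    """times: age in days at failure or at last observation;
    events: 1 if the drive failed then, 0 if still running (censored).
    Returns a step curve as [(time, survival_probability)]."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t, e in sorted(zip(times, events)):
        if e:  # a failure at time t: survival drops
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # failed or censored drives leave the risk set
    return curve

# Hypothetical fleet: (age_days, failed?) per drive.
ssd = kaplan_meier([800, 1200, 1500, 2000, 2100], [1, 0, 1, 0, 0])
hdd = kaplan_meier([400, 700, 900, 1600, 1800], [1, 1, 0, 1, 0])
print("SSD survival curve:", ssd)
print("HDD survival curve:", hdd)
```

In practice you'd use a library (e.g. lifelines) and a log-rank test or Cox regression rather than eyeballing two curves, but even this handles the "our SSDs are younger than our HDDs" problem that a raw failure-rate comparison ignores.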

Unlisted6446 | 1 year ago | on: A statistical approach to model evaluations

Well, I think it's usually more complicated than that. To over-simplify: there's no free lunch.

If you use a robust sandwich estimator, you're robust against non-normality and the like, but you lower the efficiency of your estimator.
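To make that tradeoff concrete, here's a toy comparison of the classical vs the HC0 sandwich standard error for a simple regression slope, with invented data:

```python
import math

# Slope standard error for y = a + b*x computed two ways: the classical
# formula (assumes constant error variance) and the HC0 sandwich
# (robust to heteroskedasticity, but noisier in small samples).
# Data is invented for illustration.

def slope_ses(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    # Classical: one pooled residual variance spread over Sxx.
    se_classical = math.sqrt(sum(e * e for e in resid) / (n - 2) / sxx)
    # HC0 sandwich: each squared residual keeps its own x-weight.
    se_hc0 = math.sqrt(sum(((xi - xbar) ** 2) * e * e
                           for xi, e in zip(x, resid)) / sxx ** 2)
    return b, se_classical, se_hc0

x = [1, 2, 3, 4, 5, 6]
y = [1.1, 2.3, 2.8, 4.5, 4.2, 6.6]
b, se_c, se_r = slope_ses(x, y)
print(f"slope={b:.3f}  classical SE={se_c:.3f}  HC0 SE={se_r:.3f}")
```

(In real work you'd just ask statsmodels for `cov_type='HC0'` rather than hand-roll this; the point is only that the two estimators answer the same question under different assumptions.)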

If you use Bayes, the con is that you have a prior, and the pro is that you have a prior plus a lot of other things.

And strictly speaking, these are benefits on paper, based on theory. In practice, of course, the drawback to using a new advanced technique is that there may be bugs lurking in the software implementation that invalidate your results.

We also generally forget to account for the psychology of the analyst: their biases, what they're willing to double-check, and what they're willing to take for granted. There's also the issue of Bayesian methods being somewhat confirmatory, to the point that the psychological experience of doing Bayesian statistics makes one so concerned with the data-generating process and the statistical model that one might forget to 'really check their data'.

Unlisted6446 | 1 year ago | on: You must read at least one book to ride

Well, my understanding is that we have not yet found any clear scientific method that is consistently 'the one' to choose. There are a few criteria that generally stand out, but a general method--no. And if there's no general method, then how can there be a general epistemology?

I mean, psychology isn't actually paradigmatic yet, is it? I don't think there actually is a general method throughout the field beyond surveys and null hypothesis significance testing--but those are too broad to be particularly symbolic of psychology imo.

In that sense, I'm not sure what value the list of perspectives you provided has with respect to what scientists actually do in practice and what kind of practice is successful.

Unlisted6446 | 1 year ago | on: You must read at least one book to ride

Well that's a thorny question, now isn't it? I mean, if it was so clear what 'epistemologies' exist in any field, then there would be little need or interest in the study of philosophy and history of science, no? If it was clear, then I think one would simply state what the epistemology of the field is.

That philosophy and history of science are so successful seems to suggest that the way of the scientist is both multifarious and difficult to pin down. I'm skeptical about using either the conscious report of the practitioner of psychology or the labels we may ascribe to their behaviors to triangulate on what their epistemology could be.

Unlisted6446 | 1 year ago | on: You must read at least one book to ride

Well, there are a couple of things going on, right? One is whether it's a bad idea to judge an entire mass of literature because of its epistemology, and the second is whether the OP's claim that psychology has a horrendous epistemology is valid.

I'd say that judging an entire mass of literature because of its epistemology makes logical sense. However, in practice, it's not possible to make a judgment as to 'what the epistemology of an entire field is'. What would that even mean? Does OP think that every psychologist has an analogous enough epistemology that anyone can claim what the field's epistemology is? I think not.

Unlisted6446 | 1 year ago | on: A statistical approach to model evaluations

All things considered, although I'm in favor of Anthropic's suggestions, I'm surprised that they're not recommending more (nominally) advanced statistical methods. I wonder if this is because more advanced methods don't have any benefits or if they don't want to overwhelm the ML community.

For one, they could consider using equivalence testing for comparing models, instead of significance testing. I'd be surprised if their significance tests were not significant given 10,000 eval questions, and I don't see why they couldn't ask the competing models 10,000 eval questions.
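Rough sketch of what I mean, with invented accuracies and a margin I picked arbitrarily. Equivalence testing (TOST) flips the question from "are the models different?" to "is the difference small enough to call them practically the same?":

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Two one-sided tests (TOST) for the difference between two model
# accuracies, using a normal approximation for proportions. The
# equivalence margin (+/- 2 points here) is a judgment call, and all
# the numbers below are invented for illustration.

def tost_two_proportions(k1, n1, k2, n2, margin=0.02):
    p1, p2 = k1 / n1, k2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    # Test 1, H0: diff <= -margin; Test 2, H0: diff >= +margin.
    p_lower = 1 - norm_cdf((diff + margin) / se)
    p_upper = norm_cdf((diff - margin) / se)
    # Equivalence is claimed only if BOTH one-sided tests reject.
    return max(p_lower, p_upper)

# 10,000 eval questions per model, accuracies 84.1% vs 83.7%.
p = tost_two_proportions(8410, 10000, 8370, 10000)
print(f"TOST p-value: {p:.4f}")
```

A small p here supports "these two models are within 2 points of each other", which is often the claim people actually want to make from an eval.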

My intuition is that multilevel modelling could help with the clustered standard errors, but I'll assume that they know what they're doing.
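For intuition on why the clustering matters at all, the usual design-effect formula shows how intra-cluster correlation shrinks the effective sample size (numbers invented, not from the post):

```python
# With intra-cluster correlation rho and cluster size m, the variance
# of a mean inflates by the design effect DEFF = 1 + (m - 1) * rho,
# so the effective sample size is n / DEFF. Ignoring clustering is
# equivalent to pretending DEFF = 1. Illustrative numbers only.

def design_effect(m, rho):
    return 1 + (m - 1) * rho

n = 10_000   # total eval questions
m = 50       # questions per cluster (e.g., drawn from one document)
for rho in (0.0, 0.05, 0.2):
    deff = design_effect(m, rho)
    print(f"rho={rho:.2f}  DEFF={deff:.2f}  effective n={n / deff:.0f}")
```

Even a mild rho of 0.05 with 50-question clusters cuts 10,000 questions down to roughly 2,900 effective ones, which is exactly the kind of thing a multilevel model (or cluster-robust standard errors) accounts for.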

Unlisted6446 | 1 year ago | on: The Median Researcher Problem

> The idea that median researchers are not intelligent enough to understand p-hacking is just absurd: it is not a sophisticated topic. I imagine the median researcher in fact has a robust and cynical understanding of p-hacking because they can do it to their own data. Such a researcher may be cowardly and dishonest, but their intelligence is not the problem. This is the crux of my disagreement with the post: the replication crisis is a social problem, not a cognitive problem.

That doesn't seem true: see Figure 1 of https://www.sciencedirect.com/science/article/pii/S105353570... and the original results associated with the Linda problem.

Statistics is difficult and unintuitive.

Unlisted6446 | 1 year ago | on: The Toxic Consequences of Attending a High Achieving School

Something feels off about this. I mean, it can go both ways, no? Perhaps pressure from attending a HAS might push one towards substance abuse and more. But couldn't pressure from attending an elite institution and being an elite also push one away from activities like substance abuse?

If we assume that the type of school affects lifelong outcomes, then we should also control for something like the parents' latent neuroticism, which would affect both what school their child goes to and (I presume) the lifelong probability of engaging in substance abuse as a coping mechanism.
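To make the confounding worry concrete, a toy simulation (all parameters invented): a latent parent trait drives both school choice and the outcome, so the raw comparison shows a gap even though school has no causal effect at all here:

```python
import random

# Toy confounding demo: parent "neuroticism" raises both the chance of
# attending a high-achieving school (HAS) and, independently, the
# chance of later substance abuse. School has NO causal effect in this
# simulation, yet the naive group comparison shows a difference.
# All parameters are invented.

random.seed(0)
rows = []
for _ in range(100_000):
    neuro = random.random()                      # latent trait in [0, 1]
    has = random.random() < 0.2 + 0.6 * neuro    # trait -> school choice
    abuse = random.random() < 0.1 + 0.3 * neuro  # trait -> outcome only
    rows.append((has, abuse))

def rate(group):
    flags = [a for h, a in rows if h == group]
    return sum(flags) / len(flags)

print(f"abuse rate | HAS:     {rate(True):.3f}")
print(f"abuse rate | non-HAS: {rate(False):.3f}")
```

The HAS group comes out with the higher abuse rate purely because it's enriched for high-neuroticism parents, which is why leaving such a variable uncontrolled can manufacture the headline result.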

Unlisted6446 | 1 year ago | on: The Battle to Define Mental Illness (2010)

But what would you have them do instead of FA? I think we're partially agreeing here, but my thinking is that no analytical technique on its own will be a panacea, whether the users really understand it or not. Why would increasing their understanding of the technique change what they do, when there's not really any other truly different methodological alternative?

Unlisted6446 | 1 year ago | on: The Battle to Define Mental Illness (2010)

I'm not sure I'm particularly convinced that this is an issue with the method of factor analysis, and by extension psychometrics, per se. Unless one specifies a causal model and actually attempts a risky test of their theory, any other method is liable to the same arbitrariness and subjectivity. Psychometrics itself has come a long way, and there have been many advancements to put it on firmer footing. If anything, the issue isn't with the method but with the user of the method. I don't know if I agree that it's an issue of understanding a method, rather than an over-reliance on data (analysis) over theoretical guidance and taking a hammer to theories.

Unlisted6446 | 2 years ago | on: Google Scholar PDF Reader

So I'm a researcher who almost always uses PDFs... Does HTML have the reproducibility that PDF promises? My feeling is that if I store a PDF, it'll look the same in a decade. But is HTML the same way? It seems to rely on the web browser and many other things... How would one manage things like images and GIFs? Is there a way to keep everything in one HTML file that's easily shareable and feels secure?