asddkk's comments

asddkk | 8 years ago | on: Accuracy dominates bias and self-fulfilling prophecy [pdf]

Groups versus individuals. Stereotypes can be accurate on average between groups but inaccurate for individuals. The paper's accuracy-versus-bias distinction kind of overlooks this, even though it's the crux of the issue in many ways.

asddkk | 8 years ago | on: What the SATs Taught Us about Finding the Perfect Fit

I live and breathe IRT (item response theory) models of the sort they discuss for my work, and it's fascinating to see them applied to clothing sizing. It makes sense, because sizing is fundamentally a measurement problem.

One thing they don't really get into is that the IRT model they're using is pretty simple, which is typical of random/mixed-effects formulations: keeping things mathematically tractable permits parameter estimation and prediction.

They could add other components to the model, though, such as different dimensions of fit (different aspects of body shape and fit), or how closely different items track those dimensions (as opposed to just how large or how small; e.g., maybe some types of fabric provide more information about fit than others). The models they're fitting are a sort of entry point in that regard.
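To make the basic idea concrete, here's a minimal sketch of a 2PL-style IRT model for fit feedback (my own illustration with made-up names, not the paper's actual model): the probability that a customer reports an item runs too small depends on the customer's latent size trait, the item's size threshold, and a discrimination parameter capturing how much information the item carries about fit.

```python
import math

def p_too_small(theta, b, a=1.0):
    """Probability a customer reports the item runs too small.
    theta: customer's latent body-size trait
    b: item's size threshold
    a: discrimination (how sharply the response tracks the size mismatch)
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A high-discrimination item separates customers more sharply
# around its threshold than a low-discrimination one:
print(p_too_small(theta=1.0, b=0.0, a=2.0))  # ~0.88
print(p_too_small(theta=1.0, b=0.0, a=0.5))  # ~0.62
```

Extending this to multiple dimensions of fit would mean making theta and b vectors, which is exactly where the estimation starts getting hard.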

It wouldn't solve the problem of course, and I agree with you 100% about in-person fitting being the final word, but they also have a lot of ways they could improve these models.

asddkk | 8 years ago | on: New silicon structure opens the gate to quantum computers

The CPU-GPU analogy is very good for all sorts of reasons. For some things there's no speedup, for some important things there's a huge speedup, for other things there's some significant speedup, but not game-changing.

I tend to think of applications where you might want to brute-force search or simulate over some large set as being where quantum computing will be most significant, but I say that very vaguely and timidly.

This page catalogs a number of applications and algorithms; I found some of the references in it really interesting to read:

http://math.nist.gov/quantum/zoo/

asddkk | 8 years ago | on: Near-miss math provides exact representations of almost-right answers

I think there are some information-theoretic ideas about compressibility or codelength that are relevant here.

The appeal, I think, is in being able to succinctly represent a random or irrational mathematical object with some other object that's not exactly the same, but is simpler to describe and equivalent to some high degree of similarity. Normally these ideas are applied to things that are thought of as random in a physically stochastic sense, but you could apply them to things that are random in an information-theoretic irrationality sense also.
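A classic concrete instance of this (my own illustration, not from the article): continued-fraction convergents give short rational "descriptions" of an irrational number that are not exactly the same but match it to a high degree of similarity, e.g. 355/113 for pi.

```python
from fractions import Fraction
from math import pi

def convergents(x, n):
    """First n continued-fraction convergents of x: increasingly
    accurate rationals with short descriptions."""
    out = []
    a = x
    p0, q0, p1, q1 = 0, 1, 1, 0
    for _ in range(n):
        k = int(a)
        p0, q0, p1, q1 = p1, q1, k * p1 + p0, k * q1 + q0
        out.append(Fraction(p1, q1))
        frac = a - k
        if frac == 0:
            break
        a = 1 / frac
    return out

for c in convergents(pi, 4):
    # 3, 22/7, 333/106, 355/113; the last is within ~2.7e-7 of pi
    print(c, abs(float(c) - pi))
```

The codelength framing would be that 355/113 is a few characters long yet buys you seven digits of accuracy, which is exactly the "simpler to describe, equivalent to some high degree" trade.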

I'd say the polyhedra they discuss are kind of examples of this, maybe in reverse: they are simplified constructs that work as representations to a close approximation.

There's some interesting ties here to pseudorandom numbers, in that usually we think of them as approximating true randomness, even though they're entirely reproducible and predictable. This seems similar to me at some level.

asddkk | 8 years ago | on: College Presidents Making $1M Rise with Tuition and Student Debt

To be fair, the programs that are more profitable also have more money pumped into them, and those that are less profitable have less money put into them. So yes, many of them are unprofitable, but the less profitable ones also have much smaller costs. Of course, overall budgets at those schools might be smaller too, but...

asddkk | 8 years ago | on: Does Life End at 35? (2013)

I've come to agree with you about the question "how do we define success?" It seems fiendishly difficult to answer, but is so central to well-being in the long run, and so many assumptions seem to be made about it without explicit reflection.

Think about it: financial success? Power? Expression? It's easy to assume these sorts of things define success, but I think it's easy to come up with tricky exceptions.

asddkk | 8 years ago | on: Optometrists Feel Threatened by Technology, So Guess What They Do

It depends on the state, but what I've read so far is mostly about expanding the range of eye diseases that can be treated with medication, and expanding prescription privileges. Some of it focuses on LASIK surgery, which probably merits more scrutiny, but certain states already allow it.

E.g.,

http://www.sacbee.com/opinion/op-ed/soapbox/article75185362.... https://www.reviewofoptometry.com/article/new-york-ods-push-...

Here are examples of discussion from the physician side, which tends to be a bit more FUD-focused, and biased in how certain topics are approached:

http://journalofethics.ama-assn.org/2010/12/pfor1-1012.html https://jamanetwork.com/journals/jamaophthalmology/fullartic...

One of the articles starts by discussing psychologists, where similar issues arise. The typical AMA/physician group argument goes like this:

Profession X has specialized training in area A but limited to that, and not in area A+B. Physicians, in contrast, have training across a wide scope including B, and then in addition specialize in A.

The problem with this argument is that it presupposes that the only way to obtain training is by going from B, and then adding A, and ignores the argument that one could go from starting with A and then adding B. Optometrists aren't saying "everyone should be able to do lasik, or prescribe everything without training," they're saying "we are qualified to learn how to do it." Psychologists aren't saying "everyone with a license should prescribe," they're saying "people who have a license and then get additional training can prescribe."

The other problem is that physician groups tend to vastly understate the amount of training that psychologists or optometrists often have. A psychologist was recently interim head of the National Institute of Mental Health, and many psychologists do research in psychophysiology, interventional and observational brain imaging, pharmacology, and so forth, and enter the field with strong biology backgrounds. Conversely, the amount of actual coursework in medical school keeps shrinking, and because medical students are covering so many areas of medicine, the training they get in any one area is relatively small.

My point isn't to belittle MD training; it's just to say that people have to be careful with the assumption that going from broad -> specific is necessarily better than going from specific -> broad, or that you have to start with a certain education and then specialize, as opposed to picking up additional education along the way.

The issue is really a monopoly of (a) specific educational and training model(s) in the health care setting. Licensing is not actually competency-based, it's credential-based.

asddkk | 8 years ago | on: Is There a Limit to Scientific Understanding?

The problem is the frame of reference of "subjective." For example, few would question whether coat color in cats is an explainable phenomenon, even though it varies. So why aren't qualia similar? Subjective experience is a property of a biological system, so the claim that subjective experience is not scientifically explainable at some level would seem to be a claim that biology is not explainable.

I agree with your sentiment at some level, in that I think the explainability of certain things is at least open to question, or should be questioned, but I think the subjective/objective distinction is misleading or misguided because from some frame of reference, subjective is objective.

The bigger issue, maybe, which the article touches on, is the problem of emergence.

One definition of emergence is basically that the complexity at one scale of analysis becomes so extreme that you have to move to another scale of analysis. I.e., emergence is associated with unavoidable information loss, where what is random at one scale is nonrandom at another, but predicting from one scale to another is impossible. It's kind of a measurement horizon, to borrow a cosmological metaphor: your measurements at one scale become so complex to model as a system at some point that you have to simply remeasure at a different scale.
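A toy illustration of that scale-crossing information loss (my own, not from the article): bits that are maximally random at the fine scale become nearly deterministic at a coarser scale, yet the coarse value can't recover the bits.

```python
import random

random.seed(0)

# Fine scale: fair coin flips, individually unpredictable.
bits = [random.randint(0, 1) for _ in range(100_000)]

# Coarse scale: block averages over 1000-bit windows.
block = 1000
coarse = [sum(bits[i:i + block]) / block
          for i in range(0, len(bits), block)]

# The coarse measurements cluster tightly around 0.5 ...
print(round(min(coarse), 3), round(max(coarse), 3))

# ... but each ~0.5 block value is consistent with an astronomical
# number of fine-scale configurations, so the coarse-to-fine
# mapping loses essentially all of the fine-scale information.
```

This is the "remeasure at a different scale" move: the block average is a new, well-behaved variable precisely because it throws the fine-scale detail away.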

I think this is a more immediately pressing problem for science: there may be some kind of information-theoretic limit to explainability across scales in complex systems. It's something the reductionist push kind of misses: just because something is physically reducible, and logically necessary, doesn't mean it's informatically reducible, or knowable a priori (to borrow from the philosopher Kripke).

asddkk | 8 years ago | on: Optometrists Feel Threatened by Technology, So Guess What They Do

Part of the problem is that this regulation works from the other side too, but doesn't get as much attention. Optometrists have been trying to expand their practice into other areas in ways that seem reasonable to me, but then physician groups start to wield FUD tactics to stop them.

So what should happen is this: the smartphone app provides another option, optometrists shrug because they have so many other things to offer, physicians shrug because they have other things to offer, etc.

But what's happening instead is the physician groups are staking their turf, and the smartphone apps are coming out, and optometrists are kind of squeezed, and fight back in all directions.

The problem isn't that optometrists aren't capable of offering other services in a capitalist-competitive sense, it's that licensing regulations are preventing those services from existing.

If Congress weren't so busy with some damn pissing contest over payments in the health care system, it might try to enact more structural improvements, like massive licensing deregulation.

asddkk | 8 years ago | on: Did Bitcoin just prove it can't scale?

This has always seemed to me to be the elephant in the room.

I brought this up with some Ethereum devs in a forum a year or two ago, and they addressed the questions pretty openly and graciously, but I still wonder about it. Essentially, they pointed to the presumption of branches, subchains, etc. Once those exist, the system seems a bit more complicated than the classic crypto-libertarian currency model.

These chains get pretty large, and the overhead in terms of time and storage space is well beyond what most people are used to now. Maybe people just have to get used to 15-minute payments, and to having an extra drive or computer for financial transactions, as the cost of decentralized finance and economics, but from the current vantage point it seems burdensome to me, and only more so as adoption grows.

asddkk | 8 years ago | on: Student Loan Debt Is Now as Big as the U.S. Junk Bond Market

So... I got an undergrad degree in psychology, but then went on to a doctoral program and got a PhD.

I'm not actually disputing your suggestion that a lot of students obtain the degree as a placeholder--it's a problem--but depending on the university, it's far from easy, and students treat lots of degrees as placeholders. Students perceive that the degree is easy based on introductory psychology, but in a lot of programs it quickly turns into statistics, neuroscience, and cognitive modeling in the upper-level classes. Students coast for their first year or two and then, sometime around their third year, start struggling.

Anyway, one of my frustrations since my doctoral degree is the stereotypes about the field. I've certainly had more experience with computer science, math, and statistics than most undergrads with those degrees (math might be different). I've worked with comp sci undergrads, and without meaning anything negative, I felt like I was teaching them about programming most of the time, rather than the other way around.

And yet, because of my degree, somehow people just assume that I'm interested in past lives therapy or something like that. What I do is closer to ML/AI/epidemiology than anything else. I've published papers in the areas of information theory and statistics, and coded in a large number of languages across various paradigms.

One shift that seems to have occurred since I was in undergrad is this idea that you are your degree. It's pernicious. The liberal arts philosophy is sort of along the lines of "get a degree in philosophy," take your advanced maths and statistics, and then learn more of it later, because the specific degree doesn't matter. But now we see college as an advanced job training program, and people assume that you are only qualified to do what you got your undergrad in. It's absurd.

I admit there are some limits--I'd wonder whether a BFA could get deep experience in computer science, but who knows? I've seen all sorts of art projects that involve heavy coding, statistics/ML, and low-level hardware work, and I know history faculty who are basically doing signal processing research. Coetzee, a Nobel Prize winner in literature, used to code.

At some level, the problem isn't the degrees themselves; it's the stereotypes about them and about the people getting them.
