top | item 32061954


sam-2727 | 3 years ago

The beginning of the conclusion of the original study [1] is worth repeating:

No one should try to reform or rehabilitate the ranking. It is irredeemable. In Colin Diver’s memorable formulation, “Trying to rank institutions of higher education is a little like trying to rank religions or philosophies. The entire enterprise is flawed, not only in detail but also in conception.”

Students are poorly served by rankings. To be sure, they need information when applying to colleges, but rankings provide the wrong information. As many critics have observed, every student has distinctive needs, and what universities offer is far too complex to be projected to a single parameter. These observations may partly reflect the view that the goal of education should be self-discovery and self-fashioning as much as vocational training. Even those who dismiss this view as airy and impractical, however, must acknowledge that any ranking is a composite of factors, not all of which pertain to everyone. A prospective engineering student who chooses the 46th-ranked school over the 47th, for example, would be making a mistake if the advantage of the 46th school is its smaller average class sizes. For small average class sizes are typically the result of offering more upper-level courses in the arts and humanities, which our engineering student likely will not take at all.

[1]: http://www.math.columbia.edu/~thaddeus/ranking/investigation... (section 8)
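The engineering-student example can be made concrete with a toy weighted composite. All schools, scores, and weights below are invented for illustration; the point is only that the ordering flips when the weights change:

```python
# Two hypothetical schools scored on two factors (made-up numbers).
schools = {
    "School A": {"class_size": 9.0, "eng_program": 6.0},
    "School B": {"class_size": 6.0, "eng_program": 9.0},
}

def composite(scores, weights):
    """Weighted sum of factor scores."""
    return sum(scores[k] * weights[k] for k in weights)

# A general-purpose ranking that weights small classes heavily...
general = {"class_size": 0.7, "eng_program": 0.3}
# ...versus weights reflecting what an engineering applicant cares about.
engineer = {"class_size": 0.2, "eng_program": 0.8}

rank_general = sorted(schools, key=lambda s: -composite(schools[s], general))
rank_engineer = sorted(schools, key=lambda s: -composite(schools[s], engineer))
print("general ranking: ", rank_general)   # School A first
print("engineer ranking:", rank_engineer)  # School B first
```

Same data, different weights, opposite order — which is exactly why a single published ranking can't serve every applicant.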


thomasahle | 3 years ago

> Trying to rank institutions of higher education is a little like trying to rank religions

Has anyone tried to make a ranking of religions similar to those for universities? It sounds like a fun project that would make a great point.

Measures like "Fees", "Diversity", "Alumni Salary", "Financial Aid Provided" would all be interesting to see for each religion.

physicles | 3 years ago

I’d love to see that too. Mormonism for example would rank very high in “Alumni Salary” and “Quality of Alumni Network”, but low in “Diversity”, “Fees”, and “Amount of Bullshit You Need to Swallow”.

btheshoe | 3 years ago

Here's the response:

Universities are mostly unimportant in terms of what they offer: class sizes, education, curriculum, etc. It's all mostly BS that doesn't matter.

The real role of universities is to gather smart people together as they develop. This works mostly through self-selection: applicants have to independently agree to go to the same university. Hence rankings, prestige, and all that nonsense.

cuteboy19 | 3 years ago

This is just the wrong way to look at it. Clearly, there is real demand for rankings by students. No one is stupid enough to think that there is some real difference between #47 and #48. But obviously #47 is very different from #26.

Just because you can't get an exact measurement does not mean that a metric does not exist or is not useful.

brewdad | 3 years ago

I would argue that the difference between 47 and 26 comes down largely to field of study or cost of attendance.

The top 10-15 offer an almost indisputable advantage, with the top 5 or so being a tier unto themselves. Anything outside those groups is largely "it depends", and 50-100 forms another tier where total cost of attendance largely dictates whether one school is "better" than another.

waylandsmithers | 3 years ago

I couldn’t agree more. The schools I got into were ranked something like 12, 13, 25, and 30. I went to 12 for only that reason and always regretted it. Was it my own dumb fault? Of course. I was 17.

lazyjeff | 3 years ago

I've been looking at bias in rankings for a little while. I think one way to identify and raise awareness of the biases is simply to put rankings side by side. I did this for computer science programs, and I noticed some interesting differences:

https://jeffhuang.com/computer-science-open-data/#bias-in-co...
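The side-by-side idea can be sketched in a few lines: align two rankings and flag how far each school shifts between them. The ranking lists below are invented placeholders, not data from the linked page:

```python
# Hypothetical rankings from two different sources (illustrative only).
ranking_a = ["MIT", "Stanford", "CMU", "Berkeley", "Cornell"]
ranking_b = ["CMU", "MIT", "Berkeley", "Stanford", "Cornell"]

# Position of each school in ranking B, for quick lookup.
pos_b = {school: i for i, school in enumerate(ranking_b)}

# Positive shift = ranked worse in B; negative = ranked better in B.
shifts = {school: pos_b[school] - i for i, school in enumerate(ranking_a)}

for school, delta in shifts.items():
    print(f"{school:10s} A:#{ranking_a.index(school) + 1}  "
          f"B:#{pos_b[school] + 1}  shift: {delta:+d}")
```

Large shifts are exactly where the methodological differences between rankings are doing the work, which makes them good starting points for investigating bias.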

evouga | 3 years ago

The focus on best paper awards is odd, as the major conferences of some CS subfields dole out awards as if they were party favors, while those of other subfields don't have best paper awards at all.

For instance SIGCHI 2021 had 28 best papers out of 747 accepted papers (or 3.7%) whereas CVPR 2021 had one best paper out of 1660 accepted papers (0.06%).

I have no opinion about whether it's "better" to be stingy or generous with best paper awards. But obviously any kind of ranking that doesn't account for differences between conferences and subfields is going to be quite suspect.
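The award-rate gap in the comment above is easy to check with the figures given:

```python
# Award rates using the figures cited in the comment.
sigchi_rate = 28 / 747    # SIGCHI 2021: 28 best papers of 747 accepted
cvpr_rate = 1 / 1660      # CVPR 2021: 1 best paper of 1660 accepted

print(f"SIGCHI 2021 rate: {sigchi_rate:.1%}")             # ~3.7%
print(f"CVPR 2021 rate:   {cvpr_rate:.2%}")               # ~0.06%
print(f"ratio: {sigchi_rate / cvpr_rate:.0f}x")           # ~62x
```

A roughly 60x difference in award rates means a raw count of best paper awards mostly measures which subfield a department publishes in, not its quality.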

HWR_14 | 3 years ago

This is completely true of undergraduate studies. There is, however, a very real reason to think that department (not university) rankings matter in graduate studies.