This sounds nice, but I'm not holding my breath that the "new" SAT will be any better.
I took both the "old" SAT (the one that they discontinued around 2004) and the "new" SAT (the one that they're now discontinuing). I actually thought that the structure of the old one was better in many ways - for example, the analogies were often terribly written, but the idea of testing analogies as a reasoning tool is very powerful, much more so than just doing passage after passage of reading comprehension.
Furthermore, this is the exact same language that they used to justify the decision to change the SAT 10 years ago. Coleman isn't saying anything new when he's criticizing the SAT today; he's just recycling the same PR language that they used a decade ago.
Of course, perhaps they really are genuine. I'd love to be pleasantly surprised. But reading this gives me total deja vu from the news stories I remember reading in 2002.
Analogies are legitimate tests of reasoning ability. As I recall, however, the analogies of the "old" old test often relied on obscure words. This turned them into de facto vocabulary tests, defeating their purpose as pure assessments of reasoning. At the very least, vocabulary became a confounding variable in assessing the specific aptitude that analogies were meant to assess.
One could make a strong argument for plain-English analogies, absent the $10 words. If every test taker understood most of the words, then we could test for reasoning ability on a more normalized basis.
That's not to say testing vocabulary is totally invalid. Opinions vary. But if we assume vocabulary is worth testing, then we should test it independently.
At Khan Academy, we're really excited to work with the College Board to provide awesome, free test prep to everyone. We're putting huge emphasis on really learning the material instead of practicing test-taking skills that won't be useful afterwards.
Here's a little more about the partnership if you're interested: http://techcrunch.com/2014/03/05/khan-academy-gets-major-par...
"Instead of teaching to the test we're going to put huge emphasis on really learning the material instead of practicing test-taking skills that won't be useful afterwards."
Now how do we get the schools to actually do that?
As a former SAT prep teacher, I'm very excited to learn about Khan Academy's partnership with the College Board. Previously, success on the SAT seemed to depend solely on how much money was spent on SAT prep. Glad to hear that the College Board is interested in leveling the playing field.
I do not envy the College Board. We are in a social and political environment where many issues that bear directly on the test are things you can't discuss in polite company (e.g. whether IQ is a real and relevant phenomenon, the 0.8 correlation between IQ tests and the SAT, the correlation between IQ and socioeconomic status). This makes it extremely difficult both to optimize for their goals and to communicate that to the involved parties. Because of this, it's hard to even know what their true goals are and whether they are actually going to achieve them.
I think these changes are long overdue. To me, the CR portion of the SAT seemed incredibly biased against people from other cultures. Not only was there a language barrier, but also there was the problem of allusions. Many of the passages contained references to things that you wouldn't expect a typical immigrant to know (things like Greek mythology, the indulgences of the wealthy, cultural icons, etc).
I think the waivers for low-income students are also pretty great. When I was in high school, taking all of the standardized exams cost me hundreds of dollars. I went to school in a place with a lot of gentrification, so a lot of my classmates could barely afford to get their high school transcript (let alone pay to take tests). There were many efforts to get the school system to subsidize standardized exams, but they were all unsuccessful.
These tests are for admission to an American college / university. Basic knowledge of Western culture is immensely useful if you're going to be in such an environment, both for studies and for social purposes.
Not just the exams -- the practice exams, the prep courses, the books, the tutors.
And not just the money -- the time investment in preparing and taking the SAT/ACT is massive. High schoolers in low-income areas or ones forced into situations where they need to support their family with home care/employment are at a huge disadvantage (though the same can be said about other college admission aspects as well.)
Am I the only one here who's bothered by a further narrowing of the expected knowledge of students? A good engineer or programmer isn't a human calculator who can bang out hundreds of calculations per minute or recite petty facts in a very specific area. Good programmers and engineers draw on their wide breadth of knowledge, and the deep understanding it generates, to solve complex and abstract problems.
Anecdotally, I'm seeing a substantial number, if not the majority, of CS students who can't code worth a damn, but got into the department solely because they did well in a small number of unrelated subjects.
While I'm not terribly concerned with the changes in the SAT, I am concerned about it adding more momentum to this trend.
No. I was also depressed about the approach of 'we're excluding arcane words!' I do really well on those tests - not because I have encountered every word that might come up, but because I have a big enough vocabulary to draw analogies and guess at the meaning of arcane words by looking at their internal structure, e.g. 'membranous' seems to consist of membrane + -ous, so I guess it's an adjective describing something that has a membrane-like quality.
I developed a love of etymology from spending many long hours poring over dictionaries as a child, including the bits about word roots and so on. When I see that things like this are being dropped from the SAT, it's like being told that that knowledge and the effort to acquire it lacks value.
I agree. I would actually prefer if the test subjects were considered new to all students, and they were expected to become experts in those subjects before taking the test. What you need to score for is the ability to learn and apply the knowledge that you already have to new subjects, not to repeat the basics. You're ready for college when you're able to learn new things and apply them quickly.
It's a good thing IMO that people have to learn 'SAT words'; it selects for the ones who are able to put in the effort to expand their vocabulary, and also to retain this new knowledge.
"I'm seeing a substantial number, if not the majority, of CS students who can't code worth a damn, but got into the department solely because they did well in a small number of unrelated subjects."
Isn't that an argument for more relevant testing? If the current selection picked up people with poor aptitude for programming or whatever, then it makes sense to change it.
Engineers need the ability to think hard and solve complex problems. So let's measure those instead of how many arcane words they've memorized, how good they are at writing self-promoting essays, how many irrelevant facts they've memorized, or how fast they are at solving simple, memorizable math problems.
There is nothing wrong with adjusting expectations to the current world.
> For many students, Mr. Coleman said, the tests are mysterious and “filled with unproductive anxiety.” Nor, he acknowledged, do they inspire much respect from classroom teachers: only 20 percent, he said, see the college-admissions tests as a fair measure of the work their students have done in school.
What is really interesting to me is that while the rhetoric against the SAT (and similar tests like the MCAT and LSAT) has increased over the last few decades, in practice the tests are more important than ever. In the last 30-40 years, median scores at the top 10-15 schools are up about 100 points, adjusted for the 1995 recentering. In practice, SAT scores are almost entirely determinative for college admissions. High schools pump out so many 4.0+ students that 100-200 point differences on the SAT dominate differences in admissions outcomes.
And apparently, even companies are asking people for their SAT scores these days: http://www.businessinsider.com/goldman-sachs-bain-mckinsey-j....
Without taking a position on either side, I have to wonder why the rhetoric and the practice are so out of step on this issue.
It's pretty hard to be dumb as a post and score 1500+ (excluding the short-lived essay). It may not be the most predictive factor for college success overall, but as the top schools have become increasingly competitive, they can afford to use higher and higher scores as a baseline. If they miss some diamonds in the rough who test poorly, they still won't have any trouble filling a class. Test-taking wizards with little else to recommend them will get filtered out through other methods.
On the other hand, if you're a mid-tier school trying to choose between a pair of otherwise similar applicants with scores of 1000 and 1200, it might matter to you a great deal how predictive the SAT is vs. the ACT or some other metric. Meanwhile, no matter how confident you are in its predictive power, at least it's an objective metric. One that, for better or worse, also has a marketing impact, since average incoming class scores show up near the top of college survey data.
I think most of the controversy about the SAT and other standardized tests centers around the predictive power of results near the median. Especially since re-centering, the test doesn't try hard to distinguish the top 1% from the top .01%, but it's very important that it correctly distinguish the 60th percentile from the 40th.
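That ceiling effect is easy to see with a back-of-the-envelope sketch, treating composite scores as roughly normal. The mean of 1000 and standard deviation of 200 below are assumed round numbers for illustration, not official College Board figures:

```python
from statistics import NormalDist

# Hypothetical score distribution: mean 1000, SD 200 on a 400-1600 scale.
scores = NormalDist(mu=1000, sigma=200)

p40 = scores.inv_cdf(0.40)      # score at the 40th percentile
p60 = scores.inv_cdf(0.60)      # score at the 60th percentile
p9999 = scores.inv_cdf(0.9999)  # score the 99.99th percentile would need

print(f"40th vs. 60th percentile: {p40:.0f} vs. {p60:.0f} "
      f"({p60 - p40:.0f} points apart, easy to tell apart)")
print(f"99.99th percentile would need {p9999:.0f}, above the 1600 ceiling, "
      f"so the top 1% and top 0.01% get compressed into the same score band")
```

Under these assumptions, the 40th-60th percentile gap is around a hundred points of scale, while the score a literal 99.99th-percentile test taker would need lies above the maximum, which is exactly why the test separates the middle well and the extreme top poorly.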
Removing the penalty for wrong answers seems to be an interesting choice to me. I thought a strong part of the test was the emphasis on carefully considering the questions so that the answers given were deliberate.
Without a penalty, guessing is just a chance for free points. It's not really a measure of any skill aside from test-taking (which the SAT already leans on enough). But I guess if all of your tests at a university are similar to the SAT, it might be a good measure of your potential success.
Penalties for wrong answers are the ultimate test of test-taking abilities (actually, to an extent, an ability to implement probabilities). Your strategy changes based on how many answers you can eliminate, rather than on if you know the correct answer.
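The expected-value arithmetic behind that elimination strategy is quick to sketch, assuming the old scoring rule of +1 for a correct answer and -1/4 for a wrong one on five-choice questions:

```python
# Expected value of a random guess on the old SAT: +1 point if correct,
# -1/4 point if wrong, 0 if left blank. With n answer choices still in
# play after eliminating the rest, a uniform random guess is worth:
def guess_ev(choices_left, penalty=0.25):
    p_correct = 1.0 / choices_left
    return p_correct * 1.0 - (1 - p_correct) * penalty

for n in range(5, 0, -1):
    print(f"{n} choices left: EV = {guess_ev(n):+.4f} points")
```

With all five choices in play, the expected value is exactly zero (the penalty was calibrated for that), so blind guessing neither helps nor hurts; eliminate even one choice and guessing becomes positive-EV. Knowing that is pure test-taking skill, which is the commenter's point.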
This is the best news about college admissions that I've heard in a very long time. As someone who was fortunate enough not to be disadvantaged by the clear socioeconomic preferences of the SAT (and I'm guessing the ACT as well, though I had much less exposure to it), this is absolutely a step in the right direction.
As someone who was perhaps the poorest member of my graduating class at an elite, private college, I can say with some degree of certainty that the SAT was an important, if not the most important, factor in helping me gain access to opportunities that no one from my rural community had ever had before. So in that sense, the SAT can be a tremendous equalizer, even if it is often the opposite.
My immediate reaction: my score is now forever unrelatable to everyone except those who took the exam between 2005 and 2014. Not that anyone has asked for my SAT score since I got into undergrad, but now a sizable group of people who did substantially worse than I did will seem as though they did extremely well (a 1550, for example, means something very different depending on which exam you took, while my score of above 1600 will immediately remind people I took the 2400 exam).
It doesn't affect me at all, and I'm not complaining, but that's what ran through my head.
This is something that's been going on for a long time, something I watch as one who took the exam in 1979. Since then it's been successively watered down to the point it's much less useful. E.g. MIT now accepts the ACT ... and finds it's a better predictor than the SAT (!!!).
Unlike alma maters or perhaps Xoogler status, I don't think SAT scores ever come up casually in conversation, and where they do, I doubt someone scoring 1500/2400 would be mixed up with someone scoring 1500/1600.
I'm not really sure why SAT scores matter to anyone other than kids trying to get into their university of choice, other than some sort of bragging rights. I had some of the highest standardized test scores in the history of my high school, but I'm certainly not the most accomplished.
"Math questions will focus on three areas: linear equations; complex equations or functions; and ratios, percentages and proportional reasoning. Calculators will be permitted on only part of the math section."
As if math requirements for American students aren't low enough already.
Voluntary extra tests are required for people who need to prove more than competence in fundamentals. I'm sure there are many people who wish their field was better represented in the test. But you have 3 hours to get a read on quite a spread of topics, it necessarily must be rather hit and run, with the deep dives saved for later.
David Coleman, president of the College Board, criticized his own test, the SAT, and its main rival, the ACT, saying that both “have become disconnected from the work of our high schools.”
When I took the ACT, two of the sections were Science Reasoning and Reading Comprehension (or something similar). The thing is, they required the exact same set of skills: the ability to read a passage and answer questions based upon what you read. The only real difference between them was that the Science Reasoning portion dealt with science related topics. I scored 36 in both sections the first and only time I took the test.
To be honest, I'm somewhat sad that Khan Academy is jumping on board with this and planning to offer SAT prep videos. I don't think it's appropriate to mix such meaningless skills as test-taking with the true skills Khan Academy helps teach. I believe it could distract students from the important material by leading them to care more about what will help them improve that one particular test score.
This is great. I think the changes to the essay portion are excellent.
As a native English speaker, my instinct is to emphasize the essay part - but I realize that's idealistic. Still, I think essays and writing are critical for showing one's ability to synthesize (there's that word) and organize one's ideas or those of others. I think writing should be emphasized and multiple native languages supported... but that's work for the next revision. This one is certainly welcome.
When there is a penalty for wrong guesses, students who have a pretty good idea that they know the answer, but aren't certain, must waste time determining whether it is in their favor to answer the question.
Evaluating whether it is worth it to take a guess is a test-taking skill, and the SAT is trying to shift away from test-taking skills.
It's worth noting the similarities to the Common Core. David Coleman was one of the main authors of the Common Core and is now president of the College Board.
This article:
"Sometimes, students will be asked not just to select the right answer, but to justify it by choosing the quote from a text that provides the best supporting evidence for their answer."
"Going forward, though, students will get a source document and be asked to analyze it for its use of evidence, reasoning and persuasive or stylistic technique."
CCSS
CCSS.ELA-Literacy.CCRA.R.1 Read closely to determine what the text says explicitly and to make logical inferences from it; cite specific textual evidence when writing or speaking to support conclusions drawn from the text.
CCSS.ELA-Literacy.CCRA.R.8 Delineate and evaluate the argument and specific claims in a text, including the validity of the reasoning as well as the relevance and sufficiency of the evidence.
As a member of the first year of students who were required to do the writing section, I'm glad it's gone. What an absolute crock that was, and it probably screwed up the other sections as well. When you make an already ponderous test over an hour longer, it becomes less about aptitude and more about endurance.
I always found the super-strong correlation between essay length and score interesting, along with the fact that the scorers are supposed to not care about factual accuracy at all. I also found that almost every single essay question had a 'right'-sounding and a 'wrong'-sounding answer, in that answering one way would always make you sound smarter to the reader. I guess if you just know these rules, you can do pretty well by following the formula.
I wish I had had the chance to take this; I could have gotten a perfect score. Taking the SAT as an ESL student, those obscure words just kill you. There is a set of words that appears nowhere in modern English other than on an SAT test.
If I were on a college admissions committee, I'd take ESL status into account when comparing verbal scores from one applicant to the next. It only makes sense.
I'll start respecting the SAT just as soon as the results can be accepted as a measure of general intelligence. It shouldn't be that difficult to simultaneously test for both content mastery and aptitude.
It used to be a fairly good measure of 'g', and I note that the tables I found on the net equating scores on the 1979 exam I took to IQ put me very close to my results on formal IQ tests, like within 2-3 points.
Reversing the watering down of the exam ... that's pretty hard to imagine for the foreseeable future.