I find it curious that there are so many courses for data-science related subjects, which superficially seem to cover the same material, and relatively few courses covering more traditional CS topics such as computer systems, networks, OS. I suppose it has to do with the market, but also feels like colleges are skating to where the puck is, rather than where it will be (or perhaps, where it could be).
It's 2018 and there is no open-source course for everything. Instead there are probably 10-30k universities with similar courses, and professors who give the same lecture every year.
They are often paid by governments to create and run those courses. In Germany, most of our universities are funded by all of us Germans anyway.
And what do you find online? Always the starter version, like a CS 101, or videos with bad audio or video, no proper exercises, no solution help, etc. Nothing complete. You have to hop between different sites, some paid, some not.
And there are no local places to meet up with people.
There should be a global initiative for free and open-access learning, sponsored and supported by companies and countries, and built on a core knowledge graph of topics, or 'snippets of knowledge'. For example:
math -> add -> sub
Something like 'The Map of Mathematics' (https://www.youtube.com/watch?v=OmJ-4B-mS-Y)
And when you want to reach the globally accepted 'math 101' level, you have to take specific topics/snippets.
And those snippets can then be filled by different people who each make a lecture for that topic, and you can choose whom you like more, or who is better at explaining it to you.
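A minimal sketch of such a prerequisite graph in Python; all topic names, lecture entries, and the helper function are purely illustrative, not part of any real system:

```python
# Hypothetical prerequisite graph: topic -> list of prerequisite topics.
prerequisites = {
    "math.add": [],
    "math.sub": ["math.add"],
    "math.mul": ["math.add"],
    "math101": ["math.add", "math.sub", "math.mul"],
}

# Several competing lectures can "fill" the same snippet; learners pick one.
lectures = {
    "math.add": ["Prof. A's video", "Prof. B's notes"],
}

def required_topics(topic, graph):
    """All snippets needed before claiming a level, in dependency order."""
    seen, order = set(), []
    def visit(t):
        if t in seen:
            return
        seen.add(t)
        for dep in graph[t]:
            visit(dep)
        order.append(t)  # a topic comes after all of its prerequisites
    visit(topic)
    return order
```

Here `required_topics("math101", prerequisites)` would list the snippets a learner has to cover, prerequisites first.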
What do I do instead? I ask around for the lecture scripts, because they are always behind a simple password-protected area, or I collect links to pages of different universities that offer different courses for free as videos for their students, often in bad quality and/or with bad video players, etc.
It sucks, and it's stupid.
I have also found this interesting. What I don't understand is that the number of data science jobs is nowhere near the level that people make it seem. I am not sure where all these people will end up working if they want to be data scientists. There is no need to hire huge teams of data scientists like you might for dev roles; it doesn't scale the same way.
MIT OpenCourseWare has a bunch of classes on traditional CS topics, and just searching individual universities' sites turns up more. Data science is the new hip class, and it's very aggressively marketed. The other one being marketed heavily is intro to coding.
Because it sounds cool... I remember when I was applying for college (2001), nanotechnology and biomedical engineering were all the rage. Glad I stuck with electrical engineering.
I'm just curious -- where do you think the puck will be? I've had a number of younger acquaintances ask for career advice. Pursuing some kind of data science seems like an obviously smart direction now, but I've wondered if this, as well as traditional CS career paths, may be in danger of becoming over-saturated areas, now that everyone views them as sure paths to a job that pays well.
I think it’s because data science has a more immediate, broader applicability than computer science. Not everybody needs to know how to program a full application; but they should be able to load in a dataset and statistically analyze it. Looking at the type of people taking Data 8 compared to CS61A (the introductory programming course), I would say the former is a fairly diverse crowd (political science majors, economics majors, biology majors, etc.)
It is also a possibility that Data Science is an easier topic to learn than Computer Science, and thus more popular.
You're talking only about the public + free MOOC stuff, right? I think it's reasonable for that to be biased toward less specialized stuff.
Internally, Berkeley definitely isn't biased toward the intro-level stuff. Quite the opposite. But the most polished, rehearsed, mass-manufactured classes are certainly the gigantic intro-level ones everyone takes.
> I find it curious that there are so many courses for data-science related subjects
Is it that there are more courses in data science relative to other topics, or is there just more marketing around these classes? It costs Berkeley essentially nothing to pump out some press around the release of course materials in data science.
People at Berkeley view this class as kind of a joke. The average grade is insanely high and the topics are covered in much less depth than just the normal intro cs or stats classes.
The instructors have explicitly said, if you have previous CS or Statistics knowledge, the class isn’t for you. This is for people who don’t know how to program and haven’t taken a statistics course yet.
Good to know. I looked at this Berkeley course (along with some private offerings like General Assembly) and got the feeling that they really weren't worth the investment for a guy with a Math degree, a CS minor, and programming experience going back to childhood.
But I think I'd like some kind of formal, credentialed program that would build on my existing linear algebra and software skills (and address the weaknesses in my statistical understanding that I know are there, based on how I felt about my grasp of the related material even in the classes I passed)... and that maybe isn't quite as big an investment as a full-fledged master's degree. Anybody have any suggestions?

Go to community college. It's ridiculously cheap, and the credits are worth something.
I randomly started checking other CompSci courses on that site (https://www.berkeleytime.com/grades/?course1=7765-all-all) and they all had a similarly high average grade, in some cases even higher (an A instead of a B+).
Which are the hard courses at Berkeley in CompSci using the site you linked?
I'm a UC Berkeley alum. When I was there, this was a course taken by humanities majors to learn some programming so that their resume looks cooler; the majority of STEM majors take CS 61A (SICP) or E7 (Programming in MATLAB). Just noting this as context: this is not the class intended for CS majors; this one is: https://cs61a.org/
I think this is a bad trend. These universities make basic courses free to gain popularity and then ask for big money for their real courses.
This is bad in two ways:
1) The people taking these courses do not learn much for the effort and time they spend. It also gives them the illusion that they know enough, because they took a course from a big university.
2) Industry is already so confused about hiring that companies hire by name. So even if you take these courses and study in depth on your own, you can't get hired. Someone more qualified can't get hired just because they can't pay $100k for a machine learning degree from one of these big universities.
This is really a bad trend, and we should spend our time on real courses. Everyone knows that TV series are a waste of time; these courses are like TV series.
Stop watching them.

Even if you believe it's pointless, it's pretty clear it's not something everyone else "knows".
A course called "[Intro to] data science" should be taken about as seriously in hiring as "Intro to computer science", or "Intro to mechanical engineering". There's no reason these courses should bear any weight in hiring, and it's disingenuous to attempt to lead people to believe otherwise.
It always boggles my mind that these "free" online courses still stick to the old method of "registering" for the class and then following a regimented schedule.
Seriously, just upload the lecture videos, put the homework online and textbook. Add a message board and you're golden.
Having ~7 MOOCs and 1 Udacity nanodegree under my belt, here is my anecdata:
Before Coursera, I was never able to finish anything on MIT OpenCourseWare. A free flow of information needs too much commitment on my end to be digestible.
It was the structure given by
> "registering" for the class and then following a regimented schedule.
that I managed to start and finish.
Disclaimer: I discovered Coursera after grad school
Berkeley and the UC schools are making major strides in online education, including edX participation and on-campus projects. If you're interested in Berkeley and data science, there's an online master's program too. (Disclosure: Berkeley is in my client roster.) https://requestinfo.datascience.berkeley.edu
Okay, here's a view of what appears to be part of the course:
We have a course (right, a school application of stuff taught in school!) with two teachers, that is, two sections of the course, each section with its own teacher and its own students. At the end of the two courses, that is, the two sections, we want to compare the teachers. So we give the same test to all of the students from both sections.
Suppose one section had 20 students and the other one, 25 -- the point here is that we don't ask that the two numbers be equal; fine if they are equal, but we're not asking that they be.
So, there were 45 students. So, get a good random number generator and pick 20 students from the 45 and average their scores; also average the scores of the other 25; then take the difference of the two averages.
That was once. It was resampling. Now, do that 1000 times -- remember, we have a computer to do this for us. So, now we have 1000 differences. If you want, then, "live a little" and do that 2000 times. Or, for A students, do all the combinations of 45 students taken 20 at a time. Ah, heck, let's stick closer to being practical and stay with the 1000.
Now, presto, bingo, drum roll please, may I have the envelope with the actual difference in the actual averages of the actual scores in the two classes.
If that actual difference is out in a tail of the empirical distribution of the 1000 differences from the resamplings, then we have a choice to make:
(1) The two teachers did equally well but just by chance in the luck of the draw of the students one of the teachers seemed to do much better than the other one.
(2) The actual difference is so far out in the tail that we don't believe the two teachers were equally good: we reject the hypothesis that there was no difference (called the null hypothesis) and conclude that the teacher with the higher actual average was actually the better teacher.
Sure, maybe the real reason was that one section of the course started at 7 AM and was over before the sun came up and the other section was at 11 AM when nearly everyone was awake. We like to f'get about such details! Or, sure, we might get criticized for a poorly controlled experiment.
This is also called a statistical hypothesis test or a two sample test. It is a distribution free test because we are making no assumptions about probability distributions of the student scores, etc. Since we are not assuming a probability distribution, we are not assuming a probability distribution with parameters and, thus, have a non-parametric test. Uh, an example of a probability distribution with parameters is the Gaussian where the parameters are mean and standard deviation.
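The whole procedure fits in a few lines of Python. The section sizes match the example (20 and 25 students), but the scores below are made up for illustration:

```python
import random

def permutation_test(scores_a, scores_b, n_resamples=1000, seed=0):
    """Two-sample permutation test on the difference of means.

    Pools both sections' scores, repeatedly re-deals them at random into
    groups of the original sizes, and records the difference of the group
    averages each time. Returns the observed difference and the fraction
    of resampled differences at least as extreme (an empirical two-sided
    p-value)."""
    rng = random.Random(seed)
    pooled = list(scores_a) + list(scores_b)
    n_a = len(scores_a)
    observed = sum(scores_a) / n_a - sum(scores_b) / len(scores_b)

    diffs = []
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # a fresh random split of the 45 students
        group_a, group_b = pooled[:n_a], pooled[n_a:]
        diffs.append(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))

    p_value = sum(abs(d) >= abs(observed) for d in diffs) / n_resamples
    return observed, p_value

# Made-up scores: one section of 20 students, one of 25.
rng = random.Random(42)
section_a = [rng.gauss(75, 10) for _ in range(20)]
section_b = [rng.gauss(70, 10) for _ in range(25)]
obs, p = permutation_test(section_a, section_b)
```

If the observed difference lands far out in the tail of the 1000 resampled differences, `p` is small and we reject the null hypothesis that the teachers did equally well.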
Such tests go way back in statistics for the social sciences, e.g., educational statistics.
In more recent years, leaders in resampling include B. Efron and P. Diaconis, recently both at Stanford.
Why teach such stuff? Well, some parts of computer science are tweaking old multivariate statistics, especially regression analysis, and calling the results machine learning and/or artificial intelligence, putting out a lot of hype and getting a lot of attention, publicity, students, and maybe consulting gigs. Also the newsies get another source of shocking headlines to get eyeballs for the ad revenue -- write about AI and the old "take over the world ploy"!
So, maybe now some profs of applied statistics, what for a while was called mathematical sciences, etc., or other profs of applied math want to get in on the party. Maybe.
What can be done with resampling tests? I don't know that there is any significant market for such: Long ago I generalized such things to a curious multidimensional case and published the results in Information Sciences. The work was a big improvement on what we were doing in AI at IBM's Watson lab for zero day monitoring of high end server farms and networks. Still, I doubt that my paper has ever been applied.
One of the best areas for applied statistics is the testing of medical drugs. Maybe at times resampling plans have been useful there.
I have a conjecture that resampling plans are closely tied to the now classic result in mathematical statistics that order statistics are always sufficient statistics. Sufficient statistics is cute stuff, from the Radon-Nikodym theorem in measure theory and, in particular, from a 1940s paper of Halmos and Savage, then both at the University of Chicago. Some of the interest is that sample mean and sample variance are sufficient for Gaussian distributed data, and that means that, given such data, you can always do just as well in statistics with only the sample mean and sample variance and otherwise just throw away the data. IIRC E. Dynkin, student of Kolmogorov and Gel'fand, long at Cornell, has a paper that this result for the Gaussian is in a sense unstable: If the distribution is only approximately Gaussian, then the sufficiency claim does not hold.
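The sufficiency claim for the Gaussian can be made concrete: the likelihood depends on the data only through the sample size, the sum, and the sum of squares (equivalently, the sample mean and variance), so two different datasets agreeing on those are indistinguishable to any Gaussian model. A toy check, with datasets contrived for illustration:

```python
import math

def gaussian_loglik(data, mu, sigma):
    """Log-likelihood of data under N(mu, sigma^2); note it depends on
    the data only through len(data), sum(x), and sum(x*x)."""
    n = len(data)
    return (-n * math.log(sigma * math.sqrt(2 * math.pi))
            - sum((x - mu) ** 2 for x in data) / (2 * sigma ** 2))

# Two different datasets with the same n, sum (6), and sum of squares (20),
# hence the same sample mean and variance.
a = [0.0, 2.0, 4.0]
b = [(3 - math.sqrt(13)) / 2, (3 + math.sqrt(13)) / 2, 3.0]

# For any (mu, sigma), the two datasets give the same likelihood:
same = abs(gaussian_loglik(a, 1.7, 2.3) - gaussian_loglik(b, 1.7, 2.3)) < 1e-9
```

So, given such data and a Gaussian model, keeping only the sample mean and variance loses nothing -- which is exactly what sufficiency says.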
Other applications of such resampling and related applied math might be in US national security. E.g., maybe monitoring activities in North Korea and looking for significant changes ....
Maybe there would be applications in A/B testing in ad targeting, but I wouldn't hold my breath looking for a job offer to do such from a big ad firm.
For all I know, some Wall Street hedge fund or some Chicago commodities fund uses such statistics to look for significant changes in the markets or anomalies that might be exploited. I doubt it, but maybe! Once I showed my work in anomaly detection to some people at Morgan Stanley, back before the 2008 crash of The Big Short, and there was some interest for monitoring their many Sun workstations but no interest for trading!
Net, IMHO, for such applied math: if you can find a serious application, that is, a serious problem where such applied math gives a powerful, valuable solution, the first good or a much better solution, with a good barrier to entry, and cheap, fast, and easy to bring on-line and monetize, then be a company founder and go for it. But I wouldn't look for venture funding for such a project before revenue was significant, growing rapidly, and equity funding was no longer needed!
Otherwise look for job offers (1) in US national security, (2) in medical research, (3) wherever else. But don't hold your breath while waiting.
Now you may just have gotten enough from about 1/3rd of the Berkeley course!
What you are describing is known as bootstrapping (if sampling with replacement), jackknifing (if sampling without replacement), or, in the case where you want to run a significance test rather than simply build a distribution or statistics like confidence intervals, a permutation test. I think you already know that; I'm just mentioning it in case others want to look these up by name. Also, while these can be called 'distribution free', that only means you are not assuming a prefab distribution. If you want to perform a significance test, you'll be creating (explicitly or implicitly) a distribution of your calculated statistic (known as the empirical distribution). If you want to be very explicit about this, you can plot a PDF or CDF of your sampled stats just like you could with a Gaussian, exponential, Poisson, etc., distribution.
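For contrast with the without-replacement split described above, here is a minimal bootstrap sketch (resampling with replacement); the scores are invented for illustration:

```python
import random

def bootstrap_means(data, n_resamples=1000, seed=0):
    """Bootstrap: resample WITH replacement, at the same size as the data,
    and collect the mean of each resample. The result is the empirical
    distribution of the sample mean."""
    rng = random.Random(seed)
    n = len(data)
    return [sum(rng.choices(data, k=n)) / n for _ in range(n_resamples)]

scores = [72, 81, 64, 90, 77, 68, 85, 79, 74, 88]
means = sorted(bootstrap_means(scores))
# A rough 95% percentile confidence interval for the mean,
# read straight off the empirical distribution:
lo, hi = means[int(0.025 * len(means))], means[int(0.975 * len(means))]
```

Plotting a histogram or CDF of `means` shows the empirical distribution explicitly, just as one would plot a fitted Gaussian.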
We teach these methods to our students in intro stats at UC San Diego, and have for as long as I've been here (5 years). Last year a data science program was also created here at UCSD; I've TA'd a flagship course in that program too. It's almost exactly the same content; the major difference, imo, is the faculty personalities. The stats profs are smug, while the data science profs are energetically self-important. They teach the same shit. Self-motivated students with a STEMy personality tend to learn more in the stats courses because the profs drive on hard-core theory; on average, though, students do better in the data science course because the profs are so bombastic the kids walk out of each class thinking they are basically ready to join the fellas over at Waymo on some machine learning projects - maybe even show 'em a thing or two, cutting-edge tricks learned back at the ol' uni.
There's a lot of value in your posts. Mathematizing problems, when successful, brings elegant solutions with well understood properties. Hence, I don't understand the downvotes you are usually getting.
I'm a pure CS / logician by training, but I've spent a few years trying to expand my expertise into probability theory and stochastic processes. Lots of your advice resonates with me. My MSc advisor recommended I should go through Neveu. He was pretty good, had been a student of Pontryagin.
There are at least two big turn-offs to this course at first blush:
1) They insist on using Anaconda (effectively yet another package manager, complicating the already layered interaction of system pip, virtualenv, virtualenvwrapper, etc.).
2) They use Microsoft Visual Studio Code (so, inevitably, a good deal of time in this course will be spent learning how to navigate a bloated IDE).
What exactly is Data Science? It seems like such an overused term and the value of the subject really gets diluted for me when I see charts in Tableau being offered as examples of "data science".
What's the difference between, say, a Master's program in Computer Science where one studies machine learning and a Master's program in Data Science? Am I wrong for thinking the Data Science program weaker?
The CS program should focus more on data structures and algorithms (and possibly UX and good ol' software dev as well), and the DS program should focus more on statistical/analytical methods and their particular nuances and limitations. If the DS program is done well, with a lot of stats classes, then it is not a weaker program.

Data Science and DevOps are both just labels for things people have been doing under more mundane terms for 40-odd years. Even Machine Learning is just a trendy buzzword for what used to be called predictive statistics.
Berkeley also used to have a Data Science with Spark series on edX (https://www.edx.org/xseries/data-science-engineering-apacher...), but they taught it just the one time and now even the archived versions of the courses are closed.
Many if not all of the courses on edX have a free audit option, like this one. (There are two ways you can follow a course: the certificate program is paid, but the audit track is free.) Auditing gives you no certificate, and often you cannot access or submit the exercises.