> What a lot of math learners fail to understand is that grinding through concrete examples imbues you with intuition that you will not get if you jump directly to studying the most abstract ideas.
I feel that's more a lesson for a lot of math teachers to understand. I remember some frustrating linear algebra, calculus and computational complexity courses where the lecturer basically threw some formulas onto the blackboard, went line-by-line through a formal proof of their correctness and then called it a day. Giving actual examples of the application of the formula was an afterthought left to the student aides. Giving examples that could explain the derivation of the formula was not even considered.
It always reminded me of someone teaching an "introduction to vi" course but then just scrolling through vi's source code without any further explanation - and in the end expecting the students to be able to fluently use vi.
> > What a lot of math learners fail to understand is that grinding through concrete examples imbues you with intuition that you will not get if you jump directly to studying the most abstract ideas.
> I feel that's more a lesson for a lot of math teachers to understand.
That's certainly true, but the teachers who teach that way were probably once students who tried to learn that way (I was one on both counts, though I got better), and it'll be better for them as teachers if they learn the lesson as students.
Adding, not disagreeing, but at some point, those abstract concepts like dot products become concrete on their own when you get into things like SIMD programming.
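To make that concrete (a toy sketch, not anyone's production code): a dot product is just a multiply-accumulate loop, which is exactly the shape a SIMD unit executes four or eight lanes at a time.

```python
# A dot product is "multiply pairwise, then add up" - the loop a SIMD
# unit vectorizes across lanes instead of running one element at a time.
def dot(a, b):
    assert len(a) == len(b)
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 1*4 + 2*5 + 3*6 = 32.0
```

Seeing the abstraction reappear as this concrete loop is often the moment it clicks.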
Does it matter whether the professor or the teaching assistant is the one giving the examples?
The professor is in an awkward position, because the professor at the front of the large-group lecture hall doesn't have much of a way to add value. Watch videos, read the book, work the exercises, and then go to recitation or office hours for interactive tutoring.
Yeah. It's kind of ironic given how unhappy the author sounds about people being unable to figure out what's going on after seeing the same thing over and over again.
That's the same principle that makes it wise to first do some copy-pasting before you abstract stuff into a common library. Gathering some (more than two) concrete use cases before you factor out the common functionality makes for much better library functions.
A common sign of prematurely deduplicated code is a common function with lots of boolean flags and other knobs to tweak its behaviour for every use case added after it was written.
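A toy illustration of that smell (hypothetical render functions, purely for shape):

```python
# Prematurely deduplicated: one function, plus a knob per caller
# that was bolted on after it was written.
def render(text, as_html=False, escape=False, truncate=None):
    if truncate is not None:
        text = text[:truncate]
    if escape:
        text = text.replace("&", "&amp;").replace("<", "&lt;")
    if as_html:
        text = "<p>" + text + "</p>"
    return text

# Often clearer once the real use cases are known: separate functions.
def render_plain(text, limit):
    return text[:limit]

def render_html(text):
    escaped = text.replace("&", "&amp;").replace("<", "&lt;")
    return "<p>" + escaped + "</p>"
```

Each flag doubles the number of paths through the "shared" function, and most callers only ever exercise one of them.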
You owe it to yourself to type out your copied code again. So easy to lose track of context when you’re moving fast. Variables have the wrong name, conditionals can’t be false, etc. Copy, yes, but don’t paste. Do it the hard way.
The ideal heuristic is to only deduplicate towards more orthogonality.
Suppose you started from first principles and navigated down the ontology tree taking one of the shortest paths (well, shortest-ish since it's a hard problem that'll burn up too much time otherwise.) Would you encounter your deduplicated function on the way? If not, it's making it worse.
This is presumably about math problems, but I always approach programming problems this way.
One issue I have with math problems is that sometimes I wish I could immediately go down one level of abstraction and see something like a physics or programming problem that applies the specific problem I'm working on, or a related one. I haven't found a resource like this yet.
https://rosettacode.org/wiki/Rosetta_Code
Not a guide, per se, but it has implementations of a wide variety of math and programming concepts in a large number of different programming languages.
And why is it so hard to have math education based on good concrete contextualized examples, vs just rules and problem sets? Understanding the “why” behind the math is often lacking… and math doesn’t always need to be applied, that’s ok— but if it can be, it is so much easier to understand
This was something that was really lacking in my statistics degree. We were always learning different distributions, proofs and estimation methods but very rarely applied them to actual problems. I feel like you can kinda get away with this type of thing more in math, but in statistics, it makes things super hard to learn.
I kinda wish you could just take a course on a specific distribution. Like, here's the Poisson class where you learn all of its interesting properties and apply it to e.g. queuing problems.
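As a taste of what such a course might open with (a quick sketch using only the standard library, not from any actual curriculum):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam): exp(-lam) * lam^k / k!"""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Toy queueing question: a server averages 3 requests per second.
# How likely is it to see more than 5 in a given second?
lam = 3.0
p_at_most_5 = sum(poisson_pmf(k, lam) for k in range(6))
print(1 - p_at_most_5)  # roughly 0.084
```

Grinding a handful of questions like this builds more feel for the distribution than another proof of its moment generating function.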
I’m adopting an Example Driven Development approach in my latest project. The project is a library/framework with a public interface. EDD is great in this case. To explore and experiment new features, I turn the use cases into runnable examples and use those to drive the internal implementation. This combines feature implementation, documentation, examples, and testing in one shot.
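For anyone curious what that looks like in miniature, here is a sketch of the idea with made-up names (the greet function is hypothetical, not from the parent's project): the runnable example doubles as spec, documentation, and test.

```python
# examples/greet_basic.py - a runnable example driving the implementation.

def greet(name, shout=False):
    # The library code under development, shaped by the example below.
    message = "Hello, " + name
    return message.upper() + "!" if shout else message + "."

# The example is the spec: run it while developing, ship it as docs.
if __name__ == "__main__":
    assert greet("Ada") == "Hello, Ada."
    assert greet("Ada", shout=True) == "HELLO, ADA!"
    print("all examples pass")
```

When the example file runs clean, the feature, its documentation, and a regression test all exist at once.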
Yes, exactly. See [0] for a concrete example of an abstraction that can't be grokked without grinding concrete examples, and how people who have already grokked it forget this fact.
[0]: https://byorgey.wordpress.com/2009/01/12/abstraction-intuiti...
A concept I learned about in Knowledge Based AI (Georgia Tech OMSCS) is called “version spaces”: instead of starting at specific examples and moving to more general, or the other way around, you do both, as a kind of bidirectional knowledge search. I feel humans work that way too. We need both specific examples and generic models to help us converge to a deeper understanding of topics.
But on the other hand, 2 or 3 elements of the sum are usually enough, i.e. you probably wouldn't improve understanding by writing out the first 10 elements or so.
FYI that article is an interview with Jo Boaler, the main architect of removing middle school Algebra in SF and California (attempted). Her views are pretty controversial among those promoting math excellence.
That's not to say there isn't anything worthwhile in the article, but I figured people would want that context.
[0] https://news.stanford.edu/stories/2019/09/embrace-struggle-e...
This applies to code as well. I worked on an internal PostgreSQL protocol driver for work and I've been focusing on understanding the binary message format inside and out, then the server state machine and query behaviour, and only then building a driver.
Don't underestimate the value of time spent grinding the low levels for code that is fundamentally important to what you are doing. You come away with a much stronger understanding than if you just hack a solution, cargo culting some existing code and just shipping it.
Which of course the author always does and so they don't get what they've done to people.
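For flavor, a minimal sketch of the v3 framing being discussed (not the parent's actual driver): after startup, every message is a 1-byte type code followed by a self-inclusive big-endian int32 length.

```python
import struct

def build_query_message(sql: str) -> bytes:
    """Encode a PostgreSQL v3 'Q' (simple query) message:
    1-byte type, int32 big-endian length (including the length field
    itself, excluding the type byte), NUL-terminated query string."""
    body = sql.encode("utf-8") + b"\x00"
    return b"Q" + struct.pack("!I", 4 + len(body)) + body

def parse_header(buf: bytes):
    """Split off the type byte and declared length of one message."""
    msg_type = buf[0:1]
    (length,) = struct.unpack("!I", buf[1:5])
    return msg_type, length

msg = build_query_message("SELECT 1")
print(parse_header(msg))  # (b'Q', 13): 4 bytes of length + 9 of body
```

Once this framing is second nature, the server state machine on top of it is far less mysterious.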
The author is generalizing their preference here to say it's "right". Some brains need this, others need the abstraction before the examples provide the most benefit.
As noted by others this preference influences how one learns code effectively too. It's a pretty basic trait.
The author's stated preference is most common but it is not the only one.
> others need the abstraction before the examples provide the most benefit
Would anyone need examples if they understand the abstraction? They can stamp out their own examples, can't they?
Needing examples after you have a grasp of the abstraction would be like saying 'I need help coming down this zip line', whereas discovering or arriving at the abstraction is the result of working through and distilling n number of examples. To relate to the analogy, that's like climbing the hill in the first place.
Yes. I think I did well in math/physics classes in college and grad school just because I found the most effective way to study, for me, was to grind out as many problems as possible. Go through the old homework, old exams, books, anywhere you can find relevant problems with solutions to check against.
It's not just for you. It's the most effective way. I know several people who all crushed college etc. and only later realized maybe they aren't so smart, maybe they just stumbled into the right way (by an order of magnitude) to study... hell, I'd redo problems and find depth or nuance in the same problem that I didn't see the first time.
Makes sense. I always intuitively understood that going from, for example, electronic engineering to computer science is probably easier than the other way around, but this article makes a great point. Without some low-level knowledge, you do not fully understand the higher level. Then you can only parrot the higher level (I’m also looking at you here, LLMs).
I'll go you one further. Many Electrical Engineers who learn about the progression of transistors, logic gates, flip-flops, registers, arithmetic logic units, central processing units, instructions, and so on, have such a profound epiphany about how all complex systems are merely collections of less complex systems working in concert, that they can't help but to see the rest of the world through that same lens, at all times.
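That progression fits in a few lines (a toy sketch with booleans standing in for voltages): start from NAND and everything else is composition.

```python
# Everything-from-NAND, the epiphany in miniature:
# one primitive, composed upward layer by layer.
def nand(a, b): return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

# Half adder: the first rung on the ladder toward an ALU.
def half_adder(a, b):
    return xor_(a, b), and_(a, b)   # (sum, carry)

print(half_adder(True, True))  # (False, True): 1 + 1 = binary 10
```

Each layer only has to trust the one below it, which is exactly the lens described above.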
It depends. Some people most certainly grasp things better if they are concrete, grounded in their perception of reality. For others, they see each 'example' as a mere partial projection at best that completely fails to capture the true essence of the abstraction.
That said, in sloppy engineering we often see the reverse. 'Here's a meta-model I cooked up overnight. Now if you spend the next 3 years gathering all the domain knowledge and expressing it in my nifty notation, you can have the outline of a potential candidate for your question. I'll write up a journal paper tomorrow about how I solved your problem'. There was a lot of that around when I was in academia.
With history classes in high school I wish they bumped up the abstraction level a little sooner. All those dates and names of important figures, almost completely useless information imho.
> All those dates and names of important figures, almost completely useless information imho.
As someone with an amateur's interest in history, I suspect that it's useless in the same way that learning words from a dictionary, or memorising multiplication tables, is useless. Learning words from a dictionary gives you no facility with a language, but you can't build much facility with a language if you don't know its words. Similarly, multiplication tables are useless for doing mathematics, but I think that it is good, both for the practice of math and in the real world, to have some basic number sense. (This is perhaps more contentious. My opinion is certainly shaped by the fact that I was among probably one of the last generations to be taught this skill, and am glad that I have it, but don't really know what it's like to grow up in a world where the skill is regarded as completely irrelevant.)
That's not to say that history classes don't lean too much on the names-and-dates approach. After all, it's easier for the teacher, both for preparing lessons and for evaluations—it's a lot easier to decide unambiguously whether a student has a date correct than if, say, that student has correctly understood the historical significance of some important event.
Don't know how I finished my CS degree, because most of the theory I just understood years after leaving university and doing some hands-on work...
That feels different to me. Grinding scales helps with muscle memory and technique. There are certainly aspects of that with math, especially with algebraic manipulations. Doing math problems can yield a deeper understanding of the underlying concepts and how they behave. Thinking you understand doesn’t cut it.