top | item 30101637

neel_k | 4 years ago

It sort of depends upon the course. This particular syllabus looks like a really standard linear algebra course (I have no idea what is robotics-specific about it), and so of course Gauss-Jordan would show up there.

More generally, though, I think there are two main reasons to put algorithms like Gauss-Jordan in the engineering curriculum.

1. While it would be nice to treat linear algebra solvers as a perfect black box, in practice this does not work. Engineers have to be aware of numerical stability issues, how to diagnose when this is an issue, and how to reformulate their routines in a way that resolves the problem. And to do this, they need to know the library of techniques.

A reasonable way of teaching this is to teach Gauss-Jordan, then show how it goes badly awry, and then show how techniques like pivoting fix it.
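
A minimal sketch of that progression, using a hypothetical 2x2 system (my own toy example, not from any course material): with a tiny pivot, elimination without row exchanges destroys the answer, and partial pivoting repairs it.

```python
def solve2x2(A, b, pivot=False):
    """Gaussian elimination on a 2x2 system, with optional partial pivoting."""
    A = [row[:] for row in A]
    b = b[:]
    if pivot and abs(A[1][0]) > abs(A[0][0]):
        # Partial pivoting: put the larger entry on the diagonal.
        A[0], A[1] = A[1], A[0]
        b[0], b[1] = b[1], b[0]
    m = A[1][0] / A[0][0]          # elimination multiplier
    A[1][1] -= m * A[0][1]
    b[1] -= m * b[0]
    x1 = b[1] / A[1][1]            # back-substitution
    x0 = (b[0] - A[0][1] * x1) / A[0][0]
    return [x0, x1]

# The true solution is very close to [1, 1].
A = [[1e-20, 1.0], [1.0, 1.0]]
b = [1.0, 2.0]
print(solve2x2(A, b, pivot=False))  # [0.0, 1.0]: the huge multiplier swamps the data
print(solve2x2(A, b, pivot=True))   # [1.0, 1.0]: pivoting keeps the multiplier small
```

Without pivoting, the multiplier 1/1e-20 = 1e20 wipes out the information in the second row; swapping rows first keeps every multiplier at most 1 in magnitude.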

2. Personally, though, I find the topic of numerical stability to be a little bit depressing, since it focuses on all the ways computers don't work!

To take a more positive view, a huge fraction of the algorithms in an undergraduate CS course -- from finite automata to parsing to relational algebra to graph traversals -- can be understood as basically doing linear algebra using modules over different semirings (rather than just the reals). Eg, for all-pairs shortest paths, the Floyd-Warshall algorithm is doing an LU decomposition, and Kleene's algorithm is doing Gauss-Jordan.

Not every student will enjoy this, but for the ones who are algebraically minded, it's really exciting to be able to offer them a unified perspective. And then you can show them the GraphBLAS library!
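
As a toy sketch of that semiring view (the graph and the `mat_mul` helper are my own illustration, not the GraphBLAS API): the same matrix-product loop computes shortest paths once ordinary (+, ×) is swapped for (min, +).

```python
import math

def mat_mul(A, B, add, mul, zero):
    # Matrix product with (+, *) replaced by an arbitrary semiring's
    # addition and multiplication.
    n = len(A)
    C = [[zero] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = zero
            for k in range(n):
                acc = add(acc, mul(A[i][k], B[k][j]))
            C[i][j] = acc
    return C

# Edge-weight matrix over the tropical (min, +) semiring:
# "addition" is min, "multiplication" is +, "zero" is infinity.
INF = math.inf
W = [[0, 3, INF],
     [INF, 0, 1],
     [2, INF, 0]]

# Repeated squaring of W yields all-pairs shortest paths --
# the same closure Floyd-Warshall computes in place.
D = W
for _ in range(2):
    D = mat_mul(D, D, min, lambda a, b: a + b, INF)
print(D[0][2])  # 4: the shortest path 0 -> 1 -> 2
```

Swapping in `(or, and)` with `zero=False` on a boolean adjacency matrix gives reachability instead; that's the whole trick.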

Tainnor|4 years ago

> 2. Personally, though, I find the topic of numerical stability to be a little bit depressing, since it focuses on all the ways computers don't work!

Maybe a more positive way to reformulate this: there is no a priori reason to assume that floating-point numbers are well behaved. The fact that we were able to come up with a structure that approximates the real numbers adequately, that arithmetic operations on it are fast (which they aren't with infinite precision), and that, if we design our algorithms correctly, its errors are well behaved, is an astonishing feat of engineering.
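
A tiny illustration of that point: individual IEEE 754 double operations are not exact, but each one is correctly rounded, so the error stays bounded by machine epsilon.

```python
import sys

eps = sys.float_info.epsilon  # 2**-52 for IEEE 754 doubles
s = 0.1 + 0.2
print(s == 0.3)            # False: neither 0.1 nor 0.2 is exactly representable
print(abs(s - 0.3) < eps)  # True: but the error is tiny and bounded
```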

kevin_thibedeau|4 years ago

Computers work perfectly fine. Engineering was done for a long time with slide rules, to avoid the tedium of looking up values in a book and grinding out results by hand or with an adding machine. Using a slide rule correctly requires knowing its limitations, just as knowing the limitations of a computer is important. Neither is a magic oracle that always gives correct answers.

iamcreasy|4 years ago

> To take a more positive view, a huge fraction of the algorithms in an undergraduate CS course -- from finite automata to parsing to relational algebra to graph traversals -- can be understood as basically doing linear algebra using modules over different semirings (rather than just the reals). Eg, for all-pairs shortest paths, the Floyd-Warshall algorithm is doing an LU decomposition, and Kleene's algorithm is doing Gauss-Jordan.

That blew my mind! I loved these undergrad and grad courses but never realized they were connected. Could you please point me to a few books or resources that go over these relationships?

a_zaydak|4 years ago

I really agree with your comments, especially number 1. Often I can't use some black-box implementation of a solver (or other algorithm, for that matter) without some modifications. Numerical stability is a big one, but so is plain performance. The mathematically correct way of doing something is not always the best in practice; shortcuts and approximations can provide huge benefits. It is difficult to make those modifications without understanding the inner workings of the original method.
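
One generic example of such a shortcut (my own illustration, unrelated to any particular solver): when only comparisons matter, you can drop a monotone transform from the "correct" formula and get the same answer for less work.

```python
import math

points = [(3.0, 4.0), (1.0, 1.0), (6.0, 2.0)]
q = (0.0, 0.0)

# "Correct" way: compare true Euclidean distances.
nearest_exact = min(points, key=lambda p: math.hypot(p[0] - q[0], p[1] - q[1]))

# Shortcut: sqrt is monotone, so comparing squared distances picks the
# same winner while skipping the sqrt entirely.
nearest_fast = min(points, key=lambda p: (p[0] - q[0])**2 + (p[1] - q[1])**2)

print(nearest_exact == nearest_fast)  # True: same answer, less work
```

Knowing *why* this is safe (monotonicity of sqrt) is exactly the kind of inner-workings knowledge the black box hides.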

Tainnor|4 years ago

One example is that GJ stops being efficient in arbitrary-precision settings. Many people will never have to deal with this, but if you are doing cryptography, it matters.

For example, while you can use GJ to calculate the determinant of a matrix, the intermediate entries can easily grow exponentially in size, and for integer matrices (more generally, matrices over an integral domain) there is an alternative, fraction-free method (the Bareiss algorithm) that is actually efficient.
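
A pure-Python sketch of that alternative (my own illustration of the Bareiss scheme, not a library implementation): fraction-free elimination keeps every intermediate value an exact integer, with each division by the previous pivot guaranteed to be exact.

```python
def bareiss_det(M):
    """Determinant of an integer matrix via Bareiss fraction-free elimination."""
    A = [row[:] for row in M]
    n = len(A)
    sign = 1
    prev = 1  # previous pivot; divisions by it are exact
    for k in range(n - 1):
        if A[k][k] == 0:
            # Swap in a row with a nonzero pivot (flips the sign).
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r] = A[r], A[k]
                    sign = -sign
                    break
            else:
                return 0  # whole pivot column is zero: singular matrix
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # The defining Bareiss step: 2x2 minor divided by the
                # previous pivot, which always divides evenly.
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[-1][-1]

print(bareiss_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3, computed exactly
```

The intermediate entries are themselves minors of the original matrix, so they stay polynomially sized instead of blowing up the way naive fraction-free elimination does.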