wayoverthecloud | 1 year ago
Also, since you mentioned scaling systems and equations, are you by any chance working on numerical linear algebra stuff like iterative solvers, MPI/HPC, etc.? If so, I am in HPC as well.
fn-mote|1 year ago
Be careful of "surface level work". Your Ph.D. is not going to come from surface level work. It will come from picking something and developing a deep understanding of that topic ... deeper than most of the people in the audience, anyway.
Also, since you mention HPC, be aware of what areas are "well-explored" and stay away from them. Your advisor should be able to help with this. You want an area that has not been dug up by many brilliant minds before you, leaving only small nuggets of semi-precious metal for you to find.
verdverm|1 year ago
The field I researched in was Symbolic Regression / Genetic Programming. My research was able to recover differential equations and systems of equations from data. We worked with GLEON to help scientists better understand lake dynamics through such systems.
https://verdverm.com/projects/pge if you want to learn more
I'm no longer in academia, I'm working on ATProto things lately
graycat|1 year ago
Been there; done some of that:
Once worked on numerical linear algebra, e.g., Gauss-Seidel. Then ran into the M. Newmann numerically exact technique based on (i) multiplying by a suitable power of 10 so that all the entries are whole numbers, (ii) for a list of prime numbers, solving the system in the integers modulo each prime, and (iii) constructing the multi-precision rational results using the Chinese remainder theorem.
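A minimal sketch of that residue-plus-CRT idea in Python (my own illustration of the technique, not Newmann's code): solve the integer system modulo several primes, combine the per-prime answers with the Chinese remainder theorem, then recover exact rationals by rational reconstruction.

```python
from fractions import Fraction

def solve_mod_p(A, b, p):
    """Solve A x = b over the integers mod prime p by Gauss-Jordan elimination."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [b[i] % p] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)  # pivot row
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, p)            # modular inverse (Python 3.8+)
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % p for j in range(n + 1)]
    return [row[n] for row in M]

def crt(r1, m1, r2, m2):
    """Combine x = r1 (mod m1) and x = r2 (mod m2) into x mod m1*m2."""
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return (r1 + m1 * t) % (m1 * m2), m1 * m2

def rational_reconstruct(a, m):
    """Recover p/q with |p|, q <= sqrt(m/2) from a = p * q^-1 (mod m)."""
    bound = int((m // 2) ** 0.5)
    r0, t0, r1, t1 = m, 0, a % m, 1
    while r1 > bound:                            # extended Euclid, stopped early
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return Fraction(r1, t1)

def solve_exact(A, b, primes):
    """Exact rational solution of an integer linear system via CRT."""
    sols = [solve_mod_p(A, b, p) for p in primes]
    x = []
    for i in range(len(A)):
        r, m = sols[0][i], primes[0]
        for s, p in zip(sols[1:], primes[1:]):
            r, m = crt(r, m, s[i], p)
        x.append(rational_reconstruct(r, m))
    return x

# exact solution of [[2,1],[1,3]] x = [3,5] is (4/5, 7/5)
print(solve_exact([[2, 1], [1, 3]], [3, 5], [1000003, 1000033]))
```

No rounding error anywhere: every step is integer arithmetic, and the answer comes back as exact fractions.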
From a course, a rule: "For numerical calculation, multiply and divide freely, add OK, but avoid subtraction, especially avoid subtracting two numbers whose difference is small, i.e., nearly equal."
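That rule is the classic warning about catastrophic cancellation. A small illustration (my example, not from the course): computing 1 - cos(x) for tiny x directly subtracts two nearly equal numbers and loses every significant digit, while the algebraically equivalent half-angle form avoids the subtraction entirely.

```python
import math

def one_minus_cos_naive(x):
    # subtracts two nearly equal numbers when x is small: catastrophic cancellation
    return 1.0 - math.cos(x)

def one_minus_cos_stable(x):
    # half-angle identity 1 - cos(x) = 2 sin^2(x/2) avoids the subtraction
    return 2.0 * math.sin(x / 2.0) ** 2

x = 1e-8
# true value is about 5e-17; in double precision cos(1e-8) rounds to exactly 1.0,
# so the naive form returns 0.0 while the stable form keeps full accuracy
print(one_minus_cos_naive(x), one_minus_cos_stable(x))
```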
One day, talking with Richard Bartels, I mentioned that I had once wanted a random unitary matrix, so I generated some random vectors and applied the Gram-Schmidt process. Right away Bartels responded that Gram-Schmidt is "numerically unstable", to which I replied, "Wondered about that, so applied Gram-Schmidt twice." In general, Bartels has done a lot in numerical methods.
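The "applied Gram-Schmidt twice" remark is the well-known "twice is enough" reorthogonalization trick. A sketch with NumPy (my example, under assumed ill-conditioning): one pass of classical Gram-Schmidt on a badly conditioned matrix visibly loses orthogonality, and a second pass restores it to near machine precision.

```python
import numpy as np

def classical_gram_schmidt(V):
    """Orthonormalize the columns of V by classical Gram-Schmidt."""
    V = np.asarray(V, dtype=float)
    Q = np.zeros_like(V)
    for j in range(V.shape[1]):
        # subtract all previously computed components at once (classical variant)
        v = V[:, j] - Q[:, :j] @ (Q[:, :j].T @ V[:, j])
        Q[:, j] = v / np.linalg.norm(v)
    return Q

rng = np.random.default_rng(0)
n = 8
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
W, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.logspace(0, -8, n)) @ W     # condition number ~ 1e8

Q1 = classical_gram_schmidt(A)                 # one pass: orthogonality degraded
Q2 = classical_gram_schmidt(Q1)                # second pass: restored
err1 = np.linalg.norm(Q1.T @ Q1 - np.eye(n))
err2 = np.linalg.norm(Q2.T @ Q2 - np.eye(n))
print(err1, err2)   # err1 is many orders of magnitude worse than err2
```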
In summary, in my experience over many decades, a LOT has been done on numerical linear algebra, including iterative methods, for the simplex algorithm, etc. E.g., at one time I used Linpack -- it seemed terrific; on a computer with a 1.8 GHz clock, I called Linpack 11,000 times a second.
As I recall, in numerical linear algebra there are some fundamental issues having to do with the eigenvectors of the polar decomposition; e.g., one can argue that because of these issues, sometimes Gauss-Seidel cannot work well.
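On Gauss-Seidel sometimes not working well: a standard sufficient condition for convergence is strict diagonal dominance (or symmetric positive definiteness), and losing it can make the iteration diverge outright. A small sketch (my example, not from the comment above):

```python
import numpy as np

def gauss_seidel(A, b, iters=100):
    """Plain Gauss-Seidel sweeps for A x = b, updating components in place."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            # use already-updated x[:i] and not-yet-updated x[i+1:]
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

b = np.array([6.0, 8.0, 9.0])

# strictly diagonally dominant: Gauss-Seidel converges
A1 = np.array([[4.0, 1.0, 1.0],
               [1.0, 5.0, 2.0],
               [1.0, 2.0, 6.0]])
x1 = gauss_seidel(A1, b)

# same rows reordered so dominance is lost: the iteration blows up
A2 = np.array([[1.0, 5.0, 2.0],
               [4.0, 1.0, 1.0],
               [1.0, 2.0, 6.0]])
x2 = gauss_seidel(A2, b, iters=50)
print(np.linalg.norm(A1 @ x1 - b), np.linalg.norm(x2))
```

Note the failure here is just reordered rows of a solvable system, so "the method diverges" is a property of the splitting, not of the underlying problem.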
Might look at some of the Golub LU decomposition work, e.g., in linear programming.
If you can find some problems where you can get new, correct, significant results, okay. Maybe you can get some problems from some of the current AI work.
Examples: (1) Took an industrial problem, did some math and computing, and got an engineering-style solution; some other students in the department did much the same, but for different practical problems. (2) Had a course in optimization; in the summer I went over the notes word by word and rewrote them, got deep into the subject, found an unanswered question, and got a solution; along the way I found a surprising, general result, wrote a paper, got it accepted right away at Mathematical Programming, and published it later elsewhere. (3) Working in some AI to monitor systems, I wanted results with meager assumptions, so used the tightness of that general result. Each of these is an example of how to find a problem and get some results.
But broadly for some decades, "numerical linear algebra", including iterative approaches, is a 'well plowed field'.