wwalker3's comments

wwalker3 | 3 years ago | on: Computer proof ‘blows up’ centuries-old fluid equations

If you use the theory of nonlinear partial differential equations to analyze the behavior of compressible materials, you can find what are called the "characteristic" speeds, which are the speeds that various types of waves propagate at.

Compressible materials tend to have two different characteristic speeds, one for sound waves and one for shock waves.

The speed of sound basically works out to speed = sqrt(stiffness / density). So as a material gets stiffer, the speed of sound goes up. An infinitely stiff (i.e. incompressible) material by implication would have an infinite speed of sound, though this can't happen in any real material.
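As a quick sanity check on that formula, here's a minimal sketch in Python. The bulk-modulus and density figures are rough textbook values I'm supplying for illustration, not anything from the discussion:

```python
import math

def sound_speed(stiffness_pa, density_kg_m3):
    """Characteristic sound speed c = sqrt(stiffness / density)."""
    return math.sqrt(stiffness_pa / density_kg_m3)

# Rough textbook values (my assumptions):
# air:   bulk modulus ~1.42e5 Pa, density ~1.2 kg/m^3
# water: bulk modulus ~2.2e9 Pa,  density ~1000 kg/m^3
print(sound_speed(1.42e5, 1.2))    # ~344 m/s
print(sound_speed(2.2e9, 1000.0))  # ~1483 m/s
```

As stiffness grows with density held fixed, the speed grows without bound, which is the incompressible limit described above.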

Shock waves travel faster than sound, at a speed related to the pressure difference across the shock. The greater the pressure difference, the faster the shock travels. So if you had an infinite pressure difference, you could have an infinitely fast shock wave, but again this can't happen in the real world.
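For an ideal gas, the standard Rankine-Hugoniot relations make this concrete: the shock's Mach number depends only on the pressure ratio across it. A small sketch (the choice of gamma = 1.4, i.e. air, is my assumption):

```python
import math

def shock_mach(p_ratio, gamma=1.4):
    """Shock Mach number for an ideal gas, from the Rankine-Hugoniot
    relations: Ms = sqrt(1 + (gamma+1)/(2*gamma) * (p2/p1 - 1)),
    where p_ratio is p2/p1, the pressure ratio across the shock."""
    return math.sqrt(1.0 + (gamma + 1.0) / (2.0 * gamma) * (p_ratio - 1.0))

for pr in (1.0, 2.0, 10.0, 100.0):
    print(pr, shock_mach(pr))
# Ms = 1 (an ordinary sound wave) when p2/p1 = 1, and it grows
# without bound as the pressure ratio does.
```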

However, sound and shock speeds only apply to pressure waves in a material. Other influences like gravity and electromagnetism travel at the speed of light. So for example, if you're doing fluid dynamics for plasma, then you'll have a third characteristic speed, the speed of light, because of the charged nature of the material attracting and repelling itself.

There are also more exotic characteristics, like the speed of a propagating combustion front in a flammable material. But when you get to this level you're no longer just solving one simple set of differential equations.

wwalker3 | 3 years ago | on: Computer proof ‘blows up’ centuries-old fluid equations

If mathematicians could solve these kinds of problems, they could answer valuable questions like "Will this equation always have a physically meaningful solution?" If the answer was "No", then we would know that the equation can't be a faithful model of reality.

We already know that the incompressible Euler equations can't be a faithful model, for reasons I've mentioned elsewhere in the thread. But I think the hope is that if they can answer these questions for incompressible Euler, then they can eventually extend their techniques to more complex fluid equations like Navier-Stokes, which people generally assume (but can't yet prove) is physically reasonable.

Simulation has great practical value, but it doesn't give you any guarantees about the behavior of the solutions for all the cases you haven't actually tried.

wwalker3 | 3 years ago | on: Computer proof ‘blows up’ centuries-old fluid equations

It's the equations themselves that are singular. When we write simulators, we usually have to paper over the singularities that are inherent in the math.

For example, if you're simulating charged particles moving around using the force equation F = k q1 q2 / d^2 (1), then as the distance d between particles approaches zero, the force F goes to infinity.

For atoms, it works the same way. If you use a force law like Lennard-Jones (2), it also has the interatomic distance in the denominator, so the equation has a singularity baked right in.

You could always adopt a more complex force equation that doesn't have a singularity in it. But in practice, it's easier to use a simple but singular equation, and then selectively ignore its bad behavior.
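Here's a minimal sketch of that "simple but singular, then selectively ignore" pattern, using a Lennard-Jones potential. The r_min cutoff value is an arbitrary choice for illustration:

```python
def lj_potential(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential, singular at r = 0."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def lj_potential_capped(r, eps=1.0, sigma=1.0, r_min=0.5):
    """A common simulator trick (my illustration, with an arbitrary
    cutoff): clamp the separation so the singularity is never reached."""
    return lj_potential(max(r, r_min), eps, sigma)

for r in (2.0, 1.0, 0.5, 0.1):
    print(r, lj_potential(r), lj_potential_capped(r))
# lj_potential blows up as r -> 0; the capped version stays finite.
```

The singularity is still there in lj_potential; the cap just keeps the simulator from ever evaluating it.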

1) https://en.wikipedia.org/wiki/Coulomb%27s_law

2) https://en.wikipedia.org/wiki/Interatomic_potential

wwalker3 | 3 years ago | on: Computer proof ‘blows up’ centuries-old fluid equations

The density can be constant, but it doesn't have to be. If the density field starts out with some variation in it, then those variations move around as the fluid flows. Incompressibility just means that those density variations can't get bigger or smaller, they can only move, shear, and rotate.
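A toy 1D sketch of that picture: advect a density blob with a spatially uniform velocity (which is trivially divergence-free, i.e. incompressible). The grid size and blob values are arbitrary:

```python
# A density field with one variation in it, on a periodic 1D grid.
N = 16
density = [1.0] * N
density[3] = 2.0  # the density variation

def advect_one_cell(rho):
    """Shift the field one cell to the right (periodic boundary),
    i.e. advection by a uniform velocity of one cell per step."""
    return rho[-1:] + rho[:-1]

for _ in range(5):
    density = advect_one_cell(density)

# The blob has moved, but its value hasn't grown or shrunk, and
# the total mass is unchanged.
print(max(density), min(density), sum(density))
```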

wwalker3 | 3 years ago | on: Computer proof ‘blows up’ centuries-old fluid equations

Pretty much any mathematical model of a real phenomenon can have some sort of singularity or discontinuity in it.

If you model atoms as dimensionless points (1), then any kind of force law with the distance between atoms in the denominator can lead to a singularity when that distance is zero. In practice, you write the simulator to disallow this, but it's still there in the equations, you're just ignoring it.

If you model your atoms as finite-sized but incompressible billiard balls, then when they hit each other it's a discontinuity, since they instantly change direction when they collide. These collisions conserve total momentum and energy, but they're unphysical because real physical quantities can't jump from one value to another (in classical physics).
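The standard 1D elastic-collision formulas make that discontinuity-plus-conservation point concrete:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a 1D elastic collision of hard
    spheres. The velocities jump discontinuously, but total momentum
    and kinetic energy are conserved."""
    v1p = ((m1 - m2) * v1 + 2.0 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2.0 * m1 * v1) / (m1 + m2)
    return v1p, v2p

v1p, v2p = elastic_collision_1d(1.0, 1.0, 1.0, -1.0)
print(v1p, v2p)  # equal masses simply swap velocities: -1.0 1.0

# Momentum and kinetic energy, before and after:
print(1.0 * 1.0 + 1.0 * -1.0, 1.0 * v1p + 1.0 * v2p)
print(0.5 * 1.0**2 + 0.5 * (-1.0)**2, 0.5 * v1p**2 + 0.5 * v2p**2)
```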

Even if you model your atoms as little rubber balls, the model can still be singular. Linear elasticity (2) (the most common choice) allows you to compress a finite-sized object down to zero size with finite energy, which yields infinite energy density. Again, you'd have to disallow that in the simulator, which is very practical, but not theoretically satisfying.
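A 1D spring makes the finite-energy/infinite-energy-density point concrete (the stiffness and rest length here are arbitrary illustration values):

```python
# Compressing a linear spring of stiffness k from rest length x0 all
# the way down to zero length stores only the finite energy
# 0.5 * k * x0**2, even though the compressed "object" occupies a
# vanishing length -- so the energy *density* is unbounded.
k, x0 = 100.0, 1.0
for length in (0.5, 0.1, 0.001):
    e = 0.5 * k * (x0 - length) ** 2   # stored energy, always below 50.0
    print(length, e, e / length)       # but energy per unit length blows up
```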

1) https://en.wikipedia.org/wiki/Molecular_dynamics is the typical method of atomistic simulation.

2) https://en.wikipedia.org/wiki/Linear_elasticity

wwalker3 | 3 years ago | on: Computer proof ‘blows up’ centuries-old fluid equations

The incompressible Euler equations model a fluid as a two-valued field. This means that at every point in space, the field has two values, density and velocity (1).

To me (2), a singularity in a field like this means that one or more of the field values "blows up", i.e. goes to infinity as you run the time variable forward.

But how could this ever happen? The Euler equations model the "conservation" (i.e. constant-ness) of three real physical quantities: mass, momentum, and energy. If these three quantities are finite and constant when you add them up over the whole field, how can any part of it "blow up" into an infinite value?

The answer is that the blow-up must occupy a volume that shrinks as the blow-up grows, so the conserved quantities are still constant. The singularity would be infinitely small in space, and have an infinite value of density or velocity (or both).
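A toy 1D version of that shrinking-support picture: a density spike of height h on an interval of width 1/h always integrates to 1, no matter how large h gets.

```python
# The peak value grows without bound while the conserved total
# (the integral of the spike) stays fixed.
for h in (1.0, 10.0, 1e6, 1e12):
    width = 1.0 / h
    total = h * width   # the integral of the spike
    print(h, total)     # h blows up, total stays ~1
```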

The hard question is, are these blow-ups merely artifacts of a particular numerical simulation technique, or are they essential somehow to the incompressible Euler equations themselves? That's what these papers are trying to figure out.

To me, an "essential" (i.e. inherent-in-the-equations) blow-up seems intuitively reasonable because of the acausal nature of the field. When you simulate the incompressible Euler equations, it superficially looks like it's a physical fluid doing physical-fluid things, swirling and flowing around. But in a real fluid, a change in one part of the fluid propagates to the other parts at finite velocity, creating real cause and effect.

An Euler fluid's time evolution is not a phenomenon that ripples forward through time in a normal way. Instead, every point in the fluid responds to every other point simultaneously. If you poke a cube of incompressible Euler fluid with your finger, there is no pressure wave that ripples through it, where the fluid parcels push each other along and get out of each other's way. Instead, the whole cube of fluid somehow instantly adopts a new flow pattern that conserves mass/momentum/energy in response to that finger-poke.

1) Note that velocity is a vector, since it has a direction. This means that in 2D the velocity is two numbers, and in 3D it's three numbers. So technically the 3D incompressible Euler equations have four values at every point: one density, and three velocity components, one each in the x, y, and z directions.

2) I'm a numerical simulation guy, not a mathematician. Real math experts have rigorous definitions of a singularity, e.g. in https://arxiv.org/pdf/2203.17221.pdf "Singularity formation in the incompressible Euler equation in finite and infinite time," Theodore D. Drivas and Tarek M. Elgindi.

wwalker3 | 3 years ago | on: Computer proof ‘blows up’ centuries-old fluid equations

The question that the referenced paper (1) is trying to answer is "do the 3D incompressible Euler equations develop a finite time singularity from smooth initial data of finite energy?" This is an important question in the theory of nonlinear partial differential equations, but is probably not as relevant to real fluid flow as a lay reader might imagine.

The incompressible Euler equations model a very strange and unphysical kind of fluid. Incompressibility means that the speed of wave propagation in such a fluid is infinite, which means that normal causality is not respected. Effects in such a fluid happen simultaneously with their causes.

For example, if you apply a force to one end of a pipe full of Euler fluid, the fluid instantly starts coming out of the other end of the pipe, with no time taken for this effect to propagate from one end of the pipe to the other. You could use a long pipe full of Euler fluid as a superluminal communication device!

Intuitively, it seems reasonable that in such an unphysical fluid, it would be possible to form a singularity even from smooth initial conditions. The difficulty, of course, is proving that intuition, which is what the paper is trying to do.

1) https://arxiv.org/pdf/2210.07191.pdf "Stable nearly self-similar blowup of the 2D Boussinesq and 3D Euler equations with smooth data", Jiajie Chen and Thomas Y. Hou.

wwalker3 | 6 years ago | on: Deep Learning for Symbolic Mathematics

That's a good point. Guess-and-verify could be a handy additional heuristic method if Mathematica's other methods came up empty on a problem. I've also heard of machine learning being used to choose between internal algorithms available in formal proof systems, to try to pick the algorithm that's most likely to work instead of just trying them all sequentially.

wwalker3 | 6 years ago | on: Deep Learning for Symbolic Mathematics

The authors have shown a very nice and (to me) non-intuitive result. But they're playing a little fast and loose with their comparison to Mathematica. They're comparing their algorithm's accuracy (solution correctness vs. incorrectness) with Mathematica's ability to find the correct solution in less than 30 seconds. This is a very important distinction! Mathematica will never silently return an incorrect solution (barring software bugs, of course). And Mathematica can often take minutes to evaluate what appears to be a simple integral, so a 30-second timeout is far too short, unless you're simply trying to compare the computational efficiency of the two approaches.

There may be other subtleties as well. Mathematica works in the complex domain by default, which makes many operations more difficult, but the authors discard expressions which contain complex-valued coefficients as "invalid", which makes me think they're implicitly working in the real domain. Do they restrict Mathematica to the real domain when they invoke it? Perhaps, but they don't say one way or the other. And do they try common tricks like invoking FullSimplify[] on an expression/equation before attempting to operate on it? I'd like to see more details of their methodology.

wwalker3 | 13 years ago | on: Android Renderscript from the perspective of an OpenCL/CUDA/C++ AMP programmer

I suspect that at least some of these "flaws" are intentional, and are meant to make programming easier, at the expense of some performance.

For example, three of the poster's points (not allowing device property querying, not allowing the programmer to choose where a kernel runs, and not exposing local memory to the programmer) all make programming easier, though they also disallow some types of performance tuning.

One big potential reason for doing GPGPU on a mobile device is to get better energy efficiency per gigaflop, rather than to get huge overall performance like on a desktop GPGPU. In this context, squeezing out all possible performance may not be as important.

wwalker3 | 13 years ago | on: Android Renderscript from the perspective of an OpenCL/CUDA/C++ AMP programmer

It does look like mobile GPU vendors are about to start offering OpenCL support. For example, ARM submitted OpenCL 1.1 Full Profile conformance test results for the Mali-T604 last year (http://blogs.arm.com/multimedia/775-opencl-with-arm-mali-gpu...), and Imagination Technologies showed mobile OpenCL demos last year at CES (http://www.youtube.com/watch?v=sDrz-w1jzEU).

It's easy to see why OpenCL hasn't rolled out fully on mobile GPUs yet: writing and debugging a full OpenCL software stack is very expensive and time-consuming, and there's still not that much real programmer demand for OpenCL on mobile.

As for Renderscript, it's always sounded like a bit of "not invented here" syndrome on Google's part -- we've already got CUDA and OpenCL, and RS doesn't really bring much new to the table. They've already deprecated the 3D graphics part of Renderscript in Android 4.1, so perhaps they'll do the same to Renderscript Compute soon.

wwalker3 | 13 years ago | on: A letter to the TEDx community on TEDx and bad science

When I saw the TEDx guys' caution about a "physics-related speaker [who] has a degree in engineering, not physics", it struck a bit of a nerve for me.

My Ph.D. is in electrical engineering, not physics. I recently got a computational physics paper accepted in a peer-reviewed journal, and I'm hard at work on a second paper. And in a different world, I could speak at a conference and no one would have to worry about my bona fides.

But I think the TEDx caution is well-founded.

Engineers are often accustomed to knowing more about science than the average person. It can be very easy for them, with the best of intentions, to convince themselves and others that they know more than they really do. It's easy to think you've got some great new idea if you don't engage existing experts in the field via peer review and reading papers.

This is not to say electrical engineers can't be authorities on physics topics -- quite the contrary! But I agree with the TEDx guys that it does merit a bit of extra checking, especially in their situation.

wwalker3 | 13 years ago | on: Are You Doing Research?

I work in the R&D division of a microprocessor company. Most of what we do, I would call "research", but not quite "science". We investigate things that are too risky or time-consuming for our product design groups to look into, with an eye towards making our company money in the future. We fund Ph.D. students in electrical engineering and computer science at university labs, and collaborate with them. But we don't (generally) push back the frontiers of human knowledge in our day-to-day work.

Things might be different at (say) IBM Research Zürich, doing work on atomic force microscopy, but that sort of thing seems to be the exception rather than the rule in industrial R&D. I don't know if I would have expected to see hardcore science at Sun Labs where the article writer's friend worked.

wwalker3 | 13 years ago | on: How low (power) can you go?

Stross' comparison between Cray 1 performance and that of a modern smartphone seems off. He says: "A regular ARM-powered smartphone, such as an iPhone 4S, is some 12-13 orders of magnitude more powerful as a computing device than a late 1970s-vintage Cray 1 supercomputer."

A Cray 1 could peak at about 250 MFLOP/s (http://en.wikipedia.org/wiki/Cray-1), and a modern smartphone like the Galaxy Nexus peaks at about 9.6 GFLOP/s (using ARM Neon instructions on both cores). That's less than two orders of magnitude difference.
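The arithmetic, using the two peak figures quoted above:

```python
import math

cray_1 = 250e6        # Cray 1 peak, ~250 MFLOP/s (figure quoted above)
galaxy_nexus = 9.6e9  # ~9.6 GFLOP/s with NEON on both cores (quoted above)

ratio = galaxy_nexus / cray_1
print(ratio, math.log10(ratio))  # ~38x, i.e. under two orders of magnitude
```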

Floating-point power efficiency seems to have improved by about 6-7 orders of magnitude in that time though, which is very nice :)

wwalker3 | 14 years ago | on: Engineers boost AMD CPU by 20% with software alone; no overclocking

It sounds like the NCSU guys are using the CPU as a prefetcher to speed up GPU kernel execution, not using the GPU to speed up normal CPU programs as the ExtremeTech article implies.

The CPU parses the GPU kernel and creates a prefetcher program that contains the load instructions of the GPU kernel. This prefetcher runs on the CPU, but slightly ahead of kernel execution on the GPU. This warms up the caches, so that when the GPU executes a load instruction, the data is already there.

wwalker3 | 15 years ago | on: Hacker with iPhone takes over Times Square screens

Humorously enough, early television in the UK (from 1929-1935) had only 30 lines of resolution and so could be recorded at audio frequencies onto a "Phonovision" disk. You can see some examples of the low quality of the results at http://www.tvdawn.com/recordng.htm.

For a little while, the BBC did broadcasts of 30-line video using two AM radio frequencies (one for the video, one for the audio) that could be picked up by special receivers.

wwalker3 | 15 years ago | on: Overcome Fear in 2011. Get Rejected On Purpose

Having to reject someone can be a very stressful experience. I don't like the idea of purposefully inflicting that stress on others unless there's a chance we both could benefit.

"Getting rejected on purpose" sounds like I'd be picking situations where I know for sure I'll be rejected -- an inappropriate proposition, an undeserved request. I don't want to do that to someone else just to try to desensitize myself to rejection.
