The frame rate is incredible given the number of particles. JavaScript interpreters really have come a long way.
The frame rate of this JS particle simulation beats the frame rate you get when you ask the browser to append DOM content, which is native C/C++ code optimized to death.
To the author: great work. It looked very much alive on my screen when I first loaded the page.
Does anyone have a theoretical description of what is happening? I'm curious about the fractal patterns being generated, and the increase in entropy that results in a catastrophic failure of the stable system. Is this some kind of chaotic system, or is it just a force field being applied to the particles?
Half a million particles are rendered to a texture, which is then blurred, and the gradients are used to update the velocities of the particles in a feedback loop. There's no direct interaction between individual particles. There are two terms that describe the haphazard behavior: look up "dissipative systems" and "stigmergy" on Google or Wikipedia.
I moved the mouse very slightly at the edge, which caused all the particles to eventually settle into two separate spinning groups. Then I started making fast circles around one of the groups, and once it started following the pattern, I slowly moved the center of rotation towards the other, undisturbed group. This caused it to drift slowly towards the stationary one until they collided, each ripping particles from the other, and finally merged.
Galaxies would require coupling of the particles, i.e. interactions through gravity... that would kill the efficiency of the solver. Naive N-body simulations are O(N^2) per step (tree methods like Barnes-Hut get that down to O(N log N), but that's still far more work than this O(N) scheme).
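For concreteness, here is a minimal sketch (plain JS, every name made up for illustration) of the naive all-pairs gravity step such a coupling would require; the nested loop is what makes it O(N^2):

```javascript
// Hypothetical sketch: a naive O(N^2) gravity step, the kind of all-pairs
// coupling that would be needed for galaxies. This is not code from the demo.
function gravityStep(px, py, vx, vy, dt, G) {
  const n = px.length;
  for (let i = 0; i < n; i++) {
    let ax = 0, ay = 0;
    // inner loop over all other particles: n * (n - 1) force evaluations
    for (let j = 0; j < n; j++) {
      if (i === j) continue;
      const dx = px[j] - px[i], dy = py[j] - py[i];
      const r2 = dx * dx + dy * dy + 1e-6;   // softening avoids blow-ups
      const inv = G / (r2 * Math.sqrt(r2));  // ~ G / r^3, gives the direction
      ax += dx * inv;
      ay += dy * inv;
    }
    vx[i] += ax * dt;
    vy[i] += ay * dt;
  }
  // positions are advanced after all forces, so the step stays symmetric
  for (let i = 0; i < n; i++) {
    px[i] += vx[i] * dt;
    py[i] += vy[i] * dt;
  }
}
```

With the demo's half a million particles, that inner loop would mean roughly 2.5 * 10^11 force evaluations per frame, which is why the texture feedback approach avoids particle-particle coupling entirely.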
I'm the original author of this little WebGL experiment and I want to try to answer some questions that came up here.
1) implementation
There's quite a bit of boilerplate in JS to set up all the textures and the main animation loop, but if you look closely, the CPU is mostly idle and the "heavy lifting" is all done on the GPU by several shader programs. There are no libraries used, so you can take a copy of the HTML file and simply start breaking things apart.
For the massive speed, I'm updating the particle data with a clever fragment shader trick that I learned from https://twitter.com/BlurSpline/status/161806273602519040
And in a DIY fashion, I've mashed this up with my own texture feedback loop.
The main idea is that the particle positions (and the velocity vectors too, each 2D only) are stored in a texture's RGBA values (float32). Updating the particle data is therefore just a render pass of a quad in one draw call. The particles are then also rendered to another texture to sum up the "density" of the primitive one-pixel point projections.
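The state-in-texture trick can be mimicked on the CPU with two Float32Array "textures" that are ping-ponged each frame. This is only an illustrative sketch of the layout described above (names and channel assignment are assumptions), not the actual shader code:

```javascript
// CPU analogue of the state-in-texture trick: each "texel" holds
// (x, y, vx, vy) in its rgba channels, and one pass over the whole array
// plays the role of drawing one full-screen quad into an FBO.
const W = 1024, H = 512;                   // 1024 * 512 = 524288 particles
const texA = new Float32Array(W * H * 4);  // current state texture
const texB = new Float32Array(W * H * 4);  // render target for the next state

function advance(src, dst, dt) {
  for (let i = 0; i < src.length; i += 4) {
    const x = src[i], y = src[i + 1];        // r, g: position
    const vx = src[i + 2], vy = src[i + 3];  // b, a: velocity
    dst[i]     = x + vx * dt;
    dst[i + 1] = y + vy * dt;
    dst[i + 2] = vx;                         // forces would be applied here
    dst[i + 3] = vy;
  }
}
// per frame: advance(texA, texB, dt), then swap texA and texB (ping-pong),
// just like alternating between two framebuffer objects on the GPU
```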
2) complexity
When it comes to the mere particle count, the complexity really is O(n), but there's a bit more to it. The projection of the particles to the framebuffer object or the screen is the most costly part of this setup, and it's fill-rate-limited by the graphics card. There's a noticeable difference between the particles being evenly distributed and overlapping, but it stays in the O(n) realm, I suppose. Then there's another texture feedback loop system that is also directly dependent on the pixel count.
The particles are stored in a 1024x512 texture, and the hidden texture feedback layer is the same size, though it wouldn't have to be.
There is absolutely no direct interaction between any two particles here. I project the particles to a density texture, which is then diffused with an optimized two-pass Gaussian blur and several resolution-reduction steps.
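The reason two passes suffice is that a Gaussian kernel is separable: blurring rows then columns gives the same result as one full 2D convolution, at O(k) instead of O(k^2) taps per pixel. A toy version in plain JS (kernel values and edge clamping are illustrative, not the demo's actual shader):

```javascript
// Separable blur: one tiny 1D Gaussian applied twice (rows, then columns).
const kernel = [0.25, 0.5, 0.25];  // 1D weights, sum to 1

function blur1D(grid, w, h, horizontal) {
  const out = new Float32Array(w * h);
  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      let sum = 0;
      for (let k = -1; k <= 1; k++) {
        // clamp taps at the edges (like CLAMP_TO_EDGE texture sampling)
        const sx = horizontal ? Math.min(w - 1, Math.max(0, x + k)) : x;
        const sy = horizontal ? y : Math.min(h - 1, Math.max(0, y + k));
        sum += kernel[k + 1] * grid[sy * w + sx];
      }
      out[y * w + x] = sum;
    }
  }
  return out;
}

function blur2D(grid, w, h) {  // two passes: horizontal, then vertical
  return blur1D(blur1D(grid, w, h, true), w, h, false);
}
```

On the GPU each pass is one full-screen draw sampling the previous pass's texture, which is why the two-pass form is the standard optimization.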
All the textures from the different steps are available as input samplers to the shader programs, in particular "fs-advance" for the Turing patterns and the density projection (hey there, the blue channel is unused ^^), and "fs-move-particles", where I simply grab the gradient from the diffused density to update each particle's velocity vector and do the Verlet integration.
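A CPU sketch of that last step may help (all names and signs here are assumptions, and a plain explicit integration step stands in for the Verlet integration): sample the diffused density around the particle, take a finite-difference gradient, and nudge the velocity along it.

```javascript
// Hypothetical sketch of the "fs-move-particles" idea: steer each particle
// by the gradient of the diffused density texture. Not the demo's shader.
function moveParticle(p, density, w, h, dt, strength) {
  const xi = Math.round(p.x), yi = Math.round(p.y);
  // clamped texture lookup
  const at = (x, y) => {
    const cx = Math.min(w - 1, Math.max(0, x));
    const cy = Math.min(h - 1, Math.max(0, y));
    return density[cy * w + cx];
  };
  // central differences approximate the density gradient
  const gx = (at(xi + 1, yi) - at(xi - 1, yi)) * 0.5;
  const gy = (at(xi, yi + 1) - at(xi, yi - 1)) * 0.5;
  p.vx += gx * strength * dt;  // accelerate along the gradient
  p.vy += gy * strength * dt;
  p.x += p.vx * dt;            // integrate position
  p.y += p.vy * dt;
}
```

Since every particle reads the same shared density texture that all particles wrote into, the coupling is indirect, through the environment, which is exactly the "stigmergy" idea mentioned below.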
The concepts used here also have names - just ask Google or Wikipedia about "dissipative systems/structures" and "stigmergy".
3) the fluid simulation code is not by me!
Evgeny Demidov is the original author of the WebGL shaders for that: http://www.ibiblio.org/e-notes/webgl/gpu/fluid.htm
I'm only adding to the current advection matrix.
4) code size
This could possibly fit into a 4k demo, but I have no interest in that kind of challenge. I'd rather share something that is easily readable by others.
Cheers!
This is really very impressive and thanks for sharing!
How many individual particles are visible at any given moment?
Would it be possible to control the movement of the particles by the fluid field to form predefined shapes? To make them cluster into predefined (even moving) areas?
For some time now, I've wondered whether it would be possible to visualize population statistics (think the percentile wealth distribution, for example ;) with something like this, making every single person "visible" within the statistic.
I believe it would make many "distributions" more intuitively understandable. Imagine several thousand "very wealthy" entities contrasted with thousands or millions of "average" entities ;)
Obviously such interactive diagrams easily hit hardware/software limits, but even with a mapping like one point equals several hundred or thousand real-world entities, I think such a display of real-world magnitude would be very engaging.
Thanks again.
It's nice to see this up here; I've been a fan of your work since the MilkDrop days. Is that when you started making shaders? I've got to say MD2 was one of the best shader playgrounds I've ever used, and it's how I was introduced to shader programming. You should do cinematic special effects :)
Took me a long time to notice that my cursor movements were injecting disturbances into the fluid. What physical laws govern these points, and how is the cursor perturbing them?
DanielRibeiro | 12 years ago:
Three.js also has some nice, easy-to-follow particle samples:
http://threejs.org/examples/#webgl_particles_random
http://threejs.org/examples/#webgl_particles_sprites
http://threejs.org/examples/#webgl_buffergeometry_particles
clarkmoody | 12 years ago:
Also, a way to reset the simulation without reloading the page would be nice.
DanBC | 12 years ago:
Using Google Chrome Version 31.0.1650.57 m; Chrome reports it is up to date.
pwnna | 12 years ago:
It's pretty slow on my computer. Would asm.js work better?
fekberg | 12 years ago:
It seems to use OpenGL ES; it doesn't work in IE 11.
Edit: It would be interesting to know whether this would perform better in IE 11 using WebGL. Does anyone have any thoughts on that?
crashandburn4 | 12 years ago:
I've had a look here: http://creativejs.com/2013/11/coupled-turing-pattern-and-219... but there are no details on the specifics.
Bhel | 12 years ago:
It'd be nice if the color gradually changed.