"Thinking quickly, I made a megaphone by taking a squirrel and tying it to a megaphone I had."
Seriously: This is an interesting article because it's about more than just rendering video, it's about creating a new video procedurally and then rendering it with that pre-existing rendering program. It would be nice if the title reflected that.
Another way to do it on the fly is to feed image data into stdin of ffmpeg. I haven't tried with BMP with that method, so the intermediary encoding might not be spared.
Im gonna need to dive further into this, I do js canvas and webgl rendering to videos with the chrome/puppeteer via 'puppeteer-screen-recorder'.
But they do at times cause some issues. Latest 3d renders have been causing memory issues that I think would be solved with bigger boxes but havent need to investigate for a little while.
Have thought about it just outputting frames and then later having ffmpeg sticth them into a video, but havent gotten around to really testing it.
Im guessing this limited to 2d canvas, but excited to
check it out. Thanks!
WebGL/3D works fine, just with some additional dependencies (e.g. mesa drivers) and a little more setup in Nodejs to create the context and copy the framebuffer to node-canvas to do the image encoding.
Here's a little 3D animation I've rendered using a similar technique (plus WebGL) in a docker container:
The main thing to watch out for is whether you need specific WebGL extensions that might not be supported. Array instancing is the main one I use, which is supported.
[+] [-] msla|3 years ago|reply
Seriously: This is an interesting article because it's about more than just rendering video, it's about creating a new video procedurally and then rendering it with that pre-existing rendering program. It would be nice if the title reflected that.
[+] [-] notpushkin|3 years ago|reply
Two improvements you can borrow:
1. FFmpeg can read frames from HTTP URL, which you can (ab)use to generate frames on the fly without hitting the disk
2. If you do that, you can also switch from PNG to BMP, since CPU is now your bottleneck
[+] [-] brianshaler|3 years ago|reply
[+] [-] wartron|3 years ago|reply
But they do at times cause some issues. Latest 3d renders have been causing memory issues that I think would be solved with bigger boxes but havent need to investigate for a little while.
Have thought about it just outputting frames and then later having ffmpeg sticth them into a video, but havent gotten around to really testing it.
Im guessing this limited to 2d canvas, but excited to check it out. Thanks!
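The "skip PNG, pipe cheap frames straight into ffmpeg" tip from this thread can be sketched in Node. The minimal 24-bit BMP writer below is runnable as-is; the ffmpeg invocation at the bottom is an untested assumption about flags and is left in a function that is never called here.

```javascript
const { spawn } = require('child_process');

// Encode a 24-bit uncompressed BMP from an RGB buffer (3 bytes/pixel, top-down rows).
function encodeBMP(width, height, rgb) {
  const rowSize = Math.ceil((width * 3) / 4) * 4;   // BMP rows are padded to 4 bytes
  const pixelBytes = rowSize * height;
  const buf = Buffer.alloc(54 + pixelBytes);        // zero-fill covers the unused header fields
  buf.write('BM', 0);                               // magic
  buf.writeUInt32LE(54 + pixelBytes, 2);            // total file size
  buf.writeUInt32LE(54, 10);                        // offset of pixel data
  buf.writeUInt32LE(40, 14);                        // BITMAPINFOHEADER size
  buf.writeInt32LE(width, 18);
  buf.writeInt32LE(-height, 22);                    // negative height = top-down rows
  buf.writeUInt16LE(1, 26);                         // color planes
  buf.writeUInt16LE(24, 28);                        // bits per pixel
  buf.writeUInt32LE(pixelBytes, 34);                // image size
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const src = (y * width + x) * 3;
      const dst = 54 + y * rowSize + x * 3;
      buf[dst] = rgb[src + 2];                      // BMP stores pixels as BGR
      buf[dst + 1] = rgb[src + 1];
      buf[dst + 2] = rgb[src];
    }
  }
  return buf;
}

// Assumed ffmpeg wiring (not executed here): read images from stdin, encode H.264.
function streamFrames(frames, out = 'out.mp4') {
  const ff = spawn('ffmpeg', ['-f', 'image2pipe', '-framerate', '30',
    '-i', '-', '-c:v', 'libx264', '-pix_fmt', 'yuv420p', out]);
  for (const f of frames) ff.stdin.write(f);
  ff.stdin.end();
  return ff;
}

// A 2x2 all-red frame, just to show the encoder producing valid output.
const frame = encodeBMP(2, 2, Buffer.from([255, 0, 0, 255, 0, 0, 255, 0, 0, 255, 0, 0]));
```

Since BMP is just a fixed header plus raw pixels, encoding a frame is a memcpy rather than a DEFLATE pass, which is the whole point of the PNG-to-BMP switch.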
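The "dump frames, stitch later" workflow mentioned above mostly comes down to naming the frames the way ffmpeg's image2 demuxer expects (a printf-style pattern such as frame%04d.png). A tiny helper for that, with the stitch command I'd expect to work left in a comment; treat the exact flags as an untested assumption.

```javascript
// Zero-padded frame filenames matching the ffmpeg pattern frame%04d.png.
function frameName(i) {
  return `frame${String(i).padStart(4, '0')}.png`;
}

// After writing frame0000.png ... frameNNNN.png, something like:
//   ffmpeg -framerate 30 -i frame%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4
const names = [0, 1, 42].map(frameName);
```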
brianshaler | 3 years ago
WebGL/3D works fine, just with some additional dependencies (e.g. Mesa drivers) and a little more setup in Node.js to create the context and copy the framebuffer to node-canvas, which does the image encoding.
Here's a little 3D animation I've rendered using a similar technique (plus WebGL) in a Docker container:
https://www.carvana.com/shareyourcarvana/MjM2MTc2OTpBbGV4YW5...
The main thing to watch out for is whether you need specific WebGL extensions that might not be supported. Array instancing is the main one I use, and it is supported.
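The framebuffer copy described above is the step that tends to trip people up: in the usual headless setup, gl.readPixels() returns RGBA rows starting from the bottom of the image (WebGL's origin is the lower-left corner), while image encoders expect top-down rows, so the buffer needs a vertical flip on the way across. A sketch of that flip in plain Node, with the assumed headless-gl/node-canvas wiring left as comments:

```javascript
// Flip an RGBA framebuffer vertically (bottom-up rows -> top-down rows).
function flipRows(pixels, width, height) {
  const rowBytes = width * 4;  // RGBA, 4 bytes per pixel
  const out = Buffer.alloc(pixels.length);
  for (let y = 0; y < height; y++) {
    // Copy source row y into destination row (height - 1 - y).
    pixels.copy(out, (height - 1 - y) * rowBytes, y * rowBytes, (y + 1) * rowBytes);
  }
  return out;
}

// Assumed wiring (packages named by the commenter, not verified here):
//   const gl = require('gl')(width, height);     // headless-gl context
//   const { createCanvas } = require('canvas');  // node-canvas for encoding
//   gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
//   ...then flipRows(pixels, width, height) before writing into the canvas.
```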
sbarre | 3 years ago
0: https://www.remotion.dev/

undershirt | 3 years ago

sandreas | 3 years ago
https://code.videolan.org/jbk/vlc.js

midasuni | 3 years ago
If you start your article with something so blatantly untrue why should I believe the rest?