item 42046743

robbie-c | 1 year ago

Yeah, I wrote my CS dissertation on this. It started as me writing a GPGPU video codec (for a simplified h264), and turned into me writing an explanation of why this wouldn't work. I did get somewhere with a hybrid approach (use the GPU for a first pass without intra-frame knowledge, followed by a CPU SIMD pass to refine), but it wasn't much better than a pure CPU SIMD implementation and used a lot more power.

astrange | 1 year ago

x264 actually gets a little use out of GPGPU - it has a "lookahead" pass which does a rough estimate of encoding over the whole video, to see how complex each scene is and how likely parts of the picture are to be reused later. That can be done in CUDA, but IIRC it has to run like 100 frames ahead before the speed increase wins over the CPU<>GPU communication overhead.
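The "has to run like 100 frames ahead" observation above is a classic amortization trade-off: the GPU is faster per frame, but each round trip pays a fixed transfer/launch cost, so offloading only wins once the batch is large enough. A minimal sketch of that cost model (the function name and all timing numbers are made-up assumptions for illustration, not x264's actual figures or code):

```python
import math

def break_even_frames(cpu_ms_per_frame, gpu_ms_per_frame, transfer_ms_fixed):
    """Smallest batch size n where n*gpu + transfer < n*cpu.

    Returns None if the GPU never wins (its per-frame cost is not lower).
    All inputs are hypothetical timings in milliseconds.
    """
    if gpu_ms_per_frame >= cpu_ms_per_frame:
        return None
    per_frame_saving = cpu_ms_per_frame - gpu_ms_per_frame
    # Need n * per_frame_saving > transfer_ms_fixed (strictly), so take
    # the floor of the ratio and go one frame past it.
    return math.floor(transfer_ms_fixed / per_frame_saving) + 1

# Hypothetical example: CPU does the rough per-frame estimate in 2 ms,
# the GPU in 0.5 ms, but each round trip costs a fixed 150 ms of
# upload/download/kernel-launch latency.
n = break_even_frames(cpu_ms_per_frame=2.0,
                      gpu_ms_per_frame=0.5,
                      transfer_ms_fixed=150.0)
print(n)  # → 101: batches smaller than this are faster on the CPU
```

With these made-up numbers the GPU only pays off past ~100 frames, which is the same order of magnitude the comment recalls for the lookahead pass.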