It's still just doing exactly what shaders do, which is crazy.
wokwokwok|1 year ago
Explain to me exactly why, other than 'I guess someone already implemented some kind of basic version of it', you would have to have custom CPU code rendering glyphs instead of a shader rendering SDFs like literally everyone does with shaders already?
It's not a good solution. It's a bad, easy solution.
We have a solution for running arbitrary GPU-accelerated graphics instructions; it has a cross-platform version with WebGPU.
This font thing... looks a lot like 'not invented here' syndrome to me, as an uninvolved spectator.
Why would you choose, or want, not to use GPU acceleration to render your glyphs?
What 'arbitrary code' does a font need to run that couldn't be implemented in a shader?
Maybe the horse has already bolted; yes, I understand programmable fonts already exist. But geez, it's incomprehensible to me, at least from what I can see.
> Explain to me exactly why, other than 'I guess someone already implemented some kind of basic version of it', you would have to have custom CPU code rendering glyphs instead of a shader rendering SDFs like literally everyone does with shaders already?
vg_head|1 year ago
Shaping is a different problem from rendering the glyphs themselves. SDF renderers (and other GPU text renderers like Slug) still do shaping on the CPU, not in shaders. Maybe some experiments have been done in this area, but I doubt anyone shapes text directly on the GPU in practice.
Think of it like a function that takes text as input and returns glyph positions as output. Shaders don't really know anything about text. Sure, you could probably implement it if you wanted to, but why would you? It would add complexity for no benefit (not even performance).
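To make the distinction concrete, here is a toy sketch of shaping as a pure function from text to positioned glyphs. The glyph ids, advances, and kerning pair below are all invented for illustration; a real shaper like HarfBuzz does the same kind of job with vastly more rules (ligatures, contextual substitution, reordering), still on the CPU.

```python
# Toy shaper: text in, (glyph id, x position) pairs out. Everything in
# these tables is made up for illustration -- not a real font's data.
GLYPHS  = {"A": 1, "V": 2, "o": 3}   # cmap: character -> glyph id
ADVANCE = {1: 600, 2: 600, 3: 500}   # horizontal advance per glyph
KERNING = {(1, 2): -80}              # 'A' followed by 'V' tucks in

def shape(text):
    """Return a list of (glyph_id, x_offset) pairs: positions, not pixels."""
    out, pen_x, prev = [], 0, None
    for ch in text:
        gid = GLYPHS[ch]
        if prev is not None:
            pen_x += KERNING.get((prev, gid), 0)  # contextual adjustment
        out.append((gid, pen_x))
        pen_x += ADVANCE[gid]
        prev = gid
    return out

print(shape("AV"))  # the kerning pair pulls 'V' back by 80 units
```

Note that nothing here touches pixels: the output of shaping is the *input* to rasterization, which is the part SDFs and shaders address.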
bsder|1 year ago
> CPU code rendering glyphs instead of a shader rendering SDFs
1) Because SDFs suck badly (and don't cover the whole field) when you want to render sharp text. SDFs are fine in a game where everything is mapped to textures and is in motion at weird angles. SDFs are not fine in a static document which is rendered precisely in 2D.
2) Because GPUs handle "conditional" anything like crap. GPUs can apply a zillion computations as long as those computations apply to everything. The moment you want some of those computations to apply only to some things, GPUs fall over in a heap. Every "if" statement wipes out half your throughput.
3) Because "text rendering" is multiple problems smashed together. Text rendering is vector graphics--taking outlines and rendering them to a pixmap. Text rendering is shaping--taking text and a font and generating outlines. Text rendering is interactive--taking text and putting a selection or caret on it. None of these things parallelize well, except maybe vector rendering.
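Point 2 can be illustrated with a toy lockstep ("SIMT") simulation: lanes in a warp that disagree on a branch must step through both sides of it, with the inactive side masked off, so a 50/50 split roughly doubles the instruction slots spent. The warp size and per-branch costs below are invented for illustration.

```python
# Toy SIMT lockstep model: a warp steps through the union of both branch
# paths; lanes on the untaken side are masked off but still consume slots.
def warp_cost(lane_conditions, then_cost, else_cost):
    """Instruction slots one warp spends on an if/else, given each lane's branch."""
    lanes = len(lane_conditions)
    slots = 0
    if any(lane_conditions):       # warp runs the 'then' path if any lane takes it
        slots += then_cost * lanes
    if not all(lane_conditions):   # ...and the 'else' path if any lane skips it
        slots += else_cost * lanes
    return slots

uniform   = warp_cost([True] * 32, then_cost=10, else_cost=10)
divergent = warp_cost([True] * 16 + [False] * 16, then_cost=10, else_cost=10)
print(uniform, divergent)  # divergence doubles the cost in this model
```

This is a simplification of real hardware (which can sometimes reconverge early or predicate cheaply), but it captures why per-glyph conditional logic maps poorly onto shader execution.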
It has nothing to do with shaders? Despite the name, shaping is not the same thing as a shader; shaping selects and places individual glyphs given a sequence of code points.
Jasper_|1 year ago
No part of the rasterizer or renderer is configurable here. As mentioned above, the rasterizer is already programmable with up to two different bespoke stack-based bytecode languages, but that has nothing to do with shaping through wasm.