There is a lot of valid concern about the accessibility and abuse issues this could result in, but I think it's important to see the other side of the argument.
There was a really good thread on Twitter a couple of days ago:
> In light of recent Figma news, lemme reiterate that of all the goods that can happen to the web, 90% of them can't happen due to not having access to font rendering & metrics in JS
https://x.com/_chenglou/status/1951481453046538493
And a few choice replies:
> It’s kind of crazy that a platform specifically designed for presenting text doesn’t provide functionality to manipulate text at a detail level
> Brute forcing text measurement in tldraw breaks my heart
Love it or hate it, the web is a platform for application development, and making this easier is only good for everyone.
My argument on web APIs is that we should continue to go lower level, so font and text metrics APIs for canvas would be awesome as an alternative to this. But I'm also a proponent of "using the platform", and for text layout, web engines are incredible and very performant. Extending that capability to layout inside a canvas enables many awesome features.
One that I've repeatedly gone back to over the years is paginated rich text editing. It's simply impossible to do with contenteditable in a product-level way - one of the reasons Google Docs has a custom layout engine. This proposal would enable full use of contenteditable for rich text, but with full page/print layout control.
I hope it lands in the browsers.
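To make the pagination idea concrete, here is a minimal sketch in plain JavaScript. The `paginate` helper and its caller-supplied block heights are hypothetical, not part of the proposal; in a real editor the heights would come from the browser's layout engine (e.g. `getBoundingClientRect` on rendered elements), which is exactly the capability the proposal would extend into canvas.

```javascript
// Minimal pagination sketch: group content blocks into fixed-height
// pages. Heights are passed in by the caller, so the logic stays pure
// and layout-engine-agnostic.
function paginate(blockHeights, pageHeight) {
  const pages = [];
  let current = [];
  let used = 0;
  blockHeights.forEach((height, index) => {
    // Start a new page when the next block would overflow. A block
    // taller than a page still gets a page of its own; real pagination
    // would also have to split such blocks across pages.
    if (used + height > pageHeight && current.length > 0) {
      pages.push(current);
      current = [];
      used = 0;
    }
    current.push(index);
    used += height;
  });
  if (current.length > 0) pages.push(current);
  return pages; // array of pages, each a list of block indices
}
```

With heights `[100, 200, 300, 50]` and a 400-unit page, this yields two pages: blocks 0-1 and blocks 2-3.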
> of all the goods that can happen to the web, 90% of them can't happen due to not having access to font rendering & metrics in JS
I’d be interested to see a representative excerpt of this person’s “goods that can happen to the web”, because it sounds pretty ridiculous to me. Not much needs that stuff, a lot of it is exposed in JS these days, and much of the rest you can work around without it being ruinous to performance.
It’s also pretty irrelevant here (that is, about HTML-in-Canvas): allowing drawing HTML to canvas doesn’t shift the needle in these areas at all.
> One that I've repeatedly gone back to over the years is paginated rich text editing. It's simply impossible to do with contenteditable in a product level way - one of the reasons Google docs has a custom layout engine.
Why would you want the world's least performant layout/UI engine to infect canvas? This literally just cements the situation you quote about having no access to good APIs.
> It's simply impossible to do with contenteditable in a product level way - one of the reasons Google docs has a custom layout engine. This proposal would enable full use of contenteditable for rich text, but with full page/print layout control.
Why would it enable contenteditable for rich text if you yourself are saying that it doesn't work, and Google had to implement its own engine?
Canvas-first sites suck. They can't use any system services, since that would all be a privacy issue. They can't use the system dictionary for correction, since to do so they'd need the contents of the dictionary, or at least a way to query user-customized corrections. Similarly, they can't offer system-level accessibility and end up having to roll their own, in which case every app that uses canvas has a completely different UI.
What if you want an HTML-first page with a canvas in it, but then realize you want some layout/styling for the text within the canvas? It seems unnecessary to propagate that decision up to the top-level page type.
Where does SVG's `foreignObject` fit into this? It seems that SVG supports most of this proposal already, as evidenced by projects like https://github.com/zumerlab/snapdom that can take "screenshots" of the webpage by copying the DOM, with inlined styles, into a `foreignObject` tag in an SVG. That SVG can then of course be rendered to a canvas.
This proposal is largely an easier way to draw a foreignObject into a canvas, but it also supports new capabilities, such as updating the canvas when the content changes, and interactivity.
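As a sketch of the `foreignObject` technique described above (helper names are hypothetical; in a real implementation the serialized HTML must be XML-valid for `foreignObject` to parse it):

```javascript
// Sketch of the SVG foreignObject "screenshot" trick: wrap serialized,
// XML-valid HTML in an SVG and encode it as a data URL. Drawing that
// URL onto a canvas (below) only works in a browser; building the URL
// itself is pure string work.
function buildSvgDataUrl(xhtml, width, height) {
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<foreignObject width="100%" height="100%">` +
    `<div xmlns="http://www.w3.org/1999/xhtml">${xhtml}</div>` +
    `</foreignObject></svg>`;
  return 'data:image/svg+xml;charset=utf-8,' + encodeURIComponent(svg);
}

// Browser-only: rasterize the SVG by loading it as an image and
// drawing it onto a 2D canvas context.
function drawHtmlToCanvas(ctx, xhtml, width, height) {
  const img = new Image();
  img.onload = () => ctx.drawImage(img, 0, 0);
  img.src = buildSvgDataUrl(xhtml, width, height);
}
```

Note the limitations the proposal addresses: this produces a static bitmap, so the canvas does not update when the content changes, and the drawn pixels are not interactive.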
Please correct me if I'm wrong, but I feel rendering HTML on top of canvas already solves this just fine with vanilla techniques. Canvas is for rendering things you can't render with HTML, not a replacement for the DOM.
Here's a simple example that's currently very hard to do and requires all kinds of hacky and unsatisfying workarounds:
1. A 3d model, say of a statue in a museum
2. Add annotations to the model drawing attention to specific features (especially if the annotations are not just a single word or number)
If you want the annotations to be properly occluded by the model as you move the camera around, it's hard, because you can't really use HTML. If you do use HTML, you'll have to do complex calculations to make it match the correct place in the 3D scene, it will always be a frame delayed, and occlusion is poor - usually you just show or hide the entire HTML annotation based on the bounding box of the 3D model (I have seen better solutions, but they took a ton of work).
So you could use 3D text, maybe SDF, but now you've created an entire text rendering system without accessibility or anything like that. Also, if you want anything more than very simple annotations (for example videos, lists, select menus, whatever), you either have to reinvent them or fall back to HTML.
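For reference, the "complex calculations" mentioned above boil down to a perspective projection of the annotation's 3D anchor point into screen space. A minimal sketch with a hypothetical helper (engines such as three.js expose the equivalent via `Vector3.project`):

```javascript
// Minimal perspective projection: map a point in camera space (camera
// at the origin, looking down -Z) to canvas pixel coordinates, given a
// vertical field of view in radians. This is the calculation an HTML
// overlay must redo every frame, one frame behind the WebGL render.
function projectToScreen(point, fovY, width, height) {
  const [x, y, z] = point;
  if (z >= 0) return null;              // point is behind the camera
  const f = 1 / Math.tan(fovY / 2);     // focal-length factor
  const aspect = width / height;
  const ndcX = (f / aspect) * (x / -z); // normalized device coords, [-1, 1]
  const ndcY = f * (y / -z);
  return {
    x: ((ndcX + 1) / 2) * width,        // NDC -> pixels
    y: ((1 - ndcY) / 2) * height,       // flip Y: canvas Y grows downward
  };
}
```

Even with this, occlusion still has to be faked (e.g. with a depth test against the model's bounds), which is exactly why HTML-in-canvas would help here.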
That only works if the HTML is on top of everything rendered in the canvas; otherwise you need to add another canvas on top of the HTML (and so on for each separate z-layer).
IMHO this step finally starts to fix the "inverted api layer stack" in browsers. All browser rendering should build on top of a universal canvas api.
It should already work if the nested canvas uses the same approach. It's not cyclic, though. To make cyclic canvases work, you need to manually draw the parent canvas to a nested canvas.
This would make the entire visible page into a canvas-like drawing surface which also renders DOM elements as per usual. At some level there's a process which rasterizes the DOM - opening drawing APIs into that might be a better solution.
It's sort of the same thing as HTML in canvas conceptually, but architecturally it makes DOM rendering and canvas rendering overlapping equals with awareness going both ways. E.g., a line drawn on the page will cause the DOM elements to reflow unless told to ignore it.
I support this, as odd as it is. There are times when you need something drawn but could easily reuse an HTML element from elsewhere. Previously you'd have to render it to a bitmap offscreen and then copy that to a full-screen quad or draw it on the canvas. Until recently, even if you tried to z-index elements with position: absolute, they would be visually overwritten by the canvas (I think this is mostly fixed though).
I don’t know if this is the best solution, but it’s better than previous hacks, if you need to go that route. Basically html2canvas.
There is a real problem using canvas to replace HTML.
Not all HTML, but most. I have not found a good solution for doing something like MDX in canvas. I have tried SDF, looked at 2D canvas text, Troika, MSDF. You can get text; it is just that laying it out is very difficult. React Three Drei has the ability to put HTML into the three.js ecosystem, but there are issues with CSS and text that make that impractical.
For me the use case is very simple: I would like to take an MDX file and show it in a mesh, laid out. Maybe I am missing something because I am new to the whole three.js thing, but I really tried.
A good article about text: https://css-tricks.com/techniques-for-rendering-text-with-we...
And an example from the above article: https://codesandbox.io/p/sandbox/css-tricks-msdf-text-fks8w
This shows it can be done; I gave up trying to reproduce it in React Three Fiber.
Why? Personally, I think 3D graphics can produce an interface that is an order of magnitude better for users. The real question (and an interesting one to consider) is why are we still building HTML-first websites?
I read the title and said "shut the fuck up, don't do that", but then I read the rationale, and it's fair. It's true there is no layout engine inside canvas, and that is a pain, but I'm not sure it's such a pain as to invite this recursive hell.
One of the more senior engineers I worked with told me: "Every real-life data structure I encountered was tree-like".
It would be easiest to just ask the browser to render a fragment of HTML onto a canvas, or onto some invisible bitmap, like you can with most other UI toolkits.
Having this type of control can be perfectly valid for certain use cases.
It also feels Flash-like.
The javascriptists began a journey 15 years ago to replace Flash. Things have gotten more complicated before becoming simpler, but maybe things will soon head in that direction.
Flash itself was ActionScript (an ECMAScript dialect), which shares its syntax with JavaScript.
45 kB gzipped is pretty beefy, but incredibly small when you consider what it takes to make this work today. If I understand correctly, it's basically a DOM and CSS renderer.
MartinMond|7 months ago|reply
As do we at Nutrient - we use HarfBuzz in WASM plus our own layouting; see the demo here: https://document-authoring-demo.nutrient.io/
Getting APIs for that into the platform would make life significantly easier, but thanks to WASM it's not a total showstopper.
Btw, I saw you’re working on sync at ElectricSQL - say hi to Oleksii :)
troupo|7 months ago|reply
A reminder that Figma had to "create a browser inside a browser" to work around DOM limitations: https://www.figma.com/blog/building-a-professional-design-to...
ha1zum|7 months ago|reply
To make it make sense, in my opinion canvas should already be a first-class format for web browsers, so it doesn't have to live inside HTML.
Then we would have a choice of HTML-first page with canvas elements in it, or a canvas-first page with HTML elements in it.
But what do I know.
bapak|7 months ago|reply
If you have a canvas-first page, where do you store the title? Right, in <title>, so it still has to live inside HTML.
In reality they should really just allow content in the canvas element and call it a day.
mmastrac|7 months ago|reply
> TODO: Expand on fingerprinting risks
c-smile|7 months ago|reply
1. By painting on it using the Canvas/Graphics API, where _painter_ is a function used for painting on the image surface via a Canvas/Graphics reference.
2. By making a snapshot of an existing DOM element.
Such images can be used in the DOM, rendered by other Canvas/Graphics contexts, and also used in WebGL as textures. See: https://docs.sciter.com/docs/Graphics/Image#constructor
TheRealPomax|7 months ago|reply
I wonder if the working groups are still run by that attitude.
wg0|7 months ago|reply
Nothing like that is available.