Very interesting... I am literally in the middle of building my own canvas rendering engine, so this article really made me excited.
It saddens me a bit that the author doesn't talk about the performance of their implementation... one of the great benefits of using canvas to render UI is that you can render far more elements than with the DOM, with a much smaller memory footprint.
I'm curious to know more about their rendering strategy... does it just redraw everything at 60 Hz, or do they have a smarter way to trigger redraws? Do they always redraw the entire canvas, or do they somehow redraw only the updated regions?
CanvaSX currently doesn't have its own rendering event loop or anything like that; it simply renders everything that is passed in. For now this keeps the implementation very simple and naive.
Internally we are using this for our Aha! Whiteboards product, which is built on top of Fabric.js. The Fabric framework provides a lot of rendering optimizations out of the box: it only renders shapes that are visible in the current viewport. Fabric resets and re-renders the entire canvas on every draw, but it only triggers that full redraw when something has actually changed. There are also cases where we need to manually trigger a re-render based on state changes to the whiteboard.
So rendering performance has not been an issue thus far. We are always looking for ways to improve our product and the CanvaSX implementation, but we didn't want to prematurely optimize or over-engineer the framework.
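The "full redraw, but only when something changed" strategy described above can be sketched with a dirty flag. This is a hypothetical illustration, not Fabric's or CanvaSX's actual code; `DirtyRenderer` and its method names are invented for the example.

```javascript
// Sketch (assumed names, not a real API): redraw the entire scene,
// but only on frames where state has actually changed.
class DirtyRenderer {
  constructor(draw) {
    this.draw = draw;         // callback that clears and redraws the whole scene
    this.dirty = false;
    this.renderCount = 0;
  }
  invalidate() {              // call whenever whiteboard state changes
    this.dirty = true;
  }
  tick() {                    // called once per animation frame
    if (!this.dirty) return;  // nothing changed: skip the redraw entirely
    this.dirty = false;
    this.draw();
    this.renderCount++;
  }
}

const renderer = new DirtyRenderer(() => { /* clear + redraw all shapes */ });
renderer.tick();        // no changes yet: no draw happens
renderer.invalidate();  // e.g. a shape moved
renderer.tick();        // one full redraw
```

The design mirrors what the reply describes: every redraw repaints the whole canvas (no partial-region tracking), but frames where nothing changed cost almost nothing.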
I don’t understand the point of this. I love canvas, but if your goal is to render buttons and other UI components then why not use the DOM? That’s what it’s there for.
Perhaps I could have shared more details on this. We did explore regular DOM/HTML for some of the functionality, but we needed to build it into our existing Whiteboards stack, which uses canvas. Rendering as HTML would not have worked for our use case.
`react-canvas` definitely has a lot of similarities, but it is very old, uses class-based components, uses React elements, and most importantly, it doesn't provide any auto-layout functionality out of the box.
The main driving factor behind CanvaSX was the auto-layout functionality: if you look at the example code for rendering a button with CanvaSX, you'll notice that no positional or coordinate properties are defined. In `react-canvas`, the coordinates/dimensions of each shape must be manually defined, which becomes problematic for shapes with dynamic content rendered inline. `react-canvas` did not attempt to solve any of those problems; CanvaSX does.
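The distinction can be sketched with a toy layout pass (hypothetical names, not CanvaSX's actual API): with the `react-canvas` approach the caller supplies the frame, whereas with auto-layout the frame is derived from the measured content.

```javascript
// Hypothetical sketch of an auto-layout pass for a button. In react-canvas
// the caller would hard-code x/y/width/height; here the frame is derived
// from the measured content, so dynamic text never needs manual coordinates.
function layoutButton({ textWidth, textHeight, padding }) {
  return {
    width: textWidth + padding * 2,   // frame grows with the label
    height: textHeight + padding * 2,
  };
}

// A longer label simply yields a bigger frame; no caller-side math required.
const frame = layoutButton({ textWidth: 80, textHeight: 16, padding: 8 });
```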
I think a custom renderer for React would be much more powerful. Currently JSX is just used as syntactic sugar. Wasn't there some react ink project that more or less did this?
By a custom renderer, are you referring to replacing the rendering process (e.g. ReactDOM) rather than replacing JSX?
It's not clear to me how this would be better than swapping out the @jsx pragma. For rendering content to the canvas, we have no need for the overhead and complexity of React elements/components. There is no need or benefit in using `React.createElement` (or its newer counterparts) to create React elements. But in theory, you're right that CanvaSX could have used React elements and rendered those instead of replacing the JSX pragma.
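A minimal sketch of what swapping the @jsx pragma means, with an invented factory name (`createCanvasElement` is not CanvaSX's real API): the compiler is told to emit calls to the custom factory instead of `React.createElement`, so JSX produces a plain object tree with no React elements involved.

```javascript
/** @jsx createCanvasElement */
// Hypothetical JSX factory: returns plain objects a canvas renderer can walk,
// instead of React elements.
function createCanvasElement(type, props, ...children) {
  return { type, props: props ?? {}, children };
}

// Roughly what `<rect fill="blue"><text>Click me</text></rect>` would
// compile to under the swapped pragma:
const tree = createCanvasElement(
  "rect",
  { fill: "blue" },
  createCanvasElement("text", null, "Click me")
);
// tree is an ordinary object tree; React is never involved.
```

This is why the pragma swap avoids React's element/reconciliation overhead: the renderer just walks these plain objects and issues canvas draw calls.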
nasso_dev|1 year ago
percyhanna|1 year ago
joshmarinacci|1 year ago
percyhanna|1 year ago
bobbylarrybobby|1 year ago
kabes|1 year ago
https://github.com/Flipboard/react-canvas
percyhanna|1 year ago
yuchi|1 year ago
pavlov|1 year ago
I’ve written a React renderer that has both a canvas path for graphics and an automatically generated acceleration path for video layers:
https://www.daily.co/blog/new-beta-dailys-video-component-sy...
It’s open source, the code is here:
https://github.com/daily-co/daily-vcs
percyhanna|1 year ago
itronitron|1 year ago