localvoid's comments

localvoid | 8 months ago | on: The time is right for a DOM templating API

There is a lot of interesting research outside of the webdev bubble in the incremental computation problem space, and self-adjusting computations (signals) aren't even that interesting.

localvoid | 8 months ago | on: The time is right for a DOM templating API

If I understand it correctly, the main argument in favor of tagged templates is that they don't require any changes to the JS engine, which is why they will be much easier to push forward. A browser implementation should be quite straightforward, and it would be possible to implement a semi-efficient polyfill.

Personally, I don't think that it will have any significant impact: everyone will continue using React, Vue, and Svelte, and it is highly unlikely that they are going to adopt this new API.

localvoid | 8 months ago | on: The time is right for a DOM templating API

Just want to add that even though ivi is using tagged templates, I am strongly against using tagged templates to describe UIs as a Web Standard.

One of the most useful features that could make a lot of incremental computation problems easier is "value types"[1], but unfortunately it seems that it isn't going to happen anytime soon. The biggest constraint when developing an efficient UI framework with good DX is JavaScript itself. Also, it would be nice to have `Node.prototype.insertAfter()` :)

1. https://github.com/tc39/proposal-record-tuple
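The `insertAfter` mentioned above is easy to express in userland, because `insertBefore` with a `null` reference node appends to the end. A minimal standalone sketch (written as a helper function rather than a `Node.prototype` patch):

```javascript
// Insert `node` as the next sibling of `ref`.
// insertBefore(node, null) appends, so this also works
// when `ref` is the last child of its parent.
function insertAfter(ref, node) {
  return ref.parentNode.insertBefore(node, ref.nextSibling);
}
```
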

localvoid | 2 years ago | on: Learn how modern JavaScript frameworks work by building one

I've stopped paying close attention to the web framework scene in the past couple of years, as most of the interesting ideas on this topic usually come from different communities. But as I understand it, the majority of popular web frameworks (React, Vue 3, Angular) are still using tree diffing or hybrid "signals" + tree diffing strategies.

In my opinion, one of the most interesting ideas to explore in this problem space is a hybrid solution: differential dataflow[1][2] (model) + self-adjusting computations (view-model + view).

1. https://github.com/vlcn-io/materialite

2. https://timelydataflow.github.io/differential-dataflow/

localvoid | 2 years ago | on: Learn how modern JavaScript frameworks work by building one

If anyone is interested in this topic, I would recommend starting from the fundamentals, which should provide some answers as to why some "not so modern" frameworks aren't jumping on the "signals" hype train.

- Incremental computing - https://en.wikipedia.org/wiki/Incremental_computing

- Self-Adjusting Computation (Umut A. Acar) - https://www.cs.cmu.edu/~rwh/students/acar.pdf

- Introducing incremental (JaneStreet) - https://blog.janestreet.com/introducing-incremental/

- Incremental computation and the web (JaneStreet) - https://blog.janestreet.com/incrementality-and-the-web/

- Self Adjusting DOM (JaneStreet) - https://blog.janestreet.com/self-adjusting-dom/

- Self Adjusting DOM and Diffable Data (JaneStreet) - https://blog.janestreet.com/self-adjusting-dom-and-diffable-...

- Incremental Computation (Draft of part 1) (Rado Kirov) - https://rkirov.github.io/posts/incremental_computation/

- Incremental Computation (Draft of part 2) (Rado Kirov) - https://rkirov.github.io/posts/incremental_computation_2/

- Incremental Computation (Draft of part 3) (Rado Kirov) - https://rkirov.github.io/posts/incremental_computation_3/

- Towards a unified theory of reactive UI (Raph Levien) - https://raphlinus.github.io/ui/druid/2019/11/22/reactive-ui....

localvoid | 3 years ago | on: Kobold, a new web UI crate with zero-cost static DOM

There was a bug in ivi 2.0.0 with the `shouldComponentUpdate` optimization: it was completely ignored. This benchmark submission rerenders and diffs everything on each change, while all other `f(state) => UI` libraries in this benchmark are implemented with the `shouldComponentUpdate` optimization. Also, unlike the majority of libraries on the left side of the table, ivi doesn't use any type of event delegation (implicit or explicit) to get better results; plain old `addEventListener()` is used internally to attach event listeners.

localvoid | 3 years ago | on: The self-fulfilling prophecy of React

> Duplicate vnodes doesnt work, so you HAVE to use some function to generate it instead?

Yes. Mutable vnodes are a huge mistake that I made a long time ago when I was trying to figure out how to write an efficient diffing algorithm (2014-2015), and a lot of libraries copied that terrible idea without a deep understanding of why I did it in the first place.

localvoid | 3 years ago | on: The self-fulfilling prophecy of React

> (and Solid which is quite similar to React)

How is it similar when React lets you write non-incremental algorithms when working with your state, while with Solid you are forced to write incremental algorithms? Simple aggregate (GROUP BY) use cases that your average junior developer will be able to solve in 5 minutes with React will be a huge problem even for experienced Solid developers. The only similarity is that Solid also uses JSX syntax, but with completely different semantics.
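To make the GROUP BY point concrete, here is a hypothetical sketch (the data shape and names are invented for illustration). The "non-incremental" version just recomputes the aggregate from scratch on every render, which is what React lets you do; the incremental version has to handle every kind of delta explicitly, which is the part that gets hard in a fine-grained reactive system:

```javascript
// React-style: derive the aggregate from scratch on every render.
// Trivial to write and to reason about.
function totalsByCategory(items) {
  const totals = new Map();
  for (const { category, price } of items) {
    totals.set(category, (totals.get(category) ?? 0) + price);
  }
  return totals;
}

// Incremental-style: apply a delta (a removed item and/or an added
// item) to an existing aggregate. Removals, insertions, and items
// moving between groups all need explicit handling.
function applyDelta(totals, removed, added) {
  if (removed) {
    const t = totals.get(removed.category) - removed.price;
    t === 0 ? totals.delete(removed.category) : totals.set(removed.category, t);
  }
  if (added) {
    totals.set(added.category, (totals.get(added.category) ?? 0) + added.price);
  }
  return totals;
}
```
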

localvoid | 3 years ago | on: Preact Signals

> I stated this very clearly, saying "complex implicit reactive effects seem fragile and difficult to debug and reason about".

It is definitely easier to reason about dataflow in a good incremental library with dependency autotracking ("Self-Adjusting Computation"[1]) than to reason about nondeterministic concurrent rendering in React :)

1. https://www.cs.cmu.edu/~rwh/students/acar.pdf
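The dependency autotracking mentioned above can be sketched in a few lines. This is a deliberately minimal toy (no nested-computed edge cases, no observer cleanup), not how Preact Signals or any real library is implemented: reads performed while a computation runs are recorded as its dependencies, and writes invalidate every recorded dependent.

```javascript
// The currently-running computation; signal reads register against it.
let activeObserver = null;

function signal(value) {
  const observers = new Set();
  return {
    get() {
      if (activeObserver) observers.add(activeObserver);
      return value;
    },
    set(next) {
      value = next;
      // Snapshot before iterating, since invalidation may mutate the set.
      for (const o of [...observers]) o.invalidate();
    },
  };
}

function computed(fn) {
  let cached;
  let stale = true;
  const self = {
    invalidate() { stale = true; },
    get() {
      if (stale) {
        const prev = activeObserver;
        activeObserver = self; // track dependencies read during fn()
        try { cached = fn(); } finally { activeObserver = prev; }
        stale = false;
      }
      return cached;
    },
  };
  return self;
}
```

Note that the dependency graph is discovered at runtime simply by running the computation, which is exactly what makes the dataflow easy to follow: there is no dependency array to keep in sync by hand.
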

localvoid | 3 years ago | on: Preact Signals

> for example I hate dependency arrays

It is also an optimization, and I agree that it is worse in terms of DX than autotracking dependencies.

localvoid | 3 years ago | on: Preact Signals

> I guess performance in some cases, but mainly better developer experience.

It has better developer experience when you apply it to optimize performance. It is impossible to beat from-scratch recomputation in terms of DX.

localvoid | 3 years ago | on: Show HN: I made React with a faster Virtual DOM

> And if there are any other good resources out there, then do share! :)

Unfortunately, there aren't any good resources on this topic. Everyone is just focusing on diffing and unable to see the bigger picture. In the end, all feature-complete libraries implement diffing algorithms for dynamic children lists and attribute diffing for "spread attributes", so with these features we are already implementing almost everything needed to work with the DOM and to create a vdom API; everything else is just slight optimizations to reduce diffing overhead.

But working with the DOM is only a part of the problem; how everything else is implemented also matters. All these different features are going to be intertwined, and we can end up with a combinatorial explosion in complexity if we aren't careful enough. Svelte is a good example of a library that tried to optimize work with DOM nodes at the cost of everything else.

As an experiment, I would recommend taking any library from this[1] benchmark that makes a lot of claims about its performance and making small modifications to the benchmark implementation: wrap DOM nodes into separate components, add conditional rendering, add more dynamic bindings, etc., and look at how different features affect its performance. Also, I'd recommend running the tests in a browser with the uBlock and Grammarly extensions.

And again, it is possible to implement a library with a declarative API that avoids vdom diffing, and it will be faster than any "vdom" library in every possible use case, but it shouldn't be done at the cost of everything else. Unfortunately, some authors of popular libraries are spreading a lot of misinformation about "vdom overhead" while being unable to compete even with the fastest vdom libraries.

1. https://github.com/krausest/js-framework-benchmark
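The "diffing algorithms for dynamic children lists" that the comment above says every feature-complete library ends up implementing can be sketched naively like this (a toy that emits abstract ops instead of touching the DOM; real implementations also minimize node moves, e.g. via a longest-increasing-subsequence pass):

```javascript
// Naive keyed children diff: index old children by key, walk the new
// list reusing matches, then remove whatever was left unclaimed.
function diffChildren(prev, next) {
  const ops = [];
  const byKey = new Map(prev.map((c) => [c.key, c]));
  for (const child of next) {
    if (byKey.has(child.key)) {
      ops.push({ op: "update", key: child.key });
      byKey.delete(child.key); // claimed; won't be removed below
    } else {
      ops.push({ op: "insert", key: child.key });
    }
  }
  for (const key of byKey.keys()) ops.push({ op: "remove", key });
  return ops;
}
```
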

localvoid | 3 years ago | on: Show HN: I made React with a faster Virtual DOM

It is an old idea, a reactive graph created at runtime with direct bindings (knockout.js, etc.), but as always, implementation is way more important than the abstract idea, and ~6 years ago Adam showed that it is actually possible to implement this idea quite efficiently (S.js + Surplus). Then Ryan started working on Solid.js, and it became one of the most popular implementations of this idea.

There are a lot of things that I don't like in the Solid.js implementation; it seems that he still doesn't care about performance in general and only focuses on getting a high score in js-framework-benchmark (optimizing the library for two cases: DOM template cloning and one/many-to-one reactive bindings). But I believe that nothing is inherently wrong with the idea itself, and there is a lot of room for improvement in implementations.

I guess the main tradeoff with this idea is that it has a slightly higher learning curve than something like React with its top-down recompute/rerender approach (as long as we don't care about performance). But when we start to add reactive systems to React/Svelte/etc. to improve performance, at that point it becomes more complex than just using a UI library specifically designed for a reactive system.

Right now I am experimenting with new algorithms and data structures for a reactive system that I specifically designed for the UI problem space, to actually beat vdom implementations in microbenchmarks that are heavily biased towards vdom-like libraries (reimplementing top-down dataflow + diffing in a reactive system with derived computations; it is super useful when building something like https://lexical.dev/ ).

EDIT: Also, in such libraries it becomes quite hard to implement features like reparenting or DOM subtree recycling. But it seems that nobody cares about reparenting in web UI libraries (Flutter supports reparenting). DOM subtree recycling is quite useful in use cases with occlusion culling (virtual lists), but it should be optional, with different strategies to reclaim memory (not how it is done in the Imba library).

localvoid | 3 years ago | on: Show HN: I made React with a faster Virtual DOM

> then diffed with the real DOM

Diffing against the real DOM is slow; the majority of vdom libraries don't diff against the real DOM. As the author of a "vdom" library, I don't like to think about the "reconciler" as a diffing algorithm, because that is a useless constraint; I like to think about it as some kind of VM that uses different heuristics to map state to different operations represented as a tree data structure.
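A toy sketch of that distinction (the `h` helper and op names are invented, not any specific library's API): the new vnode tree is compared against the previous vnode tree kept in memory, and DOM operations are emitted only where the two trees disagree.

```javascript
// Minimal single-node vnode.
function h(tag, text) { return { tag, text }; }

// Diff the previous vnode against the next one and return the list of
// DOM operations needed. Identical vnodes produce no operations at all;
// the live DOM is never read.
function diff(prev, next) {
  const ops = [];
  if (prev === null) {
    ops.push(["create", next.tag, next.text]);
  } else if (prev.tag !== next.tag) {
    ops.push(["replace", next.tag, next.text]);
  } else if (prev.text !== next.text) {
    ops.push(["setText", next.text]);
  }
  return ops;
}
```
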

> What I wonder is whether the reason for virtual DOM is really just historic, is there anything else that has caused its persistence other than inertia?

As a thought experiment try to imagine how would you implement such features:

- Declarative and simple API

- Stateful components with basic lifecycle like `onDispose()`

- Context API

- Components that can render multiple root DOM nodes or DOMless components

- Inside out rendering or at least inside out DOM mounting

- Conditional rendering/dynamic lists/fragments without marker DOM nodes

These are just some of the basics that you will need to consider when building a full-featured and performant web UI library. I think you are gonna be surprised by how many libraries that make a lot of claims about their performance, or claim that "vdom is pure overhead", are actually really bad when it comes to dealing with complex use cases.

I am not saying that the "vdom" approach is the only efficient way to solve all these problems, or that every "vdom" library is performant (the majority of vdom libraries are also really bad with complex use cases), but it is not as simple as it looks :)

localvoid | 3 years ago | on: Show HN: I made React with a faster Virtual DOM

No, solid.js builds its reactive graph at runtime and in theory should be able to also detect static inputs at runtime (I'm not sure how much effort he has put into reactive graph optimization techniques). Personally, nowadays I prefer the S.js/solid.js approach, but it has different tradeoffs, so it is essential to understand the difference between solid.js and React/Svelte/lit/etc. :)

localvoid | 3 years ago | on: Show HN: I made React with a faster Virtual DOM

> It's a very simple approach and very, very hard to beat in the fast/small/simple/buildless tradeoff space.

Author of the ivi library here. I completely agree with the idea that such an approach could lead to better performance, but there is a huge difference between an idea and an actual implementation. Also, I just don't get why a lot of developers that work in this problem space still think that "virtual DOM" APIs and tagged template APIs are mutually exclusive; I actually have an experimental implementation that supports both APIs, and it is not so easy to beat an efficient full-diff vdom algo. Tagged template APIs are useful when we are working with mostly static HTML chunks, but when it comes to building a set of reusable components (not expensive web components), pretty much everything inside these components becomes dynamic, and we are back to diffing everything.
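The reason tagged templates work so well for mostly-static chunks is a guarantee the language itself provides: every evaluation of a given template literal site passes the *same* (`===`) strings array to the tag function, so the static structure can be compiled once per call site and cached, leaving only the holes to process on each render. A toy sketch (the `html` tag and "compilation" here are stand-ins, not any specific library):

```javascript
// Cache of compiled templates, keyed by the strings array identity.
const cache = new WeakMap();
let compilations = 0; // exposed only to demonstrate caching

function html(strings, ...values) {
  let compiled = cache.get(strings);
  if (compiled === undefined) {
    compilations++;
    // Stand-in for real template parsing / <template> cloning setup.
    compiled = strings.join("{{hole}}");
    cache.set(strings, compiled);
  }
  // `values` are the dynamic parts that still need per-render handling.
  return { compiled, values };
}

const render = (name) => html`<div>Hello, ${name}!</div>`;
```
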
