item 17981270

Intent to Implement: Display Locking

87 points | bpierre | 7 years ago | docs.google.com | reply

43 comments

[+] tmpmov|7 years ago|reply
Interesting. My (perhaps wrong) visualization of the janking effect, without the display update, concerns resizing or showing a column or pane in a web app. During the resize, the content in both panes is not updated until the resize is complete under certain (most?) conditions.

"If an undue delay is likely to be caused, the work already completed is processed and the update phase yields to other update phases for unlocked content."

My interpretation for a pane/column resize or hidden-to-visible operation: display locking reduces jank if I can operate on these elements with the display-lock tools. That jank reduction produces more fluid updates in the elements not affected by the lock.

Question: if sub-elements have complicated draw/render cycles, how will the interplay of locks at different layers affect the result? Composing objects with libraries that use these locks (or writing sub-components myself) makes me wonder about this issue.

[+] CharlesW|7 years ago|reply
Wow! When I first learned web development way back when, I was surprised that something like HyperCard's lock screen didn't exist/wasn't possible. Being able to do this on an element level is a huge improvement over that. This is great.
[+] jessedhillon|7 years ago|reply
Has there been substantial discussion about this feature? Neither of the two links provided leads to a deep discussion, and this doesn't seem to be tied to any agreed-upon standards process.

As far as I can tell, this is being presented as a change to the programming model which will be undertaken unilaterally. Are we back to the days where one browser gets to decide how the web works, and everyone else can catch up if they like?

[+] bcoates|7 years ago|reply
Is the point of this just to reduce visible jank, or to obsolete shadow-dom implementations by replacing them with lock->modify->unlock?
[+] jedberg|7 years ago|reply
TIL jank is an actual term people use in formal documents.
[+] duskwuff|7 years ago|reply
Both as a noun ("causes jank") and as a verb ("the page janks")!
[+] _jal|7 years ago|reply
Unlike formal languages, human language doesn't evolve via RFCs or ItIs.

(OK, formal languages don't, really, either. But they do tend to stick stakes in the ground via those mechanisms, for which most human languages don't have any analogous concept.)

[+] vermilingua|7 years ago|reply
Jerk is an actual term; it's the time-derivative of acceleration. Page jank looks how motion jerk feels: unexpected and uncomfortable.
[+] mbrumlow|7 years ago|reply
Will this allow content providers to lock content so that ad blockers can't change it?
[+] emsy|7 years ago|reply
No, this is about increasing performance by taking elements out of the slow DOM update cycle.
[+] mcmatterson|7 years ago|reply
For someone who doesn't follow lower-level browser dev that closely, this seems like an optimization pretty specifically for a React-style 'nested redraws' use case. Is this at all correct?
[+] ramses0|7 years ago|reply
Basically "<ul id='foo'><li>...</li></ul>" && #foo.append( ...li... ) may cause a redraw, recalc, and reflow of layout + styles for each LI you add to the UL, as well as for the rest of the DOM containing the UL.

However, if you can do "BEGIN TRANSACTION; ...#foo.append(...)...; END TRANSACTION;", it gives you a mechanism to "freeze" all or some of the display (think of it as a subset double-buffer) and "blit" the changed DOM to the UI when you're done (with the possibility of a "CONTINUE TRANSACTION").

Imagine a simple list of search results with alternating row colors (white/grey backgrounds) specified by CSS nth-child (index % 2 == 0).

If you prepend elements instead of appending them, it might cause every existing element to change color and re-render on each insertion.
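The prepend-vs-append difference is just index parity arithmetic: prepending shifts every existing row's index (and therefore its stripe color), while appending leaves them all alone. A small illustrative sketch, with no real DOM involved:

```javascript
// Rows are striped by index parity, mimicking a CSS nth-child(2n) rule:
// even index -> white, odd index -> grey.
const color = (index) => (index % 2 === 0 ? "white" : "grey");

// Count how many existing rows would change color after one insertion.
function recoloredRows(rowCount, insertAtFront) {
  let changed = 0;
  for (let i = 0; i < rowCount; i++) {
    const newIndex = insertAtFront ? i + 1 : i; // prepending shifts every index
    if (color(i) !== color(newIndex)) changed++;
  }
  return changed;
}

console.log(recoloredRows(100, false)); // append: 0 rows restyled
console.log(recoloredRows(100, true));  // prepend: 100 rows restyled
```

So a loop of n prepends can trigger on the order of n restyles per insertion, where the same appends trigger none.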

Appending is likely more efficient per element than prepending, but if you had the ability to "lock" the affected display area until your for-loop is done, the browser could avoid updating the area at all until the "COMMIT" call (multiple actions, with a single commit resolution at the end).

Think of it as "batch these DOM updates...": Git/SVN multi-file commits vs. CVS single-file commits.
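The BEGIN/COMMIT analogy can be sketched as a toy model: a container that pays for a layout pass after every mutation unless it is "locked", in which case the whole batch resolves in one pass at commit. The class and method names here are illustrative, not the proposed Display Locking API:

```javascript
// Toy model: count "layout passes" to show why batching helps.
class LockableList {
  constructor() {
    this.items = [];
    this.locked = false;
    this.layoutPasses = 0; // how many times "layout" ran
  }
  append(item) {
    this.items.push(item);
    if (!this.locked) this.layoutPasses++; // unlocked: pay per mutation
  }
  lock() { this.locked = true; } // BEGIN TRANSACTION
  commit() {                     // END TRANSACTION
    this.locked = false;
    this.layoutPasses++;         // one pass for the whole batch
  }
}

const naive = new LockableList();
for (let i = 0; i < 1000; i++) naive.append(i);

const batched = new LockableList();
batched.lock();
for (let i = 0; i < 1000; i++) batched.append(i);
batched.commit();

console.log(naive.layoutPasses);   // 1000 layout passes
console.log(batched.layoutPasses); // 1 layout pass
```

This is also roughly what DocumentFragment-based batching buys you today, except that a display lock would additionally let the browser defer and spread the committed work.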

[+] janekm|7 years ago|reply
I would actually say that non-React-style code would benefit even more. Suppose the user clicks a different sort tab on a table, and the code is busily removing and re-inserting elements in the DOM, which the browser attempts to lay out and display at the same time, leading to a lot of wasted calculations and visible "jank".
[+] delinka|7 years ago|reply
"...without jank."

Is this a technical term?

[+] jjcm|7 years ago|reply
I've heard "jank" used almost always when referring to visual stutter caused by mass-style changes, both in my current role at Atlassian as well as Microsoft. I think it's pretty standard.
[+] Serow225|7 years ago|reply
It's a term of art in the graphics community, you'll see articles and talks by people like John Carmack on VR mentioning it quite a bit.
[+] bcoates|7 years ago|reply
I think what they're describing is actually a glitch in the technical sense -- an unintended output state caused by not being able to change all inputs at exactly the same time.
[+] Kenji|7 years ago|reply
Why not just double-buffer the DOM? That'd be much simpler. Mutate your DOM all you want, then swap it in when it's ready.
[+] spankalee|7 years ago|reply
This allows the browser to delay all style, layout, and paint work on attached DOM, and then also spread that work across multiple frames during a commit, reducing jank for the unlocked portions of the page.

If you synchronously swap a large portion of the DOM, the style, layout, and paint work will likely blow your frame budget.
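Spreading the committed work across frames amounts to budgeted scheduling. A hedged sketch of the idea, where the per-task costs and the 16 ms budget are illustrative numbers (a real engine measures actual style/layout time):

```javascript
// Split pending render tasks across frames so no single frame exceeds
// the budget (~16 ms at 60 fps). Each task has an estimated cost in ms.
function planFrames(taskCosts, budgetMs = 16) {
  const frames = [];
  let current = [];
  let spent = 0;
  for (const cost of taskCosts) {
    if (current.length > 0 && spent + cost > budgetMs) {
      frames.push(current); // budget exhausted: yield to the next frame
      current = [];
      spent = 0;
    }
    current.push(cost);
    spent += cost;
  }
  if (current.length > 0) frames.push(current);
  return frames;
}

// Ten 5 ms tasks: a synchronous commit costs 50 ms in one frame
// (several dropped frames); budgeting spreads the same work out.
console.log(planFrames(Array(10).fill(5)).length); // 4 frames
```

The unlocked portions of the page keep animating in the frames between these chunks, which is the jank reduction being claimed.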

[+] unilynx|7 years ago|reply
We can already do that: clone first and replaceChild afterwards.

It breaks already-attached event handlers, undo buffers for input elements, and any existing references you have to DOM nodes.

[+] pjc50|7 years ago|reply
The DOM isn't a buffer, it's a tree; doing a deep copy would be an expensive operation incurring lots of little allocations.
[+] derefr|7 years ago|reply
That's essentially what is already done, without the browser's assistance, with "virtual DOM"-style libraries. The point of this work is to obviate the need for a virtual DOM.
[+] a_t48|7 years ago|reply
Possibly race conditions between scripts which see different versions of the DOM.
[+] tinus_hn|7 years ago|reply
What is the difference? This allows you to mutate the DOM all you want and then have it redrawn when it’s ready.
[+] stephengillie|7 years ago|reply
Gamers know triple-buffering is obviously superior.