The spec says "A new specification is under development with the aim of addressing the use cases that mutation events solves, but in more performant manner." so hopefully it won't be too hard to port things like this over... I hope so anyway, because the mutation events are mighty handy!
Why are you comparing this to `jQuery(document).ready()`? Apples and oranges.
I like the idea, and it does seem that it could reduce the time your JavaScript code has to wait before it can run. However, in reality most JavaScript front-end code requires not just one element but multiple, often nested, elements. I would be hesitant to start running JavaScript just because one element was ready unless I was sure that the other elements my code references or manipulates were also ready. I'm sure it would be possible to make a version that let you specify a list of elements to wait for before running code, but that creates a maintenance nightmare: keeping track of the entire list of elements your JavaScript requires. For me the jQuery(document).ready() method seems easier and more reliable.
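The "wait for a list of elements" variant suggested above is straightforward to sketch. This is a hypothetical helper, not part of the plugin; the DOM lookup is abstracted behind a `probe` function (a stand-in for `document.getElementById`) so the waiting logic can be shown without a browser:

```javascript
// Hypothetical sketch of waiting for *several* elements before running
// code. `probe(id)` stands in for document.getElementById(id); in a
// browser you would drive `tick` with setInterval(tick, 5).

function whenAllExist(ids, probe, callback) {
  return function tick() {
    const found = ids.map(probe);
    if (found.every((el) => el != null)) {
      callback(found);   // every required element is present
      return false;      // signal: clear the interval
    }
    return true;         // not all there yet, keep waiting
  };
}

// Simulated page where elements appear one by one
const present = new Set(["header"]);
const probe = (id) => (present.has(id) ? { id } : null);
let ready = null;
const tick = whenAllExist(["header", "nav", "main"], probe, (els) => {
  ready = els.map((e) => e.id);
});
tick();               // only "header" exists so far: keeps waiting
present.add("nav");
present.add("main");
tick();               // now all three exist: callback fires
```

The maintenance concern stands: the `ids` list has to be kept in sync with everything the dependent code touches.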
The DOM state is that everything inside the HTML element you were waiting for already exists when the function is executed.
Does someone have a really good example of where this would be useful? The description is basically, "When this element exists, do something" but what is the something you'd do? If it's styling, that should be done in CSS. If it's event handlers, that can be done with `.live()` or `.delegate()`.
Some UI stuff could benefit from this: things like turning a list into an accordion. (Normally there is at least a little work to be done to ensure you don't get a flash of unstyled/unscripted content; if this fires fast enough you could avoid that.)
spjwebster|14 years ago
http://www.w3.org/TR/DOM-Level-3-Events/#event-type-DOMNodeI...
They have been deprecated because they perform poorly:
> Firefox, for example, when it realizes that a mutation event has been turned on, instantly goes into an incredibly-slow code path where it has to fire events at every single DOM modification. This means that doing something like .innerHTML = "foo" where it wipes out 1000 elements would fire, at least 1000 + 1 events (1000 removal events, 1 addition event).
http://lists.w3.org/Archives/Public/www-dom/2009AprJun/0072....
> Mutation Events are widely acknowledged as “slow” in terms of the real performance degradation that occurs on websites that use them heavily for tracking changes to the DOM
http://www.w3.org/2008/webapps/wiki/MutationReplacement
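The "new specification" that the spec text and the MutationReplacement wiki page allude to later shipped as MutationObserver, which batches DOM changes into one callback per mutation burst instead of firing a synchronous event per node change. A minimal sketch, assuming a browser environment (`watchForId` is a hypothetical helper; the guard lets the file load outside a browser):

```javascript
// Sketch of the MutationObserver replacement for mutation events:
// one batched callback per burst of DOM changes, instead of the
// 1000 + 1 synchronous events an innerHTML wipe would fire.

const watchConfig = { childList: true, subtree: true };

function watchForId(id, onFound) {
  if (typeof MutationObserver === "undefined") return null; // e.g. Node
  const observer = new MutationObserver(() => {
    const el = document.getElementById(id); // one cheap check per batch
    if (el) {
      observer.disconnect(); // stop observing once the element exists
      onFound(el);
    }
  });
  observer.observe(document.documentElement, watchConfig);
  return observer;
}
```

Disconnecting as soon as the element is found avoids the "handler attached for the lifetime of the page" cost discussed below for mutation events.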
mrspeaker|14 years ago
cstuder|14 years ago
Don't use 1.49 MB GIF animations as background images...
latch|14 years ago
snorkel|14 years ago
co;dr = cpu overheated; didn't read
josscrowcroft|14 years ago
Goosey|14 years ago
mathias|14 years ago
You say it’s faster, but fail to provide any numbers or a jsPerf test case. I’m sure it’s faster to execute `waituntilexists()` initially, but once you take into account that it uses a 5 ms interval in which it traverses the DOM for the same elements over and over until they’re finally found, it probably has a negative impact on performance overall.
What’s wrong with event delegation anyway?
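Delegation works because events bubble up to an ancestor that already exists, so elements added later need no new handlers. A DOM-free sketch of that idea (the tree and dispatch are simulated here; jQuery's `.live()`/`.delegate()` do the same walk against a real selector):

```javascript
// DOM-free sketch of event delegation, the technique behind jQuery's
// .live()/.delegate(): one handler on an existing ancestor matches the
// event target (or its ancestors), so elements added later still work.

function delegate(root, tagName, handler) {
  root.onEvent = function (event) {
    let node = event.target;
    while (node && node !== root) {     // walk up, simulating bubbling
      if (node.tag === tagName) return handler(node);
      node = node.parent;
    }
  };
}

// Simulated tree: the <li> is "added after page load" yet still handled
const list = { tag: "ul" };
delegate(list, "li", (node) => { node.clicked = true; });
const item = { tag: "li", parent: list };             // created later
list.onEvent({ target: { tag: "a", parent: item } }); // click inside it
```

Unlike polling, this costs nothing while the element is absent; the trade-off is that it only helps for event handling, not for arbitrary "run this when the element appears" code.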
AltIvan|14 years ago
It uses getElementById to find the element, and that is said to be the fastest selector in JavaScript.
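Based on the descriptions in this thread, the plugin's core loop amounts to a short interval that re-runs a `getElementById`-style lookup until it succeeds. This is a hedged reconstruction, not the plugin's actual source; `probe` stands in for `() => document.getElementById(id)` so the loop can run anywhere:

```javascript
// Hedged reconstruction of the polling core described in this thread:
// a ~5 ms interval that re-runs a getElementById-style lookup until it
// succeeds, then fires the callback exactly once and stops.

function pollUntilExists(probe, onFound) {
  let done = false;
  return function tick() {          // drive with setInterval(tick, 5)
    if (done) return false;
    const el = probe();             // re-query on every tick: the cost
    if (el == null) return true;    // not there yet, keep polling
    done = true;
    onFound(el);                    // fires exactly once
    return false;                   // signal: clearInterval
  };
}

// Simulation: the element appears on the third poll
let polls = 0;
const probe = () => (++polls >= 3 ? { id: "main" } : null);
let foundId = null;
const tick = pollUntilExists(probe, (el) => { foundId = el.id; });
while (tick()) {}                   // stand-in for the interval timer
```

Each individual lookup is indeed cheap; the criticism above is about paying that cost every 5 ms for as long as the element is missing.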
tszming|14 years ago
But your code should be faster, because you are polling at a 5 ms interval!
adamdecaf|14 years ago
Jencha|14 years ago
locci|14 years ago
NathanKP|14 years ago
biot|14 years ago
valisystem|14 years ago
* I could test it myself, but I expected the article to give detailed information on the possible states the document can be in when my callback is called. When waiting for the whole document to load, I know the answers to these questions: Is it half loaded? What is the DOM state? Is it completely loaded? If not, has the document's JavaScript been entirely run? And so on. Leaving this kind of question unanswered often leads to hard bites.
* Why use "itself"? I would have used a sensible default: when no context is given, use the awaited DOM object instead of an arbitrary value (document), and use something else only when it is explicitly given. The "itself" thing should be left as an internal flag, IMHO.
AltIvan|14 years ago
When I am writing JavaScript it is very common for me to use the global object (window) as the context, so in my case it is not a good idea to use the HTML element as the context by default.
dmethvin|14 years ago
I'd also like to see some benchmarks to back up the assertion that this is faster. The DOM mutation events are generally accepted as slow, and it seems like this leaves a handler attached for the duration of the page's existence.
flamingbuffalo|14 years ago
AltIvan|14 years ago
I used this a lot when developing a Chrome extension for Facebook, because it let me detect when an HTML element was created via Ajax.
acangiano|14 years ago
You can do `$(document).ready()`; why not the, admittedly syntax-sugared, `$("div#main").ready()`?
jterenzio|14 years ago
http://plugins.jquery.com/project/available
tszming|14 years ago
ck2|14 years ago
I'd like to know if there is any performance hit, though; DOMNodeInserted fires on basically every single step of page creation.
AltIvan|14 years ago
dawjan|14 years ago
`Cufon.DOM.ready(function() {` `$(window).load(function () {` `$(document).ready(function() {` `Cufon.DOM.ready(function() {`
And now it is 40% faster.