I still think we're missing the point; we're focusing on the wrong benchmarks. We're putting too much emphasis on JS speed, when in reality most web apps lag behind native because of a (perceived) lack of graphics performance.
The shortcut out of this requires a greater variety of GPU-accelerated CSS transitions/DOM changes, as well as easier ways to move computations and DOM manipulations off the main thread, where they cause horribly noticeable UI hiccups. Web Workers are still too primitive (e.g. no DOM access whatsoever) and slow (e.g. no shared memory).
Not saying it's unimportant to improve JS's CPU performance; just saying that we're focusing too much on the wrong battle in the war against native.
WebGL anyone? This is what does most of the heavy lifting in the main asm.js use case: games.
I am still waiting to see a UI framework built using asm.js and WebGL. Maybe even an existing native framework ported over; something like WPF/E (Silverlight) might be about as difficult to port as a game engine.
Asm.js still has no solution to shared memory threads, which will be an issue for any modern game engine I would imagine.
The 'war on native' has multiple fronts, and this post is just reporting on one of them. For many apps, I agree that the items you've mentioned are the most significant, and progress is being made on those fronts too (e.g., GPU acceleration of some CSS transitions is already in Firefox; more and more DOM APIs are being exposed to workers). For other apps, though, especially ones using WebGL like games or authoring tools, raw computational throughput is the most significant.
Web workers will never have DOM access. They run in their own isolated context, loaded from a separate script URL. That's not an issue: you use postMessage to communicate with the workers.
1) I understand the C/C++ -> LLVM -> Emscripten -> asm.js process. But I heard they're also working on supporting GC languages (like Java and Go). How would this work exactly? Wouldn't they first have to port the entire JVM or Go runtime into asm.js? And every time a Java/Go -> asm.js program is downloaded, it would basically also download the entire JVM/Go runtime as well?
2) Would it be possible to use GUI frameworks (like Qt for C++ and maybe Swing for Java in the future) to build GUIs and directly output to canvas?
For (1): You're right that one option is to compile the VM itself to asm.js (since the VM is usually written in C/C++; JITs are an obvious exception since they generate machine code at runtime). This has already been done, e.g., for Java [1] and Lua [2]. What is meant by "supporting GC languages" is translating GC objects in the source language into real JavaScript objects, so that the JS GC can manage them. For statically-typed source languages like .NET/JVM, the proposed Typed Objects extension to JavaScript [3] could be naturally integrated into asm.js to describe classes. This is all a little ways off, since Typed Objects needs to be standardized first. Also, the lack of finalizers in JS would limit the fidelity of the translation.
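To make that concrete, a rough, hypothetical sketch of the idea (illustrative only, not actual compiler output): a Java-like class translated so that each instance is a plain JavaScript object, letting the engine's garbage collector manage its lifetime instead of the compiled program shipping its own GC inside an asm.js heap.

```javascript
// Hypothetical translation of a Java-like class:
//   class Point { int x; int y; int lengthSq() { return x*x + y*y; } }
// Each instance becomes an ordinary JS object, so the JavaScript GC
// reclaims it automatically; int fields are kept as int32 via |0.
function Point(x, y) {
  this.x = x | 0;
  this.y = y | 0;
}
Point.prototype.lengthSq = function () {
  return ((this.x * this.x) + (this.y * this.y)) | 0;
};

var p = new Point(3, 4);
console.log(p.lengthSq()); // 25
```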
I wouldn't say asm.js is a spin-off. In fact it's a strict subset of JavaScript, though designed to be used as a compiler target rather than directly coded. The web browser can then execute the code in a highly optimized (hopefully, eventually near native performance) fashion.
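A toy illustration of what that subset looks like (a minimal, hand-written module in the asm.js style; real modules are emitted by compilers like Emscripten, not written by hand):

```javascript
// A minimal asm.js-style module. It is ordinary JavaScript, so it runs in
// any engine, but the "use asm" pragma plus the type coercions
// (`x|0` marks an int, `+x` a double) let an optimizing engine validate
// and compile the whole module ahead of time instead of JITting it lazily.
function MiniModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;            // parameter type annotation: int
    b = b | 0;
    return (a + b) | 0;   // return type annotation: int
  }
  return { add: add };
}

// A module is linked against a standard-library object, foreign functions,
// and a heap (a plain ArrayBuffer) -- all unused in this toy example.
var mod = MiniModule({}, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5
```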
Could someone please explain to me where mankind is currently going in software development.
Here is my current understanding (approximated heavily to allow seeing the big picture) and presented in non-linear chronological order:
== Note: I am genuinely trying to construct the big picture. Please help me understand and not offer just plain criticism that does not teach me. ==
1. There were multiple processors and operating systems. Programming languages like C were created with the idea of writing programs once and just compiling them for different platforms. But alas, systems were still too different to handle with a common code base. Many languages like C++ came along, but did not provide cross-platform libraries for many things like GUI development. Third-party libraries did, but native UI/UX was still not achieved?
2. Java was created with the idea of writing programs once (and even compiling to byte code once) and running them everywhere, thus solving the compatibility issues C and C++ had across platforms. But alas, platforms were still too different to serve from a common code base (is this true?), especially with mobile devices coming along and bringing different UI/UX requirements. Also, native UI/UX was not achievable (is this true?).
In the meanwhile:
A. Installing software on the local machine (aka caching it locally) and constantly updating it was a pain. Automatic updates became popular, but unpatched machines still exist.
B. Browsers came along with static pages, then gradually added more and more interactivity.
C. Machines and networks became fast enough to install the software live from a website when a user visits and remove it when the browser cache is cleared.
So now:
3. Applications start getting developed in JavaScript (and other technologies) inside the browser. Native UI/UX still performs better on mobile (and on desktops too, in my experience). Browsers still struggle somewhat with standardization.
4. The legacy code from #1 above is now being connected to #3 above allowing even operating systems and VMs to run inside the browser.
So now we may have C++ code (originally designed to compile natively) converted to JavaScript and running inside a browser. The browser interprets/JITs that JavaScript to run natively on the machine/OS. The browser itself is written in C++ or the like, and runs as a visible or hidden layer on top of the OS or a VM, which is itself written in C++ or the like, and finally everything runs on the very processor the original C++ code was designed for.
While I certainly appreciate the flexibility this offers, I am still trying to make sense of all of it as progress made by mankind as a whole. What is the ultimate problem we are trying to solve? Compatibility between platforms and uniformity of experience across them? Improvements to the software development process (better languages)? Loading programs securely onto the local machine (caching) instead of installing them?
No matter which way I look at it, it seems to me that if mankind had thought these problems through, without the purely historical elements in the picture, they should "technically" have been solvable more easily than by the path mankind has taken.
Again, I am seeking help and input to understand this better, not underestimating the value of the efforts made by people around the world, nor criticizing any such effort, including asm.js.
There is no concerted master plan, only least resistance paths when trying to build something.
A programmer uses language X (JavaScript) and needs to do something sufficiently complex that he doesn't want to write it again (image processing / game dynamics / crypto). So the programmer writes a transpiler.
Or programmer A wants to do something heretic for the sake of it, programmer B sees an opportunity for his sufficiently complex library.
Note: this response deals with this question from a consumer OS perspective, the answer with regard to servers is quite different.
The funny thing about the history of programming languages is that, much like with the properties of human societies, if you don't know the actual history it's easy to get it completely wrong by logical analysis. For example, someone not knowing anything about our history might naturally assume that we started with monarchies and then moved on to republics, etc. Only after reading about Rome would they see how messy it really was to get to where we are today.
Similarly, it's easy to think that C was the "first" language, and that we then gradually developed more dynamic ideas. But the reality is that C showed up 14 years after Lisp.
All this is to say that if you want to truly understand "where we're going", you have to understand that most of the problems we are trying to "fix" are really "fake" or "man-made" problems (from an engineering perspective, that is). Making a program that runs on every architecture is not difficult. C++ wasn't invented to solve this problem, nor did it make things any easier in this department. The reason a program that runs on Mac doesn't run on Windows is that the companies that own the underlying systems don't want it to (I suppose you could thus argue that the reason is that we allow laws to make this possible). It's not that Windows can't handle drawing Macintosh-looking buttons or something.
Yes, yes, it's certainly the case that at some point it would theoretically have been annoying to ship a PPC bundle and an x86 bundle even if the API problem had magically not existed -- but the reality is that today basically everyone is running the same architecture for all intents and purposes (90+% of your consumer desktop base is on x86 and 90+% of your smartphone base is on ARM).
So the real problem now becomes making a program that runs on Linux/Mac/PC (or alternatively iOS/Android) that "feels" right on each of them. This remains an unsolved issue, and is arguably why Java failed. Java figured out the technical aspect just fine (again, it is NOT hard to make something that runs on any piece of metal); however, Java apps "felt" terrible. Similarly, Microsoft correctly understood that Java was a threat and hampered it. As you can see, this has nothing to do with engineering.
So why is this not the case with JavaScript? Again, for no technical reason: the answer is purely cultural. The trick with JS was that it snuck in through the browser where two important coincidences took place:
1. People didn't have an existing expectation of homogeneity in the browser. It started as a place to post a bunch of documents, and so people got used to different websites looking different. No one complains if website A has different buttons or behaviors than website B, so as an accident of history developers were able to increasingly add functionality without complaints of a website "not feeling right on Mac OS".
2. No large corporation was able to properly assess the danger of JS early enough to kill it. Again, since the web was a place for teens to make emo blogs, it wasn't kept in check like Java was. Hell, Microsoft added XMLHttpRequest: arguably the most important component in the rise of web apps.
So now everything is about trying to leverage this existing foothold; that's why we bend over backwards to get low-level languages to compile into high-level languages and so forth. Don't try to think about it as a logical progression, or as how you would design the whole system from the ground up; just treat it more like a challenge, or an unnecessarily complex logic puzzle.
What advantages would those be? Do they still apply when compared to a modern strongly-typed language with type inference and typeclass-like functionality, like Scala?
If you disagree with the following opinion, please don't downvote me; instead, enlighten me.
Frankly, I don't see the point of compiling C/C++ to JavaScript. If you're using C/C++ you might as well compile to machine code; it's not like you gain anything by compiling to JavaScript.
What tom said; also, typing a URL in a browser is a lot easier than installing something. This might sound silly, but for the typical computer user (and even many nerds!) it makes a big difference.
Ask a question rather than state a potentially incorrect opinion which sounds like you think you know what you are talking about, but suggests that you don't.
With my limited knowledge, your point initially made sense to me. But clearly you and I don't know enough. Yes, on the face of it, it does seem odd to compile C to JS. In fact, it seems positively mental. But now that I have read the replies, I can see why. The trick is to acknowledge one's knowledge limits.
AndrewDucker | 12 years ago
https://hacks.mozilla.org/2013/12/gap-between-asm-js-and-nat...
andhow | 12 years ago
For (2): yes, it is already being worked on [4].
[1] xmlvm.org
[2] http://kripken.github.io/lua.vm.js/lua.vm.js.html
[3] http://wiki.ecmascript.org/doku.php?id=harmony:typed_objects
[4] http://badassjs.com/post/43158184752/qt-gui-toolkit-ported-t...
camus2 | 12 years ago
http://badassjs.com/post/43158184752/qt-gui-toolkit-ported-t...
JVM running in the browser:
http://plasma-umass.github.io/doppio/about.html
Anything is possible in the browser.
daigoba66 | 12 years ago
Brendan Eich talks a lot about this in a recent presentation: http://www.infoq.com/presentations/web-evolution-trends
yetanotherphd | 12 years ago
Google's PNaCl is more elegant since it doesn't involve js-that-is-also-bytecode but it accomplishes essentially the same task.
Creating a new cross-platform UI framework would be essentially recreating HTML5 anyway, so I don't think there would be a big advantage there.