andhow's comments

andhow | 10 years ago | on: From Asm.js to WebAssembly

I wouldn't say this is "tied" to ES6, but rather intends to integrate nicely. If a developer has no interest in being called by or calling JS, they should be able to ignore the ES6 module aspect. For workers, it should (eventually, probably not in the MVP v.1) be possible to pass a URL to a worker constructor; with Blob + Object URL, that URL needn't be a remote fetch and can be explicitly cached (via the Cache API or IndexedDB) or dynamically generated (via the Blob constructor).
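The Blob + Object URL path can be sketched today with plain web APIs (the worker body and values below are invented for illustration):

```javascript
// Sketch: spin up a worker from dynamically generated source, with no
// remote fetch involved. The worker body here is a made-up example.
const source = 'onmessage = (e) => postMessage(e.data * 2);';
const blob = new Blob([source], { type: "application/javascript" });

// Browser-only part: turn the Blob into an object URL and hand it to Worker.
if (typeof Worker !== "undefined") {
  const url = URL.createObjectURL(blob);
  const worker = new Worker(url);
  worker.onmessage = (e) => console.log(e.data); // would log 42
  worker.postMessage(21);
}
```

The same Blob could instead be written into IndexedDB (or the Cache API) and re-materialized on a later visit, which is the explicit-caching point above.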

andhow | 11 years ago | on: What does “asm.js optimizations” mean?

asm.js now allows the heap to be resized (by replacing the heap's ArrayBuffer with a newer, bigger ArrayBuffer that was either copied or produced via ES7-proposed ArrayBuffer.transfer [1]). Heap resizing currently has to be enabled in Emscripten (by passing -s ALLOW_MEMORY_GROWTH=1), but may become the default in the future. The main asm.js spec page hasn't been updated yet, but the extension was discussed publicly with comments from Microsoft engineers [2].
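Without ArrayBuffer.transfer, growth means a copy; a minimal sketch of that fallback (the function name is mine, not Emscripten's):

```javascript
// Copying fallback for heap growth: allocate a bigger ArrayBuffer and
// copy the old heap's bytes into it. ArrayBuffer.transfer would avoid
// the copy by moving ownership of the underlying memory instead.
function growHeap(oldBuffer, newByteLength) {
  const newBuffer = new ArrayBuffer(newByteLength);
  new Uint8Array(newBuffer).set(new Uint8Array(oldBuffer));
  return newBuffer; // all typed-array views must be recreated over this
}
```

After growth, the compiled code has to re-derive its heap views (Int32Array, Float64Array, etc.) from the new buffer, which is part of what the asm.js extension had to specify.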

[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... [2] http://discourse.specifiction.org/t/request-for-comments-swi...

andhow | 12 years ago | on: The Birth and Death of JavaScript [video]

On x64 in Firefox, at least, there are no bounds checks: the index is a uint32; the entire addressable 4GB range is reserved PROT_NONE, with only the accessible region mapped PROT_READ|PROT_WRITE; out-of-bounds accesses thus reliably turn into SIGSEGVs, which are handled safely, after which execution resumes. Bounds checking is effectively performed by the MMU.

andhow | 12 years ago | on: Gap between asm.js and native gets narrower with float32 optimizations

For (1): You're right that an option is to compile the VM itself to asm.js (since the VM is usually written in C/C++; JITs are an obvious exception since they generate machine code at runtime). This has already been done, e.g., for Java [1] and Lua [2]. What is meant by "supporting GC languages" is translating GC objects in the source language to real JavaScript objects so that the JS GC may be used on them. For statically-typed source languages like .NET/JVM, the proposed Typed Objects extension to JavaScript [3] could be naturally integrated into asm.js to describe classes. This is all a little ways off since Typed Objects needs to be standardized first. Also, the lack of finalizers in JS would limit the fidelity of the translation.

For (2): yes, it is already being worked on [4].

[1] xmlvm.org [2] http://kripken.github.io/lua.vm.js/lua.vm.js.html [3] http://wiki.ecmascript.org/doku.php?id=harmony:typed_objects [4] http://badassjs.com/post/43158184752/qt-gui-toolkit-ported-t...

andhow | 12 years ago | on: Gap between asm.js and native gets narrower with float32 optimizations

The 'war on native' has multiple fronts and this post is just reporting on one of them. For many apps, I agree that the items you've mentioned are the most significant, and progress is also being made on these fronts (e.g., GPU acceleration of some CSS transitions is already in Firefox; more and more DOM APIs are being exposed to workers). For other apps, though, especially ones using WebGL like games or authoring tools, raw computational throughput is the most significant.

andhow | 12 years ago | on: Obscure C++ Features

I've always thought that this ranked among the most obscure:

  typedef int F(int);            // F is a function type: int(int)
  class C { F f; };              // declares the member function int C::f(int)
  int C::f(int i) { return i; }  // which is then defined out-of-line

I've never seen it used in practice...

andhow | 13 years ago | on: Asm.js: a strict subset of js for compilers – working draft

(Luke Wagner from Mozilla here.)

> This seems very much targeted at emscripten and not to cross-compilers that start with GC'ed languages like GWT, Dart, ClojureScript, et al.

That's correct, although one could implement a garbage collector on top of the typed-array heap (we have working examples of this). There are some limitations to GCing the typed array manually, though, such as not taking advantage of the browser's ability to better schedule GCs.
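To make "a GC on top of the typed-array heap" concrete, here is a toy mark-and-sweep over fixed-size cells; the layout and names are invented for illustration and not taken from any real compiler:

```javascript
// Toy mark-and-sweep over a typed-array heap. Each cell is 4 int32
// words: two header words (unused here) and two fields; a field >= 0
// is a "pointer" (another cell's index), -1 is null.
const WORDS = 4;
class ToyHeap {
  constructor(cells) {
    this.mem = new Int32Array(cells * WORDS).fill(-1);
    this.free = Array.from({ length: cells }, (_, i) => i);
  }
  alloc() {
    if (this.free.length === 0) throw new Error("out of cells: run gc()");
    return this.free.pop();
  }
  setField(cell, field, ref) { this.mem[cell * WORDS + 2 + field] = ref; }
  gc(roots) {
    const marked = new Set();
    const stack = [...roots];
    while (stack.length > 0) {            // mark: trace from the roots
      const c = stack.pop();
      if (marked.has(c)) continue;
      marked.add(c);
      for (let f = 0; f < 2; f++) {
        const ref = this.mem[c * WORDS + 2 + f];
        if (ref >= 0) stack.push(ref);
      }
    }
    this.free = [];                       // sweep: unmarked cells are free
    for (let i = 0; i < this.mem.length / WORDS; i++)
      if (!marked.has(i)) this.free.push(i);
  }
}
```

This also shows the scheduling limitation mentioned above: this collector runs only when the program decides to call gc(), whereas the browser's own GC can piggyback on idle time.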

Looking further in the future, though, it would be completely reasonable to extend asm.js to allow the super-optimizable use of the upcoming BinaryData API [1] in the style of JVM/CLR objects. Again, though, this is speculative; BinaryData isn't even standardized yet.

> It's also unclear to me how this solves the problem of startup time on mobile.

We have several strategies to improve this. The "use asm" directive, in addition to allowing us to produce useful diagnostic messages to devs when there is an asm.js type error, allows us to confidently attempt eager compilation, which can happen on another thread while, e.g., the browser is downloading art assets. Looking farther in the future again, we could make some relatively simple extensions to the set of Transferable [2] objects that would allow efficient programmer-controlled caching of asm.js code, including JIT code, using the IndexedDB object store.

> In all likelihood, the majority of asm.js outputs would actually be non-human readable output of optimizing cross-compilers, so there isn't much benefit from having a readable syntax that humans could read, so what's the real justification for using JS as an intermediate representation over say, a syntax specifically designed for minimum network overhead and maximum startup speed?

Before minification, asm.js is fairly readable once you understand the basic patterns, assuming your compiler keeps symbolic names (Emscripten does). The primary benefit is that asm.js runs right now, rather efficiently, in all major browsers. It also poses zero standardization effort (no new semantics) and rather low implementation effort (the asm.js type system can be implemented with a simple recursive traversal of the parse tree that generates IR using the JS VM's existing JIT backend). This increases the chances that other engines will adopt the same optimization scheme. A solution for native performance is only a solution if it is portable, and we want to maximize that probability.
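For readers who haven't seen the patterns, here is a small hand-written module in that style (simplified; real compiler output is much denser, and this sketch uses modern `globalThis` as the stdlib argument):

```javascript
// Minimal asm.js-style module: "use asm", a typed view over the heap,
// and |0 coercions annotating int types. Runs as ordinary JS everywhere.
function AsmModule(stdlib, foreign, heap) {
  "use asm";
  var H32 = new stdlib.Int32Array(heap);
  function sum(n) {
    n = n | 0;                                     // n: int
    var i = 0;
    var total = 0;
    for (; (i | 0) < (n | 0); i = (i + 1) | 0)
      total = (total + (H32[(i << 2) >> 2] | 0)) | 0;
    return total | 0;
  }
  return { sum: sum };
}

// Link it against a 64KB heap and call in:
const heap = new ArrayBuffer(0x10000);
new Int32Array(heap).set([1, 2, 3, 4]);
const mod = AsmModule(globalThis, {}, heap);
```

The `| 0` and `H32[... >> 2]` forms are what the type checker keys off of; an engine without asm.js support just runs the same code as plain JavaScript.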

> The usual response is minify + gzip, but it's not a panacea.

In addition to minify+gzip, one can also write a decompressor in asm.js that unpacks the larger program. Also, see [3] for how minified gzipped Emscripten code is comparable to gzipped object files.

[1] http://wiki.ecmascript.org/doku.php?id=harmony:binary_data [2] http://www.whatwg.org/specs/web-apps/current-work/multipage/... [3] http://mozakai.blogspot.com/2011/11/code-size-when-compiling...

andhow | 14 years ago | on: Open Web Device

You're right, if they left out the + sign on the beta keyboard UI they probably forgot all the calls to free(), WebSockets, and maybe CSS. Outlook is bleak.

andhow | 14 years ago | on: Mozilla is building an operating system

Boot To Gecko is not a project to build an operating system.

IIUC, Mozilla is planning to reuse Linux and core components of Android. If anything, I would think Boot To Gecko is better thought of as a shell around the OS. With process separation (underway for a while now in the Electrolysis project), bugs/leaks/crashes in Gecko should be isolated and not significantly more disruptive than in normal Firefox.

I think the major value provided by (and work required for) Boot To Gecko will be in creating open standards (in the open) for access to the device, so that the web platform can be a first-class citizen of the device.
