Shameless plug: my implementation of Sublime's syntax highlighting engine in Rust has similar optimizations and more. I'm not at my computer to benchmark on the same files but it should be >2x as fast as their "after" numbers just based on lines/second for JS-like files.
This evening I'm even trying to port it to a pure Rust regex engine that should eliminate non-Rust code and make it substantially faster.
It also implements the sublime-syntax format, a superset of tmLanguage that allows even nicer highlighting.
So I ran the sqlite3.c benchmark on my machine (a comparable "somewhat powerful" machine) and it took 6.7s with my engine vs 10.9s with the new VSCode one. Both are doing tokenization+theme matching but the machines and exact grammars are not necessarily the same. I'm using ST3's C grammar but they are using a different one. It could be that I ran it on a faster computer with an easier grammar, or it could be the opposite. It's close enough I'm not willing to claim my engine is substantially faster.
For context Sublime Text 3 takes 2 seconds with the same grammar and same file on the same computer due to using a better custom regex engine written specifically for highlighting.
Given what alexdima mentioned in a different comment about spending most of the time in the regex engine, I'm not sure that my engine would be substantially faster under exactly identical conditions since I'm also bottlenecked by Oniguruma.
However, maybe after I port my engine to https://github.com/google/fancy-regex I'll be substantially faster. And if I do it is likely they could also benefit from a fancy-regex port.
Thanks for this. Clearly the original post describes good work, but I can't help feeling the JS community is slacking off when it comes to performance.
Just eyeballing the cited numbers, they take 3939ms to handle a 1.18MB input on "a somewhat powerful desktop machine". Assuming that means a chip running at 2GHz, we're talking about over 6300 cycles per byte!
That's quite frankly ridiculous. An improvement by at least one order of magnitude should be possible. Where's the ambition?
(Yes, there's always a trade-off with these things. But I feel someone has to point this out when the OP is explicitly about getting kudos for performance work.)
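For what it's worth, the cycles-per-byte figure checks out. A quick sanity check, assuming (as the comment does, not the article) a 2 GHz clock and that the full 3939 ms is tokenization time:

```javascript
// Sanity-check the ~6300 cycles/byte claim. Assumptions are mine:
// a 2 GHz clock, and "1.18MB" meaning MiB.
const seconds = 3939 / 1000;            // reported tokenization time
const bytes = 1.18 * 1024 * 1024;       // 1.18 MiB input
const hz = 2e9;                         // assumed 2 GHz clock
const cyclesPerByte = (seconds * hz) / bytes;
console.log(Math.round(cyclesPerByte)); // 6367
```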
I once dabbled in the VS Code tokenizer code. There is a lot going on, but packing tokens into a buffer took me by surprise. It was a very smart implementation.
I think it's their focus on performance and good architecture that makes VS Code stand out.
Sure, Sublime also has great engineering behind it, but being able to contribute and look under the hood of the tools we developers use is very exciting. It feels like a very democratic process.
I remember filing the mini-map bug in Monaco, I'm so glad to see they are working on implementing it in a performant way even though it will require large rewrites of their editor rendering code.
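The buffer trick mentioned above can be sketched roughly like this: instead of allocating one object per token, tokens are packed into a flat typed array of (start offset, metadata) pairs. The field layout below is illustrative, not VS Code's actual bit-level encoding:

```javascript
// Pack tokens into a Uint32Array, two 32-bit slots per token.
// This avoids per-token object allocations and is cache-friendly.
function packTokens(tokens) {
  const buf = new Uint32Array(tokens.length * 2);
  tokens.forEach((t, i) => {
    buf[2 * i] = t.start;       // offset of the token in the line
    buf[2 * i + 1] = t.colorId; // index into a color/metadata table
  });
  return buf;
}

const packed = packTokens([{ start: 0, colorId: 3 }, { start: 5, colorId: 7 }]);
console.log(packed); // Uint32Array [ 0, 3, 5, 7 ]
```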
Does VS Code support highlighting for "non-regexp" cases? For example, in code I can reference a class by its name, but at the same time it could be a function. How can you distinguish one from the other with just a regexp when the language doesn't use a capitalization convention for class names? Sometimes a class is an object in some languages (Scala, Kotlin...); how is that case handled?
- all the regular expressions in TM grammars are based on Oniguruma, a regular expression library written in C.
- the only way to interpret the grammars and get anywhere near original fidelity is to use the exact same regular expression library (with its custom syntax constructs)
- in VS Code, our runtime is Node.js and we can use a native node module that exposes the library to JavaScript
- in the Monaco Editor, we are constrained to a browser environment where we cannot do anything similar
- we have experimented with Emscripten to compile the C library to asm.js, but performance was very poor even in Firefox (10x slower) and extremely poor in Chrome (100x slower).
- we can revisit this once WebAssembly gets traction in the major browsers, but we will still need to consider the browser matrix we support; i.e. if we support IE11 and only Edge adds WebAssembly support, what will the experience be in IE11?
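To make the constraint concrete, here is a toy version of what a TM-style tokenizer does on every line: try each rule's regex at the current position and emit a scoped token for the first match. The rules and scopes below are invented for illustration, and plain JS RegExp is only a stand-in; real grammars depend on Oniguruma-specific constructs that JS regexes cannot express, which is exactly the problem described above:

```javascript
// Toy TM-style line tokenizer: first matching rule at `pos` wins.
const rules = [
  { scope: 'comment.line', re: /\/\/.*/y },
  { scope: 'string.quoted', re: /"[^"]*"/y },
  { scope: 'constant.numeric', re: /\d+/y },
];

function tokenizeLine(line) {
  const tokens = [];
  let pos = 0;
  while (pos < line.length) {
    let matched = false;
    for (const rule of rules) {
      rule.re.lastIndex = pos;          // sticky regex anchored at pos
      const m = rule.re.exec(line);
      if (m) {
        tokens.push({ start: pos, end: pos + m[0].length, scope: rule.scope });
        pos += m[0].length;
        matched = true;
        break;
      }
    }
    if (!matched) pos++;                // no rule matched: skip one char
  }
  return tokens;
}

console.log(tokenizeLine('x = 42 // answer'));
// [ { start: 4, end: 6, scope: 'constant.numeric' },
//   { start: 7, end: 16, scope: 'comment.line' } ]
```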
Yeah I'm not totally sure what was meant by that. They're plain text, thus parsable.
Maybe they meant that browsers don't usually have access to the file system, but that's changing and also not applicable since they're using Electron and have NodeJS at their disposal.
So VSCode is great in many ways, and the article might be interesting. But I would never call it fast. It's still really, really slow. Just see this comparison:
https://www.youtube.com/watch?v=nDRBxtEUOFE
That's not a perfectly fair comparison. Vim's syntaxes are often super simple and do a much less nice job at highlighting than most tmLanguage syntaxes.
Also all that video tests for is the presence of an optimization where it updates the on-screen colours as soon as that part of the file is done instead of after the entire file is done. It tells nothing about the underlying speed of the highlighting engines. Perhaps an important optimization, but not much information here.
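The optimization being described can be sketched as chunked tokenization: process the file in slices and report progress after each slice so the editor can repaint what is ready, instead of waiting for the whole file. This generator is my own minimal sketch, not how either editor actually schedules its work:

```javascript
// Tokenize `lines` in chunks; after each chunk, yield how many lines are
// ready so a caller can repaint that region and then yield to the UI.
function* highlightInChunks(lines, tokenizeLine, chunkSize = 100) {
  for (let i = 0; i < lines.length; i += chunkSize) {
    const end = Math.min(i + chunkSize, lines.length);
    for (let j = i; j < end; j++) tokenizeLine(lines[j]);
    yield end; // lines [0, end) now have colors
  }
}

// 250 lines in 100-line chunks: colors appear three times, not once at the end.
const ready = [...highlightInChunks(new Array(250).fill('x'), () => {})];
console.log(ready); // [ 100, 200, 250 ]
```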
When we started the project, we did write tokenizers by hand. I mention that in the blog post. You can write some very fast tokenizers by hand, even in JavaScript. Of course they won't be as fast as hand-written tokenizers in C, but you'd be surprised how well the code of a hand-written tokenizer in JavaScript can be optimized by a JS engine, at least I was :). IRHydra 2 is a great tool to visualise V8's IR representation of JS code [1]. It is a shame it is not built into the Chrome Dev Tools.
In the end, we simply could not write tokenizers for all languages by hand. And our users wanted to take their themes with them when switching to VS Code. That's why we added support for TM grammars and TM themes, and in hindsight I still consider it to be a very smart decision.
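For readers curious what a fast hand-written JavaScript tokenizer of the kind described looks like, here is a minimal sketch: walk character codes directly, no regexes, and emit tokens with one fixed shape so the JIT sees monomorphic objects. This is illustrative only, not VS Code's original code:

```javascript
const isDigit = (c) => c >= 48 && c <= 57;                      // '0'-'9'
const isAlpha = (c) => (c >= 97 && c <= 122) || (c >= 65 && c <= 90);

// Split a line into 'word' and 'number' tokens by scanning char codes.
function scanLine(line) {
  const tokens = [];
  let i = 0;
  while (i < line.length) {
    const c = line.charCodeAt(i);
    const start = i;
    if (isDigit(c)) {
      while (i < line.length && isDigit(line.charCodeAt(i))) i++;
      tokens.push({ start, end: i, type: 'number' });
    } else if (isAlpha(c)) {
      while (i < line.length && isAlpha(line.charCodeAt(i))) i++;
      tokens.push({ start, end: i, type: 'word' });
    } else {
      i++; // skip whitespace/punctuation
    }
  }
  return tokens;
}

console.log(scanLine('let x = 42').map((t) => t.type)); // [ 'word', 'word', 'number' ]
```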
The article only claims that they made it faster than the previous technique. Most people say VSCode is faster than other Electron editors like Atom. I am not sure who has said that VSCode is faster than Vim; it would be safe to assume that Vim is faster.
So with the comparisons at the end of the article, does this mean that there were a lot of edge cases where the theming wasn't being correctly applied prior to 1.9? Were there themes that incorporated less stylistic choices because of the limitations?
Yes, those comparisons at the end show differences in rendering caused by the "approximations" used prior to VS Code 1.9. They were all caused by the difference between the ranking rules of CSS selectors and the ranking rules of TM scope selectors.
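The core of that difference can be sketched in a few lines. A TM theme rule matches a scope when its selector is a dot-separated prefix of that scope, and among matching rules the most specific one wins, whereas CSS computes specificity from id/class/type counts. This toy version ignores descendant selectors and multi-scope stacks, and the selectors and colors are invented for illustration:

```javascript
// A selector matches a scope if it is a dot-prefix of it.
function matches(selector, scope) {
  return scope === selector || scope.startsWith(selector + '.');
}

// Among matching rules, the longest (most specific) selector wins.
function winningRule(rules, scope) {
  let best = null;
  for (const r of rules) {
    if (matches(r.selector, scope) &&
        (best === null || r.selector.length > best.selector.length)) {
      best = r;
    }
  }
  return best;
}

const theme = [
  { selector: 'string', color: '#a31515' },
  { selector: 'string.quoted.double', color: '#0000ff' },
];
console.log(winningRule(theme, 'string.quoted.double.js').color); // #0000ff
console.log(winningRule(theme, 'string.template.js').color);      // #a31515
```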
Those who switched from ST to VS Code, did you stick with VS Code? Do you have any advice for making the transition easier - key bindings, packages etc.?
I stuck with VS Code. To be honest, I don't think ST or Atom are even in the same league, due to the amount of integrated tooling. The integrated debugger, git, and task runner management is a godsend. VS Code is also the only editor I've used where JavaScript type lookups and auto-completion/code doc parsing Just Work out of the box.
I've switched over in the past couple of months. The motivating factor for me was my switch to writing more JS (the debugger works really well). But I'm increasingly finding myself using it for everything I used to use Sublime for. It's come a long way in the year or so since I last gave it a try.
trishume | 9 years ago:
https://github.com/trishume/syntect
octref | 9 years ago:
And, do you have an issue for the regex port? Would love to see the benchmark.
mrgalaxy | 9 years ago:
That doesn't sound right... but then again I don't know enough about TextMate grammars to argue.
jbmorgado | 9 years ago:
Can anyone explain why this is the case? It's not only VSCode; I remember seeing something about TextMate grammars in other editors too.
[1] http://mrale.ph/irhydra/2/
Animats | 9 years ago:
Syntax highlighting is eye candy. Automatic indentation is what turns a text editor into a code editor.