On a tangent, are there any test frameworks focused on runtime performance? With esbuild speeding up everything else, jest has become the main bottleneck in my dev workflow.
Is it the overhead from Jest itself, or your tests?
brundolf|4 years ago
At the end of the day, a JS test runner has to run a bunch of tests written in JS. That can't be optimized away, and I'd assume it's always going to dominate performance.
Yes, Jest is slow. Performance-focused test frameworks like ava and uvu prominently display benchmarks demonstrating this.
eyelidlessness|4 years ago
Jest’s slowness, in my experience, is primarily rooted in three issues:
- The transformations it performs to support auto-mocking (AFAIK still its flagship feature) add constant overhead to every file it loads.
- Its process model is per-module rather than per-test, which limits concurrency: modules fan out across workers, but tests within a module run serially. A hypothetical test run with one test module containing 100 tests is ~100x slower than 100 modules with 1 test each.
- Its process model is per-module (yes, I’m repeating this factor because it has another major impact on perf): Jest destroys its dependency cache between test files, so every file re-imports its dependencies, and memory leaks accumulate across the run. Leaks like this are common and trivial to create in Node, where a long-running library (like, say, a logger) sets up some config-driven singleton as a side effect of require/import.
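The concurrency point above can be sketched with back-of-the-envelope numbers. This is a toy model, not Jest's actual scheduler; the 10ms-per-test cost and 100-worker pool are assumptions for illustration:

```javascript
// Back-of-the-envelope model of per-file test scheduling.
// Assumptions (not Jest's real numbers): each test takes ~10ms,
// and 100 workers are available.
const TEST_TIME_MS = 10;
const WORKERS = 100;

function wallClockMs(files, testsPerFile) {
  // Files fan out across workers, but tests inside a file run serially,
  // so a file's cost is the sum of its tests.
  const perFile = testsPerFile * TEST_TIME_MS;
  const waves = Math.ceil(files / WORKERS);
  return waves * perFile;
}

console.log(wallClockMs(1, 100)); // one file, 100 tests: 1000ms (serialized)
console.log(wallClockMs(100, 1)); // 100 files, 1 test each: 10ms (parallel)
```

Under these assumptions the single-file layout is exactly 100x slower, which is why splitting large test files can speed up a run even with no other changes.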
I’ve sunk literally weeks, if not months, into trying to work around issues like these on teams that insisted on keeping Jest or other tools with the aforementioned memory leaks. And while I value performance, I primarily spent that time mitigating problems caused by the poor isolation model: false positives and negatives in test runs, and intermittent failures.
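To make the singleton leak concrete, here is a minimal simulation. `loadLoggerModule` is a hypothetical stand-in for requiring a module inside a fresh per-file sandbox, and `leaked` stands in for process-wide state (timers, sockets, caches) that a module-registry reset does not release:

```javascript
// Minimal simulation of the import-time singleton leak described above.
// `leaked` represents process-wide state that outlives each sandbox.
const leaked = [];

function loadLoggerModule() {
  // "Module body": runs once per require in each fresh module registry.
  const buffer = [];
  leaked.push(buffer); // e.g. a setInterval closure pinning the buffer
  return { log: (msg) => buffer.push(msg) };
}

// Per-file isolation: each of three "test files" re-evaluates the module.
for (let testFile = 0; testFile < 3; testFile++) {
  const logger = loadLoggerModule();
  logger.log(`hello from file ${testFile}`);
}

// Three copies of the singleton's state are now pinned for the whole run,
// because nothing ever tears the old ones down.
console.log(leaked.length); // 3
```

A library can avoid this by exposing an explicit teardown hook instead of doing setup as an import side effect; a test runner can only mitigate it.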