WDYM this seems very familiar. At commit deadbeef I don't need to materialize the full tree to build some subcomponent of the monorepo. Did I miss something?
And as for pricing... are there really that many people working on O(billion) lines of code that can't afford $TalkToUs? I'd reckon that Linux is the biggest source of hobbyist commits and that checks out on my laptop OK (though I'll admit I don't really do much beyond ./configure && make there...)
Well they also claim to be able to cache build steps somehow build-system independently.
Builds were audited by somehow intercepting things like open(2) and getenv(3) invoked by a compiler or similar tool, and each produced object had an associated record listing the full path to the tool that produced it, its accurate dependencies (exact versions), and environment variables that were actually used.
Anything that could affect the reproducibility was captured.
If an object was about to be built with the exact same circumstances as those in an existing record, the old object was reused, or "winked-in", as they called it.
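A minimal sketch of that record-and-reuse scheme in Python (the record format and helper names here are made up for illustration, not ClearCase's actual design): fingerprint each step by the tool's full path, the contents of its inputs, and the environment variables it actually read, then reuse the stored object on an exact match.

```python
import hashlib
import os

def step_key(tool_path, input_files, used_env_vars):
    """Fingerprint one build step: tool, exact input contents, and the env vars it read."""
    h = hashlib.sha256()
    h.update(tool_path.encode())
    for path in sorted(input_files):
        with open(path, "rb") as f:
            h.update(hashlib.sha256(f.read()).digest())
    for var in sorted(used_env_vars):
        h.update(f"{var}={os.environ.get(var, '')}".encode())
    return h.hexdigest()

class WinkInCache:
    """Reuse an object if an identical step was recorded before."""
    def __init__(self):
        self.records = {}  # key -> object bytes

    def build(self, tool_path, input_files, used_env_vars, run_step):
        key = step_key(tool_path, input_files, used_env_vars)
        if key in self.records:          # exact match: "wink in" the old object
            return self.records[key], True
        obj = run_step()                 # cache miss: actually run the tool
        self.records[key] = obj
        return obj, False
```

A real system also has to fingerprint the tool binary itself and discover the input list by intercepting reads at the syscall level, rather than trusting a declared dependency list.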
It also provided versioning at the filesystem level, so one could write something like file.c@@/trunk/branch/subbranch/3 and use it with any program without having to run a VCS client. The version part of the "filename" was treated as a regular subdirectory path, so you could autocomplete it even with ancient shells (I used it on Solaris).
Meh, content marketing for a commercial biz. There are no interesting technical details here.
I was a build engineer in a previous life. Not for Android apps, but some of the low-effort, high-value tricks I used involved:
* Do your building in a tmpfs if you have the spare RAM and your build (or parts of it) can fit there.
* Don't copy around large files if you can use symlinks, hardlinks, or reflinks instead.
* If you don't care about crash resiliency during the build phase (and you normally should not, each build should be done in a brand-new pristine reproducible environment that can be thrown away), save useless I/O via libeatmydata and similar tools.
* Cross-compilers are much faster than emulation for a native compiler, but there is a greater chance of missing some crucial piece of configuration and silently ending up with a broken artifact. Choose wisely.
The high-value high-effort parts are ruthlessly optimizing your build system and caching intermediate build artifacts that rarely change.
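The "use links instead of copies" tip can be sketched like this (a hypothetical helper, assuming a POSIX filesystem): try a hard link first, and fall back to a real copy when linking fails, e.g. across filesystems.

```python
import os
import shutil

def place_artifact(src, dst):
    """Materialize src at dst without duplicating data when possible."""
    if os.path.exists(dst):
        os.remove(dst)
    try:
        os.link(src, dst)       # hard link: instant, no extra disk space
        return "hardlink"
    except OSError:
        shutil.copy2(src, dst)  # cross-device or unsupported FS: real copy
        return "copy"
```

Reflinks (`cp --reflink=auto`, or the FICLONE ioctl on Linux) give copy semantics with shared extents on filesystems that support them, which is often the safer default when the destination will be modified in place.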
Hey everyone. I’m Serban, co-founder of Source.dev. Thanks for the upvotes and thoughtful discussion. I’ll reply to as many comments as I can. Nothing means more to an early-stage team than seeing we’re building something people truly value - thanks from all of us at Source.dev!
While I’m sure it’s much more advanced, out of interest is this similar to the Python tool ‘fabricate’, which would use strace to track all files a program read, and wrote?
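For reference, fabricate's strace mode boils down to running the command under strace and scanning the log for file accesses. A toy parser for that idea (the regex is a simplification of my own; real strace output varies by version and needs `-f` and careful quoting):

```python
import re

# Matches lines like:
#   openat(AT_FDCWD, "src/main.c", O_RDONLY) = 3
#   open("/usr/include/stdio.h", O_RDONLY|O_CLOEXEC) = 4
_OPEN_RE = re.compile(r'\bopen(?:at)?\((?:AT_FDCWD, )?"([^"]+)", ([^)]*)\) = (-?\d+)')

def deps_from_strace(log_text):
    """Return (reads, writes) seen in an strace log; failed opens are ignored."""
    reads, writes = set(), set()
    for match in _OPEN_RE.finditer(log_text):
        path, flags, ret = match.groups()
        if int(ret) < 0:
            continue  # open failed, not a real dependency
        (writes if "O_WRONLY" in flags or "O_RDWR" in flags else reads).add(path)
    return reads, writes
```

The read set becomes the step's dependency list; if none of those files changed since the last run, the step can be skipped.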
> Fast builds are what truly makes a difference to developer productivity. With SourceFS builds complete over 9x faster on a regular developer machine. This sets a new standard as it enables developers to get their sword fighting time back and speeds up the lengthy feedback loop on CI pipelines.
Objection! Long build times are better for sword-fighting time. The longer it takes, the more sword-fighting we have time for!
The world desperately needs a good open source VFS that supports Windows, macOS, and Linux. Waaaaay too many companies have independently reinvented this wheel. Someone just needs to do it once, open source it, and then we can all move on.
This. Such a product also solves some AI problems by letting you version very large amounts of training data in a VCS like git, which can then be farmed out for distributed unit testing.
Please fill in this form: https://www.source.dev/demo . We’re prioritizing cloud deployments but are keen to hear about your use case and see what we can do.
Tldr: your build system is so f'd that you have gigs of unused source and hundreds of repeated executions of the same build step. They can fix that. Or, you could, I dunno, fix your build?
You could just have a mono-repo with a large amount of assets that aren't always relevant to pull.
Incremental builds and diff-only pulls are not enough in a modern workflow. You either need to keep a fleet of warm builders or you need to store and sync the previous build state to fresh machines.
Games and I'm sure many other types of apps fall into this category of long builds, large assets, and lots of intermediate build files. You don't even need multiple apps in a repo to hit this problem. There's no simple off the shelf solution.
ongy|4 months ago
Looks like it's similar in some ways. But they also don't tell too much and even the self-hosting variant is "Talk to us" pricing :/
zokier|4 months ago
> As the build runs, any step that exactly matches a prior record is skipped and the results are automatically reused
> SourceFS delivers the performance gains of modern build systems like Bazel or Buck2 – while also accelerating checkouts – all without requiring any migration.
Which sounds way too good to be true.
ctoth|4 months ago
We're going to 1 billion LoC codebases and there's nothing stopping us!
ongy|4 months ago
Though from what I gather from the story, part of the speedup comes from how Android composes their build stages.
I.e. speeding up by not downloading everything only helps if you don't need everything you download. And it adds up when you download multiple times.
I'm not sure they can actually provide a speedup in a tight developer cycle with a local git checkout and a good build system.