Without this beta GitHub offer, you get: 2-core CPU (x86_64), 7 GB of RAM, 14 GB of SSD space
Your last development machine that was thrown away is so much faster than this (and we have great tools for administering machines like this nowadays). Hell, the computer I bought as a student in 2008 is comparable(!) (though it didn't have an SSD). And it will have so much better network connectivity with everything else on your network.
Whenever you hear "cloud", realize that the dedicated compute you actually get (unless you specifically pay for it, in which case it will be more expensive than self-hosting) is on the order of the phone you had two generations ago. That is why Gmail search sucks: they can't afford to really search your messages and can't even do exact matching properly.
So yes, apparently GitHub is fixing this now, and if paying for this problem makes sense for you, do it. But this is a problem that was partly invented by the cloud in the first place.
Actually, CI is one of the canonical reasons to use the cloud: short bursts of load that (generally) scale well, but that you don't need 24/7. Using twice as big an instance means you need roughly half the time, which means costs stay about the same.
I find that using regular cloud instances (e.g. EC2) with a custom runner for some CI platform (GitLab, TeamCity, whatever you prefer) is a really sweet spot.
At QuasarDB, our C++ builds only take about 20 minutes this way, as long as we’re using a 128 vCPU instance. It’s a decent sweet spot for us.
Sounds like your average VPS offering though; that is, it's not physical hardware but virtualized. CI is still considered something that is allowed to take a while, since it's an async process, etc.
Self-hosting is an option for sure, but an in-between step would be to run your own GitLab and set up your own runners. I found it's actually pretty easy to configure a runner on your own device, and with the same process you can rent a server or VPS with higher specs than the default offering and make your builds faster.
> 2-core CPU (x86_64), 7 GB of RAM, 14 GB of SSD space
This is comparable to what several of my friends are using right now, two of whom are trying to get into IT (so mostly doing frontend development), because the economy fucked them over for the third time in their lives. They're also in their 30s. This is nothing unusual at all.
It's pretty sad how out of touch this place can be. Not everyone in this world makes $20k a month and can buy macbooks as they come out.
I recently moved our GitHub CI to a self-hosted runner and it reduced CI times for all jobs by ~5x.
One frustrating part is that a single GitHub runner can handle only one job at a time, and our repos have 4-8 highly parallelized jobs, so we need multiple runners. To do this economically, I made a Docker image and run 10 instances of it on a single dedicated host from Hetzner. For ~$50/mo, we have unlimited CI and jobs run as fast as theoretically possible.
The CI box has no inbound ports open to the internet, and the OS is set to auto-update, so the maintenance burden is low.
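For concreteness, here is a minimal sketch of that kind of setup using Docker Compose and a popular community runner image. The image name, environment variables, and labels are assumptions for illustration; check your chosen image's docs for the exact variables it expects.

```yaml
# docker-compose.yml - one runner service, scaled out to N containers on the host
services:
  runner:
    image: myoung34/github-runner:latest   # community GitHub Actions runner image (assumed)
    restart: always
    environment:
      RUNNER_SCOPE: org                    # register at the org level
      ORG_NAME: your-org                   # assumption: your GitHub organization
      ACCESS_TOKEN: ${GH_PAT}              # PAT used to fetch a registration token
      LABELS: self-hosted,hetzner          # labels your workflows target via runs-on
```

Then `docker compose up -d --scale runner=10` brings up ten runners; each registers itself with GitHub and picks up one job at a time, which covers the 4-8 parallel jobs per repo.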
Developer velocity has been my project for the last quarter at work: we just switched from GitHub runners to Hetzner machines, and saw similar improvements to average build times. Between self-hosting our runners and switching to Bazel, which caches aggressively, we've driven down average CI test time from 23 min to 2m40s.
a) Developer builds locally
b) Developer tests locally
c) Developer pushes to a repository
d) CI starts
e) CI ends
f) Wait for human code review and approval
g) Merge and deploy
h) Observe that nothing broke / no need to revert.
Is it even possible for (e) - (d) to be short enough, let alone (f) - (d), to keep the developer's attention instead of context switching? Most devs I know just context switch immediately after (c). If you care about developer productivity, you're much more likely to get results from focusing on (a) and (b), by hooking development laptops into caching and restricting system/test scope, than you are by reducing CI times, unless your CI takes some ungodly amount of time to run (several hours).
For a point of comparison: the article examines how using monster machines can reduce the build time of Fedora to 27 minutes (not really a comparable example for most companies, but OK). My devs complain about an (admittedly unoptimized) CI time of 20 minutes (on a much simpler project than Fedora) that introduces context switching. Is the article really trying to get me to believe that Fedora developers wouldn't context switch on a 27 minute build, twiddling their thumbs the whole time, but that a 35 or 55 minute build would? Something about getting under the 30 minute bar keeps developers focused? I call bullshit.
> It’s cheaper—and less frustrating for your developers—to pay more for better hardware to keep your team on track.
I agree with this point, 100%. And not just in the context of build hardware. It's often cheaper to pay for specialized services rather than paying in-house developers to do those things. I've lost count of the number of times in my career that I've spent days of effort (at $200/hr) so that the company I was working for could avoid paying for some $15/mo SaaS.
Builds are one of those things where I can't see the economics of buying cloud VMs if you have a lot of load.
Buy a rack and a dozen cheap-ish rack chassis with mid-range CPUs that have lots of cores but not necessarily the highest specs otherwise (last generation we used the Intel 9700K; for a recent refresh it's the AMD 7700X). You can get quite far at $1k per node, $12k for the rack. Then you can run 3-4 build agents per node, giving you up to a 48-agent cluster in your rack. Electricity and management come on top of the purchase price of course, but it's still a bargain.
Even if you do use cloud compute for builds, it's worth having some self-hosted nodes for the base need and just using the cloud for peak use scaling.
The cost of someone to manage and support this in my org would be more than it's worth.
GitLab runners on an autoscaling group handle the load for me and scale up and down with need. It took 2-3 hours to set up, and has lasted 2 years without incident.
The value for me is the low cost (especially in time) to implement, the low mental burden, and the low risk of delaying projects.
I can't even follow the math and reasoning in this post. If you're going to do content marketing, at least make it somewhat convincing. If a dev has a fixed cost for context switching and their other task isn't a waste of time, then any build longer than a few minutes imposes that cost.
> why does a dev have to build the whole thing to quickly check the last 50 lines that they've written?

So many times it's not even 50 lines but 1, and those changes fail too.
Also, more hardware does not magically make builds faster every time; one needs to spend time to actually make use of more/better hardware.
Finally, build load comes in spikes: at the end of the working day, you suddenly have lots of builds. A few runners get overloaded at peak, or many runners waste a lot of money. Maybe we need builds done in other timezones where it's off-peak, or even better, people could pool their resources (and that's what SaaS runners do for us).
One of my pet peeves at my last job was that the CI build and the local build commands were different, and produced notably different results sometimes.
I just finished some changes and they are in review. Now I have to pull master, and build that before I can start the next set of changes. (and if the review finds something I have to switch branches and rebuild a large part of the code)
Focusing on the cost side of software development is not very convincing for managers who need to make purchasing decisions. I believe most software is a high-margin product because manufacturing costs are close to zero. Even a relatively high difference in cost is not as much as a small difference in value. The one cost-side metric that has a big effect on value is time-to-market.
What could be more convincing is the effect on product value that build times could have, and they certainly could (and I, for one, believe that they do, especially if they can be made to be less than 15 seconds or so). For example, it's certainly conceivable that very short build times make it easier to write more tests, which could result in more correct software with better features and/or a shorter time-to-market.
Some products targeted at developers also make the mistake of focusing their marketing message on the cost, perhaps because that's the part of the business that developers personally feel, but it's not the message that would convince their employers.
Definitely, it's also important to remember how salaries vs. infrastructure costs show up on a P&L sheet.
Salaries are going to be considered largely immovable, and they won't "go down" or show as a lower number on the P&L as a result of your devs getting some time back. Whereas your infrastructure costs jumping up (even by a small amount, especially if you have any amount of scale) will set off alarms, rightly IMO, in the finance department.
In my experience, if you get more builders/faster hardware, someone higher up will end up asking for more items in the build matrix, and the CI time will balloon again.
It's rarely only a question of just allocating money to more/better hardware, it's also a question of policy and willingness of your organisation to keep CI time short/feedback fast.
That's why in my opinion you should always have the option of building locally, just what you need when you want it, instead of having to go through a slow CI/CD pipeline.
> will end up asking for more items in the build-matrix, and the CI time will balloon up again.
I don't think it has to end like that. You can have separate queues and separate levels of assurance. For example, does every commit have to be tested in each of 20 possible configs? You can run one common config by default and allow unblocking the rest on demand. Then enforce all of them only as a merge gate.
If you can also split them into separate queues that don't block each other, you get both larger matrix and faster dev turnaround.
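A minimal sketch of that pattern in GitHub Actions terms: pull requests run one common config, and the full matrix only runs as the merge gate. The config names and the build script are hypothetical; the technique is selecting the matrix from the triggering event.

```yaml
name: tests
on:
  pull_request:     # one common config per PR commit
  merge_group:      # full matrix only at the merge gate (GitHub merge queue)

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # pick the full matrix only for merge-queue runs
        config: ${{ github.event_name == 'merge_group' && fromJSON('["linux-gcc","linux-clang","macos","windows"]') || fromJSON('["linux-gcc"]') }}
    steps:
      - uses: actions/checkout@v4
      - run: ./ci/build-and-test.sh "${{ matrix.config }}"   # assumed project script
```

The same idea works on other platforms with separate pipelines or manual jobs; the point is that the 20-config matrix stops blocking every commit.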
While more cores can certainly help with certain types of projects, such as those that can be easily parallelized, this is not always the case. For example, web app projects won't benefit as much from additional cores.
Another important factor to consider is the single-core performance of each vCPU. Many server-class CPUs, such as those used by GitHub, have a very high core count but low single-core speed. In contrast, BuildJet uses consumer CPUs, such as the 5950X, which offer fewer cores but excellent single-core speed.
It's quite astonishing how slow "the cloud"/server-class CPUs can be; we compared my old MacBook Pro 2015 against a 2 vCPU GitHub Actions runner, and the MBP 2015 won most of the time.
BuildJet's bet is that single-core performance is critical for a fast CI, and it appears that the self-hosting comments here on HN also agree.
(We're working on our own CI; DM me if you're interested in the fastest CI on the market)
Super interesting, I hadn't seen BuildJet before; seems like a great concept. Sent it to a few friends who might find it useful. I wonder if there are other CI systems you could make runners for, maybe GitLab/Buildkite? "The ultimate compute solution for your CI"
> BuildJet's bet is that single-core performance is critical for a fast CI,
Yes, but an even larger impact often comes from caching dependencies (node_modules/vendor directories/lint caches/etc). Caching via GitHub Actions is slow, and if BuildJet offered a custom GHA action with _local_ SSD caching, it would give a big advantage to people chasing fast CI.
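On a persistent self-hosted runner you can already get most of this without a custom action, by pointing the package manager's cache at a directory on local SSD instead of round-tripping through the network-backed `actions/cache`. A sketch, with the runner labels and cache path as assumptions:

```yaml
jobs:
  build:
    runs-on: [self-hosted, hetzner]   # assumed labels for a persistent runner
    steps:
      - uses: actions/checkout@v4
      - name: Use a persistent npm cache on local disk
        run: npm config set cache /opt/ci-cache/npm   # survives between jobs on this box
      - run: npm ci                   # resolves from the local cache where possible
```

The same trick applies to Cargo, Maven, pip, etc.: each has a cache directory you can relocate onto the runner's own SSD.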
We use GitLab CI and solve this with tags. We have runners allocated with set resources and tagged with a t-shirt size. Developers then tag their job based on this size. Single-threaded jobs (curl-uploading a file) can be tagged "small" to get 1 CPU core. Build jobs can be medium or large based on how big the project is, and developers can choose whether the large runner is needed. Because it's simple to switch tags, developers can create a feature branch to swap the tag and observe how long it takes.
All our runners are self hosted. There are fewer large runners, so developers are inclined to go small. Hypothetically, you can run all your jobs in large but then you may have to wait for availability but in practice we haven't seen any real contention.
We run over 15,000 jobs a week on 6 physical servers with this pattern.
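In `.gitlab-ci.yml` terms, the pattern looks something like this (job names, sizes, and scripts are illustrative; `tags` is the standard mechanism for routing a job to matching runners):

```yaml
# .gitlab-ci.yml - t-shirt-sized runners selected per job via tags
upload-artifact:
  tags: [small]        # a 1-core runner is plenty for a curl upload
  script:
    - curl --upload-file build.tar.gz "$ARTIFACT_URL"   # assumed variable

build-app:
  tags: [large]        # swap to [medium] in a feature branch and compare timings
  script:
    - make -j"$(nproc)"
```

Changing one line in a branch is enough to measure whether a job actually benefits from the bigger runner.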
Are there really people who still need convincing of this? At the same time, you probably shouldn't throw more hardware at the problem without trying to understand it first. I've witnessed numerous times throughout my career what can be saved by tuning a few parameters.
It's the resource curse. Why think about it too much if you can just throw resources at the problem? This is not just an issue with builds. Seems like developers treat everything as "free", from HTTP round-trips to database queries.
Not to mention the tendency to craft convoluted and heavy solutions to rather standard problems. Because "this is how it's done now".
Standing here, holding my hammer. Lo, on the horizon: a nail!
> Are there really people who still need convincing of this?
Some might just not care that much. Or nobody speaks up about the problem because they don't want to seem like they're just complaining, especially when nobody is in a position to provision more resources or optimize the build. For example, in some environments getting a better CI node can involve red tape and lots of back and forth with a different department, especially if it's all on-prem rather than managed services or cloud resources.
> At the same time you probably shouldn't be throwing more hardware at the problem without trying to understand it first. I've witnessed numerous times throughout my career what can be saved by tuning a few parameters.
This is an excellent point!
In the context of containers (just one example, though it illustrates some pitfalls nicely), there's a world of difference between using a base image that already has the runtime and tools you need versus installing them every time. There's a lot of difference between pulling in your dependencies every time versus using a cache, or an instance of Nexus acting as a caching proxy (though that's not very nice to configure in most languages). There's even a difference between making sure packages install correctly from the cache versus having them already installed in an intermediate layer that you can just reuse when building your container.
Even without containers, there can be lots of things to take into consideration, like whether you could build your back end and front end (in the context of webdev) or other components in parallel, whether you could run any tests you might have in parallel, or whether there's not duplicated work going on in there somewhere.
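The intermediate-layer reuse mentioned above can be carried over into CI itself. As one concrete sketch, buildx can export and import the layer cache between workflow runs, so the dependency-install layer is rebuilt only when the dependency files change (this uses the documented `docker/build-push-action` cache options; the workflow around it is illustrative):

```yaml
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v5
        with:
          push: false
          cache-from: type=gha           # pull layer cache from the GHA cache backend
          cache-to: type=gha,mode=max    # store all layers, incl. the deps-install layer
```

Combined with a Dockerfile that copies the dependency manifest and installs dependencies before copying the rest of the source, most runs skip the install step entirely.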
This is also a reason why many large organizations have recently started adopting mature build tools such as Bazel, Pants, or Buck. They did the math.
Back in ye olde days we just linked a bunch of our machines into a distcc cluster. Not really workable for remote work, or for anything other than C/C++, but for that it was pretty much all benefit.
I'd kinda love a transparent "workspace" that builds locally by default but has a "build in cloud" option for those few cases where I do need a ton of compute at once (say, bisecting something). But moving completely to the cloud seems like a worse experience overall compared to a relatively beefy local machine.
If you're in the frontend space and are migrating to or already working in a monorepo, I can't speak highly enough of NX[0] and their cloud builds. Very, very fast. The caching algorithm they use is pretty nice and a little smarter than Turbo's, and the distributed caching helps both in CI and on developer machines.
My only complaint is I haven't been able to quite figure out how to make the developer machine cache independent from the CI/CD one, which has caused a couple of small issues we were able to work around.
One thing I have to say, though: I don't use NX plugins except for `@nrwl/jest`; I just use their task runner capabilities. I found the plugins were a hassle if you weren't adopting the integrated[1] approach. Jest worked well in the package-based monorepo. (Our IDE of choice, JetBrains, doesn't support TSConfig paths properly in TSX files, which killed DX, so we had to go package-based.)
That said, it was easy enough to get up and moving with the task runner and we've seen major benefits. Did require us to think a little bit differently in how to run tasks in a parallel friendly way though.
I have never had to run a build that takes more than a few minutes, so I'm unfamiliar with these sorts of long-running builds.
I definitely agree on investing to make builds faster, but when they're a few minutes and the build will go down to slightly fewer minutes, those 'marginal' gains don't seem worth the increased build cost.
Just throwing more compute at the problem isn't always the answer either. The build tools I've been using over the past few years (Webpack and now Vite) have gotten so much faster on their own, which shows that there's loads of slack in our code and lots of room for improvement there before we need to throw more compute at it.
Also, I'm certainly not just twiddling my thumbs while it's running...
Am I the odd one out? This feels like a bit of a straw-man argument from MS/GH trying to justify a blanket switch to more powerful runners 'because the data says so'.
Do the math _for your business_ and _your use-case_
If you're doing fullstack web dev and writing lots of end to end tests, you'll run build times into dozens of minutes without even trying (since those involve spawning web browsers…)
This makes languages such as Rust a tough sell for startups that can't yet afford a $3000 M1 laptop for each Rust developer. Rust has incremental compilation, which helps make incremental builds faster, but you still wait for compilation every time you run tests, and then you wait for CI test suites to finish. Optimizing builds in CI and taking advantage of caching at the right steps isn't widely understood, and optimization doesn't completely solve the compile-time problem anyway. Throwing hardware at the problem makes sense, if you can afford to, but you still need to optimize the entire chain.
Is a remote dev environment an economically viable alternative? I suspect not but haven't run the numbers. It's not clear to me what tier of virtual server would beat an M1 macbook pro with 32gb ram and sufficient number of cores.
> the computer I bought as a student in 2008 is comparable

There is a significant IPC increase with every generation.
>the computer I bought as a student in 2008 is comparable
>the phone you had two generations ago
Good for Americans (or first-world citizens)
> I made a Docker image and run 10 instances of it on a single dedicated host

You might not want to overload one instance.

And running it like you do is a no-brainer anyway.
Notably absent from the comparison is the cost for buying a build host yourself.
Build servers are fine for the main branch, but why does a dev have to build the whole thing to quickly check the last 50 lines that they've written?
I die a little inside every time my build is waiting for a runner while my M1 Mac is just sitting there.
What matters is development velocity and quality - these are the things you will be competing on - not really cost of production.
Slow builds impact both of the above of course.
Effort and money will only be expended to make things faster when the build times are perceived as intolerable.
> BuildJet's bet is that single-core performance is critical for a fast CI

We use Rust, so I think a sane caching story is more important than anything else. Not sure about single vs. multi-core, tbh; I can see both helping.

Do you have any experience with customers building Rust and Docker Rust images?
[0]: https://nx.dev/
[1]: https://nx.dev/concepts/integrated-vs-package-based