
Why and how we’re migrating many of our servers from Linux to the BSDs

334 points | msangi | 1 year ago | it-notes.dragas.net

226 comments


kev009|1 year ago

I use Linux, FreeBSD, NetBSD, and OpenBSD all for fun, learning, and profit (the first two).

At the very least, it is nice to make the acquaintance of at least one BSD, because it will probably expand your knowledge of Linux in ways you won't be able to anticipate.

For example, FreeBSD got me into kernel development, full system debugging, network stack development, driver development, and understanding how the whole kit fits together. Those skills transferred back and forth with reasonable fidelity to Linux, and for me, jumping into Linux development cold would have been too big a leap, especially in confidence and in developing a mental model.

For my personal infrastructure, I tend to use FreeBSD because in many ways it is simpler and less surprising, especially when accounting for the passage of time. ifconfig is still ifconfig, and it works great. rc.d is all I need for my own stuff. I like the systematic effects of things like tunables and sysctl for managing hardware and kernel configuration. The man pages are forever useful to new and old users. The kernel APIs and userland APIs are extremely stable akin to commercial operating systems and unlike Linux.
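The tunables/sysctl workflow mentioned here can be sketched briefly; the OID names and values below are just common illustrative examples, not recommendations:

```shell
# Query and set a runtime sysctl (takes effect immediately, lost on reboot)
sysctl kern.ipc.maxsockbuf
sysctl kern.ipc.maxsockbuf=16777216

# Persist runtime settings across reboots in /etc/sysctl.conf
echo 'kern.ipc.maxsockbuf=16777216' >> /etc/sysctl.conf

# Boot-time-only tunables go in /boot/loader.conf instead
echo 'hw.usb.no_boot_wait=1' >> /boot/loader.conf
```

The split between /etc/sysctl.conf (runtime) and /boot/loader.conf (boot-time) is the systematic part: one mechanism, two well-documented files.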

There are warts. There are community frictions. The desktop story and some developer experiences will be perpetually behind Linux due to the size of the contributor base and user base. The job market for BSD is very limited compared to Linux. But I don't think it's an all or nothing affair, and ideally in a high stakes operation you would dual stack for availability and zero-day mitigation (Verisign once gave a great talk on this).

graemep|1 year ago

> tend to use FreeBSD because in many ways it is simpler and less surprising, especially when accounting for the passage of time. ifconfig is still ifconfig, and it works great. rc.d is all I need for my own stuff.

That sounds very appealing to me. I have to keep a small number of servers running, but it's not my main focus and I would like to spend as little time on it as possible.

I have started using Alpine Linux for servers (not for my desktop, yet) because it is light and simple. Maybe BSD will be the next step.

cientifico|1 year ago

I completely agree! If you're looking to deeply understand how Linux works under the hood, I highly recommend trying out Linux From Scratch. It gave me invaluable insight into the system, especially when I first explored it 20 years ago. Building everything from the ground up—without relying on prepackaged distros or libc—was a game changer.

Check it out: https://www.linuxfromscratch.org/

torstenvl|1 year ago

Yeah, you know, a lot of us talk about FreeBSD being great because you aren't surprised by design choices—it mostly just makes sense.

But I don't think we talk enough about the joy of not being surprised by updates. I'm about to do an upgrade from 13.2 to 14.1 this weekend and I am very confident that I won't have to worry about configuring a new audio subsystem, changing all my startup scripts to work with a new service manager paradigm, or half my programs suddenly being slow because they're some new quasi-container package format.
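For reference, the stock binary upgrade path for a release-to-release jump like this is the freebsd-update flow; a sketch (release numbers as above, run as root, details vary by setup):

```shell
# Binary upgrade from 13.2-RELEASE to 14.1-RELEASE
freebsd-update -r 14.1-RELEASE upgrade  # fetch patches, merge config changes
freebsd-update install                  # install the new kernel
shutdown -r now                         # reboot into the new kernel
freebsd-update install                  # install the new userland
pkg upgrade -f                          # reinstall packages against the 14.x ABI
freebsd-update install                  # final pass: remove stale old libraries
```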

ab71e5|1 year ago

What sort of projects did you do to get into network stack and driver development on BSD?

v1ne|1 year ago

I also still use FreeBSD on my NAS. But after many years, the desktop experience was pretty sad and made me switch to Windows + Linux for my hardware tinkering. On one side, the lack of manpower shows in many places, unfortunately. I'm talking modern WiFi, GPU support, or power-save mechanisms. On the other side, many Open Source projects only support Linux, and getting them to compile + run on FreeBSD was a pain, too.

I mean, in addition to what kev009 mentioned, FreeBSD has so many great things to offer: For example, a full-featured "ifconfig" instead of ip + ethtool + iwconfig. Or consistent file-system snapshots since like forever on UFS (and ZFS, of course). I never understood how people in a commercial setup could run filesystem-level backups on a machine without that, like on Linux with ext4. It's just asking for trouble.
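The snapshot-then-backup pattern described here can be sketched with ZFS; dataset, path, and host names below are made up:

```shell
# Take an atomic, filesystem-consistent snapshot of a dataset
zfs snapshot tank/data@backup-2024-10-01

# Back up from the immutable snapshot directory...
tar -C /tank/data/.zfs/snapshot/backup-2024-10-01 -cf /backup/data.tar .

# ...or stream the snapshot to another host as a block-level replica
zfs send tank/data@backup-2024-10-01 | ssh backuphost zfs receive pool/data

# Drop the snapshot once the backup is verified
zfs destroy tank/data@backup-2024-10-01
```

Because the snapshot is a point-in-time view, files cannot change mid-backup, which is exactly the consistency problem a plain ext4 file-level backup has.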

So, I'm happy to see this thread about FreeBSD here! Maybe we can make the Open Source scene a bit more diverse again with regards to operating systems…

knowitnone|1 year ago

I see the reason for dual stack but I would rather focus my efforts into securing one OS. If you buy into that, why not 3 or 4 different OSes?

MuffinFlavored|1 year ago

> it will probably expand your knowledge

It really just fragments my knowledge to be honest.

Say "I gotta get things done".

Get me to a terminal. You've got Mac OS command line flags, GNU, BSD. Great.

Then it's some kind of asinine config to interact with some piece of software, all to achieve "generally the same thing", just a different way/flavor.

I really don't see the benefits.

thoroughburro|1 year ago

> The largest failure was with btrfs — after a reboot, a 50 TB filesystem (in mirror, for backups) simply stopped working. No more mounting possible. Data was lost, but I had further backups. The client was informed and understood the situation. Within a few days, the server was rebuilt from scratch on FreeBSD with ZFS — since then, I haven’t lost a single bit.

As someone who admins a lot of btrfs, it seems very unlikely that this was unrecoverable. btrfs gets itself into scary situations, but also gets itself out again with a little effort.

In this instance “I solve problems” meant “I blow away the problem and start fresh”. Always easier! Glad the client was so understanding.

lproven|1 year ago

> As someone who admins a lot of btrfs, it seems very unlikely that this was unrecoverable.

As someone who used it all day every day in my day job for 4 years, I find it 100% believable.

I am not saying you're wrong: I'm saying that experiences differ widely, and your patterns of use may not be universal.

It's the single most unreliable, untrustworthy filesystem I've used in the 21st century.

herzzolf|1 year ago

FWIW, this wasn't always the case. I recall that btrfs reliability was much different, say, 10–15 years ago. The post touched on those ancient times as well, so that isn't much of a stretch.

Around that time, SLES made btrfs their default filesystem. It caused so many problems for users that they reversed that decision almost immediately.

guilhas|1 year ago

I was pleased with my home lab btrfs: I had a 12 TB RAID1, and the PSU rail connected to the backplane would sometimes go down under load. Many scary errors, but I never lost anything. It took me 2 months to debug and replace the PSU.

curt15|1 year ago

If btrfs knows the data is intact, shouldn't btrfs recover automatically?

sidewndr46|1 year ago

Why do people use btrfs and similar filesystems for production use? They are by no means dumpster fires, but the internet is littered with stories of "X happened, then I realized Y and that I wasn't getting my data back."

viraptor|1 year ago

I like the idea and would like to learn more, but it looks like "migrated stuff without testing ahead of time and it turned out faster for some reason". Was it the memory allocations? Was it the disk latency? The hypervisor? Could it be replicated by other means? It was a fun read, but the reasoning/understanding was missing. I hope people investigate deeper before making changes like that.

If you look for benchmarks comparing databases on Linux/BSDs you'll find lots of nuance in practice and results going both ways depending on configuration and what's being tested.

draga79|1 year ago

No, after 20 years of use and comparative testing of similar setups. Frankly, I have always placed little importance on benchmarks as I consider them extremely specific. I am interested in real-world use cases.

The goal of the talk and the article is not to urge people to migrate all their setups, but simply to share my experience and the results achieved. To encourage the use of BSDs for their own purposes as well. It’s not to say that they are the best solution; there is no universal solution to all problems, but having a range of choices can only be positive.

vfclists|1 year ago

> “If nothing is working, what am I paying you for? If everything’s working, what am I paying you for?”

Bloke is not acquainted with Keynesian economics.

https://www.youtube.com/watch?v=9OhIdDNtSv0

https://www.youtube.com/watch?v=NO_tTnpof_o

All a man needs is food in his stomach and a place to rest at the end of the day. Everything else is vanity

What proportion of global GDP is dedicated to fulfilling our basic material needs?

It is mostly unnecessary. In spite of the huge productivity gains made since the seventies, the current generation of young Americans is poorer than their parents and grandparents were at their age.

So what does all the IT optimization bring? Just more wealth for the owners and redundancies for their employees, including Joe Bloggs here.

It is time people in IT understood this. In the long term, their activities are not going to improve their wealth. They are one of the few professions whose job is to optimize themselves out of a living, unless they own the means of production they are optimizing, which they don't.

It is their employers that do.

consp|1 year ago

The problem with the IT crowd is they think they are ahead of the optimization curve, and since everyone does, nobody is.

grisBeik|1 year ago

> So what does all the IT optimization bring? Just more wealth for the owners [...] It is time people in IT got to understand this

I understand it alright, but I'm trapped. Closer to 50 than to 40, I've got a family to run. I could be interested in another profession, but our daily lives & savings would tank if I stopped working to learn another profession. Also, there's no other profession that I could realistically learn that would let me take home nearly the same amount of money every month. If someone lives alone, they could adjust their standard of living (-> downwards, of course); how do you do that for a family?

Furthermore, there is no switchover between "soulless software job for $$$" and "inspiring software job for $". There are only soulless jobs, only the $ varies. Work sucks absolutely everywhere; the only variable is compensation -- at best we can get one that "sucks less".

When I was a teenager, I could have never dreamt that programming would devolve into such a cruel daily grind for me. Mid-life crisis does change how we look at things, doesn't it. We want more meaning to our work (society has extremely decoupled livelihood from meaning), but there's just no way out. Responsibilities, real or imaginary, keep us trapped. I'd love to reboot my professional life, but the risks are extreme.

FWIW, I still appreciate interesting tasks at work; diving into the details lets me forget, at least for a while, how meaningless it all is.

kortilla|1 year ago

The current generation of Americans are absolutely rich enough to just get food and have shelter. The ones that struggle are the ones that want to live in popular cities precisely because of the available improvements beyond shelter and food.

The houses of the 50s were shit tier and spread around the entire US. You can go buy them today for cheap in the 98% of locations people don’t want to live in.

jdbernard|1 year ago

> I would work for myself, following my own philosophy.

Sounds like he understood it just fine. He owns the means of production.

jbverschoor|1 year ago

I switched from FreeBSD to Linux, mainly because of the bad Java support and the simple fact that Linux became way more popular, which added to the difference in software availability.

jsiepkes|1 year ago

FreeBSD has pretty good Java support? Sure, if there is a new LTS release it takes a couple of months before it is ported to FreeBSD, but that's about it?

linuxandrew|1 year ago

I've recently discovered systemd-nspawn, which is an alternative to LXC, built in and integrated into systemd. Much lighter than full VMs, and it's quite similar to Solaris Zones and FreeBSD jails. One way to use it is to extract an OCI (Docker) image to a path; that way you can reuse the container tooling provided by Docker, Podman et al.
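The extract-an-OCI-image trick looks roughly like this; image and machine names are illustrative, and the container needs an init if you want to boot it rather than run a single command:

```shell
# Flatten an OCI image into a root filesystem systemd-nspawn can use
docker create --name tmp alpine:3.20
mkdir -p /var/lib/machines/alpine
docker export tmp | tar -x -C /var/lib/machines/alpine
docker rm tmp

# Run an interactive shell inside the extracted tree
systemd-nspawn -D /var/lib/machines/alpine /bin/sh
```

Directories under /var/lib/machines can also be managed with machinectl, which is where the "integrated into systemd" part pays off.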

I've barely touched the BSDs and it's been a few years since I last used Solaris so I can't make much of a comparison as a user myself.

rtp4me|1 year ago

Thanks for this! I have been using LXC/LXD for a long time and never knew about systemd-nspawn. Time to go learn something new!

rbc|1 year ago

I've become a fairly loyal OpenBSD user in the last 3-4 years. The base OpenBSD load includes a substantial amount of network capabilities, and is cleanly implemented. It's almost too cleanly implemented, to the point of making me feel sort of guilty when I start to clutter up an install with a bunch of packages...

If my needs for storage were more complicated, I would probably use FreeBSD ZFS, but UFS suffices for my rather modest needs.

I use OpenBSD for desktop, web and mail services. There are some limitations, but none that are serious enough to warrant dealing with running another BSD, or Linux distribution.

tracker1|1 year ago

I would differ on a small point. For SOHO usage, I think that docker compose is perfectly viable and often simplifies backup, migration and moving to a new server. Just my own take on this. A lot of apps really only need one instance with a good backup strategy and not hot failover instances and can handle an hour of down time once a year or two as needed, which I rarely experience.

As mentioned in the article, it also serves as a decent set of instructions, assuming the actual dockerfile(s) for the services and dependencies are broadly available. You can swap out the compose instance of PostgreSQL for your dedicated server with a new account/db, relatively easily. Similar for other non-app centered services (redis, rabbitmq, etc). You can go all in, or partly in and in any case it does serve as self-documenting to a large degree.

Borg3|1 year ago

I wish he would write a bit about the XFS failure he had. I've been using it for many, many years and have had no issues at all.

Tor3|1 year ago

I'm interested too. I'm using XFS only, and have for many years, on my own boxes, but my company also uses XFS for all the data on customer computers. We did extensive testing many years back, and XFS was the only filesystem at the time which gave linear, consistently very high performance when writing and reading huge amounts of data (real-time data, where dips in performance are a 100% no-no), and which also didn't degrade when holding huge numbers of files. We've never had a customer lose data due to XFS problems, and at this point I can't imagine how much data that would be, except that it's astronomical.

That said, we had routine XFS losses on SGI boxes. That was a very well known scenario: write constantly to a one-page text file, say, every few seconds, then power cycle the machine. The file would be empty afterwards. This doesn't happen on Linux. I vaguely recall discussing this with someone some years ago (maybe here on HN), and something was changed at some point, maybe when SGI migrated XFS to Linux, or shortly after.

lotharcable|1 year ago

It's hard to know the timeline of his data loss, but I am assuming it was a long time ago.

XFS is originally from SGI IRIX and was designed to run on higher-end hardware. SGI donated it to Linux in 1999, and it carried a lot of its assumptions over.

For example, on SGI boxes you had "hardware RAID" with cache, which is essentially a sort of embedded computer with its own memory. That cache had a battery backup, so that if the machine crashed or suddenly lost power, the hardware RAID would live on long enough to finish its writes. SGI had tight control over the type of hardware you could use, and it was usually good-quality stuff.

In the land of commodity PC-based servers, this isn't how it usually worked. Instead you just had regular IDE or SATA hard drives. And those drives lied.

On cheap hardware, the firmware would report that writes had finished when in fact they hadn't, because it made the drive seem faster in benchmarks. And consumers/enterprise types looking to save money with Linux mostly bought whatever was cheapest and fastest-looking on benchmarks.

So if there was a hardware failure or sudden power loss, there could be several megabytes of writes still in flight that the file system thought were safely written to disk.

That meant there was a distinct chance of data loss when it came to using Linux and XFS early on.

I experienced problems like that in early 2000s era Linux XFS.

This was always a big benefit of sticking with Ext4. It is kind of dumb luck that Ext4 is as fast as it is when it comes to hosting databases, but the real reason to use it is that it had a lot of robust recovery tools. It was designed from the ground up with the assumption that you were using the cheapest, crappiest hardware you could buy (personal PCs).

However modern XFS is a completely different beast. It has been rewritten extensively and improved massively over what was originally ported over from SGI.

It is different enough that a guy's experience with it from 2005 or 2010 isn't really meaningful.

I have zero real technical knowledge of file systems except as an end user, but from what I understand, FreeBSD uses UFS, which uses a "WAL" or "write-ahead log", where it records the writes it is going to do before it does them. I think this is a simpler but more robust solution than the sort of journaling that XFS or Ext4 uses. The trade-off is lower performance.

As far as ZFS vs. btrfs... I really like to avoid btrfs as much as possible. A number of distros use it by default (openSUSE, Fedora, etc.), but I just format everything as a single partition as Ext4 or XFS on personal stuff. I use it on my personal file server, but it's a really simple setup with a UPS. I don't use ZFS, but I strongly suspect that btrfs simply failed to rise to its level.

One of the reasons Linux persists despite not having something up to the level of ZFS is that most of ZFS features are redundant to larger enterprise customers.

They typically use expensive SANs or more advanced NASes with proprietary storage solutions that provided ZFS-like features long before ZFS was a thing. So throwing something as complicated as ZFS on top of that really provides no benefit.

Or they use one of Linux's clustered file system solutions, of which there is a wide selection.

hggigg|1 year ago

I had one a few years back where we ran out of inodes on a Jenkins machine on CentOS 7 and it crashed and couldn’t remount the filesystem. I had to restore a backup which was time consuming on a 4TB volume with crazy amounts of files.

blipvert|1 year ago

Used it since the late 90s on IRIX, think there were a few issues early on with the endian swap, but no issues for the best part of twenty years for me!

binkHN|1 year ago

Glad to see all the major BSDs used here; I use OpenBSD whenever it makes sense.

systems_glitch|1 year ago

Indeed, most of our public-facing services are hosted on OpenBSD, and all of our routers and firewalls run it. We started managing everything with Ansible to make it easier to ignore what the host OS is for a deployment, and that has worked well both in moving things to [Open,Net]BSD for experiments, and also standing up tests on various Linux distros just to make sure we're not running into a BSD vs. GNU issue, or even a "problem only on this specific distro" issue.
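The "ignore what the host OS is" approach hinges on Ansible's facts and its OS-agnostic modules; a hedged sketch (the group name `freebsd_hosts` is made up):

```shell
# See what Ansible detects about each host's OS family
ansible all -m setup -a 'filter=ansible_os_family'

# The generic package module dispatches to apt, pkg, pkg_add, etc.
# per platform, so one task line covers Linux and the BSDs
ansible freebsd_hosts -m package -a 'name=nginx state=present' --become
```

In playbooks, the same idea shows up as conditionals on `ansible_os_family` for the few spots (paths, service names) where BSD and GNU genuinely diverge.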

#1 reason we chose Ansible over other tools was support for the BSDs.

SoftTalker|1 year ago

It's what I use at home and at work.

ksec|1 year ago

I don't think the article is even about the BSDs; it's more generally about really bad things in the tech sector and its philosophy toward tech.

>my priority is solving my clients’ specific problems, not selling a predefined solution.

>It’s better to pay for everything to work than to pay to fix problems.

>computing should solve problems and provide opportunities to those who use it.

>The trend is to rush, to simplify deployments as much as possible, sweeping structural problems under the rug. The goal is to “innovate”, not necessarily improve — just as long as it’s “new” or “how everyone does it, nowadays.”

>Some people are used to thinking that the ideal solution is X — and believe that X is the only solution for their problems. Often, X is the hype of the moment

>When I ask, “Okay, but why? Who will manage it? Where will your data really be, and who will safeguard it?”, I get blank faces. They hadn’t considered these questions. No one had even mentioned them.

>We’ve lost control of the data. For many, it’s unnecessary to complicate things. And with every additional layer, we’re creating more problems.

Hopefully someday more people will wake up.

DeathArrow|1 year ago

All is fine and dandy, and the BSDs can solve many use cases. Unfortunately, for the solution we are working on, which involves many microservices, we need Kubernetes, and no BSD equivalent to Kubernetes exists.

kev009|1 year ago

I've helped build two top 10 service provider networks (10s of Tbps). One on FreeBSD, and one on Linux with Kubernetes.

I don't really see Kubernetes as being a game changer. The biggest pro, it makes it easier to onboard both development and operations personnel having a quasi-standard for how a lot of things like scheduling and application networking work.

But it also seems to come with a magnitude of accidental and ornamental complexity. I would say the same about microservices versus, say, figuring out your repository, language, and deployment pipelines to provide a smooth developer and operator experience. Too much of this industry is fashion and navel-gazing instead of thinking about the core problems and standing behind a methodology that works for the business. Unless Google moves its own infrastructure to Kubernetes, then maybe there's something to be had that couldn't reasonably be done otherwise :)

roydivision|1 year ago

Same here, otherwise I'd be considering the BSDs.

INTPenis|1 year ago

Can't you just run k8s on bsd? You might have to build and maintain your own release of it, but I'm sure someone has done it already.

sangnoir|1 year ago

I love it when the answer to a question posed in a headline is provided in the second sentence of the article.

> I’m the founder and Barista of the BSD Cafe, a community of *BSD enthusiasts

Did the original article change its title (currently "I Solve Problems"), or did the submitter editorialize it?

iluvcommunism|1 year ago

FreeBSD is great as a server. WiFi performance still sucks. The author praised bhyve, but bhyve is not all it's cracked up to be. Both Xen and Linux virtualization perform better, VMware as well. I like FreeBSD, but the other day I found it still uses sendmail. rc.conf is simple to use, and the ports system is great… I just feel the author was pushing his "X" solution. Hardware support is important as well. I've used BSD for a SAN and a FW. Would I use it for a virtualization host? No.

nikisweeting|1 year ago

I'm surprised no one is talking about FreeNAS / TrueNAS and their interesting history in this area in the comments.

There's probably more collective writing about the various tradeoffs between Debian and FreeBSD in their forums and communities than anywhere else on the internet.

Personally I love ZFS and ZFS on root so much I can never go back to not having it. It's a shame more cloud providers like DigitalOcean/AWS/etc. don't offer it natively.

theamk|1 year ago

> [users ..] began explicitly requesting “jails” instead of Docker hosts. They started using BastilleBSD to clone “template” jails and deploy them.

Huh, were they running persistent Docker containers and modifying them in place? If that's the case, they were missing the best part of Docker: the Dockerfile and "containers are cattle". The power of Docker is that no ad-hoc system customization is possible; it's all in the Dockerfile, which is source-controlled and versioned, and artifacts (like built images) are read-only.

To go from this to the all-manual "use bastille edit TARGET fstab to manually update the jail mounts from 13.1 to 13.2 release path." [0] seems like a real step back. I can understand why one might want to go to BSD if they prefer this kind of workflow, but for all my projects, I am now convinced that a functional-like approach (or at least an IaC-like one) is much more powerful than manually editing individual files on live hosts.

[0] https://bastille.readthedocs.io/en/latest/chapters/upgrading...
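The "cattle" workflow being contrasted here looks roughly like this; image names and tags are illustrative:

```shell
# All customization lives in a version-controlled Dockerfile, not on the host
cat > Dockerfile <<'EOF'
FROM nginx:1.27
COPY site/ /usr/share/nginx/html/
EOF

# Build an immutable, tagged artifact and run it
docker build -t mysite:v1 .
docker run -d --name web -p 8080:80 mysite:v1

# An "upgrade" replaces the container from a new image;
# nothing is ever edited in place on a live container
docker rm -f web
docker run -d --name web -p 8080:80 mysite:v2
```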

knowitnone|1 year ago

Running 3 different BSDs is not my idea of solving problems.

KronisLV|1 year ago

They might be really good for specific tasks, but someone else will also need to maintain the setups, which will make things harder when most of the folks in the job market have experience with the various Linux distros, but finding BSD experience might be a tad more difficult.

I am personally on board with using the various BSDs when it makes sense (though maybe just pick FreeBSD and stick with it, as opposed to fragmenting the install base, the same way I've settled on Ubuntu LTS wherever possible; it's not ideal, but it works). The thing is that most job ads and such call for Linux experience in particular, same with tooling like Kubernetes and OCI/Docker containers. Ergo, that's where most of my time goes: I want to remain employable and also produce solutions that will be understandable to most people, instead of getting my DMs pinged whenever someone is confused by what they're seeing.

rstuart4133|1 year ago

All I can say is experiences differ. I'm a long time Debian user, and now use FreeBSD for work. Both are far better than the proprietary competition, but I'd take Debian/Linux over FreeBSD when building a random server.

To give but one example, I recently reported a bug where FreeBSD didn't boot after an upgrade from 13 to 14. Worse, the disk format was somehow altered, so when the reboot tried to boot off 13 due to the zfs bootonce flag (supposedly a failsafe), it refused to boot for the same reason. I believe it's due to a race condition in geom/cam. The same symptoms were reported 6 years ago, but the bug report has seen no activity. Making your system irrecoverable without a rescue image and console access strikes me as pretty serious. He waxes lyrical about ZFS, but it's slower and more resource-hungry than its simpler competition, and it's not difficult to find numerous serious ZFS bug reports over the years. (But not slower than FreeBSD's UFS, oddly. It's impressively slow.) Another thing that sticks in my mind is a core ZFS contributor saying its encryption support should never have been merged.

This sounds too disparaging because the simplicity and size of FreeBSD has its own charms, but the "it's all sunshine and roses" picture he paints doesn't ring true to me. While it's probably fair to say stable versions of FreeBSD are better than the Linux kernels from kernel.org, and possibly Fedora and Ubuntu, they definitely trail behind the standard Debian stable releases.

Comparing FreeBSD to Debian throws up some interesting tradeoffs. On the one hand, FreeBSD's init system is a delight compared to systemd. Sure, systemd can do far, far more. But that added complexity makes for a steep learning curve and a lot of weird and difficult-to-debug problems, and as FreeBSD's drop-dead-simple /etc/rc.conf system proves, most of the time the complexity isn't needed to get the job done. FreeBSD's jails just make more intuitive sense to me than Linux's equivalent, which is built on control groups. FreeBSD's source is a joy to read compared to most I've seen elsewhere. I don't know who's responsible for that, but they deserve a medal.

On the downside: what were they thinking when they came up with the naming scheme under /dev for block devices? (Who thought withering device names was a good idea, so that /dev no longer reflects the state of attached hardware?) And a piece of free advice: just copy how Linux does its booting. Loading a kernel + initramfs is both simpler and far more flexible than the FreeBSD loader scheme. Hell, it's so flexible you can replace a BIOS with it.

The combination of the best parts of Linux and the BSDs would make for a wonderful world. But having a healthy selection of choices is probably more important, and yes, I agree with him that if you are building an appliance that has an OS embedded in it, the simplicity of FreeBSD does give it an edge.

mijoharas|1 year ago

> Who thought withering device names was a good idea, so that /dev no longer reflects the state of attached hardware?

Sorry, could you clarify what this means? I'm not super familiar with freebsd and don't understand what withering means here.

binkHN|1 year ago

There is more than one Linux distribution that's designed to work without systemd.

jwildeboer|1 year ago

"As an experiment, I decided to migrate two hosts (each with about 10 VMs) of a client — where I had full control—without telling them, over a weekend." And that's where I draw the line. Abusing the trust of your customers is an absolute no-no in my book.

draga79|1 year ago

Not an abuse at all. I've a contract with those clients, and I can move the VMs, change the services, etc. freely as long as it doesn't cost more than the amount we've previously set.

Otherwise, I'd never dare to do something like that.

blenderob|1 year ago

> Abusing the trust of your customers is an absolute no-no in my book.

How do people on the Internet come to such random conclusions when there is no way you could have known the full terms of the contract between the author and their client?

Neil44|1 year ago

Abusing trust is a bit strong, customers pay for a service and beyond a certain level of abstraction these obscure technical details (from their perspective) are not their concern. They're paying to have that abstracted.

rcbdev|1 year ago

> Abusing the trust of your customers

Yes. I also always let my customers sign off when I change the libraries I use. Completely sane approach.

bigfatkitten|1 year ago

How is it an abuse? As long as the customer continues to receive the service they paid for, who cares?

The major providers such as GCP, AWS etc share very few details about their underlying infrastructure with their customers. They change all sorts of things all the time.

lazyant|1 year ago

I wouldn't call it abuse of trust, but it's a bad idea to do a migration, or any operation that can fail and cause downtime, without warning the clients. Come Monday and no servers are online, what do you say? "Oops, I tried to change something and it didn't work"? That is fine only if they knew there was a migration over the weekend. On my end, this situation would be a fireable offense, or close to it.

appendix-rock|1 year ago

What!? Changing implementation details is not “abusing trust”. Where would you even draw the line with this attitude!? Should I be informing my customers whenever I update the version of left-pad I have installed!?

pjmlp|1 year ago

FidoNet, it has been a while since I saw that.

8fingerlouie|1 year ago

I still remember discovering FidoNet sometime in the 90s.

It was a time when sending regular mail to different countries could take weeks and cross-country phone calls would cost between $2 and $20 per minute, and here was FidoNet, which promised to allow communication across the globe with only 1–4 days' delay, and basically for free.

My 15-18 year old self was instantly sold. I spent countless hours reading through the "forums" on there. So much knowledge just at the tip of my fingers.

Of course some time later it was more or less replaced (for me) by email, usenet and IRC, but the memory still remains.

kopirgan|1 year ago

One of the website providers I've used (Pair) for 20+ years used to be exclusively FreeBSD. I believe they use a lot of Linux now. Not sure why.

hi-v-rocknroll|1 year ago

I used them for shared personal website hosting until the Obama era when they were mostly/all FreeBSD. I moved off around the time AWS came about.

sylware|1 year ago

I really wanted to give FreeBSD a try... because I thought it was Linux from "better" times... and then I saw they are tied to tons of similar gcc extensions... and then I said to myself, "why bother, since it has the same major compiler-dependency issue"; better to try to fix the Linux code base, or start from there.

ComputerGuru|1 year ago

It doesn’t, though? Gcc doesn’t even ship oob, the project itself uses clang, and almost all packages are also compiled with the system c/c++ toolchain.

bzmrgonz|1 year ago

Awesome writeup, thanks for that; it really puts the BSDs in the perspective of today's tech industry. What's the BSD version of k8s? You mention BSDs instead of k8s in the article.

nottorp|1 year ago

If I want to run a BSD as just a file server (so I guess ZFS + Samba + Bonjour, or whatever the discovery protocol is these days), which BSD should I try?

nanolith|1 year ago

I'd recommend using FreeBSD as your first BSD. It has a more recent version of ZFS integrated in the kernel than NetBSD. OpenBSD does not have ZFS support; it's a direction they chose not to take for security and simplicity reasons.
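On FreeBSD, the file-server setup asked about above is fairly short; a hedged sketch (pool, disk, and package names are examples, and the Samba package is versioned so the exact name varies by release):

```shell
# Install Samba plus Avahi for mDNS/Bonjour discovery
pkg install samba419 avahi-app

# Create a mirrored pool and a compressed dataset to share
zpool create tank mirror ada1 ada2
zfs create -o compression=lz4 tank/share

# Enable and start the services; the share itself is defined
# as a [share] section in /usr/local/etc/smb4.conf
sysrc samba_server_enable=YES avahi_daemon_enable=YES
service samba_server start
```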

phendrenad2|1 year ago

Is BSD really significantly more efficient than Linux? The anecdotes here seem almost unbelievable.

froh|1 year ago

the title of the talk is "Why (and how) we’re migrating many of our servers from Linux to the BSDs"

and that should be the title of this post too.

I like that the blog post shares the slides, not just the video.

dang|1 year ago

Ok, done. Thanks!

riiii|1 year ago

[deleted]

dang|1 year ago

Please don't do this here. It's not that we don't appreciate a good old-fashioned flame, but the long-term costs outweigh the benefits, and we want this site to survive long-term.

cyberax|1 year ago

> As an experiment, I decided to migrate two hosts (each with about 10 VMs) of a client — where I had full control—without telling them, over a weekend.

Yeah. That guy should not be allowed anywhere near the production workloads. "I solve problems", my ass.

draga79|1 year ago

I've a contract with those clients, and I can move the VMs, change the services, etc. freely as long as it doesn't cost more than the amount we've previously set.

Otherwise, I'd never dare to do something like that.

And I'm not so crazy as to do such an operation without the appropriate tests and foundations. Of course, when I started, I had all the conditions to be able to do it, and I had already conducted all possible tests. :-)

codezero|1 year ago

The client is paying for the VM. The underlying system is an abstraction. As long as service agreements weren’t interrupted I don’t see the problem. It sounds shady to say “without telling them,” because saying so implies they should have. I do a lot of optimizations for my customers without telling them, it’s not usually worth mentioning. I assume what they intended to convey was that this change caused no interruption of service so there was no need to contact or warn the customer.

erros|1 year ago

Ladies and gentlemen, this person solves problems. Let it be known.

knowitnone|1 year ago

he gave himself a pat on the back