I worked on Fuchsia for many years and maintained the Go fork for a good while. Fuchsia shipped the gVisor-based (Go) netstack to Google Home devices.
The Go fork was a pain for a number of reasons. Some were historical, but more deeply, the plan for fixing them was complicated by the runtime making fairly core architectural assumptions that the world has fds and epoll-like behavior. Those constraints cause challenges even for current systems, and even on Linux, where you may no longer want to be constrained by them. Eventually Fuchsia abandoned Go for new software because the folks hired to rewrite the integration ran out of motivation, and because the runtime as written presented atrocious performance on a power/performance curve - not suitable for battery-powered devices. Binary sizes also made integration into storage-constrained systems more painful; without a large number of components written in the language to bundle together, the build size is too large. Rust and C++ also often produce large binaries, but those can be substantially mitigated with dynamic linking, provided you have a strong package system that avoids the ABI problem, as Fuchsia does.
The cost of crossing the cgo/syscall boundary remains high, and it got higher over the time Fuchsia was in major development due to the increased cost of Spectre and Meltdown mitigations.
The cgo/syscall boundary cost shows up a lot in my current job too, where we do things like talk to SQLite constantly for small objects, or shuffle small packets at or below common MTU sizes. Go is slow at these things in the same way other managed runtimes are, for the same reasons. It's hard to integrate foreign APIs unless the standard library has already integrated them into its core APIs - something the team will only do for common use cases (reasonably so, but annoying when you're stuck fighting it constantly). There are quite a few measures like this where Go has a high cost of implementation for lower-level problems - problems that involve high-frequency integration with surrounding systems. Go has a lower cost of ownership when you can pass very large buffers in or out of the program and do lots of work on them, and when your concurrency model fits the channel/goroutine model well. If you have a problem that involves higher-frequency operations, or more interesting targets, you'll find the lack of broader atomics and the inability to cheaply or precisely schedule work problematic.
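To put numbers on the large-buffer point: with a fixed per-crossing cost, MTU-sized transfers pay that cost over and over, while one big buffer pays it once. A back-of-envelope model (the cost constants here are made up for illustration, not measured):

```go
package main

import "fmt"

// transferTime models moving nBytes across a boundary in chunks of
// chunkBytes, paying a fixed crossNs per crossing plus perByteNs per
// byte of copying. It returns the total estimated time in nanoseconds.
func transferTime(nBytes, chunkBytes int, crossNs, perByteNs float64) float64 {
	crossings := (nBytes + chunkBytes - 1) / chunkBytes // ceiling division
	return float64(crossings)*crossNs + float64(nBytes)*perByteNs
}

func main() {
	const mb = 1 << 20
	// ~MTU-sized packets vs. one big buffer, same total payload.
	small := transferTime(mb, 1460, 200, 0.05)
	large := transferTime(mb, mb, 200, 0.05)
	fmt.Printf("1 MiB in 1460B chunks: %.0f ns; in one buffer: %.0f ns\n",
		small, large)
}
```

The fixed crossing cost dominates the chunked case, which is why Go feels fine when you can batch work into big buffers and painful when the workload is inherently many small operations.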
pjmlp|2 years ago
Is writing compilers, linkers, IoT and bare metal firmware systems programming?