> The problem I believe stems from some misconceptions, and some historical precedents in the Go community.
It's not just "historical precedents in the Go community"; it's ongoing in Go itself. Go's developers will try as hard and for as long as they can to perform raw syscalls, even on platforms where that's not officially supported[0]. And even on the one platform where raw syscalls genuinely are supported (Linux), they'll make raw vDSO calls, which are not, and that predictably blows up (https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/).
[0] which is pretty much all of them aside from Linux; Go 1.11 finally stops performing raw syscalls on macOS
Raw vDSO calls are absolutely supported. The thread you linked is about a Linux kernel that was miscompiled due to a demented Gentoo-hardened toolchain.
> Basically, the Go folks want to minimize external dependencies and the web of failure that can lead to. Fixing that is a goal I heartily agree with. However, we cannot eliminate our dependency on the platform. And using system calls directly is actually worse, because it moves our dependency from something that is stable and defined by standards bodies, to an interface that is undocumented, not portable, and may change at any time.
It's also for performance reasons. Calling a C function from Go is actually quite complicated and can involve copying a lot of data. Using a syscall directly avoids that cost.
Quoting the author of the article:

"That said, this is nothing compared to the cost of a SYSTEM CALL. System calls are measured in usec, usually 10s of them, depending on the complexity of the system call you're calling. (In some cases, such as disk reads, it can even go to milliseconds.) I'd guess on average the cost of the transition is less than 1% of the system call you're making."
>How did we wind up in this ugly situation? The problem I believe stems from some misconceptions, and some historical precedents in the Go community. First the Go community has long touted static linking as one of its significant advantages. However, I believe this has been taken too far.
How about people just go out and solve their problems, on their own platforms, and don't worry about supporting all platforms?

It's not like most of those Go projects are big in the first place, and one or a couple of contributors don't have the means to test other platforms much, or the time to write everything platform-agnostically.
It's not like the author is asking people to write a bunch of whatever Go's equivalent of #ifdef is. The author's proposed alternative code is just as concise and straightforward; it just has the benefit of being portable across multiple platforms.
If the goal was "Go is a language for Linux", fine. But that wasn't the goal. From the first release it was targeted to macOS, Windows and other platforms, but implemented in a way that didn't make any sense on those platforms.
However, this created a great ecosystem where everything is usually cross-platform by default. With a lot of projects in the cloud ecosystem being developed in Go, developers can more freely choose the platform they want to develop on. Same story with tools written in Rust.
I think this is likely the reality. I don't think this is necessarily a community problem so much as it is the reality of software development today. Most developers, given the time and resources, would probably enjoy making their libraries and applications as portable and flexible as possible -- who doesn't love to see their work reused? That being said, on a typical software delivery cycle you optimize for what you think _most_ people are using, and likely what you yourself are using -- standard flavors of Linux operating systems running in one of the big cloud providers. This is not to say Go doesn't have applications outside of this space -- it clearly does.
Performance, most likely. Go wants small stacks, which requires all dependent libraries to be compiled by the Go toolchain, or else suffer performance consequences due to stack switches. libc obviously wasn't compiled with the Go toolchain, so calling it takes the performance hit of stack switching.
It often does use libc. OP's article mentioned that they're using it more on Darwin now, but they've also been using it on Linux for a long time to do some of their DNS resolution (where implementing a stable parser for /etc/resolv.conf and friends is nobody's idea of a good time).
I don't have an authoritative source, but I think one of the big downsides of CGO for them is that it hurts their cross-compilation story. Cross-compiling a pure Go program is trivial, which is great, but by default it disables CGO.
I think the key to the rationale is in the name. libc is a library for C. Go doesn't use the same calling conventions as platform C and does a number of things that make it not really play well with C code.
The authors want a world where you can just make a read syscall without using FFI.

It's possible this is an unrealistic expectation. Linux kernel interfaces are generally quite stable; on macOS and Windows the kernel interface is effectively an undocumented, unsupported API, which is sad.
They do use a `libc`: they use `musl` [1] as opposed to GNU libc (glibc) [2]. There are some differences [3]; the largest (without getting into various API incompatibilities) is licensing. `musl` is MIT licensed, while glibc is LGPL licensed.
The parent post does explain, about as well as any member of the Go team has to date, their usage of `musl` over glibc. Overall `musl` focuses on correctness a lot more than glibc does, and static linking is a large benefit.
Static linking isn't an issue when you expect applications to be short-lived, transient items, where system images only exist for days or hours before being redeployed by patches or CI-triggered updates. I think this is the largest disconnect: Go expects the development pipeline to be quick, so nobody expects binaries with multi-year lifespans, which directly contradicts how many people expect executable binaries to behave.
Not that I'm an authority on this; just stating a common disconnect I've experienced working with ex-Googlers.
My understanding, which is certainly not authoritative, is that Go wants to be 100% statically linked. libc is usually dynamically linked. musl is Linux-only.
pcwalton | 7 years ago:
If the Go culture doesn't favor writing portable code, that's absolutely a problem with Go's culture.
lelf | 7 years ago:
Simple example: GNU/Hurd’s glibc works quite differently.
twic | 7 years ago:
I've heard various stories about it over the years, so I would appreciate an authoritative source.
[1] https://www.musl-libc.org/
[2] https://www.gnu.org/software/libc/
[3] https://wiki.musl-libc.org/functional-differences-from-glibc...
[4] https://github.com/golang/go/issues/9627