Not meaning to start a Rust vs. Go debate, but since a few comments claim this is a waste of time, I figured it's worth mentioning the Rust-based re-implementation of coreutils:
Nice. This is the first time I've actually read Rust code, and aside from the "something!" macro syntax that scares me, it's actually mostly readable. Will definitely check it out.
That rewrite lists its license as MIT, not GPLv2. Is that legitimate? In general, can you re-implement a library released under one license and publish the result under an incompatible one?
I don't know much about Go, but taking a quick look at the implementations, they seem to be written by a programming novice, and they are quite primitive. I don't mean to be negative; it's just my opinion.
GNU started as GNU's Not Unix, reimplementing Unix userland for free. The people who are adamant about calling it GNU/Linux are, on some level, remembering that the userspace is historically a reimplementation.
Give me a busybox that I can 'go build' and that becomes really quite interesting.
Except for the fact that it'll be years before all of the subtle bugs are worked out and you can rely on those apps to be as stable as the ones we've got:
It's also worth pointing out, since busybox was mentioned a few times, that the latest release of coreutils has the ./configure --enable-single-binary option to build it as a multi-call binary like busybox etc.
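For anyone unfamiliar with the multi-call trick: the binary is installed once, symlinked under each tool's name, and it dispatches on argv[0]. A minimal Go sketch of the idea — the applet set and names here are made up for illustration, not coreutils' actual list:

```go
// Sketch of a busybox-style multi-call binary: one executable that
// selects its behavior from the name it was invoked under.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// applets maps an invocation name to its implementation.
// Each applet returns its exit status.
var applets = map[string]func(args []string) int{
	"echo": func(args []string) int {
		fmt.Println(strings.Join(args, " "))
		return 0
	},
	"true":  func(args []string) int { return 0 },
	"false": func(args []string) int { return 1 },
}

// run dispatches on name, the way busybox dispatches on argv[0].
func run(name string, args []string) int {
	if applet, ok := applets[name]; ok {
		return applet(args)
	}
	fmt.Fprintf(os.Stderr, "%s: unknown applet\n", name)
	return 127
}

func main() {
	// Installed once, then symlinked under each applet name,
	// so the basename of argv[0] selects the tool.
	os.Exit(run(filepath.Base(os.Args[0]), os.Args[1:]))
}
```

Symlinking the built binary as, say, `echo` and invoking the symlink runs that applet.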
Heh, this brings me back. I was a young guy at Sun, Perl 4 was a thing, and I actually argued that we should redo /usr/bin in Perl. In the days of a 20 MHz SPARC.
There's "Perl Power Tools: Unix Reconstruction Project" [0], which doesn't seem to have seen any activity since 2004. I remember something older than that, I think, from back when Perl first became available on Windows, that brought UNIX command-line utilities to Windows via Perl.
It's great that they all have inline POD documentation too.
I'm curious: what was your main motivation? I can understand it as a worthy challenge, but it would probably have led to a worse performance than the C-based utilities, no?
While I'm not sure if he posts here, I used to work in the same group with Jim Meyering, who maintained/maintains coreutils. Great guy.
Anyway, he told some great stories about the complexities of POSIX, about what happens on Solaris when you have directories nested 20,000 levels deep (and how to handle that efficiently), and about the fun of teaching various coreutils commands about SELinux. Lots of it gets surprisingly low-level quickly.
Coreutils is complex for many good reasons, so while these tools look nice and clever, they aren't dealing with a lot of the same kinds of issues.
(I also recall some of the heck Ansible went through to deal with atomic moves, grokking when something was on an NFS partition, and so on. Not the same thing, but small, seemingly easy things get tricky quickly.)
Bottom line: appreciate those tiny Unix utilities; a lot went into them, even when you think they aren't doing a whole lot :)
It would be cool to have a "busybox" alternative targeted at Plan9 commands. I personally find Plan9 utils much more logical (and easier to implement!). Something like 9base from suckless, but as a single binary and hopefully in a more modern language (most of the code like sam or rc is not so easy to understand).
Very neat. I started rewriting GNU's coreutils in Rust and find it to be a nice way to learn the language.
Also, it's interesting how many obscure, lesser-known features some of these tools provide. The 80/20 rule clearly applies here: you can implement 80% of the main functionality in 20% of the time, but if you want an exact clone you're going to need to invest a lot more time.
The line mentioning using gccgo to make the binaries small intrigued me... it worked! I've only written a couple of small tools in Go, but the size of the binaries always bugged me.
It's just a shame that setting up cross-compilation with gccgo looks a lot more involved than with gc.
Out of habit from Windows, I rewrote "pause" on my Ubuntu box, since I still use it in some cases. First I wrote it in Perl, then in Python. I also symlink clear as cls, another Windows habit. There are little "hacks" like that which are kind of fun.
I'm just doing this for fun and to learn about Go and Unix at the same time.
I am a beginner, and a lot of my code is inefficient and/or incomplete, but by putting it on the Internet I can get criticism and find out where I went wrong.
For instance, some of you have told me that the way I've been reading files is very inefficient, so now I'll try to do it the correct way.
Why do Go programmers always want to redo everything? It's rare to see something actually new written in Go. Rewrites prove that Go can do everything C can (except build shared libraries and produce small binaries), but not that it's actually better.
caipre | 11 years ago
https://github.com/uutils/coreutils/
And another by suckless in plain C:
http://git.suckless.org/sbase/tree/README
barsonme | 11 years ago
It's not complete yet, but I couldn't pass up this thread :-)
zarkone | 11 years ago
aikah | 11 years ago
cloverich | 11 years ago
quink | 11 years ago
Goodbye, memory.
vvpan | 11 years ago
barakm | 11 years ago
mixologic | 11 years ago
http://www.joelonsoftware.com/articles/fog0000000069.html
Animats | 11 years ago
beoh | 11 years ago
pixelbeat | 11 years ago
1. It has a fairly good test suite that rewrites should leverage. That can easily be done by prepending the new tools' directory to $PATH and running `make check`.
2. To give an indication of the size of coreutils:
$ for r in gnulib coreutils; do (cd $r && git ls-files | tr '\n' '\0' | wc -l --files0-from=- | tail -n1); done
985050 total
243154 total
pixelbeat | 11 years ago
luckydude | 11 years ago
Silly me. Maybe it makes sense now.
thwarted | 11 years ago
[0] http://search.cpan.org/dist/ppt/
ezequiel-garzon | 11 years ago
microcolonel | 11 years ago
mpdehaan2 | 11 years ago
zserge | 11 years ago
SSLy | 11 years ago
Could you elaborate on that?
bjenk | 11 years ago
misframer | 11 years ago
A better way to do this would be to utilize the io.Reader interface.
[0] https://github.com/polegone/gonix/blob/0b65cd4fb9c6c44357d0a...
zobzu | 11 years ago
I think it needs some work for actual parity =)
maxmcd | 11 years ago
iagooar | 11 years ago
giancarlostoro | 11 years ago
lttlrck | 11 years ago
tux | 11 years ago
aceperry | 11 years ago
giancarlostoro | 11 years ago
electic | 11 years ago
polegone | 11 years ago
taternuts | 11 years ago
forkandwait | 11 years ago
tezka | 11 years ago
iagooar | 11 years ago
2. Learning some less-known flags and use cases for the tools we use everyday
3. Reasoning about OS features and how they work
4. Having fun
Please, if you want to be this rude, go back to your troll cave.
vortico | 11 years ago
pjmlp | 11 years ago
If C is still present in the stack, the typical C exploits remain possible; that's how many Oracle JVM exploits came to be, for example.
Reducing C's presence to the level of Assembly would make everything in our systems safer.
Not that it will ever happen on UNIX systems, given how C came to life.
jonhohle | 11 years ago
malkia | 11 years ago