Today I looked at Nim in a bit more depth because it keeps popping up. I have a slightly uncomfortable feeling about it that I hope is unfounded!
To me it looks like it makes the unsafety of C more accessible through better tooling and nicer syntax. Looking at e.g. [1], there are still pointers, null pointers, etc., just like in C. So now you have a language that looks superficially simple but is actually very dangerous. Compare this to e.g. Rust, which was the most painful thing I learned recently, but I also know that it brings something fundamentally new to the table.
Anyway, there's a lot I don't understand about Nim and I'd be happy to see evidence to the contrary.
You are correct in saying that Nim does have C-style pointers (the `ptr` keyword). These are unsafe, but they are meant to be used as part of the FFI. So when developing ordinary applications you should not be using them, unless you absolutely have to.
Nim also has references (the `ref` keyword), which are traced by the GC and therefore safe.
So Rust has some cool safety features, especially for concurrent code. But, and perhaps I'm just uninformed, I never really understood the safety benefit of Rust's 'never nil' design. Nil is a useful modelling tool, even in Rust, where it exists via `Option<>`/`None`, correct? Perhaps by forcing you to be extremely explicit (and enforcing that `match` always handles all cases) you gain some arguable safety, but at what cost? It's certainly not easier to use and reason about, IMO. And it seems just as likely you'll end up crashing your program due to a bounds-check error (which may happen more often, since Rust encourages indexing over references because of this very design; at least, so I've read).
It seems to me the design was chosen more as a way to ensure memory lifetimes could be better predicted by the compiler than out of any strong argument for safety. But then, I'm not well read on the subject, and it's very likely there are good safety arguments for it that I'm not aware of. Either way, in my experience nil-deref errors are rarely a painful thing. They happen often, but they are also fixed quickly.
I've tried Rust, Nim and Go and I prefer Nim. But this could also be because of my background as a C/C++ programmer, and my particular requirements (general purpose programming language that doesn't try to hold my hand too much).
There are things I don't like about the language (e.g. case insensitivity), but overall, if I had to choose a newish language for a new task, I'd choose Nim over Rust and Go. (However, if you threw D into the equation I'd probably go with D, simply because I feel it's slightly more mature.)
Incidentally, the way I tried to teach myself Nim (and to see if the language was usable for creating small Windows apps) was to write a WinAPI program. It took about the same effort as it would have in C/C++, or less, but it felt much safer and more pleasant to work with.
I read this and wonder why software has gotten so fat. Any simple application these days easily weighs a few dozen MB, most a few hundred, and some a few GB. Why aren't we streamlining software to reduce its size? I understand we have gotten "rich" in storage, but if the trend continues...
I am sure many portable devices would benefit if applications were trimmed down.
My experience writing KnightOS has made me realize how obscenely we waste the resources modern computers give us.
For me, it is easy to understand why: it has to do with shipping code to your customers sooner. It takes a lot of resources and time to optimize your code, and the longer your customers have to wait, the more money you lose. In many cases, the extra size is due to frameworks added to speed up development.
Frameworks tend to be heavy because they need to accommodate various tasks as well as the many platforms they support. It's not easy to make them modular, so that you could pick what you want and leave the rest out to shrink the size.
As a customer, would you rather wait a few days for a feature that works well enough, or a few months for one that works great?
The competition is intense: wait too long to ship a feature and you lose to competitors who managed to get it out sooner. So it's tough to balance each feature, and tough to say no to customers so that you can stay focused and lean.
Because computers can take it, and we have limited time / can be lazy? I'm just playing with a barcode reader I made that uses OpenCV to get the image from the camera. That's 100 MB of code to do something you could probably do in less than 1 MB, but it makes things easy and it works.
This post is Nim-specific, but the key ideas for getting to a small binary (optimize for size, remove the standard library, avoid the compiler's main()/crt0 baggage by defining _start, use system calls directly) are the same in C, C++, Rust, etc.
I like the end result. However, it makes me wonder just why it's so acceptable that simple programs like this even compile down to a 160 KB executable in the first place.
The actual active code is essentially some text and an interrupt. That much, at least, should be language independent. Are modern compilers incapable of discarding unreferenced code, or am I missing something?
The first compilation, at 160 KB, is totally unoptimized and contains all kinds of debugging information and checks. It's just meant for yourself during development of the program.
Nice achievement! The article is quite the journey through various build parameters, switching gcc for clang and glibc for musl along the way. In the end, the secret sauce is syscalls and custom linking, though (as always with this kind of thing).
This seems mostly useful in highly constrained embedded environments (AVR, MSP430, ARM M0, PIC, etc.). Unfortunately, none of these "modern" systems languages (Nim, Rust) seems to be putting much effort into embedded platforms :(
Rust isn't putting a lot of specific effort into embedded, but we already work on many embedded platforms. As the language matures, I expect that support to grow.
Including `-d:release` optimizes it, so that's probably what he meant. It's one of the first things in the [tutorial](http://nim-lang.org/0.11.0/tut1.html).
Hey this isn't a constructive comment & you'll probably get downvoted for it. It happens to me whenever I comment in a thread that mentions Node.js because of my name. It is the best, but not everyone understands.
teh | 11 years ago
[1] http://nim-lang.org/0.11.0/tut1.html#advanced-types-referenc...
dom96 | 11 years ago
filwit | 11 years ago
ilitirit | 11 years ago
k__ | 11 years ago
I felt a bit overwhelmed with the whole wrapping thing.
masklinn | 11 years ago
* Nim/GCC gains 2 bytes by smartly reusing the previously set AX register's value to set DI where Rust/Clang uses an immediate
* Nim can't express that the code after the EXIT syscall is unreachable, and wastes a byte on a RETQ.
Fastidious | 11 years ago
Sir_Cmpwn | 11 years ago
mikhailt | 11 years ago
tim333 | 11 years ago
_blrj | 11 years ago
level | 11 years ago
[1] http://www.muppetlabs.com/~breadbox/software/tiny/teensy.htm...
onedognight | 11 years ago
bonesmoses | 11 years ago
def- | 11 years ago
Also, there is some fixed overhead that every Nim program has. But as programs get bigger, you'll see that Nim's binary size is just fine; for example, a NES emulator is just 136 KB: http://hookrace.net/blog/porting-nes-go-nim/#comparison-of-g...
killercup | 11 years ago
thom_nic | 11 years ago
def- | 11 years ago
- http://nim-lang.org/nimc.html#nim-for-embedded-systems
- https://github.com/sirlantis/pebble-nim
istvan__ | 11 years ago
steveklabnik | 11 years ago
jug | 11 years ago
Haha!
What I found most impressive was the small binary even without the tricks.
delinka | 11 years ago
Did I miss where he optimized for speed?
lukevers | 11 years ago
coldtea | 11 years ago
masklinn | 11 years ago
cristaloleg | 11 years ago
nodejsisbest | 11 years ago