A small perspective from a game developer:
We (Beamdog) are using Nim in production for Neverwinter Nights: Enhanced Edition, for the server-side parts of running the multiplayer infra.
Nim is uniquely good at providing an immense amount of value for very little effort. It gets _out of my way_ and makes it very easy to write a lot of code that mostly works really well, without giving me any serious traps and pits to fall into. No memleaks, no spurious crashes, no side-effect oopses, nothing like that. Its C/C++ interop has been a huge enabler for feature growth as well, as we can partly link in game code and it works fine. For example, our platform integrates seamlessly with native/openssl/dtls for game console connectivity. And it all works, and does so with good performance. It is all now a set of quite a few moving components (a message bus, various network terminators, TURN relays, state management, logging and metrics, a simple JSON API consumed both by game clients and the web (https://nwn.beamdog.net), ...).
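As a rough illustration of the kind of C interop being described (not Beamdog's actual code), binding an existing C function in Nim is a one-liner:

```nim
# Bind libc's strlen directly; Nim compiles to C, so there is
# no heavyweight FFI layer to cross.
proc cStrlen(s: cstring): csize_t {.importc: "strlen", header: "<string.h>".}

echo cStrlen("hello")  # prints 5
```

The same `importc` mechanism scales up to wrapping whole C/C++ libraries such as OpenSSL.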
We're still lagging behind and are on 1.0.8, but that is totally fine. It's tested and works, and there's no real incentive to move to 1.2 or 1.4 - yet!
Usage for our game has expanded to provide a few open source supporting utilities (https://github.com/Beamdog/nwsync) and libraries (https://github.com/niv/neverwinter.nim/) too. The good part about those is that they are cross-platform as well, and we can provide one-click binaries for users.
OTOH, there have been a few rough edges and some issues along the way. Some platform snafus come to mind, but those were early days - 0.17, etc. Some strange async bugs were found, though they were fixed quickly.
Good and bad considered, at least for me, Nim has been a real joy to work with. If I had the chance to message my 3-years-younger self, I'd just say "yeah, keep going with that plan", as it turned out to save us a lot of development time. I suspect the features we've put in wouldn't have been possible in the timeframe we had if it had all been written in, say, C++.
May I ask: did you consider Go and decide against it for any reason? Your requirements of quick development, cross-platform support, and interoperability are all guaranteed features of Go, which should have given better peace of mind for a production application.
Since Nim has hit the front page twice in the past day, let me just say: if you're at all curious about the language, try it out over the weekend. Nim isn't quite as simple as Zig (to compare to another compiled language with a smaller ecosystem), but the more advanced features stay out of the way until you need them. If you've worked with any static language, you probably know 80%-90% of what you need to write productive Nim code. The dev team has worked hard over the past year to improve documentation, and the website links to some great tutorials.
My experience is the complete opposite. I find Nim to be very simple and Zig to be not simple. What's the problem with Zig? I find the documentation to be chaotic. I believe that this is largely due to the rapid pace of changes (including changes that break earlier code).
Maybe I'll give it another try. I might have gone too deep into the docs, but Nim seemed to get very complicated with all the {.pragma.} annotations to remember.
I'm going to do a little bit of a shameless plug as a way to show off just what Nim is capable of. If you've ever played one of the many IO games, it might seem familiar to you. Basically I have used Nim to create a multiplayer game that can be played in the browser[1].
I'm planning to write up a more detailed post outlining the architecture of this. But suffice it to say, Stardust is written 100% in Nim. Both the servers and the client running in your browser are written in Nim. The client uses Nim's JS backend and the server Nim's C backend. The two share code to ensure the game simulation is the same across both. Communication happens over websockets.
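A minimal sketch of the shared-code trick (module and proc names are made up, not Stardust's actual source): the same module compiles under both `nim c` and `nim js`, with `when defined(js)` guarding any backend-specific parts:

```nim
# simulation.nim -- a deterministic game step shared by client and server
proc step*(pos, vel: float): float =
  pos + vel * (1.0 / 60.0)  # fixed 60 Hz tick, same result on both backends

when defined(js):
  echo "compiled with the JS backend (browser client)"
else:
  echo "compiled with the C backend (server)"

echo step(0.0, 3.0)
```

Because both builds run the identical `step`, client-side prediction and server-side simulation can't drift apart due to differing implementations.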
It's been a lot of fun working on this and I cannot imagine another language being as flexible as Nim to make something like this possible. I already have an Android client up and running too; it is also built from the same code and I plan to release it soon.
1 - https://stardust.dev/play
Consider a commercial web application in Java. Java's GC is concurrent, running on CPU cores other than the cores processing customer requests. Modern GCs such as Red Hat's Shenandoah or Oracle's ZGC have pause times of 1 ms or lower on heap sizes of terabytes (TB). Java 15 increased the max heap size from 4 TB to 16 TB of memory.
Now the argument. A thread running on a CPU core which processes an incoming request has to earn enough to pay the economic dollar cost of the other threads that do GC on other CPU cores. But the thread processing the client request spends ZERO CPU cycles on memory management. No reference counting. Allocation is extremely fast as long as memory is available (GC on other cores ensures this). During or even after that thread has serviced a client request, GC will (later) "dispose" of the garbage that was allocated while servicing it.
Obviously, just from Java continuously occupying the top 2 or 3 spots for popularity over the last 15 years -- Java's approach must be doing something right.
That said, I find Nim very interesting, and this is not a rant about Nim. I am skeptical of an alternative to real GC until it is proven to work in a heavily multi-threaded environment. And there is that economic argument of servicing client requests with no CPU cycles spent on memory management -- until AFTER the client request was fully serviced.
> Obviously, just from Java occupying the top 2 or 3 spots for popularity over the last 15 years -- Java's approach must be doing something right.
I'm not sure if this argument holds. Java's high memory usage is frequently cited as a downside of Java. GUI applications written in Java have a reputation of being memory-hungry, and I know plenty of people struggling with the memory usage of server applications (e.g. ElasticSearch). You will also find C/C++ on the same popularity lists…
That said, I do agree that a tracing GC is a better solution (for most programs) than reference counting these days. The improvements in the pause times by the Java GCs are really impressive, and the throughput is great. One example would be ESBuild: The author created a prototype in both Rust and Go, and found that the Go version was faster allegedly because Rust ended up spending a lot of time deallocating [source: https://news.ycombinator.com/item?id=22336284].
The majority of all GC research goes into Java and (primarily the HotSpot) JVM. ZGC and its kin are like the ZFS of garbage collectors: insanely good, but also insanely complex and not readily replicable. It's not practical to expect somebody with fewer resources than Oracle to create something similar.
Reference-counting strategies are much easier to optimize; so if you have fewer resources available to throw at your compiler it's the way to go.
I believe the phrasing might not be clear to everyone, so here's a reduced version:
* Nim's current async implementation creates a lot of cycles in the graph, so ARC can't collect them.
* ORC is then developed as ARC + cycle collector to solve this issue, and it has been a success.
* This 1.4 release introduces ORC to everyone so that we can have mass testing for this new GC and eventually move towards ORC as the default GC.
TL;DR: ORC works with everything† and will be the new default GC in the future. Your old Nim code will continue to work, and will just get faster‡.
† We are not sure that it's bug-free yet, which is why it's not the default for this release.
‡ Most of the time ORC speeds things up, but there are edge cases where it might slow things down. You're encouraged to test your code with --gc:orc against our default GC and report performance regressions.
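A tiny example of the kind of cycle in question (illustrative only, not taken from the async implementation):

```nim
# A self-referential ref: its refcount never reaches zero under plain ARC.
type Node = ref object
  next: Node

proc makeCycle() =
  let a = Node()
  a.next = a   # cycle: a keeps itself alive

for i in 1 .. 1000:
  makeCycle()
# Compiled with --gc:arc these 1000 nodes leak;
# with --gc:orc the cycle collector reclaims them.
```

ORC's cycle collector only scans types that can actually form cycles, which is why acyclic code pays little extra cost.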
> ARC was first shipped with Nim 1.2... [ORC is] our main new feature for this release
Seems like they should phrase it like "use ORC unless you know you don't have cycles" rather than "use ORC if you're not sure you have cycles", but that's a reasonable responsibility to take on if you're choosing to use an alternative garbage collector.
Looks like it's still an opt-in experimental option at this point rather than the default. Presumably they'll fix these caveats before recommending it for general use.
At an ELI5 level, does anyone know why Swift can have ARC and async but Nim's ARC doesn't work with async? Is it just implementation details of Nim's async specifically instead of anything more fundamental to ARC? Just asking out of curiosity.
Awesome to see 1.4 finally released, and to have a version that should hopefully build cleanly out of the box on OpenBSD and FreeBSD and mostly "just work"[1]. Next target is NetBSD[2], then DragonFly!
1. https://github.com/nim-lang/Nim/issues/14035
2. https://forum.nim-lang.org/t/6610
I was skeptical towards Nim. Then I wrote a small program that uses SDL2, and compared it to the same program written in over 10 other programming languages. Nim has an excellent combination of ease of use and performance.
Nim is my mistress. I work with Elixir and Javascript, but something about Nim is so pure and exact and precise. I love it.
I'm hoping someone builds a great web framework with it, and a library like Ecto for better PostgreSQL access. This language has great potential to build faster software.
Great to see that Nim grows ever faster and more stable. I've been a fan of the language for quite a while now, and while I miss the old days of new and exciting changes with every release, it's much easier to target Nim now that it has become this stable. Great work all around!
I played around with Nim a little and was amazed at how small the executables were: 10s of KB for something simple. Even Rust spits out 100s of KB, or even >1MB by default.
In the end I still went with Rust, simply because it's more popular, but my initial impression was that Nim is a really fun language to work in, and much much easier to pick up than Rust.
So one of the biggest improvements to my day-to-day life as a programmer was having compiler-checked nullability in Kotlin. It is the most significant feature that makes programs more reliable and code more readable compared to Java. I never want to miss it again.
Why did Nim decide to allow null pointers? Given the language's young age, they must have had a very good reason.
Congratulations! And thanks for all the progress on ARC/ORC. But I must say, even though I'm excited about ARC/ORC, I'm even more happy to see the nice list of bugfixes and quality standard library additions!!
Nim is a lot of fun to write, and the language is small enough that you could build something fun / useful in a weekend. The community is also very responsive. Definitely worth checking out!
100% agree. I'm developing an ORM[1] in my spare time specifically with the vision of making it easier to create web apps with Nim. I created a PoC with Jester on the backend, Karax on the frontend, and Norm for model definition, and it turns out to be very much usable: the same models are accessible from backend and frontend, and all that.
[1] https://norm.nim.town
Maybe my lack of lower-level language knowledge will show here, but how does that compare to Rust? I keep seeing and hearing about these new-ish languages Rust, Nim, Zig, etc. that all claim to offer C/C++-level performance with a better developer experience.
Is any of these preferred for API/web development? Does it yield much advantage over something like Elixir, which already provides a significant perf increase over a Python (Django/Flask) or RoR stack?
> Any of these is preferred for API/Web development?
I often wonder why anyone would use a language like Zig or Rust for Web development. I am very much biased in favour of Nim here, but to me in general a non-GC'd language seems like overkill for web development. So I would rule those languages out straight away.
I can't speak to Elixir, likely the main difference will be the lack of a mature Django-like framework. I'm assuming that Elixir has one, whereas Nim doesn't. If you're looking for a fun project, I would love to see that made for Nim and happy to give pointers if you need them :)
In my mind, and be aware of my bias here, these languages split up into three different categories:
* Has a GC, but you can remove it. This is Nim and D.
* Relies on pervasive refcounting. This is Nim if you choose that implementation, Swift.
* Has no GC. This is Zig and Rust. (Though obviously you can use refcounting in these languages, but it is as a library.)
While this focuses on a specific aspect of these languages, I think it also represents their philosophies pretty well. Nim and D start from "what if we had a GC" and then try to make things nicer down the stack. Rust and Zig start from nothing and ask "how nice can we make this?"
There are also additional factors that may or may not play in here, depending on what your needs are. Arguably, Rust is starting to break out of the "niche language" stage and move into the "significant projects and is sticking around" phase, whereas many of these other languages aren't quite there yet. This can matter with things like getting help, package support... some people love the open frontiers of new languages, others want something more mature. https://nimble.directory/ has 1,431 packages at the time of writing, https://crates.io/ has 48,197.
Sibling talks about memory management. Some other notes:
- Nim and Rust have macros.
- D has very high-quality metaprogramming (probably better than any other language without macros).
- (Afaik Swift and Zig have fairly normal templates. I don't know as much about those.)
- D and Zig have compile-time function execution (think C++ constexpr on steroids on steroids).
- Swift is likely to be the slowest of the bunch; like Go, though it's technically compiled to native code, its performance profile is closer to that of a managed language. The others should be generally on par with each other and with C.
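For a taste of what the macro side looks like in Nim (a standard textbook-style example, not tied to any specific claim above):

```nim
import std/macros

# `dump(x + y)` expands at compile time into `echo "x + y", " = ", x + y`:
# the macro receives the expression as an AST and can both stringify
# and re-emit it.
macro dump(e: untyped): untyped =
  let s = e.toStrLit
  quote do:
    echo `s`, " = ", `e`

let x = 2
let y = 3
dump(x + y)   # prints: x + y = 5
```

Since macros run in the compiler, the expansion costs nothing at runtime, which is the "push complexity from runtime to compile-time" idea mentioned elsewhere in this thread.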
I am using Nim this week on a data-file scraping project. If you can Google and write Python, you can just start with Nim and learn as you go; very easy.
I am super bummed that there is (effectively) no debugging. I am too lazy to mess with VS Code to get gdb working; it should just work already. Someday, I guess. Maybe JetBrains will save us. With real IDE support Nim would sweep the nations.
From my experience, there really is not that much need for debugging in the sense of adding breakpoints and attaching to a running process with Nim. Honestly, `debugEcho` suffices; give it a shot. You may need more debugging tools once your project gets large, but since you're just starting with Nim, you can relax and keep going.
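For reference, `debugEcho` behaves like `echo` but is exempt from effect tracking, so it can be dropped even into side-effect-free procs; a small hypothetical example:

```nim
func mean(xs: seq[float]): float =
  # Allowed inside a `func` (which forbids side effects) precisely
  # because debugEcho pretends to be side-effect free.
  debugEcho "mean called with ", xs.len, " values"
  for x in xs:
    result += x
  result /= xs.len.float

echo mean(@[1.0, 2.0, 3.0])  # prints 2.0
```

That exemption is what makes it convenient for quick printf-style debugging: you can add and remove it without fighting the effect system.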
> I started the Nim project after an unsuccessful search for a good systems programming language. Back then (2006) there were essentially two lines of systems programming languages:
> The C (C, C++, Objective C) family of languages.
> The Pascal (Pascal, Modula 2, Modula 3, Ada, Oberon) family of languages.
> The C-family of languages has quirky syntax, grossly unsafe semantics and slow compilers but is overall quite flexible to use. This is mostly thanks to its meta-programming features like the preprocessor and, in C++'s case, to templates.
> The Pascal family of languages has an unpleasant, overly verbose syntax but fast compilers. It also has stronger type systems and extensive runtime checks make it far safer to use. However, it lacks most of the metaprogramming capabilities that I wanted to see in a language.
> For some reason, neither family looked at Lisp to take inspiration from its macro system, which is a very nice fit for systems programming as a macro system pushes complexity from runtime to compile-time.
> And neither family looked much at the upcoming scripting languages like Python or Ruby which focussed on usability and making programmers more productive. Back then these attributes were ascribed to their usage of dynamic typing, but if you looked closely, most of their nice attributes were independent of dynamic typing.
> So that's why I had to create Nim; there was a hole in the programming language landscape for a systems programming language that took Ada's strong type system and safety aspects, Lisp's metaprogramming system so that the Nim programmer can operate on the level of abstraction that suits the problem domain and a language that uses Python's readable but concise syntax.
> The reason is that only after the new(result, finalizer) call the compiler knows that it needs to generate a custom destructor for the CustomObject type, but when it compiled tricky the default destructor was used.
I'm wondering why people underestimate graphs that much. It's a lot easier to explicitly represent dependencies between your definitions as a graph; you not only avoid such issues but also get rid of unnecessary passes. I did that in my compiler and it works great.
Nim and Red (also on HN today) both seem to have an interesting and intersecting feature set.
Anybody here used or heavily evaluated both that can comment?
Nim and Red are quite different languages, both in terms of semantics and in terms of project's goals and scope. Perhaps you should elaborate on an intersection that you see between the two.
I can speak only for Red: it takes its heritage from Lisp, Forth, and Logo; has an embedded cross-platform GUI engine with a dedicated DSL for UI building; a C-like sub-language for system-level programming; an OMeta-like PEG parser; and a unique type system with literal forms for things like currencies, URLs, e-mails, dates, and hashtags; all of that fitting in a one-megabyte binary and valuing human-centered design above all.
Rebol/Red is more like Python/Cython - able to be fast but falling back to an interpreter for dynamic things. As such there are probably more "performance footguns" (though any such thing is ultimately subjective based on programmer awareness). Nim feels more like what C++ (or Python) should always have been. Not sure if this helps. It's kind of a "big" question.
As someone who doesn't do software development as part of routine day-to-day work but has played with both, I'd describe Julia as "Fortran for Python developers", while Nim feels like "C for Python developers".
My interest in scientific applications pushes me towards Julia, but the user experience has so far been strictly worse than Python's, so I just don't bother with it as much as I might like to.
On the other hand, I am drawn to experiment with Nim (and to some extent Rust as well) because they feel better constructed, having more professional feeling tools and approaches to packaging. The downside is that their core strengths are in use-cases which aren't so aligned with my interests.
The strength of the Python packaging ecosystem makes me doubtful of the impact Julia can have. Meanwhile for Nim, it feels to me like awareness and adoption suffer a fair bit from competing with Rust for mindshare.
You can pick both; they are very different. Julia is a dynamic language looking for a compromise between interactivity and performance (from its origins in R/Matlab), while Nim is one of the new batch of static languages that looks for the right balance between ease of development and safety/speed.
For example, if you want to do something more exploratory (like research or data analysis) that can still easily scale up to HPC, you can use Julia; if you want to create something reliable with small binaries and no start-up issues, or for use in more resource-constrained environments, you can use Nim.
It is nice to see a lot of innovation now in the ahead-of-time compiled languages camp.
What worries me is the fragmentation, and the fact that no one language seems to check all of the (subjective set of) boxes for a general purpose high-speed, ahead-of-time compiled language [0].
E.g, Crystal seems to be the only one supporting a modern concurrency story (similar to Go), but has a huge problem with compile times.
Nim looks nice in many respects, but last I checked, it doesn't have anything like Go's concurrency. Maybe not on everyone's wishlist, but as the world moves toward flow everything/everywhere[1], I personally find this to be a problem.
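For context on what Nim does offer today: its standard concurrency story is single-threaded async/await on an event loop rather than Go-style goroutines (a toy sketch):

```nim
import std/asyncdispatch

proc tick(name: string): Future[void] {.async.} =
  for i in 1 .. 3:
    await sleepAsync(10)   # yields to the event loop, doesn't block a thread
    echo name, " tick ", i

# Both futures interleave on one thread -- cooperative, not parallel.
waitFor tick("a") and tick("b")
```

Whether that counts as "Go-like concurrency" is exactly the debate: it gives you concurrency, but parallelism requires reaching for threads or other libraries separately.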
zacmps|5 years ago
Other languages that could share simulation code between a browser client and a server:
* JS, obviously
* Rust
* C/C++
* Anything else that can compile to WASM
mixmastamyk|5 years ago
How does that work, and is it an alternative to dart/flutter?
Boxxed|5 years ago
> As far as we know, ARC works with the complete standard library except for the current implementation of async...
That's not a great endorsement...
> If your code uses cyclic data structures, or if you’re not sure if your code produces cycles, you need to use --gc:orc and not --gc:arc.
Seems like this is a big onus to put on the user -- it's tough to prove a negative like this.
karmakaze|5 years ago
A significant portion of the problems with cycles in ARC are parent references.
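One common way such parent back-references are handled under ARC is to mark them as non-owning with the `{.cursor.}` annotation, so they don't participate in reference counting and the cycle never forms (a sketch, not a universal fix):

```nim
type
  Node = ref object
    children: seq[Node]         # owning references, parent keeps children alive
    parent {.cursor.}: Node     # non-owning back-reference: no cycle under ARC
```

The trade-off is that a cursor field is not kept alive by that reference, so the programmer must guarantee the parent outlives the child's use of it; otherwise ORC's cycle collector is the safer option.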
anta40|5 years ago
That's my impression so far, too. I already had some experience with C, Pascal, and Python, so learning Nim just feels natural.
Not so much with Rust. Of course that's not surprising, with memory safety as one of its goals.
mratsim|5 years ago
That said, you can already declare `type MyPtr = ptr int not nil`, but the compiler is still clunky on proofs and needs a lot of help.
It is planned to have a much better prover in the future, and ultimately Z3 integration for such safety features: https://nim-lang.org/docs/drnim.html
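The `not nil` annotation mentioned here lives behind an experimental switch; a minimal sketch of what using it looks like (the `Buf` type and `use` proc are made up for illustration):

```nim
{.experimental: "notnil".}

type Buf = ref object
  data: seq[byte]

# The type system now guarantees `b` is never nil,
# so no runtime nil check is needed in the body.
proc use(b: Buf not nil) =
  echo b.data.len

use(Buf(data: @[1'u8, 2, 3]))  # a fresh construction is provably non-nil
```

Passing a plain, possibly-nil `Buf` would be rejected at compile time unless the compiler can prove it non-nil, which is where the "clunky on proofs" caveat bites.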
tiffanyh|5 years ago
I’ve said it before and I’ll say it again ... I wish more folks would use Nim for web development.
fataliss|5 years ago
dom96|5 years ago
I often wonder why anyone would use a language like Zig or Rust for Web development. I am very much biased in favour of Nim here, but to me in general a non-GC'd language seems like overkill for web development. So I would rule those languages out straight away.
I can't speak to Elixir, likely the main difference will be the lack of a mature Django-like framework. I'm assuming that Elixir has one, whereas Nim doesn't. If you're looking for a fun project, I would love to see that made for Nim and happy to give pointers if you need them :)
steveklabnik|5 years ago
* Has a GC, but you can remove it. This is Nim and D.
* Relies on pervasive refcounting. This is Nim if you choose that implementation, Swift.
* Has no GC. This is Zig and Rust. (Though obviously you can use refcounting in these languages, but it is as a library.)
While this focuses on a specific aspect of these langauges, I think it also represents their philosophies pretty well. Nim and D start from a "what if we had a GC" and then try to make things nicer down the stack. Rust and Zig are how nice can we go starting from nothing?"
There are also additional factors that may or may not play in here, depending on what your needs are. Arguably, Rust is starting to break out of the "niche language" stage and move into the "significant projects and is sticking around" phase, whereas many of these other languages aren't quite there yet. This can matter with things like getting help, package support... some people love the open frontiers of new languages, others want something more mature. https://nimble.directory/ has 1,431 packages at the time of writing, https://crates.io/ has 48,197.
moonchild|5 years ago
- Nim and rust have macros.
- D has very high-quality metaprogramming (probably better than any other language without macros).
- (Afaik swift and zig have fairly normal templates. I don't know as much about those.)
- D and zig have compile-time function execution (think c++ constexpr on steroids on steroids).
- Swift is likely to be the slowest of the bunch; like go, though it's technically compiled to native code, its performance profile is closer to that of a managed language. The others should be generally on par with each other and with c.
MrMan|5 years ago
I am super bummed that there is (effectively) no debugging. I am too lazy to mess with VS code to get gdb working, it should just work already. Someday, I guess. Maybe jetbrains will save us. With real IDE support nim would sweep the nations.
pietroppeter|5 years ago
> I started the Nim project after an unsuccessful search for a good systems programming language. Back then (2006) there were essentially two lines of systems programming languages:
> The C (C, C++, Objective C) family of languages.
> The Pascal (Pascal, Modula 2, Modula 3, Ada, Oberon) family of languages.
> The C-family of languages has quirky syntax, grossly unsafe semantics and slow compilers but is overall quite flexible to use. This is mostly thanks to its meta-programming features like the preprocessor and, in C++'s case, to templates.
> The Pascal family of languages has an unpleasant, overly verbose syntax but fast compilers. It also has stronger type systems, and its extensive runtime checks make it far safer to use. However, it lacks most of the metaprogramming capabilities that I wanted to see in a language.
> For some reason, neither family looked at Lisp to take inspiration from its macro system, which is a very nice fit for systems programming as a macro system pushes complexity from runtime to compile-time.
> And neither family looked much at the upcoming scripting languages like Python or Ruby which focussed on usability and making programmers more productive. Back then these attributes were ascribed to their usage of dynamic typing, but if you looked closely, most of their nice attributes were independent of dynamic typing.
> So that's why I had to create Nim; there was a hole in the programming language landscape for a systems programming language that took Ada's strong type system and safety aspects, Lisp's metaprogramming system (so that the Nim programmer can operate at the level of abstraction that suits the problem domain), and Python's readable but concise syntax.
pshirshov|5 years ago
I'm wondering why people underestimate graphs that much. It's a lot easier to explicitly represent the dependencies between your definitions as a graph, and you not only avoid such issues but also get rid of unnecessary passes. I did that in my compiler and it works great.
9214|5 years ago
I can speak only for Red: it takes its heritage from Lisp, Forth and Logo; has an embedded cross-platform GUI engine with a dedicated DSL for UI building, a C-like sub-language for system-level programming, an OMeta-like PEG parser, and a unique type system with literal forms for things like currencies, URLs, e-mails, dates and hashtags; all of that fitting in a one-megabyte binary and valuing human-centered design above all.
UMetaGOMS|5 years ago
My interest in scientific applications pushes me towards Julia, but the user experience has so far been strictly worse than Python's, so I just don't bother with it as much as I might like to.
On the other hand, I am drawn to experiment with Nim (and to some extent Rust as well) because they feel better constructed, having more professional feeling tools and approaches to packaging. The downside is that their core strengths are in use-cases which aren't so aligned with my interests.
The strength of the Python packaging ecosystem makes me doubtful of the impact Julia can have. Meanwhile for Nim, it feels to me like awareness and adoption suffer a fair bit from competing with Rust for mindshare.
ddragon|5 years ago
For example, if you want to do something more exploratory (like research or data analysis) that can still easily scale up to HPC, you can use Julia; if you want to create something reliable with small binaries and no start-up issues, or for use in more resource-constrained environments, you can use Nim.
samuell|5 years ago
What worries me is the fragmentation, and the fact that no one language seems to check all of the (subjective set of) boxes for a general purpose high-speed, ahead-of-time compiled language [0].
E.g, Crystal seems to be the only one supporting a modern concurrency story (similar to Go), but has a huge problem with compile times.
Nim looks nice in many respects, but last I checked, they don't have anything like Go-style concurrency. Maybe not on everyone's wishlist, but as the world moves toward flow everything/everywhere [1], I personally find this to be a problem.
[0] https://docs.google.com/spreadsheets/d/1BAiJR026ih1U8HoRw__n...
[1] https://www.amazon.com/Flow-Architectures-Streaming-Event-Dr...
cb321|5 years ago