A failing of programmers is that we tend to focus on solutions rather than problems. Perhaps it's partly because, to be a good programmer, you need to immerse yourself in the code. Unfortunately, this takes your attention away from the bigger problem that the code solves.
In this article, it's interesting that although "Node solves a problem" is stressed as the key to its success over the many other JS server-side projects, the problem itself is never explicitly stated, except obliquely at the end! (The problem is efficiently handling massive numbers of concurrent network connections.)
It's because of this short-sightedness of programmers that there will always be easy pickings available to anyone who actually looks at the problem. Be a problem-person, not a solution-person.
Node.js is neat, and I have contributed some things to the community, but honestly it has a long way to go if it's to be taken seriously in the performance and infrastructure realm. Articles like this don't help its image either. It's as if a naive PHP developer suddenly discovered long-running processes, eliciting a chorus of rolling eyes from those who've written server software in the last 30 years.
Ryan and team are talented people, but they're mostly dependent on the V8 team, which is focused on building a JavaScript interpreter for a browser. Node.js/V8 has come tremendously far in the last 18 months, but still hits some particularly nasty walls when it comes to massive concurrency and managing big pools of memory in non-trivial applications. It will take some serious time for these challenges to be overcome. The JVM, Erlang/OTP, and C are particularly good platforms for this type of software because they're mature and well tested in these environments.
I really hope for the sake of the project that the hype surrounding Node.js blows over and the hipster crowd moves on.
The PHP community is just now discovering long-running processes. Yesterday we made TYPO3 run on AppServer-in-PHP, and this simple setup change gave a 2-4x performance boost, depending on the complexity of the request.
I don't see a lot of merit to the "sharing code" idea, as the purposes of server-side code and client-side code are completely different. There are two possible exceptions where I can see some use:
1) Utility libraries. I have a clone method I use a lot, and it would work equally well in both environments. I'm sure math libraries would be the same way.
2) Models. Maybe. Personally, my server-side models contain get and save methods which make database calls. I don't want or need those on the client side, if for no other reason than the wasted bandwidth.
* note: I'm a node user, but I don't share any code between server and browser.
Since I compile all my CoffeeScript into JavaScript at node program start, code sharing is a breeze. For each file, I make a client version and a server version using simple preprocessor lines:
#if client
#if server
#endif
for the pieces that only apply to the client side or the server side, with shared pieces staying outside these "tags". I won't go back. This is especially cool as I have a templating system that can run on the server or on the client. So for JS-enabled browsers, all HTML template rendering is done on the client by building all HTML tags out of very small, simple JSON structs. For non-JS clients (spiders, bots, paranoid non-JS enterprise legacy browsers) the "pages" get fully rendered old-school on the server side -- both sharing the same rather simple custom-made JSON-driven templating functions and producing essentially the same HTML output. Of course there are some added complexities with things like scripted onclicks, but you get the general idea. (This is mostly done for indexing spiders; human non-JS clients simply have to put up with a limited, degraded, basic experience.)
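A minimal sketch of how such a preprocessor could work (an illustrative helper, not the commenter's actual build step; the function name and tag handling are assumptions based on the description above):

```javascript
// Illustrative preprocessor: keep only the lines whose "#if" guard
// matches the requested target ("client" or "server"); lines outside
// any guard are shared by both builds.
function preprocess(source, target) {
  const out = [];
  let keep = true;
  for (const line of source.split("\n")) {
    const open = line.match(/^#if (client|server)\s*$/);
    if (open) { keep = open[1] === target; continue; }
    if (/^#endif\s*$/.test(line)) { keep = true; continue; }
    if (keep) out.push(line);
  }
  return out.join("\n");
}

const src = [
  "shared();",
  "#if client",
  "render();",
  "#endif",
  "#if server",
  "persist();",
  "#endif",
].join("\n");

console.log(preprocess(src, "client")); // "shared();\nrender();"
console.log(preprocess(src, "server")); // "shared();\npersist();"
```

Running the same source through the function twice, once per target, yields the two build artifacts; everything outside the guards ships to both.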
Sharing the utility libraries is a huge win, though, and should not be underestimated. For example, on one project alone:
- Input Validation
- Asynchronous loop patterns
- Promise pattern implementations
- markup logic & output
- A* pathfinding
More and more use cases will pop up over time, and it's really unique to the node.js environment. This alone would not make me pick node.js over another solution but I feel it is still an advantage.
I spend about 50% of my dev time on writing / working with utility libraries as writing these libraries takes far more effort than writing handlers, models, etc.
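As a concrete example of the "asynchronous loop patterns" category listed above (purely illustrative, not from the commenter's actual library), this is the kind of helper such a shared utility might contain:

```javascript
// Run an async step over each item in series, then call done.
// Works identically in node and in the browser, which is the point
// of keeping it in a shared utility library.
function forEachSeries(items, step, done) {
  let i = 0;
  (function next(err) {
    if (err || i >= items.length) return done(err || null);
    step(items[i++], next);
  })();
}

const seen = [];
forEachSeries([1, 2, 3], (n, cb) => {
  seen.push(n * 2);
  setImmediate(cb); // simulate async work
}, () => console.log(seen)); // logs [ 2, 4, 6 ]
```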
Example: You are building an editable list. The client inserts and deletes items from the list, which is then synchronized with the database. Now you need the HTML-generating code on both the client and the server.
Another example: validation. You want to validate on the client for responsiveness, but on the server for integrity.
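A sketch of that second example (the field names and rules below are invented for illustration; the point is that one function serves both ends):

```javascript
// One validation function shared by browser and server.
// In the browser it gives immediate feedback; on the server it is the
// authoritative check before anything touches the database.
function validateItem(item) {
  const errors = [];
  if (typeof item.title !== "string" || item.title.trim() === "") {
    errors.push("title is required");
  }
  if (item.title && item.title.length > 80) {
    errors.push("title must be at most 80 characters");
  }
  return errors;
}

// Export for node; in the browser the function is simply in scope.
if (typeof module !== "undefined") module.exports = { validateItem };

console.log(validateItem({ title: "" }));         // [ 'title is required' ]
console.log(validateItem({ title: "Buy milk" })); // []
```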
Code sharing really only becomes a compelling argument if you're building browser games or something similarly interactive (i.e., really bloody interactive). In this case, you need to run the same logic on the client (to reduce latency) and on the server (to validate and actually update game state).
Otherwise, you really have no excuse for putting logic on the client. Even with something like validation, I think it's much more sensible to have your renderers pump out generic validation event handlers than to reconstruct the whole model class on the client.
I'm definitely in the same camp on the "sharing code" argument, but I could see careful structuring making models sort of work. Obviously the basic persistence behavior needs to change, but if you reduce validation and some of the business logic to something more composable/interchangeable, you could share it between the two.
Obviously then you'd need some way of distinguishing validation that is only important to one of the two ends from that which is meaningful to both, but I can see it being possible.
That said, I'm still not convinced it's worth it. Especially when you look at the state of libraries in node compared to something like Python/Ruby.
IMHO, the merit of sharing code really shines with templates. You can have full-featured Haml-js templates that are rendered on both the server and the client. This means your client can make RESTful calls to your JSON API and has full rendering capability in the client.
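The idea can be sketched without Haml-js at all (the render function and data shape below are invented for illustration): one template function, loadable in both environments, turns the JSON your API returns into HTML.

```javascript
// Minimal HTML escaping so the shared template is safe on both ends.
function escapeHtml(s) {
  return String(s).replace(/&/g, "&amp;")
                  .replace(/</g, "&lt;")
                  .replace(/>/g, "&gt;");
}

// The shared "template": server calls it for the initial page,
// browser calls it after fetching JSON from the API.
function renderPost(post) {
  return "<article><h1>" + escapeHtml(post.title) + "</h1><p>" +
         escapeHtml(post.body) + "</p></article>";
}

console.log(renderPost({ title: "Hi", body: "a < b" }));
// <article><h1>Hi</h1><p>a &lt; b</p></article>
```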
The way I've heard it argued is that as more and more things happen in the client, you want some level of data validation there. But as we all know, it's not enough to just validate the data in the client - you need to do it on the server too. Hence the code duplication.
From a couple of presentations I've watched and some reading online, nodejs advertises that it makes writing web servers easy. Do everyday programmers actually need node, or is it meant to be a niche language?
Yes and no. I've spent the last two weeks building and launching a node.js project, and it's really, really fast.
Blazingly so.
I've got a PHP project at http://fstr.net, and its pageloads take about 4 to 5 seconds under normal load. With my node.js project, pageloads are less than a second most of the time (though there seems to be some variability when dealing with the first connection).
Both projects do essentially the same thing (query database, present list). It's really shocking just how much faster node.js is.
Another nice aspect of node with a MongoDB solution is that I can scale it very easily. On AWS I can send all traffic into a load balancer, then as the server's load increases I can just fire up additional (cloned) instances and add them to the load balancer.
A reserved Amazon micro instance costs ~$5 per month, and a 'high CPU' instance comes in at $17. Scaling node is really as easy as turning on another machine (or running multiple instances of the application on a machine with multiple cores).
It's not a language at all. It's a library for writing server-side code in JavaScript. It's event-driven, like Python's Twisted. In fact, it's hard (if you assume people are rational [edit: and have perfect access to information]) to see why people find nodejs so cool, yet have been ignoring Twisted for years.
And another thing: Buffers. This is partly down to V8's external indexed data, and node's wrapping of that in the Buffer class, which is then used by the various async IO operations.
This means that content can flow through a node service without being touched by JavaScript, and without adding work to the garbage collector.
jshen|15 years ago:
I still don't see a compelling reason to use node. I think the JVM is a better choice all around.
pavlov|15 years ago:
There isn't a huge performance gap between the two engines these days... But V8 is easier to embed in C++, I think.