
An Easy Way to Build Scalable Network Programs

81 points | tbassetto | 14 years ago | blog.nodejs.org

35 comments

[+] fleitz|14 years ago|reply
I'm completely unsure what quality of javascript makes it suitable for writing high performance systems.

Is it single threading? Is it the weird typeof crap you have to do to check if a variable is defined? Is it the lack of integers? Is it the prototyping system?

Node.js to me looks like a slightly better syntax than the horribly ugly C# async calls. (Not the new async/await system.) Javascript completely pales in comparison to F# or Haskell in terms of readability of async code.

If you prefer non-functional languages, it would seem that Go would be a much better place to start for performance than Javascript. Or Clojure or Scala.

Sure node.js outperforms rails, but rails isn't designed around being the fastest webserver ever.

[+] awj|14 years ago|reply
...it's not the recent blog posts that are the problem, those are a response to hype huffers in the community spouting scalability nonsense. Having vague wording in the front matter for the project's main page doesn't help.

Many of the people currently "bashing" node are more or less aware of its capabilities. They aren't mad at the technology itself, just that it's being sold as much more than it is.

[+] baudehlo|14 years ago|reply
The only confusion over the wording on the front page seems to be over: "Almost no function in Node directly performs I/O, so the process never blocks". Which can be correctly read as: "Almost no function in Node directly performs I/O, so the process never blocks on I/O". But my English teachers would have told me that's too much repetition and not necessary.
[+] ismarc|14 years ago|reply
I've had to write a few network-based applications that each had their own unique performance requirements. Node.js would not have been a fit for any of them. I'd really like to give it a go, but it seems that every scenario where it would be useful is better served by a more specific environment. Granted, I'm pretty lousy at javascript, so getting to use javascript on the server doesn't count for me. What is Node.js ideally suited for?
[+] kylemathews|14 years ago|reply
Your comment would be a lot more interesting if you add what those network-based applications are that you think Node.js wouldn't be a good fit for.
[+] sausagefeet|14 years ago|reply
Scalable web apps, maybe, but "network programs", I disagree. Just spawning a bunch of Node instances is insufficient to really scale in many network apps, you also need a good way to communicate between them. Preferably one that hides the fact that you are communicating between separate machines. For the most part, web apps can get by with pushing this to the DB but I think it's a bit much to say this is acceptable for all network programs.
[+] kqueue|14 years ago|reply
No, that's not the proper way of doing it. You create workers in separate processes that receive data from node.js, encode it, and send it back. You don't fork on every request.
[+] mikeryan|14 years ago|reply
Where is he suggesting forking on every request? My read is that he's suggesting the main process handles web requests and a single other process handles encoding.

The suggested approach is to separate the I/O bound task of receiving uploads and serving downloads from the compute bound task of video encoding.

I'm assuming by using something like child_process.fork to create a video encode queue separate from the main event loop.

http://nodejs.org/docs/v0.5.4/api/child_processes.html#child...

[+] fleitz|14 years ago|reply
Once you're doing that, why not just use threads and get rid of the IPC?

That's kind of the point of the Node.JS bashing: once you work around all its pitfalls you're right back where you started, except now you're writing your app in a language unsuited for the purpose.

Node solves the problem of needing to write evented servers in javascript. Beyond that I can't see much advantage in it vs. existing languages. If I wrote something called "Node.NET" which was a JScript wrapper around completion ports and went around telling everyone that this was the future of webdev... what do you think the reaction would be?

[+] mjijackson|14 years ago|reply
He's not suggesting forking. He's suggesting spinning up a new process entirely (ffmpeg in this case).
[+] exogen|14 years ago|reply
I think a lot of people would find this comment more helpful if you explained why.
[+] maratd|14 years ago|reply
In coming releases we’ll make it even easier: just pass --balance on the command line and Node will manage the cluster of processes.

Internet trolls do improve software!

[+] CPlatypus|14 years ago|reply
I'm going to repeat what I said on Twitter when this first came up - dozba's computationally-intensive-task example doesn't really illustrate the problem. Even if the computation is buried somewhere in a library, you can more or less predict when it's going to happen and make sure it happens in a separate thread/process. The real hurt comes when your single-threaded server takes a page fault. That's nowhere near so predictable or easily solved, and it still results in your entire application stalling. Requests on other connections, which never needed to get anywhere near the page that caused the fault and which could have continued in a better design, will get caught in the stall. That's just as true and just as lame as it was almost a decade ago when I (e.g. http://pl.atyp.us/wordpress/?page_id=1277) and plenty of others were writing about exactly these issues. Single-threaded servers are only appropriate for workloads where requests are trivially partitionable. In other cases you can still use events and asynchrony, but you should do it in a framework that is inherently multi-threaded to take advantage of multiple processors/cores.
[+] wmf|14 years ago|reply
Some of us never tire of this topic. :-)

To work properly, it seems like an event-driven program really needs to use mlockall() and hopefully get memory pressure feedback from the kernel.

[+] baudehlo|14 years ago|reply
It's true, a seg fault or fatal page fault would take down the whole server, and any requests executing on that process would die too. However, in fairness, this is one of the advantages of using a dynamic language rather than one where you're dealing with memory allocation all the time.