
To many beginner Node.JS users, an immediately apparent disadvantage of writing their web applications with Node.JS lies in the inability to save a file, refresh the browser, and see their changes live.

This “problem” is rooted, of course, in significantly different architectures. In the case of PHP applications, for example, we traditionally separate the roles of the web server and the request handler. The monolithic web server maps incoming requests to the execution of particular files in the file system, which become our handlers.

In most setups, the web server is mostly limited to inspecting two pieces of an HTTP request: the resource (like /test.php) and the Host header, which enables virtual host (vhost) support.

GET /test.php HTTP/1.1

With this in place, it’s up to the execution of the “test.php” file to handle the work and produce the response. If we change this file, even while other requests are ongoing, subsequent requests will be handled by our new code.
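That mapping can be sketched in a few lines. `parseRequestLine` and `mapToFile` are illustrative names for this post, not the actual dispatch code of any real web server:

```javascript
// Hypothetical sketch of the monolithic model: split the request
// line into its parts, then map the resource onto the file system.
function parseRequestLine(line) {
  var parts = line.split(' '); // ["GET", "/test.php", "HTTP/1.1"]
  return { method: parts[0], resource: parts[1], version: parts[2] };
}

function mapToFile(docroot, resource) {
  return docroot + resource; // e.g. "/var/www" + "/test.php"
}

var req = parseRequestLine('GET /test.php HTTP/1.1');
console.log(mapToFile('/var/www', req.resource)); // "/var/www/test.php"
```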

In Node.JS, this separation is not part of the core design. You write your web server and request handlers as part of the same unit. This is of course the result of a tradeoff: Node.JS offers complete control over how the entire system operates, allowing programmability of aspects beyond the execution of a request. This includes, for example, control over the connections that trigger the requests, easy awareness of other connections/requests, and other details that made the development of software like Socket.IO possible.

var server = require('http').createServer(onRequest); // the web server

// the request handler
function onRequest (req, res) {
  res.end('Hello world');
}

server.listen(3000);
The need for seamless code reloads extends into the realm of production deployment as well: one needs to be able to serve new requests with fresh code immediately, without breaking existing ones (such as file uploads or content transfer).

The up solution

Over at LearnBoost, we’ve solved these problems with two small projects: distribute and up.

In order to reload code without dropping requests, no changes to a codebase are needed other than ensuring that there’s a file that exports an http.Server as a module (let’s call it server.js):

  var server = require('express').createServer();
  // … code
  module.exports = server;

Then, during development you can leverage up from the CLI:

$ up --watch --port 8080 server.js

The --watch flag will watch the working directory for changes. up will start workers to handle the requests and reload them seamlessly. Alternatively, one can also do this programmatically. In the following example I listen for the SIGUSR2 signal to trigger a reload, which allows for easy interoperability between components.

var http = require('http')
  , up = require('up')
  , master = http.createServer().listen(8080)

var srv = up(master, __dirname + '/server');

// e.g. trigger from a shell with: kill -USR2 <pid>
process.on('SIGUSR2', function () {
  srv.reload();
});
The power to distribute

The server returned by up builds on top of another LearnBoost project called distribute, which leverages node-http-proxy by Nodejitsu.

The idea is simple: we handle requests to be proxied with a middleware-style API. This allows us to solve, for example, the lack of built-in vhost support:

var httpServer = require('http').createServer()
  , srv = require('distribute')(httpServer);

srv.use(function (req, res, next) {
  // 'blog.example.com' is a placeholder hostname
  if ('blog.example.com' == req.headers.host) {
    next(3500); // route to port 3500
  } else {
    next(3000); // otherwise port 3000
  }
});

httpServer.listen(80);
And since it’s possible to reuse and chain multiple middleware functions together, it provides a proven foundation for code reuse. In fact, it’s possible to leverage existing Express/Connect middleware like query-string or cookie parsing, should a certain proxying function need it.
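The chaining itself can be sketched as plain functions over (req, next). This is an illustrative reimplementation of the pattern, not distribute’s actual internals, and 'blog.example.com' is a placeholder hostname:

```javascript
// Minimal sketch of middleware-style chaining: each function gets
// (req, next) and either decides on a target port or defers.
function chain(middleware) {
  return function (req, done) {
    var i = 0;
    function next(port) {
      if (port !== undefined) return done(port); // a decision was made
      var fn = middleware[i++];
      if (!fn) return done(null); // nobody decided
      fn(req, next);
    }
    next();
  };
}

// Two illustrative middleware, mirroring the vhost example above:
var route = chain([
  function (req, next) {
    next(req.host === 'blog.example.com' ? 3500 : undefined);
  },
  function (req, next) { next(3000); } // default port
]);

route({ host: 'blog.example.com' }, function (port) {
  console.log(port); // 3500
});
```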

Head to GitHub for the projects, or find them on NPM as distribute and up. And stay tuned for an upcoming distribute middleware component release.


Damon Oehlman said

Nice work, guys. We implemented some custom logic to handle graceful worker restarts in an application platform we were writing on top of Express.

With the work you guys have done with distribute and up, we’ll be able to switch over to up for the next revision, which will make things significantly simpler moving forward…

Andrew Sutherland said

Wow, we were setting out to solve this exact problem today. I knew LearnBoost must have a similar problem, and turns out you do! Thanks

Brandon Lockaby said

Cool, I think I like it more than node-supervisor

Jörn Zaefferer said

Small mistake in the first JS example: You pass “handler” to createServer(), but define the handler as “onRequest”. Trivial to fix.

More interesting though: What is “SIGUSR2”? Or rather, how would you make use of that?

How would you combine VHosts with up?

Tom said

I don’t see how distribute distributes the load across multiple servers. Does it only distribute load across processes on the same server? If so, are you confident it should be called ‘distribute’?

Tadek said

Nice :) Does it work based on the built-in Node.js cluster module?

Troy said

@tom the next API supports three signatures, one of which is the host. (I’m just a nobody that happened to take a quick glance at the README on GitHub.)

I’m thinking of replacing nginx with distribute and a static-file middleware. Very cool stuff.

Srirangan said

I’ve had luck with Hotnode for hot reloading. Simple, and just works.

Aynur said

Ever since JScript MS, I have sought this … I will try to set up a lab NOT using IIS, but Node.js. Thanks Mike

jobject said

What is the best way to nohup up so that it keeps running after exiting an SSH session?

jobject said

Figured out a solution to my nohup question: 1. Grab the GitHub version, not the npm version. 2. Use a shell script and not the command line. 3. Thank guille!!

Your thoughts?

About Guillermo Rauch:

CTO and co-founder of LearnBoost / Cloudup (acquired by Automattic in 2013). Argentine living in SF.