Deploy Node.js Servers Like a Pro

Writing Node.js servers is really simple: any developer with a basic understanding of asynchronous, non-blocking I/O and some JavaScript fluency can develop a Node.js-based server. But alas, in real production environments, especially those required to scale, that's just not enough.

As it turns out, production-grade servers and services require addressing issues far beyond the code you write, and it is exactly these issues that determine whether your service will not only function correctly but also be scalable, robust, debuggable and, most importantly, maintenance-free.

In this post I've assembled a few topics that I consider important, some of which were learned the hard way, in the hope that they will help you, fellow developers.

Use a Process Manager

One of the biggest challenges in production environments is keeping your server up and running at all times. You need to be able to cope with two situations here: (1) server crashes and (2) machine reboots.

Handling these on your own is of course feasible, but far from trivial. One approach would be, for example, to have a daemon monitoring the server's status and relaunching it after a crash. In that case, you need the daemon itself to start up on system boot, bearing in mind that the daemon itself must also be kept up and running at all times.

An easier and more convenient approach is to use a specialized Node.js process manager, which gracefully handles all (or some) of these challenges for you. The most notable PMs are Forever, StrongLoop PM and PM2.

My personal recommendation is to go with PM2. It is simple to install, very intuitive, and highly customizable via declaration files. It has Docker support, works right out of the box, and provides OS startup scripts (so it relaunches after a reboot, something Forever lacks). It runs on all popular operating systems (StrongLoop PM does not support Windows), offers filesystem watch & reload (great for continuous deployment), supports clustering with zero code changes, and the list goes on…

Set NODE_ENV = “production”

If you take one thing from this article, let it be this tip. In your production environment, you really want to set the NODE_ENV environment variable to “production”.

How do you set it?

1. If you’re using a process manager (recommended for production environments), for example, StrongLoop PM or PM2, define NODE_ENV in the corresponding process manager configuration/ecosystem file.

2. If you're directly executing your server, run "export NODE_ENV=production" on Linux/macOS, or "SET NODE_ENV=production" on Windows, before launching it.

Note that not setting the variable results in NODE_ENV being undefined.
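If you go with option 1 and PM2, the environment variable can live in the ecosystem/declaration file itself. A minimal sketch (the app name and script path are hypothetical, adjust to your project):

```json
{
  "apps": [{
    "name": "my-server",
    "script": "server.js",
    "env": {
      "NODE_ENV": "production"
    }
  }]
}
```

Starting the app with this file (e.g., pm2 start ecosystem.json) injects NODE_ENV into the server's process.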

In your app, you can check the value of this environment variable by examining or printing process.env.NODE_ENV.

So why is this variable so important?

Well, the modules we require() and use sometimes check this variable's value and behave differently in production mode. Express, for example, can run up to three times faster in production mode, which is significant.
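You can use the same convention in your own code, for example to enable verbose logging only outside production. A minimal sketch (the log helper is just an illustration, not a library API):

```javascript
// Evaluate the mode once at startup; modules like Express do the same internally.
const isProduction = process.env.NODE_ENV === 'production';

// Hypothetical helper: verbose output only in development.
function debugLog(message) {
  if (!isProduction) {
    console.log('[debug]', message);
  }
}

debugLog('server starting');
console.log('mode:', isProduction ? 'production' : 'development');
```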

Run in Cluster Mode

JS engines run a single foreground thread. In the browser it is the UI thread; in Node.js it is your application's main thread, while blocking I/O operations are handled by a pool of background threads. This means your Node server can only use a single core, while all other cores are potentially left unutilized. To harness these additional CPU resources for your server's benefit, you have to run the server in cluster mode.

There are two options here, one which requires little work, and another which requires absolutely none.

The first approach is to instrument your server with cluster-aware code that forks more instances and determines whether the running instance is the master or a worker. The rest of the code remains unchanged:

var cluster = require('cluster');
var http = require('http');
var numCores = require('os').cpus().length;

if (cluster.isMaster) {
    // Fork one worker per core, leaving one core for the master process
    for (var i = 0; i < numCores - 1; i++) {
        cluster.fork();
    }
} else {
    // Worker processes all share the same listening port
    http.createServer(function(req, res) {
        res.writeHead(200);
        res.end('node rockz!');
    }).listen(80);
}

Alternatively, if you're using a process manager, you can have it conceal the above logic from you and leave the code completely unchanged. With PM2, for example, you would run:

> npm install pm2 -g
> pm2 start myserver.js -i 0

Note that the -i argument denotes the number of instances. Zero (0) means run on all CPUs/cores, but you could instead pass any number up to the number of available cores.
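The same clustering setup can be expressed declaratively in a PM2 ecosystem file, which keeps the instance count under version control. A sketch (app name and script path are hypothetical):

```json
{
  "apps": [{
    "name": "my-server",
    "script": "server.js",
    "instances": 0,
    "exec_mode": "cluster"
  }]
}
```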

Dockerizing

As Docker gains momentum, so does running Node.js servers in Docker containers. Docker's philosophy of fast container setup/teardown sits well with Node.js' lean-and-mean nature. However, deploying your server to Docker containers, especially to a dynamic Docker fleet, requires a mindset shift in the way you run and launch servers. I'd like to point out two specific aspects to consider in this context.

Logging

Capturing server instance logs is important in case things go south and you need to investigate. However, since instances are often created dynamically (for autoscaling) on disposable containers, you need some way of capturing and saving the logs from these anonymous fleet instances.

One approach is to map a large drive (or S3 bucket) to the container and write the logs there. In this case, you need to make sure the logs are written to different folders (per instance ID, for example), and define a log rotation/retention policy so that you do not run out of space or pay for unnecessary storage. You also have to be able to easily search these logs for errors or specific messages. Another approach would be to use a cloud logging service, such as Loggly, and transmit your logs there.

Whatever path you choose to take, it is recommended to use the following approach:

1. Write your server’s logs directly to the console (and not to a file, especially not to local storage).

2. Set up a container logging driver on the host machine to collect all container output and funnel it to your desired destination. If you're working with AWS CloudWatch for collecting your logs, you'd probably like to use the awslogs logging driver.

3. Disable any process manager logging which automatically collects your server’s stdout and stderr streams and saves them locally. This will prevent you running out of disk space in the Docker container. With PM2 for example, you can simply discard its logs by sending them to /dev/null:

-- server.pm2.json --
{
  "apps" : [{
    "name"       : "my-server",
    "script"     : "/usr/local/myserver/server.js",
    "error_file" : "/dev/null",
    "out_file"   : "/dev/null"
  }]
}
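For step 2, the awslogs driver can be configured host-wide so every container's output goes to CloudWatch. A sketch of /etc/docker/daemon.json (region and log group names are hypothetical):

```json
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-east-1",
    "awslogs-group": "my-server-logs"
  }
}
```

After editing this file you would restart the Docker daemon for the change to take effect; the driver can also be set per container via docker run's --log-driver flag.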

Run in Foreground

If you’re familiar with Docker, you probably already know that when your main process exits so does the Docker container in which it ran. This is how Docker containers work by design – they are disposable. The key point to note here is that your server must run in the foreground, as the main process running in the container.

This contradicts the way we are used to running servers, especially Node.js servers under the supervision of a process manager. If we naively try to run the server with PM2, for example:

# Dockerfile
# ...
CMD ["pm2", "start", "server.js"]

This will result in the Docker container exiting immediately, because PM2 itself is a daemon / background service. So what can you do? For starters, you could just dump the process manager altogether and run the server directly, in other words:

# Dockerfile
# ...
CMD [ "node", "server.js" ]


However, in this case you'd lose all the goodies offered by process managers (clustering, source map support and application declaration files, to name a few). What you really want is to keep using PM2 but have it run in the foreground rather than in the background. To do that, use the --no-daemon flag:

# Dockerfile
# ...
CMD ["pm2", "start", "server.js", "--no-daemon"]

But wait! There is an even better option. You can use the pm2-docker executable (which is installed automatically when you install PM2):

# Dockerfile
# ...
CMD ["pm2-docker", "server.js"]


pm2-docker not only runs your server in the foreground, it also forwards signals (e.g., SIGINT) to it, so that you can gracefully shut down your app when the container is about to be destroyed (e.g., fleet scale-down). All you need to do now is make sure your code is ready to intercept the signal and exit gracefully:

process.on('SIGINT', function() {
    // 'engine' stands for your app's own service object; stop it, then exit
    engine.stop(function(err) {
        process.exit(err ? 1 : 0);
    });
});
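Putting the pieces together, a minimal Dockerfile for this setup might look as follows (base image tag and paths are assumptions, adjust to your project):

```dockerfile
FROM node:8-alpine

WORKDIR /usr/local/myserver

# Install dependencies first so this layer is cached between code changes
COPY package.json .
RUN npm install --production && npm install -g pm2

COPY . .

# Run in the foreground under pm2-docker so the container stays alive
# and SIGINT/SIGTERM are forwarded to the server for graceful shutdown
CMD ["pm2-docker", "server.js"]
```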


