Optimizing Ruby on Rails applications with NodeJS

Last week we had an interesting problem to solve in one of our projects, and as a result we unexpectedly ended up using NodeJS.
In this post I will go through the whole process, with code, hoping it might be helpful to someone else as well.

The AJAX chokehold

I’ll give you some basic background information about the application first. It’s an interactive application built on OpenStreetMap, which means that once the page loads, all interaction with the map is handled by AJAX requests. The map itself contains some simple markers, which are only displayed for the currently visible area. If the map is panned in any direction, the list of visible markers has to be queried and the map updated. Now let’s say we have 100 people using the application, panning around the map actively – which is a pretty common use-case. This will result in a pretty high requests-per-second rate. And this is also where the problem lies.

Our usual deployment environment for Ruby on Rails applications is Phusion Passenger as the application server, with Nginx in front of it. The average request-response times are at most around 200-300 milliseconds, and Passenger has been set up with an application instance pool of 30 instances. This theoretically allows 30 concurrent requests to be handled.
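For context, that pool size is a single Passenger directive in the Nginx configuration (a minimal sketch – the paths are placeholders, only passenger_max_pool_size matters here):

# inside the http block of nginx.conf
passenger_root /path/to/passenger;
passenger_ruby /usr/bin/ruby;
passenger_max_pool_size 30;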

Let’s get back to the example usage – 100 users panning the map like crazy. The worst case scenario is 100 concurrent AJAX requests, meaning that only 30 of them can be handled at once and everything else is queued until an application instance frees up. On top of that the request handler has to execute a relatively costly query (finding all markers in the given rectangular area and joining some metadata from another table), which could take around 400-500 milliseconds to complete. That means a request at the back of the queue sits through three full rounds of roughly half a second each before it is even picked up, and with new requests constantly arriving, delays can grow to several seconds per request. Obviously that’s a problem.

The first reaction was to crank up the number of Passenger workers, increasing the number of concurrent requests that could be handled. While this would help to some extent, it would still fail completely once the maximum number of concurrent requests was reached – the lag could be enormous.

Enter NodeJS

After discussing the issue with my co-worker, we came to the conclusion that we could get major performance gains from handling the AJAX requests asynchronously (on the server). We played around with a couple of solutions (EventMachine, NodeJS or perhaps something in Erlang) and decided that the easiest and fastest option would be NodeJS.

This decision had a couple of key points:

  • No worker pool draining – requests won’t queue up, reducing the handling times drastically.
  • A relatively thin stack compared to Ruby on Rails – cutting as much overhead as possible from the request-response cycle.

As we would possibly have several separate Node processes handling different AJAX requests, we also had to figure out some conventions for bundling them with a Rails project, as well as for managing automated deployment with Capistrano.

Implementation of the AJAX responder

I picked the Express framework to help structure the code a bit, plus get some simple support for different environments. Besides that, the only additional NPM module needed was node-postgres.
Note that I also tried to re-use the Rails database configuration file for the application, but it seems that the only available YAML parser doesn’t cope with advanced features such as anchors and aliases. If possible, it is still recommended to read the database configuration from the Rails application, DRYing it up.
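For reference, here’s a minimal sketch of what we were aiming for, assuming some YAML module that copes with anchors and aliases (js-yaml here, purely as an example):

var fs   = require('fs');
var yaml = require('js-yaml'); /** hypothetical parser choice */

/** pick the environment-specific block out of the Rails config */
var env      = process.argv[2] || 'development';
var dbConfig = yaml.load(fs.readFileSync('config/database.yml', 'utf8'))[env];
/** dbConfig would now hold { adapter: ..., database: ..., username: ... } */

With that off the table, here is the responder itself: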

try {

  var util    = require('util');
  var psql    = require('pg');
  var express = require('express');

  var app = express.createServer();

  /** read the arguments for environment */
  app.set('env', process.argv[2] || 'development');

  /** set default db info - username from current user */
  app.set('db_host', 'localhost');
  app.set('db_user', process.env.USER);

  /** configure development environment */
  app.configure('development', function() {
    app.use(express.errorHandler({
      dumpExceptions: true,
      showStack     : true
    }));

    app.set('db_schema', 'database_name');
  });

  /** production schema */
  app.configure('production', function() {
    app.set('db_schema', 'database_name');
  });

  /** connection string */
  app.set('db_string', 'postgresql://' + app.settings.db_user + '@'
      + app.settings.db_host + '/' + app.settings.db_schema);

  /** set query string */
  app.set('query', 'expensive query goes here');
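  /** whatever query goes above should reference the two values passed in
      below via node-postgres' numbered placeholders, i.e. $1 (lat) and $2 (lon) */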

  /** handle requests */
  app.get('/', function(request, response) {

    var coords = [];

    var lat = request.param('lat');
    var lon = request.param('lon');

    if (typeof(lat) !== "undefined")
      coords.push(lat);

    if (typeof(lon) !== "undefined")
      coords.push(lon);

    if (coords.length !== 2) {
      response.send("Invalid number of arguments", 401);
      return false;
    }

    /** set response properties */
    response.header('Access-Control-Allow-Origin', '*');
    response.contentType('application/json');

    /** perform query */
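    /** pg.connect checks a client out of node-postgres' internal pool,
        so connections get re-used across requests */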
    psql.connect(app.settings.db_string, function(error, client) {

      if (error) {
        response.send("", 500);
        system.debug("Database connection failed: " +
            system.inspect(error));
        return false;
      }

      client.query(app.settings.query, coords, function(e, result) {

        if (e) {
          response.send("", 500);
          system.debug("Query failed: " + system.inspect(e));
          return false;
        }

        response.send(JSON.stringify(result.rows));
      });

    });
  });

  app.listen(8080);
  util.log('Express running @ 0.0.0.0:8080');

} catch(e) {
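  /** note: a try/catch like this only catches synchronous start-up errors –
      exceptions thrown inside the async callbacks never reach it */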
  system.debug("Failure: " + e);
}

The code itself should be pretty self-explanatory: query the data and spit it out as JSON.
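On the client side, switching over was just a matter of pointing the existing AJAX call at the new port. Roughly like this (a hypothetical jQuery sketch – updateMarkers stands in for whatever redraws the map; the Access-Control-Allow-Origin header set by the responder is what permits the cross-port request):

$.getJSON('http://example.com:8080/', { lat: 59.43, lon: 24.75 }, function(markers) {
  /** redraw the markers for the visible map area */
  updateMarkers(markers);
});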

Deployment with Ruby on Rails

I discovered a neat little tool called Forever to manage Node. It basically just monitors the given process and re-starts it when it dies or exits unexpectedly, similar to tools like Monit. Personally I liked the fact that it’s pretty lightweight and doesn’t require much external configuration – ideal for managing our external applications.

We structured our application like this (inside the Rails project):

  • lib/nodejs/app.js – the actual application.
  • lib/recipes/nodejs.rb – Capistrano recipe.
  • script/nodejs – bash script wrapping Forever.

The bash script was created to make starting and stopping the application via Forever a bit easier, plus to set the relevant log-files for each application.

#!/bin/bash

if [ -z "$1" ]
then
  echo "Usage: script/nodejs [start | stop] NODE_ENV"
  exit 1
fi

if [ -z "$2" ]
then
  echo "Usage: script/nodejs [start | stop] NODE_ENV"
  exit 1
fi

LOG="log/app.log"

exec sh -c "forever $1 -e $LOG -o $LOG lib/nodejs/app.js  $2"
exit 0

The Capistrano recipe is as simple as it can get – invoke the bash script and pass in the current environment:

Capistrano::Configuration.instance(true).load do

  namespace :nodejs do

    task :stop, :roles => :app do
      run "cd #{current_path} && script/nodejs stop #{rails_env}"
    end

    task :start, :roles => :app do
      run "cd #{current_path} && script/nodejs start #{rails_env}"
    end

    task :restart, :roles => :app do
      run "cd #{current_path} && script/nodejs restart #{rails_env}"
    end

  end

end

And the final step is to add the recipe to your deploy.rb along with relevant hooks:

...
require 'recipes/nodejs'
...
# stop nodejs apps after symlinking is done
after "deploy:symlink",     "nodejs:stop"

# start nodejs apps before passenger is restarted
before  "deploy:restart", "nodejs:start"

This makes deployments really painless, allowing us to manage multiple NodeJS applications with relative ease.

How it worked out

First off, we saw request-response cycle times improve by almost 30% (down from around 500 to 350 milliseconds or less). But the real performance boost came from the evented nature of NodeJS – requests weren’t piling up anymore, which allowed far more requests to be handled concurrently, effectively eliminating the latency caused by the worker pool draining. And the only change we had to make client-side was the URL of the AJAX request.

This experience really demonstrated the benefits of using something like NodeJS over the traditional deployment stack for tasks that can and should be extracted into a separate handler to improve efficiency. I’ll be sure to consider the same kind of solution for similar bottlenecks in the future, given the potential gains. We are already extracting some other bits of the application :)

If you have any questions or want clarification about any of the scripts, just leave a comment and we’ll get back to you.

Tanel Suurhans
Tanel is an experienced Software Engineer with a strong background in a variety of technologies. He is extremely passionate about creating high-quality software and constantly explores new technologies.

32 Comments

  • sasuke

    have you tried to use async_sinatra or cramp or fiberpool with thin or rainbows?

    • Tanel Suurhans

      Nope, I haven’t tried any of those, but thank you for suggesting them. We will definitely check them out.
      Essentially with NodeJS the goal was also to cut as much of the stack as possible (as you know, Ruby is not exactly blazing fast). We considered using EventMachine directly, but Node seemed more exciting, plus pretty fast (running a simple ApacheBench did show a really small difference in median response times though).

    • sasuke

      yes, node.js is really interesting, but you must program in async mode and think only in that mode.

    • sasuke

      there are many more options in ruby for concurrency: threads, fibers, evented programming. jruby, thin, cool.io, eventmachine, revactor, zbatery, rainbows, cramp, fiberpool, async-sinatra, async-rails. right now there is a lot of work going on to make ruby really concurrent

  • pcdinh

    Do you plan to replace Ruby on Rails apps with NodeJS-based ones? I think that it is a natural evolution

    • Tanel Suurhans

      Possible, but I don’t see that happening any time in the near future. Rails offers too much “extra” in terms of infrastructure and supporting services. While NodeJS certainly is awesome for stuff like handling WebSockets or plain HTTP/TCP connections, it would slow down the development process of a full blown web application.

  • Hugues Lamy

    While I was reading the post, I was wondering if Rails Metal could do any good here. At least you could use the database config file, but I guess you’re still stuck with the worker limit. This solution would sit somewhere in between Node.js and the full Rails framework.

    • Tanel Suurhans

      Correct, the worker pool size would still be the main cause of the problem. In theory it could be put behind something like Thin for more evented request handling, but I’m a bit skeptical about the performance. The way I see it, if I’m going to extract something for pure performance reasons, why not go with something as fast as possible – I’m hinting directly at the performance of Ruby as a language :)

  • Saager (@dexterous) Mhatre

    I have a somewhat technical node.js related question, in fact it’s related to express. Is there a particular reason why your route callback returns a boolean value? ’cause I didn’t see any references to such a requirement in the express docs.

    • Tanel Suurhans

      I assume you mean those return false statements?
      Those are there to exit the request callback and stop any further processing in case something fails.

  • Nico

    Looks like a nice solution. I’m new to node.js and don’t quite understand what’s going on here exactly. You are setting up a node server that handles requests passed in from the rails app, right? How does the communication between rails and node work? Is it a simple get request? To which url? So the only advantage you have here is that the rails requests return earlier as the db calls don’t block, right?

    Did I get this somewhat right? ;-)

    • Tanel Suurhans

      Well actually the NodeJS piece and the Rails app are not communicating with each other at all. The only thing they share is the database.
      That’s the whole point of the NodeJS app – to avoid the Rails stack altogether, as it’s faster this way. And the speed gain comes from the NodeJS stack being really thin, plus evented – versus the Rails stack, which is pretty thick and blocking.

      Hope that answers your question.

  • Nico

    Yep, thanks, now I got it. And how does your page know where to find the node.js app? Is it on the same domain?

    • Tanel Suurhans

      Yep, it’s on the same domain, just a different port.

  • Chris

    I’m not quite clear on this. Did you replace your RoR application with Node, or supplement it?

    And if you didn’t replace RoR entirely, how did you go about providing a seamless presentation to your users?

    Thanks!

    • Tanel Suurhans

      This was to supplement Rails. The idea was to extract a heavily used, but relatively simple AJAX endpoint onto NodeJS and switch the front-end JS to use that as its endpoint.

  • Sakchai Siripanyawuth

    How do you handle authentication with this implementation?

    • Mikey

      I’m curious about this point as well…

    • Tanel Suurhans

      If we are talking about per-user auth, then possibly passing some unique user-based token along with the request and checking it against the database on the Node side.

  • Billy

    Have you found another need for Node in the year since you wrote this article? I am looking into it just to add another tool to my belt, but so far I’ve come to the same conclusions you did a year ago. I am a (mainly) Rails developer, and it doesn’t seem plausible to write an entire non-trivial app in Node. The speed with which you can write a Rails app, the support and plethora of gems, and the low cost of cloud space these days make the blocking nature less of an issue for a traditional web app, but there are cases like yours where it does make sense to shave some milliseconds off your requests by piecemealing tasks out to Node. I’m still looking for practical applications for it, though, and am curious how you feel about it today. I haven’t seen a family divided on a ‘new’ technology as much as I see the Pro/Anti Node discussions flying about now :)

  • Ben

    Looks convincing. So, just to get a better understanding of your work: did you deploy node.js proxied through nginx, with RoR on Phusion Passenger (which means you have to keep two instances running on two different webservers all the time – 1 RoR and 1 Node)? And is this the only part of the application where you do non-blocking I/O, by pointing the user requests at the REST URL handled by the node.js app instead of a RoR controller?

    I’m currently trying to solve the same kind of concurrency issue. 100% of the requests to my app are handled by RoR controllers, which block. I’m trying to outsource some of the heavy tasks to node.js and run them in parallel. I’m very thankful that you wrote this. I was looking into using redis and queueing the requests. Is there any reason why you use postgres instead of a NoSQL solution as your data storage?

    • Tanel Suurhans

      The node piece is deployed along with the Rails application, not proxied through anything. It’s just sitting on port 8585 and handling all requests directly. The rails app is not doing any complex redirection logic either. The pieces which get their responses from the NodeJS app just use the URL directly, bypassing the whole Rails stack.

      You have to understand that NodeJS is not a magical solution for long-running or heavy processing. It still uses resources, but in a way that allows you to run other similar processes almost completely in parallel. I would say you first need to evaluate whether Node really is the right fit for you, or whether you are better off using background workers like Resque along with Redis.

      We are using NodeJS in this specific case because it needs to respond to a user interaction, and the operation it performs is not a really slow one – it’s just invoked really often by lots of users and should have as little overhead as possible. If I had a process or operation like a really slow query etc., I would prefer Resque.

      The reason we use PSQL instead of NoSQL is really simple – PSQL (like most RDBMS’) has decades of development and research put into it. It’s more mature, more reliable, more scalable. Most people switch to NoSQL for the wrong reasons, one of the main ones being that they do not “get” databases. NoSQL is basically strongly overhyped and misused in a lot of places :)

  • Josh

    hi mate,
    This article is pretty interesting. Do you have any idea how to access the Rails session from node.js? Cheers.

  • Dave

    Nice work!

    How did your database handle the additional simultaneous connections?

    • Tanel Suurhans

      PSQL supports multiple simultaneous connections by default, so I’m not sure what the question is?

  • Dave

    Sorry, my question wasn’t very clear.

    My question was in response to you saying “… requests weren’t piling up anymore, which allowed for loads more requests to be handled concurrently”.

    I wasn’t saying that your database would now have to handle more than one connection, but that it’d now have to handle more concurrent connections.

    Since you’ve removed the Rails bottleneck, that’d expose your database to an increase in the number of simultaneous connections, wouldn’t it?

    psql.connect will asynchronously return a client when/if one is available in the pool, thus making the database, or the size of the connection pool, the new bottleneck.

    So I guess my question was: did you find that the connection pool was quickly exhausted, and/or that your database could return results quickly enough for that not to happen? :)

  • Tomaž Zaman

    Dave, I think that could be bypassed by moving the psql.connect call outside the routing. I’m not sure, but I think the pg module already queues queries if the connection is not established at the point of calling.
    This way only one pg connection would be used.

  • Abed Halawi

    How did NodeJS communicate with Rails’ models (if that’s even possible)? Or did you have to connect to the database directly from NodeJS and replicate the models there?

  • Aditya

    Hi, I am amazed by the process here. Now my point is: I have a full-throttle Rails application, and I am thinking of implementing chat functionality in my app with the help of Node.js. Is there any way to do this? Can I create standalone chat functionality independent of the Rails app with node.js?
    I wrote here because I thought your process might prove helpful.

  • jak

    thanks for this article.

    PS: please remove this => “width: 620px;”
