The purpose of this article is to demonstrate how quickly you can build complete applications with Node.js. This adventure came about whilst working on an engagement. The customer wanted to build a request broker between two systems that needed to be integrated, both having their own REST APIs. The customer in this case opted to develop their own connector in Java, and estimated a six-month development turnaround.

Node.js Connector Architecture
The design here is that the Source REST Client posts a batch of jobs on a fixed schedule to the connector. The connector then splits up the batch and micromanages each job on the Target REST server. The POST request from the connector to the target server returns a unique job id, which is needed to poll the status of the job. Once the job is complete, the connector posts the status back to the Source REST system, and on success the job on the Target REST system is closed. So the connector acts as both a REST server and a REST client; it maintains state about a request for the life of a job. The Target REST API supports three kinds of request: creating a job, polling a job's status, and closing a job.
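The lifecycle described above can be sketched as a small state transition table. This is purely illustrative; the state and event names here are my own, not part of either system's API:

```javascript
// Illustrative job lifecycle for the connector. State and event names
// are hypothetical; the real systems define their own statuses.
const transitions = {
  RECEIVED: { posted_to_target: 'RUNNING' },   // connector POSTs the job, gets a job id
  RUNNING:  { poll_complete:    'COMPLETE',    // polling reports the job finished
              poll_pending:     'RUNNING' },   // still in progress, keep polling
  COMPLETE: { status_reported:  'CLOSED' }     // result posted back to the source system
};

function nextState(state, event) {
  const next = (transitions[state] || {})[event];
  if (!next) throw new Error('illegal transition: ' + state + ' + ' + event);
  return next;
}
```

The connector holds each job's current state for the life of the request, which is exactly the state it needs to persist somewhere more durable than process memory.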

In the past I’ve used Java and Python to write similar adapters but I was keen to see how quickly it could be done with Node. I also wanted to explore some existing Node.js projects so that I could leverage existing libraries wherever possible.

I actually started with a script written in CoffeeScript that I was given to act as a mock of the target system. The script uses Express to create a bunch of REST endpoints which return a fixed response.

I would have liked to have taken a more TDD approach and written unit tests, but for this application there isn't really a good way of doing this. However, since we have a clear line of separation between the three components, testing at the boundaries of each is really easy. I can write integration tests for each component, and a complete end-to-end test, quite easily. My client test script is simply a bash script which uses curl to send a pre-canned JSON request. I can also test the target system in isolation, in the same way.

In my project directory I did the following to get the folder structure I wanted:

mkdir mock
mkdir node-client
mkdir node-connector
mkdir coffee-client

I put the existing CoffeeScript script ‘’ in the ‘mock’ folder, then did the following:

cd mock
npm init (answer the questions in the text wizard)
npm install express

The ‘init’ command basically sets up the folder as a Node application. The ‘install’ command downloads and installs the dependencies for Express into the current project folder. It also updates ‘package.json’ to include Express (with older versions of npm you need the ‘--save’ flag for that part).
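After those two commands, package.json looks something like this; the name, description and version are whatever you answered in the wizard, and the Express version is just an example:

```json
{
  "name": "mock",
  "version": "1.0.0",
  "description": "Mock of the target REST system",
  "main": "index.js",
  "dependencies": {
    "express": "^4.13.3"
  }
}
```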

cd ../coffee-client

I then created a new script file and supporting files with the following content:

curl -H "Accept: application/json" -H "Content-Type: application/json" -vX POST -d @single.json http://localhost:8081/automationrequests

single.json:

{
  "action": "Do Something 1",
  "application": "API",
  "created": "15-OCT-2015 17:17:35",
  "description": "This is a test of the REST API 1",
  "inboundPayload": {
    "Domain": "",
    "Host Name": "LNRBAEST01",
    "IP Address": "",
    "OS Version": "Blah Blah windows 2003"
  },
  "location": "UNITED KINGDOM",
  "sourceRequestId": "ID00001"
}

A status-check wrapper polls the target mock directly:

curl -H "Accept: application/json" -H "Content-Type: application/json" -vX GET http://localhost:8081/automationrequests/status/$1

The client test script posts a single request to the connector:

curl -H "Accept: application/json" -H "Content-Type: application/json" -vX POST -d @single.json http://localhost:8079/automationrequests/

and a batch version posts several at once:

curl -H "Accept: application/json" -H "Content-Type: application/json" -vX POST -d @batch.json http://localhost:8079/automationrequests/

The file ../node-client/batch.json just contains an array of five mock request objects.

So very quickly I had a mock of the target system and test wrappers for both the source and target system. This left me to concentrate on developing the connector. I knew I wanted to use Express for receiving the client requests and actioning them. So I did the following:

cd node-connector
npm init

Modified the package.json to reflect the correct information about the project.

npm install express

I also knew I wanted to handle JSON in the request body, so the body-parser module for Express was the next thing to get.

npm install body-parser

At this point I wrote the core of the Express endpoints and tested that I could handle requests from the client script.
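The handlers themselves are mostly plumbing. Here is a sketch of what the inbound batch endpoint might look like, written as a plain function so it can be exercised without a running server; the response shape is my own invention, not the real API's:

```javascript
// Hypothetical Express-style handler for POST /automationrequests.
// req.body is the already-parsed JSON batch (courtesy of body-parser).
function handleBatch(req, res) {
  const batch = Array.isArray(req.body) ? req.body : [req.body];
  // In the real connector each job would be pushed onto the queue here.
  const accepted = batch.map(job => job.sourceRequestId);
  res.status(202).json({ accepted: accepted });
}

// Wiring it up would look like:
//   const app = require('express')();
//   app.use(require('body-parser').json());
//   app.post('/automationrequests', handleBatch);
```

Keeping the handler separate from the wiring is also what makes the boundary-level testing mentioned earlier so cheap.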

Eventually I will need to ship these logs and index them using Logstash and Elasticsearch. However, for now I just wanted a simple logging system that I knew I could ship to Logstash in the future. Luckily, as is often the case with Node, an appropriate module already exists:

npm install bunyan

So I added the initial logging code into my main.js script. I will revisit logging and ELK properly in a future post.
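Bunyan's appeal here is that it emits newline-delimited JSON records, which is exactly what Logstash wants; the real setup is just `var log = require('bunyan').createLogger({name: 'connector'})` followed by calls like `log.info(...)`. To show the record shape without the dependency, here is a rough stand-in whose field names approximate bunyan's:

```javascript
// Minimal stand-in mimicking the shape of a bunyan log record.
// Real bunyan records also carry hostname, pid and a schema version.
function makeLogger(name) {
  function record(level, msg) {
    const rec = { name: name, level: level, msg: msg, time: new Date().toISOString() };
    console.log(JSON.stringify(rec));  // one JSON object per line
    return rec;
  }
  return {
    info:  msg => record(30, msg),   // bunyan's numeric level for info
    error: msg => record(50, msg)    // and for error
  };
}

const log = makeLogger('connector');
log.info('connector starting');
```

Because every line is a self-contained JSON document, shipping to Logstash later is a configuration exercise rather than a parsing one.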

Now this is where things get a little complex. Node has a heavy preference for asynchronous actions, which makes sense for a lot of its use cases in web applications. However, sometimes we need to do things sequentially, so having the ability to make web requests synchronously is a necessity. Unfortunately Node and other JavaScript toolkits try to discourage this sort of usage by moving the functionality out to plugins rather than keeping it in the core. Fortunately, there are a number of npm modules which provide the ability to make synchronous HTTP calls. After a little bit of googling I settled on http-sync.

I added a number of methods that deal with making synchronous requests to the target system’s REST API.
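The polling logic is where the synchronous style pays off: it is just a loop. Here is a sketch with the actual HTTP call abstracted behind a function argument, since http-sync's exact request shape isn't the interesting part; `getStatus` stands in for a blocking GET against /automationrequests/status/:id:

```javascript
// Poll a job until it completes, using an injected synchronous
// status fetcher (in the real code this wraps http-sync).
function pollUntilComplete(getStatus, jobId, maxAttempts) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const status = getStatus(jobId);  // blocking call, returns a status string
    if (status === 'COMPLETE') return { done: true, attempts: attempt };
  }
  return { done: false, attempts: maxAttempts };
}
```

A real worker would also sleep between polls and handle errors, and a long-running job must not block for its whole lifetime, which is exactly the limitation noted at the end of this post.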

So now I needed a data structure to manage the lifecycle of the outbound requests. I don’t need to hold much state information about the initial batch of requests from the client system. The natural choice would be a queue. Rather than implementing my own queueing system I decided to try a module called ‘Bull’ which is a Redis backed stateful queue.

Installing this was straightforward. Just install Redis using your favourite package manager:

brew install redis

Then install Bull:

npm install bull
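Bull's API, as it stood then, boils down to creating a queue, `add`-ing jobs, and registering a `process` handler: roughly `var q = Queue('jobs', 6379, '127.0.0.1')`, `q.add(data)`, `q.process(fn)`. Since Bull needs a running Redis, here is a dependency-free in-memory queue with the same shape, purely to illustrate how the connector drives it; this is not Bull itself:

```javascript
// In-memory queue mimicking the subset of Bull's API the connector
// uses. Bull persists jobs in Redis; this stand-in does not.
function makeQueue(name) {
  const jobs = [];
  let handler = null;
  function drain() {
    while (handler && jobs.length) {
      const job = jobs.shift();
      handler(job, () => {});  // Bull's handlers receive (job, done)
    }
  }
  return {
    name: name,
    add(data) { jobs.push({ data: data }); drain(); },
    process(fn) { handler = fn; drain(); }
  };
}

const q = makeQueue('automationrequests');
q.process((job, done) => { /* poll the target system here */ done(); });
q.add({ sourceRequestId: 'ID00001' });
```

The Redis backing is the point of choosing Bull: queued jobs survive a connector restart, which an in-memory structure cannot offer.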

I started by adding the method hooks for the queue. Of course I was adapting and extending my test cases so I could ensure I was not breaking things.

I soon realised that I would need to separate the queue processing from the main Node process running Express. So I started researching the best way of separating the client request handling from the queue processing. Luckily enough, Node's process model allows for some very simple process separation. Right now all I need is a child process which processes the queue. Node.js's built-in ‘child_process’ module allows you to spawn another process. There is no IPC code to be written, since the main process writes items into the queue and the child process reads and updates the queue. All I needed to add were some exit handlers to deal with shutdown conditions. This was a good time to refactor my code into three files: common.js for logging, queue access, and the http-sync request methods; worker.js for the queue processing operations; and main.js for the Express endpoint handlers.

For this phase of work, the solution is functionally complete for short-running jobs. However, it will still need a bit of work to become a complete solution. So far there is no reverse communication, and long-running jobs will not be handled correctly. The queue configuration and processing also need some refinement. However, if you are new to Node.js, I hope this article will inspire you to give it a proper go. Node gets a lot of hype at the moment, but it is fair to say that it makes working with web APIs a breeze. The time to get up and running and productive with Node is comparatively small, allowing you to focus on what your app does as opposed to worrying about the low-level details. Of course, other languages such as Python would also have been good for this example; Flask provides similar advantages to Express.

If you want to see the code that goes with this example it’s in the following git repo:

git clone
