Developer's Tool Belt – Node.js

As a software developer, I am always looking for a quicker and easier way to accomplish a task. Obviously, it must also be correct. I would love to write reusable software every time; sometimes, though, I just need a quick one-off application that does what I need and does it well, takes little time to write, and lets me concentrate on bigger things. This is where Node.js has proved itself over and over.

We deal mostly with C# and .NET projects here at Delphic Digital, but I also work on large PHP projects, and on the side I've been concentrating on Node.js. It is a neat little environment built on Google's V8 JavaScript engine (if you wonder why Chrome is so fast, this is part of the reason), with extensions for file and stream operations, networking, HTTP handling, and more. The programs can be tiny.

We cannot solve our problems with the same thinking we used when we created them. – Albert Einstein

Recently, I had a problem with a site that was showing content it shouldn't have. I thought Pingdom might be able to do what I needed, which was to poll the site, look for that content, and send me an alert. After learning that Pingdom didn't offer that option, I put together a quick solution myself. Here's that program:

var http = require("http"),
    url = require("url");

function checkSite(){
    var uri = url.parse("");
    http.get(uri, function(res){
        var text = "";
        res.on("data", function(d){ text += d; });
        res.on("end", function(){
            if (text.indexOf("bad data") != -1){
                console.log("found bad data.");
            }
            else {
                console.log("data ok."); // or there is no data ;)
            }
        });
    });
}

setInterval(checkSite, 60000);

DONE! Obviously, this offers no configuration or customization at all. It is a hard-coded, one-off program that does exactly what I want; when I'm done with it, it will be shelved until it's needed again on another site with other bad content. Doing this in C# or something similar would mean many more lines of code, a compile step and .exe files, references to large libraries like System.Net, and so on.

On a previous project, I came across a gigantic log file that I couldn't open efficiently to find the error I was looking for. It was, for all intents and purposes, around 100 MB, and my text editor choked on it. My solution was to read the file and write it back out as much more manageable 2 MB chunks:

var fs = require("fs");

var chunkSize = 2048*1024; // 2 MB
var file = "c:\\huge-log.txt";
var output = "c:\\chunks\\";

var stream = fs.createReadStream(file, { bufferSize: 64*1024 });
stream.setEncoding("utf8");

var chunkIndex = 0, currentChunk = "";

stream.on("data", function(data){
    currentChunk += data;

    if (currentChunk.length > chunkSize){
        var outfile = output + chunkIndex + ".txt";
        fs.writeFileSync(outfile, currentChunk, "utf8");
        console.log("Wrote chunk #" + chunkIndex);
        currentChunk = "";
        chunkIndex++;
    }

    if (chunkIndex > 1024)
        process.exit(0);
});

Begin at the beginning and go on till you come to the end: then stop. – Lewis Carroll

I hard-coded it to stop after 1024 chunks. That ought to be enough for anybody. If I were writing it today, I would simply stop writing chunks after the 1024th instead of exiting, read the whole file, and close the streams properly at the end. But my solution at the time still works.


Here is another small problem I solved: N images in production that we didn't have in our development environment. Rather than type in each URL and save each image by hand, I wrote a quick program to do it for me. It uses a SyncArray object I wrote to run asynchronous operations on each element of an array so that, in the end, it appears the array was processed in order. That was a whole other fun problem to solve! Here's the code to download 12 images:


var http = require("http"),
    fs = require("fs"),
    SyncArray = require("syncarray").SyncArray;

var baseUrl = "";

var files = [];
for (var i = 1; i <= 12; i++){
    files.push(i.toString() + ".jpg");
}

var sync = new SyncArray(files);

sync.forEach(function(file, index, array, finishedOne){
    http.get(baseUrl + file, function(resp){
        resp.setEncoding("binary");

        var data = "";
        resp.on("data", function(d){
            data += d;
        });

        resp.on("end", function(){
            fs.writeFile("./" + file, data, "binary", function(){
                console.log("finished " + file);
                finishedOne();
            });
        });
    });
}, function(){
    console.log("complete");
});

If you've dealt heavily with Node.js, you know that an array forEach with asynchronous methods called inside the iteration can complete in any order! The SyncArray object I wrote ensures in-order processing, so this program fetches each file in the right order. That might seem unimportant, but writing "1.jpg" with "2.jpg"'s data gets confusing quickly. It also keeps the code clean by avoiding crazy nested closures, while ensuring that 7.jpg's data is written to 7.jpg 🙂
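The core idea behind SyncArray can be sketched as a minimal in-order async forEach (this is an illustration of the technique, not the actual SyncArray implementation): only move to element i + 1 once element i's completion callback fires.

```javascript
// Minimal in-order async iteration: iterate(item, index, finishedOne) is
// called for one element at a time; the next element starts only after
// finishedOne() fires; complete() runs once every element is done.
function forEachInOrder(items, iterate, complete) {
    var index = 0;
    function next() {
        if (index >= items.length) return complete();
        iterate(items[index], index, function finishedOne() {
            index++;
            next();
        });
    }
    next();
}
```

Even if each step involves a slow network call, the callbacks chain one after another, so results land in array order rather than in whatever order the responses happen to arrive.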

Overall, I find that Node.js is a quick way to handle small tasks that would otherwise take a few hours. It is powerful, FAST, and full featured, yet can be a little frustrating to learn because of its non-blocking nature. Give Node.js a shot the next time you find yourself in a similar situation. The module library is massive and covers everything from sending email to processing images to hosting web sites to running a proxy server, and much more. Node.js is a great little tool for any developer's tool belt.


Jason is a Senior Software Developer at Delphic Digital. He likes to dabble in new technologies, which currently includes Node.js and MongoDB. He enjoys spending time with his wife and one-year-old daughter. Jason is an amateur photographer and also likes to play piano and guitar when he's not solving logic puzzles and coding all day.
