It’s very common to write applications or libraries with dependencies that are completely foreign to your code. Whether it’s a database server, a web service, some CLI tools or even timers, suddenly your tests spend several seconds waiting for responses and/or don’t have strictly defined behavior. It’d be great if we could define this foreign behavior programmatically: instead of actually interacting with these dependencies, we could just test our interactions with them.
The solution to this problem exists and is very simple: using stubs. Knowing how to use them will make your tests easier to write, more predictable and a whole lot faster.
What are stubs? An example using web scraping.
Imagine I’m writing a program to extract the main headline from the Folha de S. Paulo website — that’s a big newspaper in São Paulo, where I live. Today, the headline is structured like this:
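A simplified, hypothetical version of that structure (the real markup and class names may differ):

```html
<!-- hypothetical sketch; the real class names are assumptions -->
<section class="main-headline">
  <a href="...">
    <h1>Manchete principal do dia</h1>
  </a>
</section>
```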
Some days the main headline sits inside a main-headline section, and if we want more reliable extraction we can crawl the Internet Archive’s entries for the website. But essentially, we want our code to do two tasks:
- Download the homepage
- Extract and serialise this bit from the HTML
In a real project, each task would have its own separate function, but today we’ll write everything in the same function. This keeps the example illustrative, and this kind of coupling happens very often in practice. Even if the tasks were separate, the orchestration between them would still need to be tested.
So here’s our function:
I’ve named this function carregaManchete; that’s Portuguese for loadHeadline. The code above uses two libraries that should be part of many Node.js programmers’ repertoire: superagent and cheerio. First, request.get is used to download the page, and then the jQuery-like syntax of cheerio.load(…) is used to extract the desired piece of the HTML.
We want to test our code with mocha, so we write the following test:
Here, Node.js’ built-in assert module is used to test our function. But what happens if we’re offline? Or if the website responds with an error? Our tests, albeit as simple as our code, are already slow: this takes around 100ms consistently on my connection. We want to control what request.get(…).end(…) returns and how.
A stub is a function which substitutes a method in an object we want to control. In this case, the object is what request.get(…) returns: an instance of the request.Request class. We need a stub on the method request.Request.prototype.end. This way, when request.get(…).end(…) is called, we’ll have full control over what happens.
Before introducing sinon.js, a framework written specifically to help tackle this problem, it’s worth showing that we can easily implement our own stubs. Using the before and after hooks in mocha, we can substitute the method with a fake version for a single test and restore it after the test has run:
After adding that to our top-level describe block, our tests run instantly. Because we have control over the function, we can test what happens when an error occurs:
And so it goes. For more complex scenarios, it’ll be useful to make assertions about what happened to our planted method. Was it called at all? What arguments was it called with? How many times? Also, as complexity grows, we may be testing code that needs many stubs, and the boilerplate for creating and restoring them can get in the way of writing simple tests.
Stubs with sinon.js
Though sinon.js covers more ground than just creating stubs, for our purposes it’s the answer to those questions. It provides helpers for creating and restoring stubs on objects and for making those call-related assertions. Here’s how creating the same stub from our first before/after example looks using sinon.js:
Here, the first block creates a stub on request.Request.prototype with sinon.js. The second calls .restore on the fake method, which puts the original back. We can use either the object returned by sinon.stub or the request.Request.prototype.end fake method itself to find data about how it was called.
request.Request.prototype.end.getCalls(), for example, returns an array with information about the calls; we can look at .thisValue, .args and more. I suggest logging an object representing a call and finding the data most useful to you. We can also easily assert whether the fake method was called with request.Request.prototype.end.called; .calledOnce, .calledTwice and .callCount are also provided.
Another thing worth mentioning is that sinon.js also provides methods for creating these simple fake functions. We could shorten the example above, which just calls a callback with a fixed string literal, by using .yields:
Personally, I stay away from those helpers. I don’t mind writing the fake functions, since it’s more flexible to control explicitly what your replacement will do. The boilerplate for restoring the functions does annoy me, though. In the context of mocha, we can ensure that stubs are scoped to a single test block by automatically generating those before and after hooks.
I wrote a tiny helper for that called mocha-make-stub. Its main purpose is to enforce this convention. This is what the example looks like using it:
makeStub calls sinon.stub under the hood, but does it inside a before block. It also adds an after block to restore the original method. If tests are written using the helper, you’ll always be in control of what the stubs are doing, and you’ll keep the test suite from growing complex. Another bit of sugar it adds is .end on mocha’s context object, so assertions that were on request.Request.prototype.end can be written using this.end. The README on its GitHub page shows how to name that property as you wish, through an optional first parameter.
For me, this puts us in a much better place. The tests run really fast, we’ve established a convention for creating and destroying stubs, and we’ve stripped away all the boilerplate. Hopefully you can now run to your test suites and start taking their dependencies away too. In the worst case, this will make them faster; for me, it opens up more possibilities.
When structuring tests, I don’t care about dependencies at first, letting the code use its real-world dependencies. Once tests and application code are well structured and I’m happy with the design, I stub the dependencies one by one. By commenting out one of the makeStub calls, I can see the code running exactly as it will in production, while keeping all the benefits of stubbing at hand.
Please share your thoughts and concerns in the comments, and do take a more hands-on, in-depth look at the projects mentioned.