Why callback hell can be your friend.

We're all too familiar with this term and stigma. I have enjoyed my limited time working with node.js professionally, but I can't have a conversation with someone outside the ecosystem without callback hell coming up.

The idea is simple. In JavaScript, it is canonical to handle all I/O asynchronously, and the lowest-level tool for doing so is the callback passed to a function that performs I/O.

So routinely, we get an arbitrary example that looks something like this...

DBfind(function(err, thing) {
    thing.doSomething(function(err) {
        thing.doAnother(function(err) {
            thing.doYetAnother(function(err) {
                // :(((((
            });
        });
    });
});

The primary problem presented with these types of examples boils down to the fact that they are ugly.1 The universal counterpoint, Promises or flow control libraries, is focused on fixing nesting depth.
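For illustration, here is a sketch of that counterpoint: the same four sequential operations flattened with Promises. DBfind and the thing's methods are hypothetical stand-ins for the original callbacks' work; each records that it ran so the sequencing is visible.

```javascript
// Hypothetical stand-ins: each operation records itself and resolves
// immediately, standing in for real asynchronous I/O.
var order = [];

function DBfind() {
    var thing = {
        doSomething:  function() { order.push('doSomething');  return Promise.resolve(); },
        doAnother:    function() { order.push('doAnother');    return Promise.resolve(); },
        doYetAnother: function() { order.push('doYetAnother'); return Promise.resolve(); }
    };
    order.push('find');
    return Promise.resolve(thing);
}

DBfind()
    .then(function(thing) {
        return thing.doSomething()
            .then(function() { return thing.doAnother(); })
            .then(function() { return thing.doYetAnother(); });
    })
    .then(function() {
        // all four operations complete, with the pyramid flattened
    });
```

The chain reads top to bottom, but note that it only reshapes the nesting; the four sequential round trips to I/O are still there.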

Is that the primary problem though, or is it deeper?

We have stacked 4 sequential I/O operations. Is this a good thing even in a synchronous I/O environment? Imagine the same logic stuffed into an object method that blocks awaiting the return. On platforms that prefer synchronous I/O, these expensive operations are easy to bury.

The concept of callback hell should be treated first as a symptom that could point to a deeper design issue. Is this method's single responsibility I/O? I have come to view methods accepting callbacks as a potential code smell, prompting us to ask whether the I/O can be made more explicit to the developer.

A common offense is what I will call tail call I/O: the object method mutates the object, then calls out to I/O, proxying along the callback passed to the original method call.

Imagine the following example.

Thing.prototype.updateFoo = function(newFoo, callback) {
    this.somethingComplicated = newFoo;
    this.save(callback);
};

Thing.prototype.updateBar = function(newBar, callback) {
    this.somethingMoreComplicated = newBar;
    this.save(callback);
};

var thing = new Thing();

thing.updateFoo('Foo', function(err) {
    thing.updateBar('Bar', function(err) {
        // :((((((
    });
});

In these cases, you can easily flatten that out like so.

Thing.prototype.updateFoo = function(newFoo) {
    this.somethingComplicated = newFoo;
};

Thing.prototype.updateBar = function(newBar) {
    this.somethingMoreComplicated = newBar;
};

var someThing = new Thing();
someThing.updateFoo('Foo');
someThing.updateBar('Bar');
someThing.save(function(err) {
    // :)
});

Not only is this nicer, but more importantly it is a design improvement: we reach out to I/O land only once, when the developer consciously decides it is required. This approach also makes .updateFoo() and .updateBar() easy to test. Previously, they would be beholden to the external services performing the I/O operation, which would need to be mocked or integrated into the test runner.

There are situations that require a waterfall of I/O operations, but our first instinct should not be to accept them as is and target only the cosmetic defects.
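A legitimate waterfall looks something like the following sketch, where each call consumes the previous call's result. lookupUser, loadAccount, and fetchStatement are hypothetical helpers that respond synchronously here so the shape is easy to see; the nesting reflects a real data dependency, not an accidental one.

```javascript
var result;

// Hypothetical helpers standing in for real I/O calls.
function lookupUser(name, callback) {
    callback(null, { accountId: 42, name: name });
}
function loadAccount(accountId, callback) {
    callback(null, { id: accountId, balance: 100 });
}
function fetchStatement(account, callback) {
    callback(null, 'balance: ' + account.balance);
}

lookupUser('ada', function(err, user) {
    if (err) return console.error(err);
    // the account lookup needs the user's accountId...
    loadAccount(user.accountId, function(err, account) {
        if (err) return console.error(err);
        // ...and the statement needs the loaded account
        fetchStatement(account, function(err, statement) {
            if (err) return console.error(err);
            result = statement;
        });
    });
});
```

The design question in these cases is whether each dependency is genuine, not how to hide the nesting.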

Instead of bemoaning this distinctive property of JavaScript and slapping some paint on it, let's use it to better assess the quality and performance of our API designs.


1. Call it (read|scale|maintain)ability if you wish. And yes, I'm aware JavaScript does not start from a place of cosmetic excellence.