
flipflop 0.1.0 - configurable urls

Saturday September 14, 2013
By Brad Harris

I released a new version of flipflop that has support for configurable urls. Details are in the readme file, but in short, when you generate a new flipflop blog, part of the config in the blog.json file will now include a section for routes. If you have an existing flipflop blog and want to update to the latest, you can add this section to your blog.json config file. You may also want/need to update a few of your templates, depending on how much you've changed them. Check this commit to see an example of what changes to make.

"routes": {
    "archive": "/archive",
    "article": "/:year/:month/:day/:slug",
    "error": "/404.html",
    "homepage": "/",
    "feed": "/feed/rss.xml",
    "tag": "/tag/:tag"
}

This will allow for customizing the url, such as adding a prefix to your blog:

"routes": {
    "archive": "/blog/archive",
    "article": "/blog/:year/:month/:day/:slug",
    "error": "/blog/404.html",
    "homepage": "/blog",
    "feed": "/blog/feed/rss.xml",
    "tag": "/blog/tag/:tag"
}

There are a few special things to note with routes:

  • The article route requires a :slug param. Available params are:
    • :year
    • :month
    • :day
    • :slug (required)
  • The tag route requires a :tag param. Available params are:
    • :tag (required)
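To illustrate how these templates turn into concrete urls, here's a minimal sketch of param substitution (expandRoute is a hypothetical helper written for this post, not flipflop's actual implementation):

```javascript
// hypothetical helper, not flipflop's actual code: expand a route
// template by substituting each :param from an article's metadata
function expandRoute(template, params) {
    return template.replace(/:([a-z]+)/g, function(match, name) {
        return params[name];
    });
}

console.log(expandRoute('/:year/:month/:day/:slug', {
    year: '2013', month: '09', day: '14', slug: 'configurable-urls'
}));
// → /2013/09/14/configurable-urls
```

With the prefixed routes above, the same article would come out at /blog/2013/09/14/configurable-urls.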

using express with broadway

Thursday September 20, 2012
By Brad Harris

Express is without a doubt an extremely popular http application server/framework for node.js. In this article I'd like to demonstrate how you can take advantage of broadway and express together.

In this approach, we'll treat the http server that express provides as just another plugin to our broadway application.

express http plugin

First we'll create an http plugin encapsulating express.

// http.js

var express = require('express');

var HttpPlugin = exports;

HttpPlugin.name = 'http';

HttpPlugin.attach = function(options) {
    var app = this;

    var server = app.http = express();
    server.use(server.router);

    require('./routes')(app);
};

HttpPlugin.init = function(done) {
    done();
};

I'd also suggest defining your routes in their own module(s), and passing along the broadway application. This will allow them to take full advantage of any other functionality your application acquires through other plugins.

// routes.js
module.exports = function(app) {

    app.http.get('/', function(req, res){
        res.send('express on broadway');
    });

};

broadway.concat(express)

Next we'll create a broadway application and toss in our http plugin.

// app.js

var broadway = require('broadway'),
    app = new broadway.App();

app.use(require('./http'));

app.init(function(err) {
    app.http.listen(8080);
});

You can also just module.exports = app.http if you're using something like up and need to export an http server from your main application.

krakens.release()

Finally, fire up your application

> node app.js

That's all there is to it. In this example, the http server is just one piece of our application, and we can bolt on additional functionality as needed through more plugins.

decoupling your node.js application with broadway

Tuesday September 18, 2012
By Brad Harris

the why

Why should you worry about decoupling your node.js application? Can't you just use the module pattern and require() away? Sure, sort of...until your application starts to grow, and modules begin to have cross dependencies. In reality, you can avoid cross dependencies between modules for most small to medium sized applications, but as your application grows, you may run into cyclic dependencies, which can be hard to decipher and debug. Without going into detail on what those are (follow those links if you're wondering), I present a contrived example.

var ModuleA = require('module-a'),
    ModuleB = require('module-b');

var ModuleC = module.exports = function() {
    this.myB = new ModuleB();
};

ModuleC.prototype = {

    /** amazin' function! */
    amazinFunction : function() {
        if(ModuleA.isAmazin(this.myB)) {
            this.beginAwesomness();
        } else {
            this.sadPanda();
        }
    },

    beginAwesomness : function() { /** awesome stuff */ },

    sadPanda : function() { /** sad stuff */ }

};

Here ModuleC is exported, and is a pretty basic function/prototype that has some required dependencies. The principle of dependency injection would tell us that instead of ModuleC being responsible for loading ModuleA and ModuleB, those should be injected into it somehow. Broadway is a fantastic library to help with this.

broadway

At its core, Broadway provides a plugin architecture for applications, and a consistent way to manage and add functionality. It also gives us a nice platform for dependency injection and inversion of control, letting the modules alter and build on the application instead of the application being responsible for building everything and pulling in your modules' functionality. If you're interested in this concept, this article from the Nodejitsu blog is super informative.

So, let's start with a basic Broadway application, and load up a plugin that we'll define below.

var broadway = require('broadway'),
    app = new broadway.App();

app.use(require('myposse'));

app.init(function(err) {
    // we're all setup, gtg
});

A basic Broadway plugin might look something like...

// myposse.js

var MyPosse = exports;

MyPosse.name = 'myposse';

MyPosse.attach = function(options) {
    var app = this;

    // here we can add some functionality to the app
    app.posse = function() {
        console.log("my posse's on broadway");
    };
};

MyPosse.init = function(done) {
    // handle any asynchronous initialization
    done();
};

RIP MCA. Inside our attach function is where we can pull in our related modules, and expose them to the application. Notice how we're calling the result of the require statement of each module, passing in the Broadway application, and setting that onto the application itself.

MyPosse.attach = function(options) {
    var app = this;

    app.ModuleA = require('module-a')(app);
    app.ModuleB = require('module-b')(app);
    app.ModuleC = require('module-c')(app);

};

We would want to rework our above example of ModuleC to allow for this change, which also lets us remove the require statements for ModuleA and ModuleB, and pull them in as dependencies from the app object.

module.exports = function(app) {

    var ModuleA = app.ModuleA,
        ModuleB = app.ModuleB;

    var ModuleC = function() {
        this.myB = new ModuleB();
    };

    ModuleC.prototype = {

        /** amazin' function! */
        amazinFunction : function() {
            if(ModuleA.isAmazin(this.myB)) {
                this.beginAwesomness();
            } else {
                this.sadPanda();
            }
        },

        beginAwesomness : function() { /** awesome stuff */ },

        sadPanda : function() { /** sad stuff */ }

    };

    return ModuleC;

};

With Broadway you can organize your application's modules and expose their functionality to the application via plugins. I've found it a great way to organize services, models, and other application resources, and expose them to the application without coupling them directly to each other via require statements. There's definitely a place for modules that are independent enough to be require()'d at will, but I've also found that there's a place for application specific modules that are best managed at an application level.

There's a lot more Broadway has to offer (such as application events), so check it out if you're building large applications on node.js.

hello fabric

Saturday September 1, 2012
By Brad Harris

In an effort to teach myself a little about Fabric, I threw together a script to help publish updates for this blog. It's stored on GitHub and is an auto-generated static site created by FlipFlop.

from __future__ import with_statement
import re
from fabric.api import run, cd, env

env.hosts = ['selfcontained.us']

def publish():
    with cd('/data/www/selfcontained'):
        run('git pull')
        run('flipflop generate')
    print('changes published')

So simple, just a fab publish and changes are out there. Sure beats ssh'ing around and doing it manually. I realize this is a super basic usage of fabric, but I'm a fan.

Here are a few more commands I put together for tagging and deploying tags to a server:

from fabric.api import local, run, cd, sudo, abort
from fabric.contrib.console import confirm

def tag(version=None):
    if version is None:
        version = getNextVersion()
    if not confirm('create new tag (%s)?' % version):
        abort('no tag for u')
    local('git tag -a %s -m "%s"' % (version, version))
    local('git push --tags')
    print('tag %s created' % version)

def deploy(version=None):
    if version is None:
        version = getNextVersion()
    if newVersion(version):
        tag(version)
    if not confirm('Deploy tag "%s" to production?' % version):
        abort('no deploy for u')
    with cd('/path/to/repo'):
        run('git fetch')
        run('git checkout %s' % version)
        sudo('/etc/init.d/apache2 restart')
    print('tag %s deployed' % version)

def getNextVersion():
    # local() only returns command output when capture=True
    latest = local('git tag | sort -V | tail -1', capture=True).strip().split('.')
    latest.append(str(int(latest.pop()) + 1))
    return '.'.join(latest)

def newVersion(version):
    return version not in local('git tag', capture=True).strip().split('\n')

node.js and circular dependencies

Tuesday May 8, 2012
By Brad Harris

Circular dependencies between modules can be tricky, and hard to debug in node.js. If module A require()s B before it has finished setting up its exports, and then module B require()s A, B will get back an empty object instead of what A may have intended to export. It makes logical sense that if the exports of A weren't set up, requiring it in B results in an empty export object. All the same, it can be a pain to debug, and not inherently obvious to developers used to having those circular dependencies handled automatically. Fortunately, there are rather simple approaches to resolving the issue.

example.broken() === true

Let's define a broken scenario to clearly illustrate the issue. Module A delegates to an instance of Module B to do some important stuff().

Module A

var B = require('./B'),
    id,
    bInstance;

var A = module.exports = {
    init : function(val) {
        id = val;
        bInstance = new B();
        return this;
    },

    doStuff : function() {
        bInstance.stuff();
        return this;
    },

    getId : function() {
        return id;
    }
};

Module B

var A = require('./A');

var B = module.exports = function(){
    return {
        stuff : function() {
            console.log('I got the id: ', A.getId());
        }
    };
};

Tie them together

require('./A.js')
    .init(1234)
    .doStuff();

With this you'll end up with an error:

TypeError: Object #<Object> has no method 'getId'
    at Object.stuff (/Users/bharris/workspace/circular-dep/B.js:7:36)
    at Object.doStuff (/Users/bharris/workspace/circular-dep/A.js:18:13)
    at Object.<anonymous> (/Users/bharris/workspace/circular-dep/test.js:4:3)

The issue is that when A is required at the top of B, it ends up being an empty object, which doesn't have a getId method.

example.solutions().length === 2

I'll explain two simple solutions to this issue:

delay invocation of dependency until runtime

If we move the require statements to where they are needed at runtime, it will delay the execution of them, allowing for the exports to have been created properly. In this example, we can get away with simply moving the require('./B') statement.

Module A

var id,
    bInstance;

var A = module.exports = {
    init : function(val) {
        id = val;
        bInstance = new (require('./B'))();
        return this;
    },

    doStuff : function() {
        bInstance.stuff();
        return this;
    },

    getId : function() {
        return id;
    }
};

This feels like a bit of a band-aid for this particular problem, but perhaps it's the right solution in some cases.

replace circular dependency with dependency injection

The only dependency that B currently has on A is an id property it needs access to. We could just pass the id into the constructor of B, but let's assume A is more significant to the operations B must perform, and a proper reference is required. If we inject that dependency we'll allow for loose coupling between the two modules, and have a slightly more elegant solution. Zing!

Module A

var B = require('./B'),
    id,
    bInstance;

var A = module.exports = {
    init : function(val) {
        id = val;
        bInstance = new B(this);
        return this;
    },

    getId : function() {
        return id;
    },

    doStuff : function() {
        bInstance.stuff();
        return this;
    }
};

Module B

var B = module.exports = function(val){
    var dependency = val;
    return {
        stuff : function() {
            console.log('I got the id: ', dependency.getId());
        }
    };

};

#winning

DependencyInjection
    .merge(LooseCoupling)
    .attach($);

I put my $ on dependency injection and loose coupling.

google.drive.backup(iphoto.library)

Friday April 27, 2012
By Brad Harris

Time Machine is great, but certain things, like the past 10 years of photos of my kids crawling around in diapers, learning to swim, snowboarding, and shoving their faces full of cake, warrant more than just one extra local backup in my mind. There are lots of cloud storage solutions, but none have been cheap enough for me to really consider using, until Google Drive came out. $4.99 a month for 100 GB, yes please!

iphoto.library.clone() === ftw()

After installing Google Drive and grabbing a cheap 100 GB subscription, I set forth to backup my precious photos. iPhoto stores its library in an app file, which is really just a folder. You can put that folder anywhere (it's in ~/Pictures/iPhoto Library by default), so I could just move it into my ~/Google Drive folder and call it good. I don't have that much confidence in Google Drive yet, and would hate myself if I lost all those photos. Since local hard-drive space is so cheap and plentiful, I decided to copy my ~/Pictures/iPhoto Library folder into my ~/Google Drive folder. Once that was done, I just needed to keep it synced with my real iPhoto Library folder.

rescue.add(crontab).add(rsync);

OS X Lion uses launchd instead of cron, but I'm used to cron, and it's still installed with Lion, so I'm gonna use it. Setting up an hourly cron job to rsync any changes to the local Google Drive copy of the iPhoto Library does the trick, and is super simple. Create a text file somewhere, like maybe ~/crontab.txt, and fill it with this lovely bit of text, adjusting any directories as needed:

1 * * * * rsync -lrtuv ~/Pictures/iPhoto\ Library/ ~/Google\ Drive/iPhoto\ Library/

At minute 1 of every hour, this job will run. Feel free to adjust as needed, but rsync is fast, and only copies changes, so don't worry too much about running it too often. The -l flag is important, as iPhoto contains a few internal symlinks that are needed for it to function correctly. If you feel like giving it a test run, just run the following in a terminal:

rsync -lrtuv ~/Pictures/iPhoto\ Library/ ~/Google\ Drive/iPhoto\ Library/

With that setup, we just need to add our crontab file to the system:

crontab ~/crontab.txt

Then check to make sure your job is setup:

crontab -l

That's it, now Google Drive will back up your iPhoto Library, and any updates to the library will be synced locally with the Google Drive copy. If you get courageous, you can tell iPhoto to load the library in your Google Drive folder as the source by double clicking ~/Google Drive/iPhoto Library. If you do that, be sure to turn off your cron job though!

crontab -r

Now you can happily camera.takePicture(kids.eat(cake)) and know it's backed up somewhere other than just your home.

markdown powered blogs

Saturday April 14, 2012
By Brad Harris

Switching from a complex blogging platform to a lightweight, file-based blog is something of a trend amongst web developers lately. I chalk it up to our love of simple solutions, and a preference for interfacing directly with a file-system. I much prefer to open up SublimeText2, go into distraction free mode and create a markdown file for a new article. For me it's a lower barrier to entry for writing than logging into WordPress and gathering my thoughts in a <textarea>.

what's out there

There are some great options out there for markdown based blogs. My preference is towards node.js, and Wheat (created by Tim Caswell) as a platform for howtonode.org is one of the more popular solutions in that category. Blacksmith is another great solution, created by nodejitsu. Both are interesting and powerful solutions, but weren't exactly what I wanted.

javascript is easy

I wanna be in control of the urls for articles, use the templating engine I prefer, and have some fun writing javascript. I also want to be able to start my blog up locally on a node http server to tweak it and test it, and not have to generate the static site to view every change. For the live site, I want to just clone a git repo on a server, and run > node generate.js to create a static site for Apache to serve. So I did, it was fun. Feel free to check it out, maybe fork it and see what you think.

i can haz dropbox blog?

Check out what Joe Hewitt is doing to integrate Dropbox into his blogging solution. I dig it.

node.js clusters

Wednesday April 4, 2012
By Brad Harris

When it comes time to deploy a node.js application, you'll want to examine how you can benefit from your hardware and take advantage of multiple cpus. Node is single threaded, so having one process run your application on a server with more than one cpu isn't optimal. Fortunately, like many things on node, it's simple to fix.

Let's take the following example of a dead simple http server:

    require('http').createServer(function(req, res) {
        res.writeHead(200);
        res.end('this is dead simple');
    }).listen(3001);

You've got that saved in a file, let's call it app.js. You start it up...

    > node app.js

...and it's amazing, it writes out text to all those that visit your site, just like you want. It's become an overnight internet sensation, and now you want to know what crazy hoops you'll have to jump through to scale it up and take advantage of all the cpus on your server. Enter the cluster module.

We'll create a new file that we'll use in production to launch our application. Let's call it cluster.js.

    var cluster = require('cluster');

    if (cluster.isMaster) {
        //start up workers for each cpu
        require('os').cpus().forEach(function() {
            cluster.fork();
        });

    } else {
        //load up your application as a worker
        require('./app.js');
    }

When you start your app now...

    > node cluster.js

...node will recognize that the cluster is the master, and then you simply fork the cluster for each cpu you have. Those in turn will start up, and those workers won't be the master, so they'll just load up your app.js and start up a process for each cpu.

But wait, doesn't each worker have to listen on a different port?

"The cluster module allows you to easily create a network of processes that all share server ports."

One TCP server is shared between all workers, so they can all be listening on the same port.

Awesome, you're now handling loads of traffic, but after a day you realize two of your workers died because you had an exception being thrown in that complex codebase of yours. How can you make sure you keep them running?

    if(cluster.isMaster) {
        //start up workers for each cpu
        require('os').cpus().forEach(function() {
            cluster.fork();
        });

        cluster.on('death', function(worker) {
            console.log('worker ' + worker.pid + ' died');
            cluster.fork();
        });
    }

Listen for the 'death' event on the cluster, and just fork a new worker if one dies. Simple huh? Keep in mind, every worker is its own process, therefore they don't share memory.

markdown.me

Friday July 8, 2011
By Brad Harris

Markdown is a great shorthand syntax for creating HTML, and subsequently, for taking notes. I often take notes for different situations and use the Markdown syntax to help give them structure and organization. One thing I found I often wanted was a way to enter that Markdown somewhere, and have it generate an HTML page from it, with a permalink so I could share it or access it later. The Markdown dingus provides a great UI for testing out HTML conversion, but doesn't provide any persistence, so I threw together a pretty quick site that fit my needs.
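The conversion itself is the easy part: libraries like marked handle the full syntax, and the site's real value is the persistence and permalinks. Just to illustrate the idea, here's a toy converter for a tiny subset of Markdown (headings and bold only; this is a sketch, not what markdown.me actually runs):

```javascript
// toy sketch: handles only #-style headings and paragraphs with
// **bold** - a real site would use a full Markdown library instead
function miniMarkdown(src) {
    return src.trim().split(/\n\s*\n/).map(function(block) {
        var heading = block.match(/^(#{1,6})\s+(.*)$/);
        if (heading) {
            var level = heading[1].length;
            return '<h' + level + '>' + heading[2] + '</h' + level + '>';
        }
        return '<p>' + block.replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>') + '</p>';
    }).join('\n');
}

console.log(miniMarkdown('# notes\n\nsome **important** thing'));
// → <h1>notes</h1>
//   <p>some <strong>important</strong> thing</p>
```

Persist the source text keyed by a generated id, render it through a converter like this on request, and you've got the permalink behavior described above.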

markdown.me

It's pretty basic, but you can throw in your Markdown and you end up with a unique url you can share with the generated HTML. I was too lazy to add a full-blown account registration layer to organize and manage your documents, but did add Facebook login so you can do that if desired. Perhaps I'll add other forms of login if anyone else ends up using it. Anything you put on there is public as well, for now. Here's the permalink to some notes I took at this year's Velocity Conference.