Three Steps to Effective Code Reviews

Exchanging feedback doesn’t have to be painful

These days, software developers are living in a GitHub Workflow world. They develop new code on version-controlled branches and gather feedback prior to inclusion in the primary release, or “master” branch, through pull requests.

Our development team at ChallengePost has been using this workflow for almost two years with great success, although we’ve had our share of pain points. For better or worse, feedback typically happens asynchronously and in written form. Convenient, yes, but this approach is not free of wrinkles, especially when we use poor word choice, hyperbole, sarcasm, and other forms of counterproductive commentary.

This has led to resentment and injured relationships on occasion. In response, I’m working to improve how we give and receive criticism.

Building trust

Let’s assume that, when done well, code reviews are a good thing. That is to say, the practice of giving and receiving feedback in a consistent, continual manner has true benefits. These may include improving code quality over time and driving convergence of ideas and practices within your team. In my experience, for feedback to be effective, trust amongst team members is a key requirement.

This may not be an issue for teams that have been together for a long time or share common values, but for others, trust has to be earned. In the absence of trust, there’s more opportunity for personal differences to get intertwined with feedback. While there are no quick fixes, what follows are code review practices that we have adopted to foster our shared sense of trust.

1. Adopt a style guide

Spoiler alert: code syntax and formatting are trivial choices. What’s most important is that your team agrees on and adheres to a set of guidelines.

Take a few hours as a team to hammer out a style guide for each of the languages you use. Better yet, use a public example like GitHub’s style guide as a starting point. Besides the obvious benefits of consistency and maintainability, style guides reduce the likelihood of flared tempers during reviews; when you’re pushing to get a new feature out the door, it’s unhealthy to argue over whitespace. This works when your team follows the guide and comments on style issues respectfully, saving concerns about the existing guidelines for separate discussions.

2. Start with the end in mind

Imagine a developer who emerges, after hours or days off in the “zone,” with a sparkly new feature and asks for a review. All is good, right? Except that the rest of the team has issues with the implementation. Words are exchanged, the developer takes the feedback personally, and suddenly the entire team is distracted from shipping code.

Personally, I believe code review should begin well before the final commit. It can happen early on, in short discussions with teammates once the ideas start to take shape. Get buy-in on your approach before you’re ready to merge your branch. Opening a pull request and asking for feedback while work is still in progress is a great way to build trust between teammates and reduce the likelihood that criticism will be interpreted as a personal attack.

3. Use the Rubber Duck

Rubber duck debugging is a method of finding solutions simply by explaining code line-by-line to an inanimate object. We’ve found it helps to do the same with our writing, especially when our first instinct is to respond to code or another comment with sarcasm or anger. Take a moment to read your response aloud and question the wording, timing, and appropriateness. This includes taking into account the personality of the team members you’re addressing. Thoughtbot has compiled a useful set of code review guidelines to help both readers and writers respond thoughtfully. I also suggest that teammates share meta-feedback to ensure that everyone is hitting the right notes of tone and instruction.

The next time you feel pain in a code review, take a step back to consider what’s missing. It could be that your team needs to adopt some guidelines to reduce friction and ensure feedback is exchanged in as constructive and positive a manner as possible. After all, you have both code and relationships to maintain.


Feb 25


Ruby, You Autocomplete Me

Hacking on a smarter ruby console

My team recently added a tagging feature to our web app. As the user types in the text input, the app supplies autocomplete suggestions from our database via JavaScript, a familiar UX. While backporting tags to existing records on the rails console, it hit me: “Why not bring tag autocompletion to the command line?”

The default rails console provides completion out of the box, though all the script does is start irb with the rails environment and irb/completion required.

#!/usr/bin/env ruby
require File.expand_path('../../load_paths', __FILE__)
require 'rails/all'
require 'active_support/all'
require 'irb'
require 'irb/completion'


Turns out that all irb/completion does is configure the ruby interface to the GNU Readline Library. This is done with the ruby Readline module. Readline accepts a proc that determines completion behavior by returning an array of string candidates given an input string triggered, typically, by pressing TAB.

From irb/completion:

if Readline.respond_to?("basic_word_break_characters=")
#  Readline.basic_word_break_characters= " \t\n\"\\'`><=;|&{("
  Readline.basic_word_break_characters= " \t\n`><=;|&{("
end
Readline.completion_append_character = nil
Readline.completion_proc = IRB::InputCompletor::CompletionProc

IRB::InputCompletor::CompletionProc is a proc that evaluates a large case statement of regular expressions that attempt to determine the type of a given object and provide a set of matching candidates, such as String instance methods when the input matches %r{^((["']).*\2)\.([^.]*)$}.
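That dispatch can be sketched like so (my own simplified illustration, not IRB’s actual implementation): match a string-literal receiver, then grep String’s instance methods for the partially typed message.

```ruby
# Simplified sketch of a case-statement completion proc. When the input
# looks like a string literal followed by a dot, suggest String instance
# methods; otherwise offer no candidates.
STRING_RECEIVER = %r{^((["']).*\2)\.([^.]*)$}

completion = proc { |input|
  case input
  when STRING_RECEIVER
    receiver, message = $1, $3
    String.instance_methods.map(&:to_s)
          .grep(/^#{Regexp.escape(message)}/)
          .map { |m| "#{receiver}.#{m}" }
  else
    []
  end
}

completion.call('"abc".upca')  # suggests "abc".upcase, "abc".upcase!
```

The real CompletionProc handles many more receiver shapes (constants, symbols, variables in the current binding, and so on), but each branch follows this same match-then-grep pattern.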

To give Readline a spin, fire up irb and paste in the following example, borrowed from the ruby docs:

require 'readline'

LIST = [
  'search', 'download', 'open',
  'help', 'history', 'quit',
  'url', 'next', 'clear',
  'prev', 'past'
]

comp = proc { |s| LIST.grep(/^#{Regexp.escape(s)}/) }

Readline.completion_append_character = " "
Readline.completion_proc = comp

There's nothing stopping us from bringing this into the rails console, where we can take advantage of our rails environment and even access the database. Building off the example, we can replace the hard-coded array with a list of tags plucked from a simple ActiveRecord query:

require 'readline'

comp = proc { |s| ActsAsTaggableOn::Tag.named_like(s).pluck(:name) }

Readline.completion_proc = comp

We have room for improvement. For one thing, this runs a new query every time you attempt to autocomplete. For a reasonable number of tags, we could load the tag list into memory and grep for matches instead. There is still another problem: by replacing the Readline.completion_proc, we've clobbered the functionality provided by irb/completion. One approach would be to fall back to the IRB::InputCompletor::CompletionProc or add its result to the array of candidates. Given that IRB has documented, incorrect completions (try completing methods on a proc) and no built-in support for extending completion behavior, this could get messy.
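Here's a sketch of both ideas combined. The hard-coded TAG_NAMES below stands in for a one-time ActsAsTaggableOn::Tag.pluck(:name) query, and falling back to IRB's default proc is one of the messy options just mentioned (newer irb versions may organize their completion internals differently).

```ruby
require 'readline'
require 'irb/completion'

# Stand-in for a one-time ActsAsTaggableOn::Tag.pluck(:name) query,
# loaded once at console startup instead of on every TAB press.
TAG_NAMES = ['Facebook', 'Facebook Graph', 'Twitter']

tag_completion = proc { |s|
  matches = TAG_NAMES.grep(/^#{Regexp.escape(s)}/)
  # Fall back to irb's own completions when no tags match, so we don't
  # clobber the behavior provided by irb/completion entirely.
  matches.any? ? matches : IRB::InputCompletor::CompletionProc.call(s).to_a
}

Readline.completion_proc = tag_completion
```

This avoids the per-keystroke query, but the fallback still inherits IRB's quirks, which is what makes a purpose-built library attractive.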

Enter bond, a drop-in replacement for IRB completion. It aims to improve on IRB's shortcomings and provides methods for adding custom completions. To take advantage of Bond in the console:

require 'bond'
Bond.start

Bond allows you to extend the strategies for autocompleting text with the Bond.complete method. To set up a Bond completion, we need a condition and an action; when the condition is matched, the given action determines which candidates are returned. Calling Bond.start registers Bond's default completions. For example, the following completion is triggered when the text to complete starts with a letter preceded by “::”; the search space is scoped to Object.constants.

complete(:prefix=>'::', :anywhere=>'[A-Z][^:\.\(]*') {|e| Object.constants }

To add tag autocompletion whenever we start a new string, we could use the following:

include Bond::Search # provides methods to search lists

TAG_NAMES = ActsAsTaggableOn::Tag.pluck(:name) # load tag names in memory

Bond.complete(:name=>:tags, :prefix=>'"', :anywhere=>'([A-Z][^,]*)') {|e|
  tag = e.matched[2]
  normal_search(tag, TAG_NAMES)
}

Boom! Now when we autocomplete with some text inside an open double-quote, matching tags from the database appear on the console.

irb(main)> "Face[TAB]
Face++                     Facebook Graph             FaceCash                   Facebook Graph API         FaceDetection
Facebook                   Facebook Opengraph         Facelets
Facebook Ads               Facebook Real-time Updates
Facebook Chat              Facebook SDK               Facetly
Facebook Credits           Facebook Social Plugins
irb(main)> "Facebook", "Twit[TAB]
Twitcher          TwitLonger        Twitter           Twitter Streaming Twitxr
TwitchTV          TwitPic           Twitter API       TwitterBrite
TwitDoc           TwitrPix          Twitter Bootstrap TwitterCounter
Twitgoo           Twitscoop         Twitter Grader    Twittervision
Twitlbl           TwitSprout        Twitter Oauth     Twitvid

Even though we ended up leveraging an existing gem, digging into the Ruby standard library source code proved to be a useful exercise, revealing some simple ways to hook into features easily taken for granted.

Feb 5


Automatic Backups to Amazon S3 are Easy

Push important files to the cloud with s3cmd and cron

You have good reason to back up your files. Amazon S3 is a cost-effective storage option. While it doesn't take the place of a dedicated drive that you own, it can be useful for redundancy nonetheless. With a few easy command-line steps (plus some prerequisites), you can set up your machine to automate backups to S3 in no time.


cron is pretty standard on unix-based systems. As of this writing, installing s3cmd should be straightforward:

# Mac users
$ brew install s3cmd

# Linux
$ yum install s3cmd
# or
$ apt-get install s3cmd


  • gpg: open-source encryption program (used later to encrypt your backups)


First you'll need to configure s3cmd: s3cmd --configure. Have your Amazon access key and secret key at the ready.

If you plan to store sensitive data on S3, enter the path to your gpg executable; s3cmd will encrypt your data before transferring it from your machine to S3. It also decrypts when downloading to your machine. Keep in mind, encrypted files won't be readable by others with direct access to your S3 bucket.

Here's a sample result:

$ s3cmd --configure

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3
Access Key: xxxxxxxxxxxxxxxxxxxx
Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: xxxxxxxxxx
Path to GPG program: /usr/local/bin/gpg

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: Yes

New settings:
  Access Key: xxxxxxxxxxxxxxxxxxxx
  Secret Key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
  Encryption password: xxxxxxxxxx
  Path to GPG program: /usr/local/bin/gpg
  Use HTTPS protocol: True
  HTTP Proxy server name:
  HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] Y
Please wait...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)

Save settings? [y/N] y
Configuration saved to '$HOME/.s3cfg'


Now all you need is a file to backup and an S3 bucket to store it.

Let's say you're a web developer like me and you want to back up your MySQL or Postgres development data. First, generate the backup file (you may need to add database credentials as command-line options, of course):

# mysql
$ mysqldump my_app_development > backup-`date +%Y-%m-%d`.sql

# or postgres
$ pg_dump my_app_development > backup-`date +%Y-%m-%d`.sql

You can use s3cmd to create a bucket. This is essentially a top-level directory in your S3 account. Since bucket names must be unique to all S3 users, you won't be able to call it something like “backups”. It's helpful to use a prefix like your email or handle.

Creates an S3 bucket called 'myname-backups':

$ s3cmd mb s3://myname-backups

Now you're ready to deliver. Encrypt and send your sql dump file to your new S3 bucket:

$ s3cmd put backup-2014-02-01.sql s3://myname-backups/backup-2014-02-01.sql --encrypt

You can verify it's in the bucket:

$ s3cmd ls s3://myname-backups/
2014-02-01 22:32   1109702   s3://myname-backups/backup-2014-02-01.sql

And retrieve it (with automatic decryption when performed on your machine):

$ s3cmd get s3://myname-backups/backup-2014-02-01.sql

s3cmd supports a wide range of configuration options beyond those entered during the setup phase. Once set, your global configuration is editable in your .s3cfg file, typically saved in your home directory. You can also set options at the command line.


Backing up is good, but automatic, recurring backups are even better; like saving money, it's more likely to happen when you make a computer do it for you.

Let's add a cron task:

#!/usr/bin/env bash

TIMESTAMP=$(date +%Y-%m-%d)
TEMP_FILE="/tmp/backup-$TIMESTAMP.sql"
S3_FILE="s3://myname-backups/backup-$TIMESTAMP.sql"

pg_dump directory_development > "$TEMP_FILE"
s3cmd put "$TEMP_FILE" "$S3_FILE" --encrypt

Save this in a directory for your local scripts, like $HOME/bin/, and add execute permissions with chmod +x ~/bin/

To edit your crontab, run crontab -e, and set the script to run every day at 10PM:

# Backup database to S3 daily
0 22 * * * /Users/myname/bin/

Easy, right?

Feb 1


Featured Post for Speed Awareness Month

Check out my contribution to Speed Awareness Month, an effort to help make the web a faster place. I share tips for improving delivery of static assets, like images, stylesheets, and javascript files, by serving them from a cookie-free domain to reduce bandwidth and by leveraging CDN pull zones to improve delivery times.

Aug 26


Configuring Rack Test Driver in Capybara 2

Though I don't recommend excessive redirects, sometimes you need more than 5 in your Capybara specs; this is Capybara's default redirect limit. When you exceed it, you get the dreaded Capybara::InfiniteRedirectError.

In Capybara 2.0+, this limit is configurable:

Capybara.register_driver :rack_test do |app|
  Capybara::RackTest::Driver.new(app,
    redirect_limit: 15,
    follow_redirects: true,
    respect_data_method: true)
end

Register a new instance of the rack test driver with options, as shown above. If you're on Rails, it may be necessary to set :respect_data_method to true; this instructs capybara to simulate the request method specified via data-method attributes in your links. With Rails, an extension like rails/jquery-ujs allows you to enable additional request methods via unobtrusive javascript in real browsers. This setting currently defaults to false in the RackTest driver; one of the primary things capybara/rails does is set this option. So… you may be surprised if you omit it and suddenly get missing route exceptions in your specs.

The best long term solution for you and your users is to figure out how to reduce or eliminate unnecessary redirects. Playing with the redirect limit in your test environment may be a good way to identify potential problem areas of your app.

Mar 26
