Oct 15, 2014

I ran into a bunch of trouble over the past few days trying to get codeship.io to deploy a Go app I was playing around with. To save you some debugging time: the appcfg.py codeship uses for deploying App Engine apps (at least for the Go app I was using) incorrectly comes from the Python bundle of GAE utilities, not the Go bundle. This can result in unexpected dependency errors like:

--- begin server output ---
Compile failed:
2014/10/13 21:59:55 go-app-builder: build timing: 16g (38ms total), 0gopack (0 total), 06l (0 total)
2014/10/13 21:59:55 go-app-builder: failed running 6g: exit status 1
main.go:8: can't find import: "github.com/gorilla/mux"
--- end server output ---
04:59 AM Rolling back the update.
Error 422: --- begin server output ---
--- end server output ---

The fix here is pretty easy. Add a line like export PATH=/home/rof/appengine/go_appengine:$PATH to your Setup Commands via the Settings page. If you ssh into your debug box (which is a really cool feature) you’ll see that $PATH lists the python_appengine folder first, which means the appcfg.py from that folder takes precedence over any others, including the copy in the Go bundle that is better suited for Go apps.
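To see why prepending to $PATH fixes things, here’s a tiny sandbox (the /tmp directory names are made-up stand-ins for the codeship box’s layout) demonstrating that the first matching directory in $PATH wins:

```shell
# Two fake bundles, each shipping its own appcfg.py.
mkdir -p /tmp/pathdemo/python_appengine /tmp/pathdemo/go_appengine
printf '#!/bin/sh\necho python-bundle\n' > /tmp/pathdemo/python_appengine/appcfg.py
printf '#!/bin/sh\necho go-bundle\n' > /tmp/pathdemo/go_appengine/appcfg.py
chmod +x /tmp/pathdemo/python_appengine/appcfg.py /tmp/pathdemo/go_appengine/appcfg.py

# Python bundle listed first: its appcfg.py shadows the Go one.
env PATH="/tmp/pathdemo/python_appengine:/tmp/pathdemo/go_appengine:$PATH" appcfg.py

# Prepending the Go bundle (what the Setup Command does) flips the winner.
env PATH="/tmp/pathdemo/go_appengine:/tmp/pathdemo/python_appengine:$PATH" appcfg.py
```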

Overall, the UI that codeship provides is really nice, and I liked the thought of not having to configure my deployment commands, but in practice that didn’t work out very well. It would have been useful if their documentation were a bit more transparent about what goes into the “Updating your Google App Engine application” step. Now to sort out why codeship is trying to healthcheck the non-existent root URL of my application…

Jun 15, 2014

I’ve recently spent a bit of time contributing to the <google-map> element, which leverages Polymer to help developers quickly integrate Google Maps into a website without having to jump through all the hoops of learning the V3 JavaScript API.

One of the main challenges I faced when getting started contributing to the element was figuring out the environment and workflow for development.  I’m used to working with Ruby on Rails, where I have my trusty ./script/rails s command or a Makefile to build an executable, but these custom elements are just a collection of HTML, JS, and CSS files loosely organized in a directory with some dependency management.  Here’s my quick guide to get started developing the google-map element, or really any custom element built with Polymer.

  1. Make a new directory to contain your Polymer development:  mkdir polymer-dev; cd polymer-dev
  2. Clone the repo you want into that new directory.  If you’ve forked the repo, you’ll probably want to git clone your copy here: git clone https://github.com/PolymerLabs/google-map.git
  3. Head into the cloned repo and create a .bowerrc file with the following contents:
    {"directory": "../"}
  4. Use bower to install all the dependencies specified in the element: bower install
  5. Head out of the custom element’s directory back into the development space: cd ..
  6. Start a static web server.  I use the default Python server, but you can use anything that serves static files: python -m SimpleHTTPServer
  7. Presto!  Head to http://localhost:8000/google-map/demo.html to enjoy the element.

By default, Polymer elements reference external dependencies as living just above the element, so google-map.html looks for Polymer via a path like “../polymer/polymer.html”.  The .bowerrc file that we set up tells bower to install all the dependencies one level higher, which lets everything resolve correctly.
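For reference, the layout you end up with looks roughly like this (the sibling element names are illustrative, not an exact listing):

```shell
# polymer-dev/
# ├── polymer/              # installed by bower, one level above the element
# ├── core-component-page/  # another bower-installed sibling
# └── google-map/           # your clone
#     ├── .bowerrc          # {"directory": "../"}
#     └── google-map.html   # imports ../polymer/polymer.html
```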

If you’re making changes across multiple elements / resources, you can always manually remove a dependency that bower installed in your polymer-dev directory and replace it with a git clone of your own fork to start making changes.  As an example, if I’m making a change that straddles both google-map and google-apis, I replace the default google-apis that bower install pulls for me with a fork of my own.

Jun 19, 2013

Unlike a lot of the other Google Maps APIs, the Google Earth JS API doesn’t presently have the ability to load itself asynchronously.  There’s no callback parameter to specify a function that gets called when it’s finished loading and initializing, which forces most people to load it in <head> on every page load.  If you’re only showing the 3D globe in response to some user interaction or another non-default experience, you end up loading a bunch of JavaScript that might never get used (Google Maps for Business customers also incur a page view!).

I pulled together some simple JavaScript which loads the Earth API on demand, letting you specify a success and error callback so you can start drawing your 3D experience when it finishes.  You can find the code here: https://github.com/bamnet/map_sandbox/tree/master/earthAsync.

If you’re curious, the code polls every 20ms to see if the JavaScript components like google.earth are available.  When they are, your success code runs; if they don’t become available within a certain amount of time (2 seconds), the error code runs so you can try again or wait for your users to be on a faster connection.
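The core of that polling approach can be sketched like this (the function and parameter names here are my own, not the repo’s exact API):

```javascript
// Poll every `interval` ms until `ready()` returns true, then call
// `onSuccess`; if `timeout` ms pass first, give up and call `onError`.
function pollFor(ready, onSuccess, onError, interval, timeout) {
  var waited = 0;
  var timer = setInterval(function () {
    if (ready()) {
      clearInterval(timer);
      onSuccess();
    } else if ((waited += interval) >= timeout) {
      clearInterval(timer);
      onError();
    }
  }, interval);
}

// e.g. pollFor(function () { return window.google && google.earth; },
//              startEarth, showRetryMessage, 20, 2000);
```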

Apr 22, 2012

Sometimes I avoid learning new things because I’m lazy, pressed for time, or for some other reason can’t be bothered to figure them out.  I write a lot of tests these days, but I’ve been putting off figuring out how to use Mox because it was usually just as fast for me to roll my own solution, and because the documentation seemed like it was written for folks who already know what they’re doing and are just looking for the answer to the how question.  Let me explain Mox as I understand it and give some examples of how to use it for testing applications that use web services or make remote HTTP calls.

Let’s say you’ve got a Python class that looks something like this:

"""Find locations."""

__author__ = 'bmichalski@gmail.com (Brian Michalski)'

import json
import urllib


class LocationFinder(object):
  """Find the geographic location of addresses."""

  def __init__(self, urlfetch):
    """Initialize a LocationFinder.

    Args:
      urlfetch: Backend to use when fetching a URL.
        Should return a file-like object in response to the urlopen method.
    """
    self.urlfetcher = urlfetch

  def find(self, address=''):
    """Find the latitude and longitude of an address.

    Args:
      address: String describing the location to lookup.

    Returns:
      Tuple with (latitude, longitude).
    """
    base_url = 'https://maps.google.com/maps/api/geocode/json'
    params = urllib.urlencode({
      'sensor': 'false',
      'address': address,
    })
    url = '%s?%s' % (base_url, params)
    result = self.urlfetcher.urlopen(url)
    data = json.loads(result.read())
    location = data['results'][0]['geometry']['location']
    return (location['lat'], location['lng'])

The code is pretty simple; you can run it with something as simple as:

finder = LocationFinder(urllib)
print finder.find('1600 Amphitheatre Parkway, Mountain View, CA')

What’s important is that LocationFinder takes urllib as an argument. This is kind of a poor example because urllib isn’t another class that really needs mocking, but if you were developing on App Engine or in other environments where outbound connections aren’t as straightforward, you could pass in an instance of your outbound connection library.

For demonstrative purposes, let’s pretend one of a few things is happening: 1. we can’t get an outbound internet connection to actually test against Google; 2. Google is too slow to test against; or 3. the service we’re testing against requires a complicated authentication handshake beforehand. None of these three cases is actually at play here on my laptop, but you could imagine wanting to isolate your testing from Google in the event that service goes down or is temporarily unavailable to you.

Mox to the rescue. Using Mox, we can make a fake urllib which, by default, doesn’t know anything about the real urllib. Since we only call the urlopen function and don’t care about any other externals, all we have to do is define that method on our fake urllib and tell it what to return when it’s called. I find the syntax a bit strange: to define the method you just call it, passing the expected values (or matchers to broadly match your expected values), and then add .AndReturn(return value here) to wire up its return. When urllib.urlopen is called with the parameters you’ve specified, it returns the value you’ve stored; otherwise you’ll get an error saying that the expected parameters don’t match what it’s actually being called with, or that the expected return doesn’t match the actual return (putting the return from a void call into a variable, for example). Speaking of examples, here’s how I could quickly test the code above:

"""Testing the LocationFinder."""

__author__ = 'bmichalski@gmail.com (Brian Michalski)'

import location_finder
import mox
import StringIO
import urllib
import unittest


class TestLocationFinder(unittest.TestCase):
  def setUp(self):
    self.mox = mox.Mox()

  def tearDown(self):
    self.mox.UnsetStubs()

  def testFinder(self):
    fetch_backend = self.mox.CreateMock(urllib)
    # Hardcoded stand-in for the geocoder's JSON response.
    fake_data = StringIO.StringIO(
        '{"results": [{"geometry": {"location":'
        ' {"lat": 37.42114440, "lng": -122.08560}}}]}')
    fetch_backend.urlopen(mox.StrContains('address=1600')).AndReturn(fake_data)
    self.mox.ReplayAll()

    finder = location_finder.LocationFinder(fetch_backend)
    result = finder.find('1600 Amphitheatre Parkway, Mountain View, CA')
    self.assertEqual(result[0], 37.42114440)
    self.mox.VerifyAll()


if __name__ == '__main__':
  unittest.main()

Since urlopen returns a file-like object, I use a StringIO object and hardcode some output. I could have saved the result verbatim from Google in a file and returned that instead. In summary, testFinder breaks down into two halves: the first half creates a fake urllib and tells it how to respond to the one method, and the second half loads the LocationFinder with the fake backend and verifies the calls worked as expected.
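Here’s a sketch of that fixture-file variation (Python 3 here for brevity; the path and JSON payload are made up for illustration):

```python
import json
import os
import tempfile

# Capture a response to disk once (here we fabricate one), then replay it.
fixture = os.path.join(tempfile.gettempdir(), 'geocode_fixture.json')
with open(fixture, 'w') as f:
    json.dump({'results': [{'geometry': {'location': {'lat': 37.42, 'lng': -122.08}}}]}, f)

def fake_urlopen(url):
    # An open file is already file-like, so no StringIO wrapper is needed.
    return open(fixture)

data = json.loads(fake_urlopen('ignored://url').read())
location = data['results'][0]['geometry']['location']
print(location['lat'], location['lng'])
```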

My old fashioned technique would have just been to write something like:

class mockurllib(object):
  def urlopen(self, url):
    return something

which isn’t too bad when you’re testing just one function like I am above, but if you’re testing different calls to different backends with different responses it can get a bit verbose and messy. I’m sure there’s room to improve my current understanding; maybe I’ll pick up some more handy testing tricks later.

The one thing I dislike about Mox is the need to import urllib at all in the test. I think there are ways to mock it out in a more generic fashion, but that feels like it might be getting sloppy. Since urllib is being imported, it could still run a potentially slow initialization sequence; not applicable in this specific case, but certainly something to watch out for.
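For what it’s worth, the standalone mock library (now unittest.mock in Python 3) is one of those more generic options: the fake is built dynamically, so the test never has to import urllib at all. A rough sketch, with made-up values:

```python
import io
import json
from unittest import mock

# Build a dynamic fake; no urllib import required anywhere in the test.
backend = mock.Mock()
backend.urlopen.return_value = io.StringIO(json.dumps(
    {'results': [{'geometry': {'location': {'lat': 37.42, 'lng': -122.08}}}]}))

# Anything calling backend.urlopen(url).read() now sees the canned JSON.
data = json.loads(backend.urlopen('https://example.com/geocode').read())
location = data['results'][0]['geometry']['location']
backend.urlopen.assert_called_once_with('https://example.com/geocode')
```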

Dec 05, 2011

Concerto 2 is going to be a technical marvel if I have anything to say about it.  We’re going to use SVG where possible as an alternative to server-side RMagick work when we need to generate simple graphics.  In theory this will save a few ms of processing time (returning text beats processing an image file) and reduce our cache size if enough browsers support SVG images.  We’ll still fall back to PNGs / JPEGs for people with outdated browsers, and you probably should too; there are a lot of folks out there without SVG support… optimistically they’ll be the minority.

One of the things Concerto does is return a small preview of a template when you’re modifying it.  A template is essentially a high-res background image (often 1920×1080) and an accompanying set of transparent boxes (positions) that content gets displayed in, and the preview is really important for seeing where those invisible boxes are.  This turned out to be really easy to implement in <svg> using the <rect> element.  Just like I’d draw div elements in a screen’s DOM, I can draw rectangles positioned relative to the SVG image to recreate the exact same layout in a more graphical fashion.  The background image was quickly added using SVG’s image element, setting its width and height to 100% and the x and y coords to 0,0.

<image id="background" xlink:href="/media/19" height="100%" width="100%" x="0" y="0" />

I found that this almost scaled the template appropriately.  By default the aspect ratio was constrained to the image’s aspect ratio, so a width and height of 100% really meant width = 100% or height = 100%, depending on which was smaller.  The quick fix for this small stretching issue was to add preserveAspectRatio="none".

<image id="background" xlink:href="/media/19" height="100%" width="100%" x="0" y="0"
       preserveAspectRatio="none" />

This was working wonderfully, and with my rectangles overlaying the positions it was a near pixel perfect replica of the code that I had already used to generate the PNG / JPEG images.  The only subtle differences were in the text / font rendering, and I don’t really care too much about that.

One problem left: resizing the svg / resizing the window.

Despite using relative sizes with percentages, the image wasn’t getting resized when the window was resized like all the other objects were.  This meant you needed to refresh the page to see the correct box placement over the image if you ever dragged things around, and that’s not a very desirable outcome.  I didn’t have much luck figuring out why this was happening, but it seems like the image object is a special case in SVG-land that isn’t handled the same as drawable graphic elements like text or rectangles.

To work around this I ended up writing some very short ECMAScript (aka JavaScript) and embedding it in the SVG.  I’ve tested the resulting behavior in Chrome and Safari and will assume it works in Firefox too.  Every time the SVG gets resized the onresize event is triggered, just like in HTML-land, and we fire a callback to resize the image element… really just reminding it to fill the screen.

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink" height="100%" width="100%"
     onresize="resize(evt)">
  <script type="text/ecmascript"><![CDATA[
    function resize(evt){
      var background = document.getElementById('background');
      background.setAttribute("width", "100%");
      background.setAttribute("height", "100%");
    }
  ]]></script>
  <image id="background" xlink:href="/media/19" height="100%" width="100%" x="0" y="0"
         preserveAspectRatio="none" />
  <rect x="2.5%" y="2.6%"
        width="56.7%" height="77.0%"
        style="fill:grey; stroke:none; fill-opacity:0.6;"
        id="position_25" />
  <text y="45.1%" x="30.85%"
        style="fill:black; stroke:black;" font-size="300%" text-anchor="middle">…</text>
  <rect x="22.1%" y="88.5%"
        width="75.4%" height="10.0%"
        style="fill:grey; stroke:none; fill-opacity:0.6;"
        id="position_26" />
  <text y="97.5%" x="59.8%"
        style="fill:black; stroke:black;" font-size="300%" text-anchor="middle">…</text>
</svg>

If you’re particularly curious, you can see this code in Concerto 2 here or by appending “.svg” to a template preview link in a Concerto 2 install to force the svg image.

Jul 17, 2011

In some of my spare time last weekend I was playing around with Google Closure, a JavaScript library that I think is radically different from the more traditional libraries I’m used to like jQuery or Prototype.  Google Closure isn’t really about enhancing your existing application with some snazzy JavaScript effects, although I guess it can do that too; to me it’s much more an application design framework than a quick and dirty tool to make some divs fade in and out easily.

Writing correct JavaScript wasn’t too hard to get down; the examples provide good starting points for learning their object-oriented syntax.  The trouble comes when you push much beyond the simple examples.  Unlike jQuery, there isn’t an example published for every function, and the internet isn’t exactly swarming with information to help you sort things out.  I suspect the people who use Closure know what they’re doing, and the people who don’t just hack in jQuery.  I almost gave up and went to jQuery a few times, but I stuck it out.

I wanted to use my browser’s HTML5 geolocation feature in my application so I could track my phone.  This was going to require a callback function, since the navigator.geolocation calls are non-blocking.  In traditional JavaScript this is dead simple: you can just type the function name or put an anonymous function like function(e){ … } right there and it works.  Not so in Closure land, at least not that simply.

Callbacks – In Google Closure, when you want a callback to reference a specific instance of an object you have to bind them together.  So if I want the GPS update to call back to the specific GPS object that’s holding it, my code might look something like this:

example.Gps.prototype.start = function() {
  navigator.geolocation.getCurrentPosition(goog.bind(this.update, this));
};

Which in English: this creates a function ‘start’ for an instance of a Gps object.  When start is called, like myGps.start();, we call the standard HTML5 API for getting the current position.  When the position is ready, it is dispatched to the instance’s update method (i.e. myGps.update(position); is called).  The critical piece is goog.bind(function, context); without it, the update function might be called in the context of the Window or Document or something weird.  Binding forces it to stick with the currently instantiated object.
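To demystify it a bit, goog.bind is essentially the same trick as Function.prototype.bind; a hand-rolled sketch of the behavior (the object and property names are made up):

```javascript
// Minimal re-implementation of the bind idea: return a wrapper that
// always calls fn with `context` as `this`.
function bind(fn, context) {
  return function () {
    return fn.apply(context, arguments);
  };
}

var gps = {
  fixes: 0,
  update: function (position) { this.fixes += 1; }
};

var bound = bind(gps.update, gps);
bound({});  // increments gps.fixes; an unbound call would hit the global object
```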

HTML5 Externs – Google Closure can also compile your code into a super-minified version.  If you use the advanced optimizations flag, I think the compiler will even go through and prune unused execution paths and optimize existing code around variable types.  To tell the compiler what types different variables are, you have to spruce up your code (or litter it with comments, if you’re not in a good mindset) with JSDoc tags annotating everything.  I don’t actually know how this works, but I think the more you annotate the better it can do.  This is really easy when you’re working with numbers, strings, arrays, etc.; the examples in the documentation tell you exactly what to do.

I ran into trouble trying to figure out how to handle the custom objects associated with the navigator.geolocation calls.  The compiler would throw errors and warnings, not understanding what type my var options = {enableHighAccuracy: true} object was, and the documentation on how to reference the external types was weak at best.  Searching around showed me that the compiler had a definition file describing the possible types and returns, and with a little luck I was able to extrapolate that into JSDoc tags that seem to do the trick.

var options = /** @type {GeolocationPositionOptions} */ (
    {enableHighAccuracy: true,
     maximumAge: 1000,
     timeout: 2000});

/**
 * Write a good comment here describing what the update callback
 * does when called.
 * @param {GeolocationPosition} geoposition A position coming from the GPS.
 */
example.Gps.prototype.update = function(geoposition) {
  // Do something here with the geoposition.
};

As you can see, the ‘type’ tends to line up closely with what the folks over at the W3C use to describe the object, which makes sense… it was just quite tough to figure out without any pointers or examples.

If I remember, I’ll post what I learned about Event Triggers soon!

May 04, 2011

I’m trying to integrate testing as much as possible into the development of Concerto 2.  We’re building in Ruby on Rails, so testing is built into the framework and language, the trick is just getting the team on board and writing tests (and not breaking them as we develop).  To help facilitate testing, I setup BigTuna to run a continuous integration server, automatically testing each version that gets pushed to our GitHub repository.

Since people could be using Ruby 1.8.7 or 1.9.2, we use RVM on the testing environment to run the tests under both versions.  If you’re looking to replicate our setup, which updates git submodules, refreshes the bundle, migrates the database, and finally runs the tests, this set of steps might work for you.

In our setup, we have a database.yml.sample located in our project directory (in our case /bigtuna/builds/concerto) which gets copied over to each build.

rvm 1.8.7 exec bundle install --path=%project_dir%/bundle --deployment
cp %project_dir%/database.yml.sample %build_dir%/config/database.yml
git submodule init && git submodule update
rvm 1.8.7 exec env RAILS_ENV=test bundle exec rake db:migrate --trace
rvm 1.8.7 exec env RAILS_ENV=test bundle exec rake

and for our 1.9.2 tests..

rvm 1.9.2 exec bundle install --path=%project_dir%/bundle --deployment
cp %project_dir%/database.yml.sample %build_dir%/config/database.yml
git submodule init && git submodule update
rvm 1.9.2 exec env RAILS_ENV=test bundle exec rake db:migrate --trace
rvm 1.9.2 exec env RAILS_ENV=test bundle exec rake

Getting the rvm portion of the setup working was the hardest bit; getting the right Ruby environment and bundle of gems to the apps was tough.  The above config was written for RVM 1.2.8 and Bundler 1.0.12; otherwise I don’t believe there are any server-specific configuration parameters at play here.

Best of luck if you’re going the continuous integration route, hopefully you can keep your build green!

Mar 13, 2011

I presented a poster about the Community Mapping project at HFOSS 2011; you can find the paper submission here, or you can read a copy of it below.  The tool has launched and is available for anyone to try out here: http://www.mymapper.org.  If you run into any bugs, I suggest reporting them on the GitHub issues page so I can take a look when time permits.

Abstract

Community groups have many interests in generating highly localized maps of events, key locations, and other important markers as part of their respective missions. Sharing similar objectives, community groups stand to benefit from using and sharing similar mapping tools well suited to the model of free open source software. We discuss the development and deployment of a Community Mapping tool initially planned for use in community groups in Troy, New York.

Introduction

The Community Mapping project was initially designed to help community groups in Troy, NY map locations of interest in support of a variety of different projects. Group members need a simple way to mark locations with additional metadata on a map, building overlays in a collaborative fashion.

To accomplish this goal, we built a web application based on Flagship Geo [6], a Ruby on Rails geographic data framework. The current focus of the project is on extending the application to other community groups as a hosted cloud application, providing a free software-as-a-service-style offering.

Community Collaboration

Initial design discussions with a representative from a community group (conveniently, a professor at Rensselaer Polytechnic Institute) provided initial insight into the need for an application to quickly and easily build community maps for free. Local governments and affiliated organizations invest in software tools and integrated solutions to map data, but many community groups, including those initially seeking this solution, do not have readily available access to commercial mapping products.

Recognizing the limited scope of an independent study, the Community Mapping project focused on providing a basic mapping solution that would quickly meet many of the needs of the local organization without requiring extensive training or time commitments from community volunteers. To accomplish this, the project was designed to be flexible, maximizing the use of general-use, optional fields. The software does not impose any validations or restrictions on what values can be used in the available fields except for the required latitude and longitude references. After an initial draft of the software and user interface was available, community group representatives were given a brief demonstration and asked for feedback.

The resulting conversations provided valuable clues about the different use cases the tool might have and also seemed to inspire additional projects that might be easier to tackle now that a simple mapping program was on the horizon. In addition to discussing the specifics of various projects that might benefit from the tool, the community members expressed a desire for several features, like the ability to print maps (an analog exchange this author regularly overlooks) and the ability to choose which maps are public or private. Continual iteration is underway to develop the identified features, at which point additional comments will be sought from the community groups.

Continual involvement of community members has been a key to developing a solution that would best serve their needs. By personally interacting with the community members planning to use the software, developers can gain extremely valuable insights into the problem the tool may be used to solve and better prioritize development efforts and focus on highly desired features.

Application Architecture

Community Mapping provides each mapping project the ability to plot points on a map using many different layers. Layers are used as a logical grouping of points within each project, and each point typically represents a distinct location or unique event on the map. In many projects using the initial software, the number of layers used on a map is relatively static and small in number, while the points being marked change often.

Layers

Each layer, unique to each project, is identified with a name, description, and graphical icon. These layers can be marked as hidden or visible on the project’s main map or can also be viewed individually (e.g. seeing all the markers assigned to the “public benches” layer). The layer icon is displayed on the map as the graphical marker for each contained point, providing an easy visual clue to associate markers that belong to the same layer.

Points

Points represent distinct units of information on the map and can contain a variety of metadata useful to the specific project. Points are located on a map with a latitude and longitude, which can be looked up by geocoding an address if available. Address geocoding is provided by the Google Maps API [2], and is executed via AJAX when a user is creating a point. Each point is represented with the icon inherited from being assigned to a specific layer. Points are required to have a name and layer assigned but optionally can have a description, datetime, and address used for geocoding. Further work may add the ability to dynamically create additional data fields for points within a project.

Projects

To support multiple independent maps on a single instance of the software, layers and points are separated into projects. Each project represents one map, with an independent collection of layers and points. Projects include a reference to a specific geographic location through an assigned latitude and longitude to automatically center the map and provide a default point of reference when creating new points. While projects are public by default, they can also be set as private so only authenticated users with access can view the map. Each project can generate a static image of the map, suitable for use in presentations or printed material. In addition, projects can be exported as KML files [5] for offline display in software like Google Earth or for backup purposes. Future improvements may include the ability to import a KML file into a new project.

Software as a Service Offering

As the initial application was being developed, it became clear through the use cases mentioned by the community groups that this application was not serving the unique needs of a single community. Paired with its development as a Ruby on Rails web application, the Community Mapping project became an ideal candidate to be deployed and hosted in the cloud. Deployment in the cloud can greatly reduce the setup time and initial barrier to entry for organizations, especially those lacking technical expertise.

Given a budget of $0 and the lack of available production space for Ruby on Rails applications at Rensselaer Polytechnic Institute, the project has been hosted on Heroku through their free plan [3]. Static assets, such as the layer icons, are hosted on Amazon S3 [1], producing a monthly bill that can be paid in pocket change (e.g. less than $1).

As the application enters a more public phase, the cloud computing backend provides a great mechanism to scale if other community groups take interest in using the platform for local mapping. Significant use may increase the monthly cost to a noticeable amount, but that is a problem open source software solutions would like to have.

No specific component of the application requires the use of a cloud computing vendor to host the application. The dependencies are all publicly available open source libraries, all of which can be set up locally on a Linux server or equivalent platform.

Technical Challenges

The initial demonstration of the prototype tool got off to a rough start at the community group meeting. Wi-Fi access was not available, and the presentation of the web-based application had to be carried out on the group’s desktop computer running Microsoft Windows XP. During this demonstration in Internet Explorer 6 [4], several previously unseen bugs were exposed. Web application development, while much easier to maintain and standardize across platforms than traditional distribute-and-install desktop applications, is still not a completely standardized platform. Given the unknown resources of community groups, extra effort needs to be taken for the web-based application to be compatible with as many devices as possible, with particular attention paid to older hardware running outdated web browsers.


A final public release of the Community Mapping project is scheduled for December 2010 / January 2011. This initial public offering aims to adequately satisfy the needs of the local community groups that have participated in its development, but the application likely has uses far beyond that. Through a software-as-a-service-style offering, any community group can try out the tool to see what additional value it provides to their organization at no additional expense.

Acknowledgments

The author would like to thank Sean O’Sullivan ’85, RCOS, Dr. Mukkai S. Krishnamoorthy, Dr. Ken Rose, and the Troy Weed and Seed Program.

References

[1] Amazon Simple Storage Service. Retrieved December 17, 2010, from Amazon Web Services: http://aws.amazon.com/s3/.
[2] Google Maps Geocoding. Retrieved December 17, 2010, from Google Maps JavaScript API V3: http://code.google.com/apis/maps/documentation/javascript/services.html#Geocoding.
[3] Heroku Pricing. Retrieved December 18, 2010, from Heroku: http://heroku.com/pricing.
[4] Internet Explorer 6 Death. Retrieved December 18, 2010: http://www.ie6death.com/.
[5] KML Standards, 2008. Retrieved December 10, 2010, from Open Geospatial Consortium: http://www.opengeospatial.org/standards/kml.
[6] Michalski, Brian. Flagship Geo, 2010. Retrieved December 18, 2010, from GitHub: https://github.com/bamnet/flagship_geo/.

Think FOSS, Act Locally: HFOSS in the Local Community, Humanitarian Free and Open Source Software (HFOSS) Symposium, March 9th, 2011. CC: BY-SA. Copyright Brian Michalski, Humanitarian FOSS Project 2011.

Mar 13, 2011

I found myself on a plane a few days ago and was hoping to do some work on a few of my Ruby on Rails projects, primarily some polishing of the Community Mapping project I’m launching later this week.  Here are a few tips / tricks for developing in Ruby on Rails without internet access:

  1. Clone / pull / update the code for your application locally.  I do almost all of my development on remote servers, so it’s rare I have the latest of anything on my hard drive.  git clone / git pull is a must to make this happen.  If you don’t have your SCM tool installed (like git, svn, hg, etc.) you need to do this ASAP.
  2. Bundle. Bundler helps maintain the dependencies in your application’s plugins / libraries, but to do that it usually needs to download libraries from the internet unless you have them installed locally.  If you’re short on time (i.e. waiting to board the airplane) I would run bundle install from the app that has the most libraries associated with it.  Bundler will reuse things that are already installed, and if you’re lucky many of your applications share common libraries.  The more wifi time you have, the more applications you should bundle before you try to do it offline.
  3. Try and find any useful wiki / documentation pages that aren’t generated from source code.  These pages are likely going to include examples of implementations and features not associated with a particularly function.  In my case, I know there is a wiki page on Github that describes a “best approach” to the problem I’m having right now, but I can’t get to that in the airplane.  I tend to have the most trouble with jQuery-based documentation… when ever I need to know the syntax for a function like $.ajax I just google it.  Not possible on an airplane.  Instead of waiting til I land to quickly fire up wifi before dashing to my connecting flight (that’s my current plan), I could have been smart and downloaded the documentation first.  http://www.jqapi.com/ or http://docs.jquery.com/Alternative_Resources may be worth exploring.
  4. Don’t worry about the framework / gem documentation, at least not the function-by-function style documentation that is generated automatically.  You can regenerate it on your own if you need to.  The Ruby on Rails documentation can be generated by running `rake doc:rails` from your application directory.  You’ll find the output in your apps doc/api directory.  If you need documentation for a gem your system might already have it.  Run `gem server` to start a server with information about your gems.  If the rdoc link isn’t working for the gem you’re interested in fear not, you generate it most of the time using `gem rdoc gemname`.  I needed the documentation for CanCan so I ran `gem rdoc cancan` and presto, the server was able to point me to some moderately useful information.
  5. Hack it if you have to.  If you forget step 3 and step 4 didn’t help, you can probably write some really sloppy code to do what you’re trying to.  If you can’t (or don’t want to) write some junky code perhaps you can simulate it.  For example, I don’t know the exact call I need to figure out if I want to give the user access or not, but knowing that it will return true or false lets me very easily simulate what will happen in the rest of my application.
  6. Write lots of comments.  You’re flying in an airplane.  For all you know the baby crying behind you could be effecting your normal coding practices, it’s not going to be very easy to get back in the same mindset again so you should document what you’re doing extensively.  This applies extra if you have to use step 5.
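Tip 5 can be as small as a hard-coded method.  Here’s a hypothetical sketch (the method name is invented for illustration, not the real CanCan API) that fakes an authorization check until the real call can be looked up:

```ruby
# Offline stub for an authorization check (tip 5).  The exact
# CanCan call isn't known without the docs, but it ultimately
# answers true or false, so a hard-coded stand-in lets the rest
# of the app be written and exercised on the plane.
def authorized_to_edit?(user)
  # TODO: replace with the real CanCan `can?` check once back online.
  true
end

puts authorized_to_edit?(nil)  # => prints "true" for everyone, for now
```

Pair it with a loud comment (tip 6) so future-you remembers to rip it out.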

Best of luck with your offline development, and safe travels.

Dec 18, 2010

Most of the Shuttle Tracking project is managed via an open-source repository on Github, providing a great platform for others to check out the project and see the code we’re using to track shuttles here at RPI.  The one caveat is that not all of the code can be released under an open source license.  Shuttle Tracking interfaces with an external data provider responsible for the in-vehicle modules, and their API isn’t public.  We also have a lot of config options specific to RPI that wouldn’t make sense publicly, like references to the CAS config, our Hoptoad instance, and our Google Analytics config.

To help manage these RPI-specific things I commit them to my local ‘RPI’ branch.  This branch doesn’t get pushed to Github (because the world can’t see some of the “secrets”) but it provides version control over things and lets me easily test out the changes in my development copy.  We also use Capistrano for our deployment; it makes it very easy for me to push new code to production and (more importantly) roll back code when things are broken.  The problem with Capistrano, or my understanding of it, is that it doesn’t easily pull code from a less-than-public branch.

So, I wanted to get my RPI-specific changes to the production server, which can only pull from the public ‘master’ branch on Github.  To do this, I added some code to my Capistrano config/deploy.rb file to pull the RPI changes as well.  The following code generates a patch file, sends the patch to the production server, and applies the patch with the RPI changes.

desc "Apply RPI-specific patch"
task :apply_patch, :roles => :app do
  # Diff everything the RPI branch adds on top of master...
  patch_contents = `git diff --no-prefix master..RPI`
  # ...ship it to the release directory and apply it in place.
  put(patch_contents, "#{release_path}/patch", :via => :scp)
  run "cd #{release_path} && patch -N -p0 < #{release_path}/patch"
end

You’ll need to add a hook to call this task, like:

after "deploy:update_code", "deploy:apply_patch"

Presto, now your production code will carry over changes committed to a local branch.  To make sure things don’t get crazy with conflicts, I make a point of checking out the RPI branch and merging master into it before deploying.  This gives me the opportunity to resolve any conflicts that might come up during the patch process before they actually happen.  We can also, pretty easily, see what makes each release (from running cap deploy or the like) specific to RPI by looking at the patch file in each release folder on the server.
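That pre-deploy merge routine can be sketched in a throwaway repo.  This assumes a reasonably recent git (for `git init -b`); the repo and file names are purely illustrative:

```shell
set -e
# Throwaway demo of the pre-deploy routine: merge master into the
# local RPI branch so the patch Capistrano sends applies cleanly.
cd "$(mktemp -d)"
git init -q -b master
git config user.email you@example.com
git config user.name you
echo 'shared code' > app.rb
git add app.rb && git commit -qm 'base'
git branch RPI                      # RPI-specific branch, never pushed
echo 'new public feature' >> app.rb
git add app.rb && git commit -qm 'public change on master'

git checkout -q RPI
git merge -q master                 # resolve any conflicts now, not mid-deploy
# After the merge the branches agree, so the patch below is empty --
# only genuinely RPI-specific commits would show up here.
diff_out=$(git diff --no-prefix master..RPI)
echo "pending RPI patch: ${#diff_out} bytes"
```

Running it right before `cap deploy` means the `patch -N -p0` step on the server only ever sees the intended RPI-specific changes.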