Deploying Go to AppEngine on Codeship.io

I ran into a bunch of trouble over the past few days trying to get codeship.io to deploy a Go app I was playing around with. To save you some debugging time: the appcfg.py Codeship uses for deploying App Engine apps (at least the Go app I was using) is incorrectly coming from the Python bundle of GAE utilities, not the Go bundle. This can result in unexpected dependency errors like:

 --- begin server output ---
 Compile failed:
 2014/10/13 21:59:55 go-app-builder: build timing: 1×6g (38ms total), 0×gopack (0 total), 0×6l (0 total)
 2014/10/13 21:59:55 go-app-builder: failed running 6g: exit status 1
 main.go:8: can't find import: "github.com/gorilla/mux"
 --- end server output ---
 04:59 AM Rolling back the update.
 Error 422: --- begin server output ---
 --- end server output ---

The fix here is pretty easy: add a line like export PATH=/home/rof/appengine/go_appengine:$PATH to your Setup Commands via the Settings page. If you ssh into your debug box (which is a really cool feature) you’ll see that $PATH lists the python_appengine folder first, which means the appcfg.py from that folder takes precedence over the one in the Go bundle that you actually want.
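If you want to double-check that the change took effect, a quick sanity check you can run from the debug box (just a sketch, nothing Codeship-specific) is to walk $PATH and see which appcfg.py wins:

import os

# Print the first appcfg.py found on $PATH; this is the one that will run.
for directory in os.environ['PATH'].split(os.pathsep):
  candidate = os.path.join(directory, 'appcfg.py')
  if os.path.exists(candidate):
    print(candidate)  # should now point into go_appengine, not python_appengine
    break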

Overall, the UI that Codeship provides is really nice and I liked the thought of not having to configure my deployment commands, but in practice that didn’t work out very well. It would have been useful if their documentation was a bit more transparent about what goes into the “Updating your Google App Engine application” step. Now to sort out why Codeship is trying to healthcheck the non-existent root URL of my application…

Workflow for Developing Custom Elements w/ Polymer

I’ve recently spent a bit of time contributing to the <google-map> element, which leverages Polymer to help developers quickly integrate Google Maps into a website without having to jump through all the hoops of learning the V3 JavaScript API.

One of the main challenges I faced when getting started contributing to the element was trying to figure out the environment and workflow for development.  I’m used to working with Ruby on Rails, where I have my trusty ./script/rails s command or a Makefile to build an executable, but these custom elements are just a collection of HTML, JS, and CSS files loosely organized in a directory with some dependency management stuff.  Here’s my quick guide to get started developing the google-map element, or really any custom element with Polymer.

  1. Make a new directory to contain your Polymer development:  mkdir polymer-dev; cd polymer-dev
  2. Clone the repo you want into that new directory.  If you’ve forked the repo, you’ll probably want to git clone your copy here: git clone https://github.com/PolymerLabs/google-map.git
  3. Head into the cloned repo and create a .bowerrc file with the following contents:
    {
      "directory": "../"
    }
  4. Use bower to install all the dependencies specified in the element: bower install
  5. Head out of the custom element’s directory back into the development space: cd ..
  6. Start a static web server.  I use the default Python server, but you can use anything that serves static files: python -m SimpleHTTPServer
  7. Presto!  Head to http://localhost:8000/google-map/demo.html to enjoy the element.

By default, Polymer elements seem to reference external dependencies as living just above the element, so google-map.html is looking for polymer via a path like “../polymer/polymer.html”.  The .bowerrc file that we set up tells bower to install all the dependencies one level higher, which lets everything resolve correctly.

If you’re making changes across multiple elements / resources, you can always manually remove a dependency that bower installed in your polymer-dev directory and replace it with a git clone of your own fork to start making changes.  As an example, if I’m making a change that straddles both google-map and google-apis, I replace the default google-apis that bower install pulls for me with a clone of my own fork.

Google Earth JS API Asynchronous Loading

Unlike a lot of the other Google Maps APIs, the Google Earth JS API doesn’t presently have the ability to load itself asynchronously.  There’s no callback parameter to specify a function that gets called when it’s finished loading and initializing, which forces most people to load it in <head> every time a page loads.  If you’re only showing the 3D globe in response to some user interaction or other non-default experience, you end up loading a bunch of JavaScript that might never get used (Google Maps for Business customers also incur a page view!).

I pulled together some simple JavaScript which loads the Earth API on demand, letting you specify a success and error callback so you can start drawing your 3D experience when it finishes.  You can find the code here: https://github.com/bamnet/map_sandbox/tree/master/earthAsync.

If you’re curious, the code polls every 20ms to check whether the JavaScript components like google.earth are available.  When they are, your success callback runs; if they don’t become available within a certain amount of time (2 seconds), the error callback runs so you can try again or wait for your users to be on a faster connection.
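The actual helper is JavaScript (see the repo above), but the underlying idea is just polling with a deadline.  Here’s a rough sketch of that pattern in Python for illustration, with is_ready standing in for the check that google.earth exists; none of this is lifted from the repo itself.

import time

def poll_until_ready(is_ready, on_success, on_error, interval=0.02, timeout=2.0):
  """Call is_ready() every `interval` seconds until it returns True or
  `timeout` seconds pass, then run the matching callback."""
  deadline = time.time() + timeout
  while time.time() < deadline:
    if is_ready():
      on_success()
      return
    time.sleep(interval)
  on_error()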

Testing with Mox

Sometimes I avoid learning new things because I’m lazy, pressed for time, or for some other reason couldn’t be bothered to figure them out.  I write a lot of tests these days, but I’ve been putting off figuring out how to use Mox because it was usually just as fast for me to roll my own solution, and because the documentation seemed like it was written for folks who already know what they are doing and are just looking for the answer to the “how” question.  Let me explain Mox as I understand it and give some examples of how to use it for testing applications that use web services or make remote HTTP calls.

Let’s say you’ve got a Python class that looks something like this:

"""Find locations."""

__author__ = 'bmichalski@gmail.com (Brian Michalski)'

import json
import urllib

class LocationFinder(object):
  """Find the geographic location of addresses."""

  def __init__(self, urlfetch):
    """Initialize a LocationFinder.

    Args:
      urlfetch: Backend to use when fetching a URL.
        Should return a file-like object in response to the urlopen method.
    """
    self.urlfetcher = urlfetch

  def find(self, address=''):
    """Find the latitude and longitude of an address.

    Args:
      address:  String describing the location to lookup.

    Returns:
      Tuple with (latitude, longitude).
    """
    base_url = 'https://maps.google.com/maps/api/geocode/json'
    params = urllib.urlencode({
      'sensor': 'false',
      'address': address
    })
    url = '%s?%s' % (base_url, params)

    result = self.urlfetcher.urlopen(url)
    data = json.loads(result.read())
    location = data['results'][0]['geometry']['location']
    return (location['lat'], location['lng'])

The code is pretty simple; you can run it with something like:

finder = LocationFinder(urllib)
print finder.find('1600 Amphitheatre Parkway, Mountain View, CA')

What’s important is that LocationFinder takes urllib as an argument. This is kind of a poor example because urllib isn’t another class that really needs mocking, but if you were developing on App Engine or another environment where outbound connections aren’t as straightforward, you could pass in an instance of your outbound connection library.
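For instance, on App Engine you could wrap the urlfetch service in a tiny adapter that exposes the urlopen method LocationFinder expects.  The UrlfetchAdapter below is a hypothetical sketch (it assumes the LocationFinder class from above), not something from an existing library:

import StringIO

from google.appengine.api import urlfetch

class UrlfetchAdapter(object):
  """Hypothetical adapter exposing a urlopen()-style interface on top of
  App Engine's urlfetch service."""

  def urlopen(self, url):
    result = urlfetch.fetch(url)
    return StringIO.StringIO(result.content)  # file-like, as find() expects

finder = LocationFinder(UrlfetchAdapter())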

For demonstrative purposes, let’s pretend one of a few things is happening: (1) we can’t get an outbound internet connection to actually test against Google, (2) Google is too slow to test against, or (3) the service we’re testing against requires a complicated authentication handshake beforehand. None of these three cases are actually at play here on my laptop, but you could imagine wanting to isolate your testing from Google in the event that service goes down or is temporarily unavailable to you.

Mox to the rescue. Using Mox, we can make a fake urllib which, by default, doesn’t know anything about the existing urllib. Since we only call the urlopen function and don’t care about any other externals, all we have to do is define that method on our fake urllib and tell it what to return when it’s called. I find the syntax a bit strange, but to define the method you just call it, passing the expected values (or matchers to broadly match your expected values), and then add .AndReturn(return value here) to wire up its return. When urllib.urlopen is called with the parameters you’ve specified it will return the value you’ve stored; otherwise you’ll get an error saying that the expected parameters don’t match what it’s actually being called with, or that the calls you recorded don’t match the calls that actually happened. Speaking of examples, here’s how I could quickly test the code above:

"""Testing the LocationFinder."""

__author__ = 'bmichalski@gmail.com (Brian Michalski)'

import location_finder
import mox
import StringIO
import urllib
import unittest

class TestLocationFinder(unittest.TestCase):

  def setUp(self):
    self.mox = mox.Mox()

  def tearDown(self):
    self.mox.UnsetStubs()

  def testFinder(self):
    fetch_backend = self.mox.CreateMock(urllib)
    fake_data = StringIO.StringIO((
      '{"results":[{"geometry":{"location":'
      '{"lat":37.42114440,"lng":-122.08530320}}}],'
      '"status":"OK"}'
    ))
    fetch_backend.urlopen(mox.StrContains('Amphitheatre')).AndReturn(fake_data)
    self.mox.ReplayAll()

    finder = location_finder.LocationFinder(fetch_backend)
    result = finder.find('1600 Amphitheatre Parkway, Mountain View, CA')
    self.assertEqual(result[0], 37.42114440)
    self.mox.VerifyAll()

if __name__ == '__main__':
    unittest.main()

Since urlopen returns a file-like object, I use a StringIO object and hardcode some output. I could have saved the result verbatim from Google in a file and returned that instead. In summary, testFinder breaks down into two halves: the first half creates a fake urllib and tells it how to respond to the one method we call, and the second half loads the LocationFinder using the fake backend and verifies the calls worked as expected.
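If you’d rather replay a response saved verbatim from Google, the recording line is the only thing that changes.  Something along these lines (the testdata/geocode.json path is just a made-up example):

# Replay a canned response saved to disk instead of an inline string.
fake_data = open('testdata/geocode.json')
fetch_backend.urlopen(mox.StrContains('Amphitheatre')).AndReturn(fake_data)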

My old fashioned technique would have just been to write something like:

class mockurllib(object):
  def urlopen(self, url):
    return StringIO.StringIO('...some canned response...')

which isn’t too bad when you’re testing just one function like I am above, but if you’re testing different calls to different backends with different responses it can get a bit verbose and messy. I’m sure there’s room to improve my current understanding; maybe I’ll pick up some more handy testing tricks later.

The one thing I dislike about Mox is the need to include urllib at all in the test (or import it, in Python’s case). I think there are ways to mock it out in a more generic fashion, but that feels like it might be getting sloppy. Since urllib is being imported it could still run a potentially slow initialization sequence; that isn’t applicable in this specific case, but it’s certainly something to watch out for.
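For the record, the “more generic fashion” I have in mind is Mox’s StubOutWithMock, which swaps out an attribute on the real module in place instead of handing a fake object to your code.  A rough sketch of how that might look as another method on the test class above (treat it as a starting point, not gospel):

  def testFinderWithStub(self):
    # Replace the real urllib.urlopen with a mock for the duration of the test;
    # tearDown's UnsetStubs() puts the original back.
    self.mox.StubOutWithMock(urllib, 'urlopen')
    fake_data = StringIO.StringIO(
        '{"results":[{"geometry":{"location":'
        '{"lat":37.42114440,"lng":-122.08530320}}}],"status":"OK"}')
    urllib.urlopen(mox.StrContains('Amphitheatre')).AndReturn(fake_data)
    self.mox.ReplayAll()

    finder = location_finder.LocationFinder(urllib)
    result = finder.find('1600 Amphitheatre Parkway, Mountain View, CA')
    self.assertEqual(result[0], 37.42114440)
    self.mox.VerifyAll()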

Developing Offline

I found myself on a plane a few days ago and was hoping to do some work on a few of my Ruby on Rails projects, primarily some polishing of the Community Mapping project I’m launching later this week.  Here are a few tips / tricks for developing in Ruby on Rails without internet access:

  1. Clone / Pull / Update the code for your application locally.  I do almost all of my development on remote servers so it’s rare I have the latest of anything on my hard drive.  git clone / git pull is a must to make this happen.  If you don’t have your SCM tool installed (like git, svn, hg, etc.) you need to do this ASAP.
  2. Bundle.  Bundler helps maintain the dependencies in your application’s plugins / libraries, but to do that it usually needs to download libraries from the internet unless you have them installed locally.  If you’re short on time (i.e. waiting to board the airplane) I would run bundle install from the app that has the most libraries associated with it.  Bundler will reuse things that are already installed, and if you’re lucky many of your applications share common libraries.  The more wifi time you have, the more applications you should bundle before you try to do it offline.
  3. Try and find any useful wiki / documentation pages that aren’t generated from source code.  These pages are likely to include examples of implementations and features not associated with a particular function.  In my case, I know there is a wiki page on GitHub that describes a “best approach” to the problem I’m having right now, but I can’t get to it on the airplane.  I tend to have the most trouble with jQuery-based documentation… whenever I need to know the syntax for a function like $.ajax I just google it.  Not possible on an airplane.  Instead of waiting until I land to quickly fire up wifi before dashing to my connecting flight (that’s my current plan), I could have been smart and downloaded the documentation first.  http://www.jqapi.com/ or http://docs.jquery.com/Alternative_Resources may be worth exploring.
  4. Don’t worry about the framework / gem documentation, at least not the function-by-function style documentation that is generated automatically.  You can regenerate it on your own if you need to.  The Ruby on Rails documentation can be generated by running `rake doc:rails` from your application directory.  You’ll find the output in your app’s doc/api directory.  If you need documentation for a gem, your system might already have it.  Run `gem server` to start a server with information about your gems.  If the rdoc link isn’t working for the gem you’re interested in, fear not: most of the time you can generate it using `gem rdoc gemname`.  I needed the documentation for CanCan so I ran `gem rdoc cancan` and presto, the server was able to point me to some moderately useful information.
  5. Hack it if you have to.  If you forgot step 3 and step 4 didn’t help, you can probably write some really sloppy code to do what you’re trying to do.  If you can’t (or don’t want to) write some junky code, perhaps you can simulate it.  For example, I don’t know the exact call I need to figure out whether I want to give the user access or not, but knowing that it will return true or false lets me very easily simulate what will happen in the rest of my application.
  6. Write lots of comments.  You’re flying in an airplane.  For all you know, the baby crying behind you could be affecting your normal coding practices, and it’s not going to be very easy to get back into the same mindset again, so you should document what you’re doing extensively.  This applies extra if you have to use step 5.

Best of luck with your offline development, and safe travels.

Testing 1… 2… 3…

Whew, I think this is finally working.

Over the past few years I made the mistake of creating unique blogs for each project I had been working on.  It seems like a great way to segment things, create friendly URLs, and not have to deal with old WordPress installations, but I decided that it wasn’t a very sustainable plan.  On the server end of things I ended up with 4-5 different WordPress installs, each with a separate database, apache config, etc.: one huge mess that makes moving servers much slower than I’d like it to be.  When a project “finished” (aka I got busy with something else) I would end up with a dusty blog sitting out there on the internet somewhere.

My new plan is to use one blog for all my projects, using categories to separate posts into their respective projects.  Luckily, WordPress lets you generate RSS feeds based on categories so I don’t have to do any magic to keep separate RSS feeds working.  Also, having just one blog to maintain should be easier than keeping 4-5 different things up and running.

Before this post I imported all my writings from Flagship Geo and Bonsai Video, two open source projects I worked on during the summers of 2010 and 2009, respectively.  Over the next few days/weeks I’ll be adding other projects and notes that didn’t import so easily, so stay tuned for some updates.  Ideally I’ll be posting >1 entry per week, but don’t hold me to it.

So, here’s to giving this a try.