Optimal Holiday Shopping

This year I’ve been running behind with my holiday shopping and need to make one last visit to a bunch of stores to pick up my last-minute gifts. I feel pretty good at mentally planning the most efficient route to visit locations, but with more than 3-4 locations to visit I start second-guessing the order I’ve come up with. With lots of stores to visit, I fired up the Google Maps APIs to figure out the optimal order for my shopping spree.

Locating

First, I just need to figure out where I’m going. I’m going to use the Places API and look up the place ID for each of the stores I need to visit. The Places API includes a Text Search method I can use to search for places using a human-readable string, in this case, the name and approximate city/town of the place. Since I don’t know the exact name of the pet store (it’s either a PetSmart or a Petco), I’ll generically search for “Pet Store” and let the Google Maps magic figure it out for me. To do this, I can just make HTTP requests in my browser to the following URL:

https://maps.googleapis.com/maps/api/place/textsearch/json?query=CVS,%20Secaucus&key=API_KEY_HERE

An HTTP call like that spits out a long JSON response, but we’re only interested in the place_id field I previously mentioned. Here’s an example of the output I got. Finding those fields, I extracted the following Place IDs.

  • CVS, Secaucus => ChIJn5ui2MNXwokRSD3-7K014GI
  • Pet Store, Secaucus => ChIJh6CBAelXwokRJtEap8f2bAw (turns out this is a PetSmart)
  • FedEx, East Rutherford => ChIJdf-POpP4wokRB644UH322Cw
  • Dollar Tree, East Rutherford => ChIJ-562O5P4wokRWcLXUc1mADc
  • Target, North Bergen => ChIJFziVmP33wokRRu0dx8XBfUE
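
If you’d rather not eyeball the JSON by hand, here’s a minimal sketch of pulling the place_id out programmatically, assuming an environment with fetch available (a modern browser or Node); API_KEY_HERE is a placeholder for your own key:

function lookupPlaceId(query) {
  var url = 'https://maps.googleapis.com/maps/api/place/textsearch/json' +
      '?query=' + encodeURIComponent(query) + '&key=API_KEY_HERE';
  return fetch(url)
      .then(function(resp) { return resp.json(); })
      // Grab the name and place_id off the first (best) result.
      .then(function(data) {
        return {name: data.results[0].name, placeId: data.results[0].place_id};
      });
}

lookupPlaceId('Pet Store, Secaucus').then(console.log);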

Optimal Order

Now that I can accurately describe where I need to go, it’s time to figure out the optimal order to visit all those locations. Enter the Directions API. The Directions API accepts not just a start and end point (or origin and destination, to get technical), but also an array of waypoints to visit along the route. Using the optimize:true flag we can ask for these waypoints to be re-ordered in the most efficient manner.

Since my last-minute shopping spree will be a round trip back to my apartment, the origin and destination will be the same. The meat of the query is the waypoints listing, where I list the Place IDs separated with the | character. They also need to be prefixed with place_id: so the Directions API knows these are Place IDs and doesn’t try to geocode them.

You can list the waypoints in any order you want; just remember what order they are in. To keep it simple, I’m using the same order as the list of places above.
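
If you’d rather build the request in code than by hand, assembling the waypoints parameter looks roughly like this sketch (placeIds is just the list from above, in the same order):

// Join the Place IDs into a single waypoints parameter, with optimize:true
// up front so the Directions API is free to re-order them.
var home = '40.7619,-74.0818';
var placeIds = [
  'ChIJn5ui2MNXwokRSD3-7K014GI',  // CVS
  'ChIJh6CBAelXwokRJtEap8f2bAw',  // Pet Store
  'ChIJdf-POpP4wokRB644UH322Cw',  // FedEx
  'ChIJ-562O5P4wokRWcLXUc1mADc',  // Dollar Tree
  'ChIJFziVmP33wokRRu0dx8XBfUE'   // Target
];
var waypoints = 'optimize:true|' + placeIds.map(function(id) {
  return 'place_id:' + id;
}).join('|');
var url = 'https://maps.googleapis.com/maps/api/directions/json' +
    '?origin=' + home + '&destination=' + home +
    '&waypoints=' + encodeURIComponent(waypoints) + '&key=API_KEY_HERE';

Either way, the full request ends up looking like this: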

https://maps.googleapis.com/maps/api/directions/json?origin=40.7619,-74.0818&destination=40.7619,-74.0818&waypoints=optimize:true|place_id:ChIJn5ui2MNXwokRSD3-7K014GI|place_id:ChIJh6CBAelXwokRJtEap8f2bAw|place_id:ChIJdf-POpP4wokRB644UH322Cw|place_id:ChIJ-562O5P4wokRWcLXUc1mADc|place_id:ChIJFziVmP33wokRRu0dx8XBfUE&key=API_KEY_HERE

This HTTP call will return a very long response with all the directions I need to navigate from my home to each of these stores and back home. To figure out the order to visit these places I’m looking for the waypoint_order field; here’s an example of the output I got.

"waypoint_order" : [ 4, 1, 2, 3, 0 ]

This array is telling us we should visit waypoint 4 (0-indexed) first and waypoint 0 last. Based on the order of the waypoints I supplied, this means I should run my errands in this order:

  1. Target
  2. Pet Store
  3. FedEx
  4. Dollar Tree
  5. CVS
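
In code, turning that waypoint_order array back into store names is a quick map over the list I supplied (assuming data holds the parsed Directions response):

var stores = ['CVS', 'Pet Store', 'FedEx', 'Dollar Tree', 'Target'];
// waypoint_order lives on the route itself, not on the individual legs.
var order = data.routes[0].waypoint_order;  // [4, 1, 2, 3, 0]
var itinerary = order.map(function(i) { return stores[i]; });
// => ['Target', 'Pet Store', 'FedEx', 'Dollar Tree', 'CVS']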

Results

By adding up the duration associated with each leg in the JSON response, I can tell this trip will take a total of 3414 seconds (about 57 minutes), which is 273 seconds faster than the trip I would have taken visiting the stores in the order I initially listed them. I saved 4.5 minutes by optimizing the order of these errands!

If you’re looking for a one-liner JavaScript function to add up the durations (where data is the parsed JSON response), try this:

data.routes[0].legs.reduce((sum, leg) => { return sum + leg.duration.value; }, 0);

Thanks to the Directions API and my optimized shopping order I’ll have an extra 5 minutes I can spend wrapping these gifts up. Happy Holidays!

Deploying Go to AppEngine on Codeship.io

I ran into a bunch of trouble over the past few days trying to get codeship.io to deploy a Go app I was playing around with. To save you some debugging time: the appcfg.py Codeship uses for deploying App Engine apps (at least the Go app I was using) is incorrectly coming from the Python bundle of GAE utilities, not the Go bundle. This can result in unexpected dependency errors like:

 --- begin server output ---
 Compile failed:
 2014/10/13 21:59:55 go-app-builder: build timing: 1×6g (38ms total), 0×gopack (0 total), 0×6l (0 total)
 2014/10/13 21:59:55 go-app-builder: failed running 6g: exit status 1
 main.go:8: can't find import: "github.com/gorilla/mux"
 --- end server output ---
 04:59 AM Rolling back the update.
 Error 422: --- begin server output ---
 --- end server output ---

The fix here is pretty easy. Add a line like export PATH=/home/rof/appengine/go_appengine:$PATH to your Setup Commands via the Settings page. If you SSH into your debug box (which is a really cool feature) you’ll see that $PATH lists the python_appengine folder first, which means the appcfg.py from that folder takes precedence over any others, including the one in the Go bundle that’s better suited for Go apps.

Overall, the UI that Codeship provides is really nice and I liked the thought of not having to configure my deployment commands, but in practice that didn’t work out very well. It would have been useful if their documentation was a bit more transparent about what goes into the “Updating your Google App Engine application” step. Now to sort out why Codeship is trying to healthcheck the non-existent root URL of my application…

Workflow for Developing Custom Elements w/ Polymer

I’ve recently spent a bit of time contributing to the <google-map> element, which leverages Polymer to help developers quickly integrate Google Maps into a website without having to jump through all the hoops of learning the V3 JavaScript API.

One of the main challenges I faced when getting started contributing to the element was figuring out the environment and workflow for development. I’m used to working with Ruby on Rails, where I have my trusty ./script/rails s command or a Makefile to build an executable, but these custom elements are just a collection of HTML, JS, and CSS files loosely organized in a directory with some dependency management stuff. Here’s my quick guide to get started developing the google-map element, or really any custom element built with Polymer.

  1. Make a new directory to contain your Polymer development:  mkdir polymer-dev; cd polymer-dev
  2. Clone the repo you want into that new directory.  If you’ve forked the repo, you’ll probably want to git clone your copy here: git clone https://github.com/PolymerLabs/google-map.git
  3. Head into the cloned repo and create a .bowerrc file with the following contents:
    {
      "directory": "../"
    }
  4. Use bower to install all the dependencies specified in the element: bower install
  5. Head out of the custom element’s directory back into the development space: cd ..
  6. Start a static web server.  I use the default Python server, but you can use anything that serves static files: python -m SimpleHTTPServer
  7. Presto!  Head to http://localhost:8000/google-map/demo.html to enjoy the element.

By default, Polymer elements seem to reference external dependencies as living just above the element, so google-map.html looks for Polymer via a path like “../polymer/polymer.html”. The .bowerrc file that we set up tells bower to install all the dependencies one level higher, which lets everything resolve correctly.
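
In other words, after step 4 the development directory ends up looking roughly like this (the exact set of dependencies bower pulls down will vary):

polymer-dev/
  google-map/    <- the cloned element, with the .bowerrc inside it
  polymer/       <- dependencies installed one level up by bower
  google-apis/
  ...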

If you’re making changes across multiple elements / resources, you can always manually remove a dependency that bower installed in your polymer-dev directory and replace it with a git clone of your own fork to start making changes. As an example, if I’m making a change that straddles both google-map and google-apis, I replace the default google-apis that bower install pulls for me with a fork of my own.

Google Earth JS API Asynchronous Loading

Unlike a lot of the other Google Maps APIs, the Google Earth JS API doesn’t presently have the ability to load itself asynchronously. There’s no callback parameter to specify a function to get called when it’s finished loading and initializing, which forces most people to load it in <head> every time a page loads. If you’re only showing the 3D globe in response to some user interaction or other non-default experience, you end up loading a bunch of JavaScript that might never get used (Google Maps for Business customers also incur a page view!).

I pulled together some simple JavaScript which loads the Earth API on demand, letting you specify a success and error callback so you can start drawing your 3D experience when it finishes.  You can find the code here: https://github.com/bamnet/map_sandbox/tree/master/earthAsync.

If you’re curious, the code polls every 20ms to check whether the JavaScript components like google.earth are available. When they are, your success code runs; if they don’t become available within a certain amount of time (2 seconds), the error code runs so you can try again or wait for your users to be on a faster connection.
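
The real code lives in the repo above, but the core polling loop boils down to something like this sketch (waitForEarth is an illustrative name, not the one the repo uses):

// Poll every 20ms for google.earth to appear; give up after 2 seconds.
function waitForEarth(onSuccess, onError) {
  var waited = 0;
  var timer = setInterval(function() {
    if (window.google && window.google.earth) {
      clearInterval(timer);
      onSuccess();
    } else if ((waited += 20) >= 2000) {
      clearInterval(timer);
      onError();
    }
  }, 20);
}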

Testing with Mox

Sometimes I avoid learning new things because I’m lazy, pressed for time, or for some other reason can’t be bothered to figure them out. I write a lot of tests these days, but I’ve been putting off figuring out how to use Mox because it was usually just as fast for me to roll my own solution, and because the documentation seemed like it was written for folks who already know what they’re doing and are just looking for the answer to the “how” question. Let me explain Mox as I understand it and give some examples of how to use it for testing applications that use web services or make remote HTTP calls.

Let’s say you’ve got a Python class that looks something like this:

"""Find locations."""

__author__ = 'bmichalski@gmail.com (Brian Michalski)'

import json
import urllib

class LocationFinder(object):
  """Find the geographic location of addresses."""

  def __init__(self, urlfetch):
    """Initialize a LocationFinder.

    Args:
      urlfetch: Backend to use when fetching a URL.
        Should return a file-like object in response to the urlopen method.
    """
    self.urlfetcher = urlfetch

  def find(self, address=''):
    """Find the latitude and longitude of an address.

    Args:
      address:  String describing the location to lookup.

    Returns:
      Tuple with (latitude, longitude).
    """
    base_url = 'https://maps.google.com/maps/api/geocode/json'
    params = urllib.urlencode({
      'sensor': 'false',
      'address': address
    })
    url = '%s?%s' % (base_url, params)

    result = self.urlfetcher.urlopen(url)
    data = json.loads(result.read())
    location = data['results'][0]['geometry']['location']
    return (location['lat'], location['lng'])

The code is pretty simple; you can run it with something like:

finder = LocationFinder(urllib)
print finder.find('1600 Amphitheatre Parkway, Mountain View, CA')

What’s important is that LocationFinder takes urllib as an argument. This is kind of a poor example because urllib isn’t another class that really needs mocking, but if you were developing on App Engine or another environment where outbound connections weren’t as straightforward, you could pass in an instance of your outbound connection library.

For demonstrative purposes, let’s pretend one of a few things is happening: (1) we can’t get an outbound internet connection to actually test against Google, (2) Google is too slow to test against, or (3) the service we’re testing against requires a complicated authentication handshake beforehand. None of these three cases are actually at play here on my laptop, but you could imagine wanting to isolate your testing from Google in the event that service goes down or is temporarily unavailable to you.

Mox to the rescue. Using Mox, we can make a fake urllib which, by default, doesn’t know anything about the existing urllib. Since we only call the urlopen function and don’t care about any other externals, all we have to do is define that method on our fake urllib and tell it what to return when it’s called. I find the syntax a bit strange, but to define the method you just call it, passing the expected values (or matchers to broadly match your expected values), and then add .AndReturn(return value here) to wire up its return. When urllib.urlopen is called with the parameters you’ve specified it will return the value you’ve stored; otherwise you’ll get an error saying that the expected parameters don’t match what it’s actually being called with, or that the expected return doesn’t match the actual one (putting the return from a void call into a variable, for example). Speaking of examples, here’s how I could quickly test the code above:

"""Testing the LocationFinder."""

__author__ = 'bmichalski@gmail.com (Brian Michalski)'

import location_finder
import mox
import StringIO
import urllib
import unittest

class TestLocationFinder(unittest.TestCase):

  def setUp(self):
    self.mox = mox.Mox()

  def tearDown(self):
    self.mox.UnsetStubs()

  def testFinder(self):
    fetch_backend = self.mox.CreateMock(urllib)
    fake_data = StringIO.StringIO((
      '{"results":[{"geometry":{"location":'
      '{"lat":37.42114440,"lng":-122.08530320}}}],'
      '"status":"OK"}'
    ))
    fetch_backend.urlopen(mox.StrContains('Amphitheatre')).AndReturn(fake_data)
    self.mox.ReplayAll()

    finder = location_finder.LocationFinder(fetch_backend)
    result = finder.find('1600 Amphitheatre Parkway, Mountain View, CA')
    self.assertEqual(result[0], 37.42114440)
    self.mox.VerifyAll()

if __name__ == '__main__':
    unittest.main()

Since urlopen returns a file-like object, I use a StringIO object and hardcode some output. I could have saved the result verbatim from Google in a file and returned that instead. In summary, testFinder is broken down into two halves: the first half creates a fake urllib and tells it how to respond to the one method we use, and the second half loads the LocationFinder with the fake backend and verifies the calls worked as expected.

My old fashioned technique would have just been to write something like:

class mockurllib(object):
  def urlopen(self, url):
    # Hand back a canned, file-like response instead of hitting the network.
    return StringIO.StringIO('{"results": []}')

which isn’t too bad when you’re testing just one function like I am above, but if you’re testing different calls to different backends with different responses it can get a bit verbose and messy. I’m sure there’s room to improve my current understanding; maybe I’ll pick up some more handy testing tricks later.

The one thing I dislike about Mox is the need to include urllib at all in the test (or import it, in Python’s case). I think there are ways to mock it out in a more generic fashion, but that feels like it might be getting sloppy. Since urllib is being imported it could still run a potentially slow initialization sequence; that’s not applicable in this specific case, but it’s certainly something to watch out for.

Resizing Images in SVGs

Concerto 2 is going to be a technical marvel if I have anything to say about it. We’re going to be using SVG when possible as an alternative to server-side RMagick work when we need to generate simple graphics. In theory this will save a few ms of processing time (returning text beats processing an image file) and reduce our cache size if enough browsers support SVG images. We’ll still fall back to PNGs / JPEGs for people with outdated browsers, and you probably should too; there are a lot of folks out there who don’t have SVG support… optimistically they’ll be the minority.

One of the things that Concerto does is return a small preview of a template when you’re modifying it. A template is essentially a high-res background image (often 1920×1080) and an accompanying set of transparent boxes (positions) that content gets displayed in, and the preview is really important to help see where those invisible boxes are. This turned out to be really easy to implement in <svg> using the <rect> element. Just like I’d draw div elements in a screen’s DOM, I can draw rectangles positioned relative to the SVG image to recreate the exact same layout, but in a more graphical fashion. The background image was quickly added in using SVG’s image element, setting its width and height to 100% and the x and y coords to 0,0.


I found that this almost scaled the template appropriately. By default the aspect ratio was being constrained to the image’s aspect ratio, so width and height of 100% really meant width = 100% or height = 100%, whichever was smaller. The quick fix for that stretching issue was to add preserveAspectRatio="none".
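
Putting those pieces together, the markup looks roughly like this (the viewBox, image URL, and rect coordinates here are made up for illustration):

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     width="100%" height="100%" viewBox="0 0 1920 1080">
  <!-- Background: the high-res template image, stretched to fill the SVG. -->
  <image xlink:href="background.jpg" x="0" y="0" width="100%" height="100%"
         preserveAspectRatio="none" />
  <!-- One semi-transparent rect per position, using the template's coordinates. -->
  <rect x="96" y="54" width="768" height="432" fill="#fff" fill-opacity="0.5" />
</svg>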


This was working wonderfully, and with my rectangles overlaying the positions it was a near pixel-perfect replica of what the code I had already written to generate the PNG / JPEG images produces. The only subtle differences were in the text / font rendering, and I don’t care too much about that.

One problem left: resizing the svg / resizing the window.

Despite using relative sizes with percentages, the image wasn’t getting resized when the window was resized like all the other objects were. This meant that you needed to refresh the page to see the correct box placement over the image if you ever dragged things around, and that’s not a very desirable outcome. I didn’t have much luck figuring out why this was happening, but it seems like the image object is a special case in SVG-land that isn’t handled the same way as drawable graphic elements like text or rectangles.

To work around this I ended up writing some very short ECMAScript (aka JavaScript) and embedding it in the SVG. I’ve tested the resulting behavior in Chrome and Safari and will assume it works in Firefox too. Every time the SVG gets resized the onresize event is triggered, just like it is on a regular window, and we fire a callback to resize the image element… really just reminding it to fill the screen.
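
The workaround itself is tiny; something along these lines embedded in the SVG (a sketch of the idea rather than the exact Concerto code):

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink"
     width="100%" height="100%" onresize="resizeBackground()">
  <image id="background" xlink:href="background.jpg" x="0" y="0"
         width="100%" height="100%" preserveAspectRatio="none" />
  <script type="application/ecmascript">
    // Nudge the background image to match the SVG's new on-screen size.
    function resizeBackground() {
      var box = document.documentElement.getBoundingClientRect();
      var image = document.getElementById('background');
      image.setAttribute('width', box.width);
      image.setAttribute('height', box.height);
    }
  </script>
</svg>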