Tuesday, June 14, 2016

New Leaflet control: L.Control.AccordionLegend

Some time back I came up with a custom legend control for Leaflet. Unlike the basic L.Control.Layers control, we wanted this one to be really snazzy:
  • The layer list broken into sections, not by basemap/overlay but by thematic similarity: Parks in one section, Admin Boundaries in another, Health Statistics in a third, etc. The sections should behave accordion-style: clicking a section title expands or collapses it, and only one section is open at a time so as not to overload the screen.
  • Each layer has an opacity slider.
  • Each layer has a legend.
So, here it is:


Friday, June 3, 2016

New Leaflet control: L.Control.CartographicScale

A request we get now and then is to add a readout of the current map scale denominator, e.g. one that changes from "1:24K" to "1:12K" as you zoom in a notch. This is handy for cartographers who still think in terms of scale instead of zoom levels.

OpenLayers had the handy-dandy OpenLayers.Control.Scale, but I couldn't find a similar control for Leaflet. So I wrote it.



I should point out that the accuracy of this sort of thing has always been dubious. Between the projection itself (a "square degree" in Lagos is not the same area as one in Barrow, and Greenland is not that big in real life) and the realities of screen resolution (96dpi is for mid-90s CRTs), these scale readouts have always had very poor accuracy and always will. It's the mathematics of a world that isn't flat, you know?

Still, the control is accurate enough for cartographers to get the sense of "2K or 20K?" and that's what was really needed here.
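The readout itself is just arithmetic. Here's a sketch of that calculation (my own illustration, not the control's actual code), assuming a Web Mercator map with 256px tiles and the usual 96dpi fiction:

```javascript
// Rough scale denominator for a Web Mercator map, assuming a 96 dpi screen.
// This makes the same guesses any such control has to make -- see caveats above.
const EARTH_CIRCUMFERENCE = 40075016.686; // meters, at the equator
const SCREEN_DPI = 96;                    // the CSS reference pixel, not your real screen
const METERS_PER_INCH = 0.0254;

function scaleDenominator(zoom, latitude) {
  // ground meters covered by one tile pixel at this zoom and latitude
  const metersPerPixel =
    (EARTH_CIRCUMFERENCE * Math.cos(latitude * Math.PI / 180)) /
    (256 * Math.pow(2, zoom));
  // scale denominator = ground size of a pixel / on-screen size of a pixel
  return metersPerPixel * (SCREEN_DPI / METERS_PER_INCH);
}
```

At the equator, zoom 13 works out to roughly 1:72,000 -- exactly the "2K or 20K?" level of precision in question.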

Friday, May 20, 2016

In-Browser filtering of raster pixels in Leaflet, Part 3

In my last posting, I described the process of creating a three-band TIFF and then slicing it into three-band PNG tiles for display in Leaflet. The tiles are not visually impressive, being almost wholly black, but they have the proper R, G, and B codes corresponding to the location's PHZ, PCP, and GEZ. (If you don't know what those are, go back and read that post from a few days ago.)

So now the last step: transforming these tiles based on a set of arbitrary RGB codes determined by the user. Let's go through the working source code for the L.TileLayer.PixelFilter and I'll describe how it works.

First off, we need two RGBA codes so we can color the pixels as hit or miss, and we need the list of [r,g,b] trios that would be considered a match. The initialize method accepts these as constructor params, and the various set functions allow them to be set later as well.

(The decision of what list of RGB codes should be passed into setPixelCodes() is a matter of business logic: selectors, clicks, lists... outside the scope of this discussion. See the demo code for a trivial example.)

Next, we need to intercept the map tile when it's in the browser. Thus the tileload event handler set up in the constructor, which runs the tile through applyFiltersToTile() to do the real work.

applyFiltersToTile() is the interesting part, which only took me a couple of hours sitting down with some HTML5 Canvas tutorials. Let's dissect it one piece at a time:
  • "copy the image data onto a canvas" The first paragraph creates a Canvas of the same width and height as the image, and copies the data into it. The IMG object itself won't let us access raw pixel data, but once it's in a Canvas we can.
  • "target imagedata" We then use createImageData() to construct a new, empty byte sequence, which can later be assigned back into a Canvas using putImageData(). This is effectively a straight list of bytes, and as we write to it we always write 4 bytes per pixel: R, G, B, and A.
  • "generate the list of integers" Before we start looping over pixels, a performance optimization. Arrays have the indexOf() method so we can determine whether a given value is in an array, but it works only on primitive values, not on three-element arrays. The lodash library has findIndex() which would work, but it would mean a new dependency... and the performance was not so great either (256 x 256 = 65536 pixels per tile). So I cheat, and translate the list of desired pixelCodes into a simple list of integers so that indexOf() can work after all.
  • "iterate over the pixels" Now we start looping over the pixels in our Canvas and doing the actual substitution. Again, we step over the bytes in fours (R, G, B, A) and we would assign them into the output imagedata in fours as well. Each pixel is translated into an integer, and if that integer appears in the pixelcodes list of integers, it's a hit. Since indexOf() is a native function it's pretty swift, and since our pixelcodes list tends to be very short (10-50 values) the usually-avoided loop-in-loop is actually quite fast.
  • "push a R, a G, and a B" Whatever the results, we push a new R, G, B, A set of 4 bytes onto the output imagedata.
  • "write the image back" And here we go: write the imagedata sequence back into the Canvas to replace the old data, then reassign the img.src to this Canvas's base64 representation of the newly-created imagedata. Now the visible tile isn't based on an HTTP file at all, but on a base64-encoded image in memory.
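Pulled out of the Leaflet and Canvas plumbing, the pack-and-match loop looks something like this. (Function and parameter names here are my own illustration, not the plugin's exact internals; plain arrays stand in for the canvas ImageData buffers.)

```javascript
// Collapse an [r,g,b] trio into one integer so Array.indexOf() can be used.
function packRGB(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

// pixels is a flat RGBA byte list, 4 bytes per pixel, as in canvas ImageData.
// pixelCodes is the list of [r,g,b] trios to match; matchRGBA and missRGBA
// are the two output colors, each a 4-element [r,g,b,a] array.
function filterPixels(pixels, pixelCodes, matchRGBA, missRGBA) {
  const codes = pixelCodes.map(([r, g, b]) => packRGB(r, g, b));
  const out = new Array(pixels.length);
  for (let i = 0; i < pixels.length; i += 4) {
    // translate this pixel into an integer and look it up in the codes list
    const hit = codes.indexOf(packRGB(pixels[i], pixels[i + 1], pixels[i + 2])) !== -1;
    const color = hit ? matchRGBA : missRGBA;
    out[i] = color[0]; out[i + 1] = color[1];
    out[i + 2] = color[2]; out[i + 3] = color[3];
  }
  return out;
}
```

Since the codes list tends to be short, the indexOf() inside the loop stays cheap, as described above.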
The only gotcha I found was that upon calling setPixelCodes() the tiles would flash to their proper color and then instantly all turn into the no-match color. It worked... then would un-work itself?

The tileload event wraps the IMG element's load event. This means that when I assigned the img.src to replace the tile's visible representation... this was itself a tileload event! It would call applyFiltersToTile() again, and this time the RGB codes didn't match anything on my filters so all pixels were no-match. Worse, the assignment of img.src was again a tileload event, so it was in an infinite loop of processing the tile.

Thus the already_pixel_swapped flag. After processing the IMG element, this flag is set, and applyFiltersToTile() will skip subsequent runs on that IMG element. If we need to change pixel codes, setPixelCodes() calls layer.redraw(), which empties out all these old IMG elements anyway, replacing them with fresh new ones that do not have already_pixel_swapped set.
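The guard boils down to this pattern (sketched here with a bare object standing in for the IMG element, and the handler name my own):

```javascript
// Re-fired load events hit the flag and bail out, breaking the infinite loop
// described above. applyFiltersToTile is whatever does the canvas work.
function onTileLoad(img, applyFiltersToTile) {
  if (img.already_pixel_swapped) return; // already processed; a re-fired event
  applyFiltersToTile(img);
  img.already_pixel_swapped = true;      // mark it so later events skip out
}
```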

So yeah, it was an educational day. Not only did we achieve what the client asked for (the more complex version, not the simplified case I presented last week), and as a neat reusable package at that, but the performance is near-instant.

Tuesday, May 17, 2016

In-Browser filtering of raster pixels in Leaflet, Part 2

A few days back I posted a link to L.TileLayer.PixelFilter. This is a Leaflet TileLayer extension which rewrites the tiles after they have loaded, comparing each pixel against a set of pixel codes and replacing the pixel with either a "matched" or "not matched" color. It's pretty useful for generating dynamic masks and highlights, if your back-end data is raster rather than vector.

I had said that the client's request was to display plant hardiness zones, and that I was using the Unique Values colors as the RGB codes to match against zone codes. I lied. The reality was much more complicated than that, but it would have been distracting from the end result. So here's the longer story.

The Rasters and the Requirement

The client has 3 rasters: Plant Hardiness Zones (PHZ), Precipitation classifications (PCP), Global Ecoregions (GEZ). Each of these is a worldwide dataset, and each has about 20 discrete integer values numbered 10 to 30ish.

The user selects multiple combinations of these three factors, e.g. these two:
  • Zone 7 (PHZ=7)
  • Rainfall 20-30 inches per year (PCP=3)
  • Temperate oceanic forest (GEZ=31)

  • Zone 7 (PHZ=7)
  • Rainfall 20-30 inches per year (PCP=3)
  • Temperate continental forest (GEZ=32)
The user would then see any areas which match any of these "criteria trios" highlighted on the map. The idea is that a plant that's invasive in these areas would also be invasive in the other areas highlighted on the map.

Fortunately, the rasters are not intended to be visible and do not need to be visually pleasing. They need to be either colored (the pixel matches any of the trios) or else transparent (not a match).

Raster To Tiles To Leaflet

Three variables and a need for one raster... sounds like we could have a three-band raster! I used ArcMap's Composite Bands tool to merge the three rasters into a three-band raster. Voila, one TIFF with three-part pixels such as (7, 3, 31) and (7, 3, 32). I just have to keep straight that R is PHZ, G is PCP, and B is GEZ.

Second, I needed to slice this up into map tiles. But it's very important that:
  • The tiles must be in a format that preserves R, G, and B exactly. Formats such as GIF or JPEG would be entirely unsuitable, since GIF picks a 256-color palette and JPEG is lossy, fudging colors together. TIFF, on the other hand, is not viewable in browsers as a map tile. But PNG is just perfect: RGB by default, and viewable in browsers.
  • The tile-slicing process (I picked gdal2tiles.py) must also preserve exact RGB values. The default average resampler uses interpolation, so a pixel in between a PCP=3 and a PCP=2 would get PCP=2.5, and that's no good! Use the nearest neighbor resampling algorithm, which is guaranteed to use an existing pixel value.
Slicing up the TIFF into web-ready PNGs was pretty straightforward:
gdal2tiles.py -z 0-7 -r near -t_srs epsg:3857 threeband.tif tiles

Point Leaflet at it, and I get some nearly-black tiles loading and displaying just as I expected. Well, after I remembered that gdal2tiles.py generates tiles in the TMS numbering scheme, so I had to set tms: true in my L.TileLayer constructor.
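The tms: true flag is needed because gdal2tiles numbers tile rows from the bottom of the world up, while Leaflet's default XYZ scheme counts from the top down. The two row indices are mirror images of each other:

```javascript
// Convert an XYZ tile row to its TMS equivalent. At zoom z there are 2^z
// rows, and the flip is its own inverse.
function xyzToTms(y, zoom) {
  return Math.pow(2, zoom) - 1 - y;
}
```

Setting tms: true on the L.TileLayer simply makes Leaflet apply this flip when requesting tiles.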

I downloaded a few of the tiles and opened them up in GIMP, and sure enough, that shade of black is actually (7,3,31). The tiles are not meant to be visually attractive... but those preserved RGB codes are beautiful to me.

Client-Side Pixel Detection and Swapping

That's the topic of my next posting: now that I have RGB tiles, how can I intercept them, compare them against a set of RGB codes, and appropriately color/clear pixels...? Turns out it was easier and more elegant than I had imagined.

Saturday, May 14, 2016

In-Browser filtering of raster pixels in Leaflet, Part 1

Last week I was asked to do something I haven't done before:

A client has a raster of values, this being World Plant Hardiness Zones. Each pixel is a code number, which ranges from 1 to 13. If you're into gardening, you've seen the US variation on this on the back of seed packets.

The client wants the user to select one or more hardiness zones, and have the appropriate pixels highlighted on the map. For reasons I'll cover in a later blog post, this needs to be done using the raster's RGB codes and not through some of the simpler techniques such as GeoJSON polygons with varying opacity.

I ended up writing this Leaflet TileLayer extension. It takes a list of RGB codes, then rewrites the tile images in-canvas so they have only two colors: one for pixels which matched and one for those which did not.

And there's our basic need filled: near-instant pixel-filtering into "selected" and "non-selected" categories, with appropriate color fill.

If you're in a hurry, there's the link to the demo and to the library. But the technical process involved, and the application of it for the client turned out to be very interesting indeed, so they will form my next few postings.

Friday, April 29, 2016

Heroku and Wordpress: Afterthoughts

The last two postings were about my successful and happy migration of my personal Wordpress site from my home PC to Heroku. It went beautifully, but a little testing and a few days of use revealed some room for improvement. One could call these Pitfalls and Gotchas about Heroku: lessons for my next app.

Database Usage / Wordpress Revisions

My MySQL dump file was 2.3 MB, fitting quite well under the 5 MB mark. Wrong. After being loaded, it bloated to about 4.1 MB and I got a warning email from Heroku that I was nearing my capacity.

First off, thank you Heroku and ClearDB. Nice job on the notification before this became a problem.

Second, what's using my disk space? I did some queries (thank you, Stackoverflow) and found that wp_posts had 450 rows... for my 35 pages. Ah yes: Page Revisions. Wordpress keeps copies of every time I've clicked Update, so there are hundreds of extra pages in there.

I trawled the net a little bit for two pieces of advice that set this right:
  • Disabling revisions (or limiting them to 1 or 2 revisions, if you prefer), with a wp-config.php modification. Easy.
  • A plugin to clean out the old revisions. I then uninstalled it, since disabling revisions means there won't be more to clean up.
Now my table was down to 150 posts, which is right for 30ish pages and 120ish images. My dump file was under half the previous size.

To finish the job, I had to take a dump of the table and then reload it, so as to drop and recreate the table. (Databases do that: they keep the freed space for future use. It's normally a good idea, but not here and today.) And now I'm down to 1.6 MB, one third of my free tier.

Sending Email

Heroku doesn't have a sendmail setup; I'm not even sure they have an SMTP service. So Wordpress's email capabilities were not working.

Your basic options here are:
  • Sign up for SendGrid, since Heroku has an addon for that. They have a free tier of 12,000 messages per month, which is really more than my blog could ever generate.
  • Supply your own SMTP server.

I went for the latter since I use GMail and can definitely navigate the steps of a Project, the API keys, and the GMail SMTP plugin for Wordpress. The plugin's own instructions were somewhat outdated, but that wasn't a problem. And voila, Wordpress sending email.

For your own programming, you would want to use PHPMailer or some such so you can specify your SMTP settings. A lot of things presume that there's a local sendmail or local SMTP, but not here.

So yeah, that was it. Not too awful in the "gotchas" department. Not at all! I am suitably impressed with Heroku at virtually every level: price, performance, ease of deployment once you wrap your head around it, and notifications.

Wednesday, April 27, 2016

Heroku and Wordpress: The Migration Process

My post from day-before-yesterday began the story of my migration from a home-hosted website to Heroku. Here's part two: The Migration Process.

Step: Move my Media Library content into S3

First off, the uploaded photos should not be under version control, so they shouldn't be in the repository; and Heroku's filesystem is ephemeral, so they can't live there either.

This was a tedious process of a few hours since I had about 130 images.
  • I set up the S3 plugin I mentioned above, made a few tests and confirmed that it's sweet.
  • I SFTP'd into my Wordpress site's wp-content/uploads folder and grabbed everything.
  • Then deleted everything from my Media Library,
  • Then uploaded it all again and watched it load into S3 and leave my uploads folder empty.
  • Then went through every posting and replaced all of the images, which of course were now broken. Annoying, but fortunately I only had 35 pages with images and did it in about 2 hours.

Step: Init repo

I'm a fan of Gitlab. They offer unlimited private repositories for free, which is really excellent. I created the repository, then followed their simple instructions to load my Wordpress files into the repo and basically turn my site into a clone.

I also created a .gitignore file with two entries, for some folders I'll be creating in a little bit. The private folder is where I'll do some other work I don't want in version control, e.g. working files and database dumps that I want to keep close. The vendor folder would be generated by composer later on, trust me.

Step: Add Heroku as a Secondary Master

The trick to allowing git push to deploy to Heroku is that you tell your repo clone to use your Heroku as a secondary master.

heroku git:remote -a your-server-name

As of now, when pushing you will need to distinguish between git push origin master and git push heroku master. One pushes into your git repository (Gitlab, Github) and the other would redeploy to Heroku.

Step: Wordpress Updates

I then noticed that some updates were available for some plugins and for Wordpress itself. So I ran those, and after each one noted that git status reported exactly what I would expect from each upgrade. So three commits later I had Wordpress and plugins all updated, with commit notes for each update. (I could have done this before the repo init, but why not do it under version control?)

Step: Add a Procfile and composer.json file

For Heroku compatibility, it's advisable to add a Procfile, plus a composer.json and composer.lock file, to indicate to Heroku what PHP version to prefer, that you prefer Apache over Nginx, etc. You will want these in version control.

web: vendor/bin/heroku-php-apache2

{
  "require": {
    "php": "^5.6.0"
  },
  "require-dev": {
    "heroku/heroku-buildpack-php": "*"
  }
}
composer.lock is generated from the composer.json with a command:
composer update --ignore-platform-reqs

Step: Push to Heroku

All set? Then here goes:
git push heroku master
And I visit my website. And it's a Wordpress error saying it can't make the database connection. That's to be expected: my wp-config.php has the old credentials, and I still need to upload the database content. But that's definitely Wordpress throwing the error, so a fine start.

Step: Database Configuration

First step was to scrub my database credentials from the wp-config.php file. Yeah, dummy move to forget to do that, but the credentials are wrong for Heroku so are useless, and my local MySQL is going away in an hour anyway. But yes... don't do what I did. ;)

When I scrubbed the database credentials, I replaced them with $_ENV variables like this:
define('DB_NAME', $_ENV['DATABASE_BASE']);
define('DB_USER', $_ENV['DATABASE_USER']);
define('DB_PASSWORD', $_ENV['DATABASE_PASSWORD']);
define('DB_HOST', $_ENV['DATABASE_HOST']);
Add, commit, push. Site is still dead but it's ready for this next trick.

Run heroku config and it dumps the app's configuration variables back to me. I tease apart the database URL string into my set of 4 environment variables for the 4 $_ENV items above.
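The config variable is a single URL along the lines of mysql://user:pass@host/dbname?reconnect=true (made-up values here). Teasing it apart is just URL parsing; sketched in JavaScript for brevity, though any language's URL parser does the same, and each piece then goes back in via heroku config:set:

```javascript
// Split a database URL of the form mysql://user:pass@host/dbname into the
// four pieces needed for the four config vars above. Values are hypothetical.
function parseDatabaseUrl(urlString) {
  const u = new URL(urlString);
  return {
    DATABASE_USER: u.username,
    DATABASE_PASSWORD: u.password,
    DATABASE_HOST: u.hostname,
    DATABASE_BASE: u.pathname.replace(/^\//, ''), // strip the leading slash
  };
}
```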
My app/dyno reboots and... the site's up!

Step: Database Data

A simple mysqldump was all it took to take a backup of my database, then loading it was one more command.

mysqldump -h localhost -u olduser -p olddbname > private/dbdump.sql
mysql -h herokuhost -u herokuuser -p herokudbname < private/dbdump.sql

It doesn't get a lot easier than this.

And now the site is up! My data, on a new database on a new Heroku server. Shiny.

Step: Custom Domain

My site is www.fightingfantasyfan.info and not fightingfantasyfan.herokuapp.com. Heroku calls this a custom domain. There are two steps in setting up the Heroku app to work properly with a custom domain.

  • I hit up Heroku's dashboard and the Settings for my site, and added two domains for it: www.fightingfantasyfan.info and fightingfantasyfan.info. This allows it to respond to these alternate hostnames, should they point to this app.
  • I went to my domain registrar's control panel and set up a CNAME record, so that www.fightingfantasyfan.info is equivalent to fightingfantasyfan.herokuapp.com. This took a little bit to propagate, but was done about the time I finished my cup of coffee.

And that really was it. Well, almost. More on this tomorrow.