Kicking the tires of TileMill’s support for File Geodatabases

Back in April, MapBox announced that TileMill now supports ESRI File Geodatabases.  The support appears to come via GDAL’s integration of Even Rouault’s work to reverse-engineer the FGDB format.

When I first looked at Even’s work, there was no support for reading the spatial-index files of an FGDB.  Of course, without spatial indexing, large datasets would perform quite poorly.  It’s worth noting that Even’s project now supports spatial indexing, but GDAL 1.1 uses the older version.  The current latest TileMill dev build to include an installer – TileMill-v0.10.1-291 – should similarly lack spatial indexing.

To make my test exciting, then, I decided to use a large dataset.  I fired up ogr2ogr and created an FGDB dump of the full OpenStreetMap globe (osm2pgsql schema).  I tested the data in ArcMap and OGR, and everything was quite zippy.  Upon attempting to load the FGDB in TileMill, it crashed.  I can’t say I didn’t expect this.

It’s worth noting that ESRI’s File Geodatabase API is free as in beer.  I think Even’s work is fantastic for the community, but I’m not sure why MapBox didn’t use GDAL’s other FGDB driver, the one built on that API.  Nevertheless, OSS marches on, and I expect we’ll see these recent features bubble their way up.  I look forward to seeing FGDB spatial-indexing support hit TileMill, as I believe the idea has real legs.

Plasio, king content, and the browser

Gary Bernhardt’s The Birth and Death of JavaScript presents an interesting vision of the future – one where web apps are cross-compiled into JavaScript/asm.js.  Closer to home, Howard Butler just posted about his plasio project.  It’s fantastic stuff – lidar in the browser – built on Emscripten + WebGL.  Cesium is another whiz-bang WebGL GIS application – but contrary to Bernhardt’s vision of the future, Cesium is coded in JavaScript, no cross-compiler necessary.  Still, Bernhardt is right about the death of JavaScript: the value of these apps is not the language they were coded in.

Immortal sage Jim Morrison once said, “Whoever controls the media controls the mind.”  I prefer “whoever controls the medium controls the mind.”  Cesium and plasio shine in their presentation of 3D datasets, but there’s a subtle undertone here: these web maps were designed for certain browsers.  Plasio goes so far as to use Chrome’s NaCl processing – eschewing web-development norms.  Repeating myself, Bernhardt is right.  Even for web apps, JavaScript doesn’t rule the show.  Content is still king.



Thus if content is king, there must be a queen.  In web-based GIS, the value of content is limited by its visualization medium – the browser.  Bill Gates’s oft-quoted “content is king” is undercut by IE’s slow adoption of emerging browser technologies.  Plasio’s brave new world, where WebGL and asm.js are required features, is already upon us.  Programming language may be becoming an implementation detail, but browser choice is not.

Browsers are becoming the medium in which content – including web-based GIS content – is delivered.  Nevertheless, “web” is a loose term.  Application-development platforms – such as the Qt Project – have embedded browsers into their core capabilities.  PhoneGap has swept the mobile world with its embedded-browser technology.  Even GIS applications such as TileMill are being built with the Chromium Embedded Framework.  As technologists, our ability to control the browser directly impacts our ability to create compelling content.

The value of our web-based GIS applications is not the language they were created in; it is in content dissemination and visualization.  As we attempt to integrate better content in more compelling ways, we must re-examine content’s relationship with the stand-alone “browser” and attempt to better control the medium.

The Go Programming Language

A few days ago, I referenced Imposm 3 and joked that developers who use the Go programming language must be hipsters.  

Yet I’m close to being a hipster myself.  I’m variously obsessed with libuv, ZeroMQ, the C10k problem, and all things threading minus locking.  I deliberately write many .NET classes akin to JavaBeans, disregarding elements of object-oriented design in favor of trivial serialization, trivial cloning, and therefore easy distributed processing.  I was a crack-shot JavaScript programmer long before it was cool, and my obsession with reflection and auto-generated code goes beyond healthy.  In short, people pay me to write .NET code, but I long for something more.

Lo and behold, I discovered something truly lovable in Go.  It’s simple, type-safe, and garbage-collected.  Lightweight threading constructs (goroutines) are the eponymous, standout feature.  It’s a fast, capable alternative to C/C++.  The compiler is fast, and the build tools include package management, making statically linked native binaries totally sensible.  The syntax is succinct, and it’s relatively easy to call existing C/C++ libraries.

GIS packages are beginning to show up in Go, even if they’re mostly C/C++ wrappers – e.g., GoGEOS and go-mapnik.

Geospatial and the Entity Framework: Half Full, Half Empty, or Wrong Sized ORM?

In late 2004, Ted Neward famously called Object-Relational Mapping (ORM) the Vietnam of Computer Science.  Recently I switched to .NET 4.5, hoping to reap the benefits of LINQ-to-Entities’ support for spatial datatypes.  For SQL Server, this works every bit as well as the Entity Framework does in the first place – great for some databases, a hassle for others (particularly legacy databases).

When LINQ-to-SQL first came out, things didn’t really work too well for spatial.  Back then, it took a little modification of the generated SQL queries before things could get rolling using WKT.  Few people manage their spatial objects as WKT, of course, so you sprinkled some conversion code into your DAL.  Nothing worked out of the box, but the solutions were clear and made sense.

With the Entity Framework’s new spatial support in the System.Data.Spatial namespace, did things improve?  They certainly did if you’re just shuffling geometries from the web to SQL Server.  But what about people who do real work?  Their applications were all built using geometries from Vertesaur, DotSpatial, SharpMap, or NTS.  So we’re still looking at conversion, most likely via WKT.  Beyond that letdown, how is the database support?  I personally ran into a lack of native DbGeometry support when using SQLite.  I wouldn’t have much cared if it were serialized to WKB or WKT, as long as something worked out of the box.

The plain truth is, it’s often easier to do things yourself than to learn the weird things other people do.  So despite some great use cases, the new geospatial support in .NET 4.5 is, for me, the wrong-sized glass.  This GIS-specific realization mirrors ORM’s issues in general.  Synapses firing, my brain dug up an old Sam Saffron / Marc Gravell project called Dapper.  Dapper has been called a “micro-ORM”: less of a ground assault and more of a smart bomb.  You still manage your ADO.NET connections and write your own SQL; it does fast binding of objects to query parameters and results.

In the end, I moved to Dapper.  Its codebase was small enough for me to grok, and I hacked geospatial support into it in a few hours.  Writing SQL is a fair trade for control, particularly when you need control – geospatial data storage being a prime candidate.  It is great to generate object models using the Entity Framework, but I’ve grabbed my POCOs and switched to a smaller, easier-to-modify ORM with a more stable codebase.

Free, public base-map imagery data?

I’m looking for a decent, truly public, imagery base-map for offline use.

The OnEarth Global Mosaic (15m pan-sharpened pseudo-color Landsat 7) may be years old, but it’s the best I’ve found so far.  Unfortunately, the old download links appear to be dead, and I don’t imagine they’d appreciate me scraping their WMS.

Does anybody have a link to the entire (~1.3TB) OnEarth Global Mosaic dataset, or a recommendation for a prettier / newer / better data-set at 15-30m?

Lessons in wrestling an ESRI compact cache

I have a 13.2 terabyte ESRI compact-cache.  For those of you not familiar, a compact-cache is ESRI’s proprietary bundle format.  You need to bundle these large caches, because if you stored “just a bunch of tiles”, when you went to move/back-up the data, your OS would spend an eternity worrying about the details.  My cache has 44,500 bundles, 16,384 tiles per bundle, and 512×512 pixels per tile.  A comparable tile-cache with 256×256 pixels per file would have around three billion tiles.

If you look around for about 5 minutes, you’ll find a myriad of tile-cache servers capable of serving traditional tile caches.  For ESRI compact-caches, you’ll find ArcGIS Server.  Now, ArcGIS Server isn’t free, and that’s enough to make one wonder if there are other bundle formats out there.  There are.

As it turns out, other people have previously cracked open ESRI compact-caches, and documented the cache format on the interwebs.  Basically, the files are in that bundle, and they’re whole files.  Mine are whole JPEGs.  I wanted at those JPEGs, and I wanted to put them into new bundles.

Forget the details of my bundle format; I used .NET to extract the JPEGs, and GDAL to write them to the new format.  Along the way, I resized the JPEGs from 512×512 into the 256×256 tiles my new format expected.

Well, making a 512×512-pixel JPEG into four 256×256-pixel JPEGs is a comparatively CPU-intensive operation.  For me, it’s ~125x slower than a simple file copy.  Also, sending a JPEG to GDAL from .NET is similarly slow.  GDAL’s .NET interop layer won’t accept a JPEG directly; you first have to convert it to a raw bitmap, and on the other side, GDAL turns that bitmap back into a JPEG.  Again, this is ~125x slower than a simple file copy.

In the end, when one tries to convert 13TB using a single box, one makes compromises.  I ended up dropping GDAL from the equation (favoring lower level libraries), and kept the 512×512 pixel internal tile size.  My conversion code is now 250x faster, and operates at near file-copy speed.  Voila.