Posted by: benismobile | January 28, 2014

Integrating Openlayers and HTML5 Canvas (Revisited)

The WordPress stats tell me there is still a lot of interest in our previous post on integrating OpenLayers and HTML5 Canvas from way back in 2010.
Time has passed, technology has moved on and I’ve started buying shoes in bulk like Mr Magorium. So below, I provide an update on how I integrate OL and HTML5 Canvas 3 years on.

Previously my approach was to replace each individual tile image with a corresponding canvas element the same size as the tile (typically 256×256), using jQuery to capture tile rendering events. The updated approach is to capture tile images as they are rendered by OpenLayers, using OL's built-in event listeners, and then draw them onto a single HTML5 canvas element.
Apart from being more efficient and producing cleaner, more robust code, this approach has the advantage that you can use HTML5 to draw shapes and lines and manipulate pixels on a single canvas, crossing tile boundaries. This is particularly useful for drawing lines and shapes using paths (e.g. the lineTo() and moveTo() functions).

To demonstrate this I’ve set up a simple demo that shows the HTML5 canvas adjacent to a simple OpenLayers map, where the canvas version (on the right-hand side) is manipulated to show a grayscale and an inverted version of the original map image (grayscale is triggered by loadend and the invert function by moveend). The source code is available on EDINA’s GitHub page and on the JS Fiddle page.
The solution hinges on using the OpenLayers.Layer loadend event to capture the tiles when OpenLayers has finished loading all the tiles for a layer, and also the OpenLayers.Map moveend event, which OpenLayers triggers when it has dealt with the user panning the map. The former is shown in the code snippet below:
// register loadend event for the layer so that once OL has loaded all tiles
// we can redraw them on the canvas. Triggered by zooming and page refresh."loadend", layer, function() {

    // create a canvas if not already created

    var mapCanvas = document.getElementById("mapcvs"); // get the canvas element
    var mapContainer = document.getElementById("OpenLayers.Map_2_OpenLayers_Container"); // WARNING: Brittle to changes in OL

    if (mapCanvas !== null) {
        var ctx = mapCanvas.getContext("2d");
        var layers = document.getElementsByClassName("olLayerDiv"); // WARNING: Brittle to changes in OL

        // loop through layers starting with base layer
        for (var i = 0; i < layers.length; i++) {

            var layertiles = layers[i].getElementsByClassName("olTileImage"); // WARNING: Brittle to changes in OL

            // loop through the tiles loaded for this layer
            for (var j = 0; j < layertiles.length; j++) {

                var tileImg = layertiles[j];
                // get position of tile relative to map container
                var offsetLeft = tileImg.offsetLeft;
                var offsetTop = tileImg.offsetTop;
                // get position of map container by extracting the value from its style, e.g. "left: 30px"
                var left = Number(,"p")));
                var top = Number(,"p")));
                // draw the tile on the canvas in the same relative position it appears in the OL map
                ctx.drawImage(tileImg, offsetLeft + left, offsetTop + top);
            }
        }

        greyscale(mapCanvas, 0, 0, mapCanvas.width, mapCanvas.height);
        // uncomment below to toggle the OL map off – can only be done after the layer has loaded
        // = "none";
    }
});


Note that some of the code here comes with a health warning. The DOM functions used to navigate the OpenLayers hierarchy are susceptible to changes in the OpenLayers API, so you need to use a local copy of OpenLayers (as is the case in the GitHub sample) rather than point to the OpenLayers URL (as is the case in the JS Fiddle version). Also note that all layers are drawn to the canvas, not just the one OpenLayers triggered the loadend event for; this is necessary to ensure the order of layers is maintained. Another issue to be aware of when using canvas drawing methods on maps is the likelihood of a cross-origin tainting error, caused by map images being loaded from a different domain to that of the HTML5 code. The error is not triggered simply by drawing the tiles to the canvas using the drawImage() function, but does appear when you attempt pixel manipulation using functions such as getImageData(). OpenLayers handles this using the Cross-Origin Resource Sharing (CORS) protocol, which by default is set to 'anonymous' as below. So long as the map server you are pointing at is configured to handle CORS requests from anonymous sources you will be fine.
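For reference, the core of a greyscale (or invert) pass is plain per-pixel arithmetic on the flat RGBA array returned by getImageData(). The sketch below shows only that inner loop – the function names and luminance weights are illustrative, not the demo's actual greyscale()/invert() implementations, which live in the linked source:

```javascript
// Greyscale a flat RGBA pixel array (as returned by ctx.getImageData(...).data)
// in place, using the usual luminance weights. Alpha is left untouched.
function greyscalePixels(data) {
  for (var i = 0; i < data.length; i += 4) {
    var lum = Math.round(0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2]);
    data[i] = data[i + 1] = data[i + 2] = lum;
  }
  return data;
}

// Invert the RGB channels of the same kind of array in place.
function invertPixels(data) {
  for (var i = 0; i < data.length; i += 4) {
    data[i] = 255 - data[i];
    data[i + 1] = 255 - data[i + 1];
    data[i + 2] = 255 - data[i + 2];
  }
  return data;
}
```

In the browser you would call these on imageData.data between ctx.getImageData() and ctx.putImageData() – which is exactly the step that trips the cross-origin check described above.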

layer.tileOptions = {crossOriginKeyword: 'anonymous'};

I would be interested to hear whether others are doing something similar or have other solutions for doing Canvas-y things with OpenLayers.

OpenLayers Canvas Capture

Posted by: dsfairas | December 20, 2013

Create a linear buffer tool for Digimap for Schools

I’m working on the development of a new linear buffer tool for the Digimap for Schools service. Linear buffering is a common feature in GIS applications.


200 metre buffer on a part of the River Clyde in Glasgow

In geometrical terms such an operation on polygons is also known as the Minkowski sum, or offsetting.

I was looking for a JavaScript library offering such functionality, as OpenLayers 2.13, currently used by Digimap for Schools, does not offer this as part of its codebase.

I came across two libraries offering this sort of functionality: JSTS and jsclipper, the former being a port of the famous Java JTS Topology Suite and the latter a port of the C++, C# and Delphi Clipper. I finally decided to go for jsclipper, because I was unable to build a custom cut-down version of the huge JSTS library.

The resulting tool made use of jsclipper to calculate the buffer polygon along with OpenLayers, used to draw the buffer polygon and the inner linear path.

A standalone example along with the code, making use of EDINA’s OpenStream service, can be found here:  (Full screen here)

One of the challenges encountered was jaggy rounded ends at low buffer widths, which is due to the way jsclipper handles floats. Fortunately jsclipper provides a method to scale up coordinates before passing them in for offsetting, then scale them down again before drawing. The Lighten and CleanPolygons functions also provided a way to remove unnecessary points and merge too-near points in the resulting buffer polygon.

All in all, jsclipper is a light, fast and robust library for polygon offsetting, and I would recommend having a look at it:

Posted by: murrayhking | December 2, 2013

Creating a transparent overlay map with mapbox-ios-sdk.

I am working on a historic map overlay, where the user can adjust the transparency of the historic map. The user can then see how land use has changed over time by using the slider.


I am going to use the MapBox fork of Route-Me. It looks like a good, bug-fixed version of Route-Me, and MapBox do seem to have some great products.

Unfortunately it doesn’t have an API to dynamically change the opacity of a tile source out of the box. So I added it.

It’s pretty easy to add. Each tileSource gets an RMMapTileLayerView container when added to the map, and within that you can manipulate the CALayer opacity to get the desired effect.

I added a fork to github for testing

An example of use – the code is in github. Do a 'git clone --recursive' to install the submodules.

An example of use, in the main view controller:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    RMOpenStreetMapSource * openStreetMap = [[RMOpenStreetMapSource alloc] init];
    RMGenericMapSource * weatherMap = [[RMGenericMapSource alloc] initWithHost:@"" tileCacheKey:@"cloudCover" minZoom:0 maxZoom:18];

    self.mapView.tileSource = openStreetMap;

    [self.mapView addTileSource:weatherMap];

    self.overlay = weatherMap;
    // rough bb W = -30.0 degrees; E = 50.0 degrees; S = +35.0 degrees; N = +70.0 degrees
    NSLog(@"zooming to europe");
    CLLocationCoordinate2D northEastEurope = CLLocationCoordinate2DMake(70, 50);
    CLLocationCoordinate2D southWestEurope = CLLocationCoordinate2DMake(35, -30);
    [self.mapView zoomWithLatitudeLongitudeBoundsSouthWest:southWestEurope northEast:northEastEurope animated:YES];

    [self.mapView setOpacity:0.5 forTileSource: self.overlay];
}
// hook up a slider to manipulate the opacity.

- (IBAction)changeOverlayOpacity:(UISlider *)sender {

    NSLog(@"Slider value changed %f", sender.value);
    [self.mapView setOpacity:sender.value forTileSource: self.overlay];
}

Posted by: benismobile | October 24, 2013

Fieldtrip GB – Mapserver 6.2 Mask Layers

By Fiona Hemsley-Flint (GIS Engineer)

Whilst developing the background mapping for the Fieldtrip GB App, it became clear that there were going to have to be some cartographic compromises between urban and rural areas at larger scales. Since we were restricted to using OS Open products, we had a choice between Streetview and Vector Map District (VMD) – Streetview works nicely in urban environments, but not so well in rural areas, where VMD works best (with the addition of some nice EDINA-crafted relief mapping). This contrast can be seen in the images below.


Streetview (L) and Vector Map District (R) maps in an urban area.


Streetview (L) and Vector Map District (R) maps in a rural area.

In an off-the-cuff comment, Ben set me a challenge – “It would be good if we could have the Streetview maps in urban areas, and VMD maps in rural areas “.

I laughed.

Since these products are continuous over the whole of the country, I didn’t see how we could have two different maps showing at the same time.

Then, because I like a challenge, I thought about it some more and found that the newer versions of MapServer (from 6.2) support something called “Mask Layers”  – where one layer is only displayed in places where it intersects another layer.

I realised if I could define something that constitutes an ‘Urban’ area, then I could create a mask layer of these, which could then be used to only display the Streetview mapping in those areas, and all other areas could display a different map – in this case Vector Map District (we used the beta product although are currently updating to the latest version).

I used the Strategi ‘Large Urban Areas’ classification as my means of defining an ‘Urban’ area – with a buffer to take into account suburbia and differences in scale between Strategi and Streetview products.

The resulting set of layers (simplified!) looks a bit like this:

masking example

Using Mask layers in MapServer 6.2 to display only certain parts of a raster image.

Although this doesn’t necessarily look very pretty at the borders between the two products, I feel that the overall result meets the challenge – in urban areas it is now possible to view street names and building details, and in rural areas contours and other topographic features are more visible. This hopefully provides the flexibility for users on different types of field trips to make successful use of the background mapping.

Here’s a snippet of the mapfile showing the implementation of the masking, in case you’re really keen…

#VMD_layer(s) defined before mask

#Streetview mask layer
LAYER
  NAME "Streetview_Mask"
  TYPE POLYGON
  #Data comes from a shapefile (polygons of urban areas only):
  DATA "streetview_mask"
END

LAYER
  NAME "Streetview"
  #Data is a series of tiff files, location stored in a tileindex
  TYPE RASTER
  TILEINDEX "streetview.shp"
  TILEITEM "Location"
  #*****The important bit - setting the mask for the layer*****
  MASK "Streetview_Mask"
END

Posted by: benismobile | July 26, 2013

What Women Intent

I noticed a recent BBC news report stating that more women than men in the UK now own a tablet. It seems that the days when an iPad was most frequently coveted by middle-aged men such as me have long gone.

One question this raised in my mind though is why tablets are particularly popular with women in a way that laptops and netbooks were not. Does this tell us anything about the mobile revolution? Does it tell us anything about men and women? Probably not! Perhaps it is just natural that something as convenient as a tablet computer is popular with both men and women.

However I’ll ignore that perfectly reasonable explanation and speculate on the gender angle.

So, certainly in my household the opportunity to sit down at a laptop for, say, thirty minutes uninterrupted is a luxury mostly enjoyed by, well, er… me. My partner has commented that my ability to filter out bickering kids, ignore a saucepan boiling over, forget I started the kids’ bath running and completely not hear the important information she is telling me about the school run tomorrow is nothing short of a supernatural gift. An ability to remain sitting down at a computer when all that is going on is certainly not something I’ve observed in her or other women I know.

So my theory is that tablets are popular with women because they are designed to cope with interruptions (the tablets I mean, not the women). Or at least, the smartphones from which the tablets inherited their OS were designed to be interrupted – by phone calls specifically.

People think of operating systems such as Android as a set of Apps, but really they are a set of interruptible views called Activities (View-Controllers in iOS). The only difference is that the initial Activity in an App has an icon on the Home screen.

Developers are required to implement life-cycle methods on each Activity (AppDelegate in iOS) to ensure that if the OS interrupts the action at any point, the user can pick up again exactly where they left off. This is so critical that in Android the transition from one activity to another is encapsulated in a class of its own called an “Intent”. The name reminds the developer that they might be intending a change in application state but the OS can butt in at any time – so they must make sure it stores everything from the previous Activity first.

This explanation is helpful to me in understanding the success of tablets. When the iPad first came out I have to admit I didn’t think it would be anywhere near as popular as the ubiquitous iPhone. At the time, I thought the meteoric success of smartphones was down to their portability and geo-location capabilities. I loved the shiny beauty of the iPad design but couldn’t help thinking it was a bunch of iPhones stuck together. I wondered why I’d want one when I could get a more powerful netbook with a proper keyboard built in. But netbooks are less portable, they take more time to boot up and you have to save what you are doing to ensure you don’t lose your data. The batteries do not last as long, and using a keyboard and mouse requires you to sit down. Not good for the interruptible computer user.

This all could be seen as a reason to avoid web apps or hybrid apps that use a WebView embedded into the app. As the Web View or Web Browser uses the stateless HTTP protocol, there are no activity life-cycle methods for developers to honour and maintaining the state between activities is much trickier to get right. So web-based apps could break the interruptible App and annoy users. Especially those who are being constantly interrupted.

Posted by: benismobile | July 23, 2013

Hacking Mapcache with ImageMagick

To generate tiles for the map stack used by FieldTrip GB we are using 4 Mapserver instances deployed to an OpenStack private cloud. This means we can get all our tiles generated relatively quickly using inexpensive commodity hardware. A problem we have is that the resulting PNG tile images look beautiful but are way too big for users to download to their mobile device in any quantity. So we looked at using Mapserver’s built-in JPEG format, but our cartographers were not happy with the results. One of my colleagues came up with the bright idea of using ImageMagick to compress the PNGs to JPEG instead, and the result (using quality 75) was much better. We can use the ImageMagick command line with the following script:

for var in "$@"
do
    echo "converting $var to jpg";
    # parameter expansion is safer than tr here: tr maps characters
    # positionally, which would mangle names containing 'p' or 'n'
    convert "$var" -quality 75 "${var%.png}.jpg";
    # rm "$var"
done

and feed this script file names via xargs to traverse an existing cache of generated PNG tiles:

find . -name '*.png' -print0 |  xargs -0 -P4 ../

So the cartographers finally relented and we now have much smaller files to download to devices. The only problem is that the script to run the ImageMagick convert takes forever to run (well, all right – 2 days). It’s not because ImageMagick is slow at compression – it’s super fast. It’s just that the I/O overhead involved is huge, as we are iterating over 16 million inodes. So our plan of scaling up commodity hardware (a 4-CPU virtual machine) was failing. A solution is to do the JPEG conversion at the same time as the tile caching – that way you are only dealing with one tile at the point you write it to the cache, so there is much less overhead.

So it’s time to hack some of the Mapcache code and get ImageMagick to apply the above compression just after it writes the PNG to the cache.

This just involves editing a single source file found in the lib directory of the Mapcache source distribution (mapcache-master/lib/cache_disk.c). I’m assuming below that you have already downloaded and compiled Mapcache, and have also downloaded the ImageMagick packages, including the devel package.

First of all include the ImageMagick header file

#include <wand/magick_wand.h>

Then locate the method  _mapcache_cache_disk_set. This is the method where Mapcache actually writes the image tile to disk.

First we add some variables and an Exception macro at the top of the method.

MagickWand *m_wand = NULL;
MagickBooleanType status;

#define ThrowWandException(wand) \
{ \
  char *description; \
  ExceptionType severity; \
  description=MagickGetException(wand,&severity); \
  (void) fprintf(stderr,"%s %s %lu %s\n",GetMagickModule(),description); \
  description=(char *) MagickRelinquishMemory(description); \
  exit(-1); \
}
And then, right at the end of the method, we add the MagickWand equivalent of the convert command line shown above. The compression calls are MagickSetImageCompression and MagickSetCompressionQuality.

if(ret != APR_SUCCESS) {
  ctx->set_error(ctx, 500, "failed to close file %s:%s",filename, apr_strerror(ret,errmsg,120));
  return; /* we could not create the file */
}

/* *******ImageMagick code here ******** */

ctx->log(ctx, MAPCACHE_INFO, "filename for tile: %s", filename);
MagickWandGenesis();
m_wand = NewMagickWand();
/* read back the PNG tile that has just been written to the cache */
status = MagickReadImage(m_wand, filename);
if (status == MagickFalse)
  ThrowWandException(m_wand);
// MagickSetImageFormat(m_wand, "JPG") ;
char newfilename[200];
strcpy(newfilename, filename);
int blen = strlen(newfilename);
if(blen > 3) {
  /* swap the .png extension for .jpg */
  newfilename[blen-3] = 'j';
  newfilename[blen-2] = 'p';
  newfilename[blen-1] = 'g';
  MagickSetImageCompression(m_wand, JPEGCompression);
  MagickSetCompressionQuality(m_wand, 75);
  ctx->log(ctx, MAPCACHE_INFO, "filename for new image: %s", newfilename);
  MagickWriteImage(m_wand, newfilename);
}
/* Clean up */
if(m_wand) m_wand = DestroyMagickWand(m_wand);

And that’s it. Now it's just the simple matter of working out how to compile and link it.

After a lot of hmm’ing and ah-ha’ing (and reinstalling ImageMagick to a more recent version using excellent advice from here), it meant making the following changes to the in the mapcache src root dir.


Then run make as usual to compile Mapcache and you’re done! The listing below shows the output and the difference in compression:

ls -l MyCache/00/000/000/000/000/000/
total 176
-rw-r--r--. 1 root root  4794 Jul 23 13:56 000.jpg
-rw-r--r--. 1 root root 21740 Jul 23 13:56 000.png
-rw-r--r--. 1 root root  2396 Jul 23 13:56 001.jpg
-rw-r--r--. 1 root root  9134 Jul 23 13:56 001.png
-rw-r--r--. 1 root root  8822 Jul 23 13:56 002.jpg
-rw-r--r--. 1 root root 46637 Jul 23 13:56 002.png
-rw-r--r--. 1 root root  8284 Jul 23 13:56 003.jpg
-rw-r--r--. 1 root root 45852 Jul 23 13:56 003.png
-rw-r--r--. 1 root root   755 Jul 23 13:55 004.jpg
-rw-r--r--. 1 root root  2652 Jul 23 13:55 004.png

original PNG tile

converted to JPEG at 75% compression

Posted by: murrayhking | June 7, 2013

Mbtiles and Openlayers


I was testing the feasibility of adding an overlay to an OpenLayers map displayed on a mobile/tablet device.

The overlay is going to be in the MBTiles format, made popular by MapBox.

The mbtiles db will be accessed locally on the device; this is useful when bandwidth is poor or for non-3G tablets.

The mbtiles format is described here.

It is basically a SQLite database that holds a collection of x,y,z-indexed tiles.

WebKit-based browsers, including mobile versions, support Web SQL, although it is not actually part of the HTML5 spec.

The main issue with using mbtiles locally is actually getting the database into the right location.

Another is the speed at which the device can render the images, given the overhead of extracting blob images into the resulting base64-encoded images.

There are a couple of ways this can be done however.

Getting Mbtiles on Device/Browser

With Phonegap

You can use the FileTransfer object in PhoneGap to copy the database locally from a server. It will be downloaded to the Documents folder on the iPhone by default.

Example code to download an mbtiles db:

var fail = function (error) {
    console.log("download error: " + error.code);
};

var doOnce = window.localStorage.getItem("doOnce");

window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, function(fileSystem) {
    fileSystem.root.getFile('testDB2.db', {create: true, exclusive: false}, function(fileEntry) {
        var localPath = fileEntry.fullPath;
        if (device.platform === "Android" && localPath.indexOf("file://") === 0) {
            localPath = localPath.substring(7);
        }
        console.log("LOCAL PATH  " + localPath);
        var ft = new FileTransfer();
        var sourceUrl = ""; // placeholder – set to the remote mbtiles URL, localPath, function(entry) {
            console.log("successful download");
        }, fail);
    }, fail);
}, fail);

Use the PhoneGap Web SQL plugin and open the database like this:


The benefit of using a PhoneGap SQLite plugin is that it allows flexibility over where you download the mbtiles db to, and removes the device-dependent limits on database size.

Also, if a browser drops native Web SQL support then it doesn’t matter.


Rather than downloading a remote database, you could copy over a local database at startup.

The simple way to add a prepopulated SQLite DB in PhoneGap from this blog

If you want to keep it an entirely non-native, web-app-based solution, or target a desktop browser (WebKit-based – Chrome, Safari), you might be able to use a tool like.

There are more suggestions on Stack Overflow here, but I have not tried them.

Syncing by creating an empty local mbtiles database and then populating it with inserts of data from the server is going to adversely affect performance. I have not tried this, so I don’t know how well it would work.

OpenLayers integration

The first thing is to subclass the OpenLayers TMS class.

/**
 * Map with local storage caching.
 * @params options:
 *     serviceVersion - TMS service version
 *     layerName      - TMS layer name
 *     type           - layer type
 *     isBaseLayer    - is this the base layer?
 *     name           - map name
 *     url            - TMS URL
 *     opacity        - overlay transparency
 */
var MapWithLocalStorage = OpenLayers.Class(OpenLayers.Layer.TMS, {
    initialize: function(options) {

        this.serviceVersion = options.serviceVersion;
        this.layername = options.layerName;
        this.type = options.type;

        this.async = true;

        this.isBaseLayer = options.isBaseLayer;

        this.opacity = options.opacity;

        OpenLayers.Layer.TMS.prototype.initialize.apply(this, [, options.url, options]);
    },
    getURLasync: function(bounds, callback, scope) {
        var urlData = this.getUrlWithXYZ(bounds);
        webdb.getCachedTilePath(callback, scope, urlData.x, urlData.y, urlData.z, urlData.url);
    },
    getUrlWithXYZ: function(bounds) {
        bounds = this.adjustBounds(bounds);
        var res =;
        var x = Math.round((bounds.left - this.tileOrigin.lon) / (res * this.tileSize.w));
        var y = Math.round((bounds.bottom - / (res * this.tileSize.h));
        var z = this.serverResolutions != null ?
            OpenLayers.Util.indexOf(this.serverResolutions, res) :
   + this.zoomOffset;

        // invert y for openstreetmap rather than google style TMS
        var ymax = 1 << z;
        y = ymax - y - 1;
        var path = this.serviceVersion + "/" + this.layername + "/" + z + "/" + x + "/" + y + "." + this.type;

        var url = this.url;
        if (OpenLayers.Util.isArray(url)) {
            url = this.selectUrl(path, url);
        }
        return { url: url + path, x: x, y: y, z: z };
    },
    getURL: function(bounds) {
        return OpenLayers.Layer.XYZ.prototype.getURL.apply(this, [bounds]);
    }
});

We set

this.async = true;

because the layer will have to receive images from the local SQLite database asynchronously – Web SQL has an asynchronous, callback-style API.

       var ymax = 1 << z;

       var y = ymax - y - 1;

All this does is invert the y-axis tile index to handle the OpenStreetMap scheme; it is not required for Google-style TMS.
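As a standalone illustration (the function name is mine, not OpenLayers'): at zoom level z there are 2^z tile rows, so OSM/XYZ row y corresponds to TMS row 2^z − 1 − y, and the flip is its own inverse.

```javascript
// Convert a tile row index between OSM/XYZ and TMS numbering at zoom z.
function flipY(y, z) {
  var ymax = 1 << z; // number of tile rows at this zoom level
  return ymax - y - 1;
}
```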

There is a good site that describes the various types of TMS around.

The Database Setup

"use strict";
var webdb = {};

function getWebDatabase() {
    if (typeof(openDatabase) === 'undefined') {
        webdb = undefined; // no Web SQL support in this browser
    }
    return webdb;
} = function() {
    var dbSize = 50 * 1024 * 1024; // 50MB
    webdb.db = openDatabase("testDB2", "1.0", "Cached Tiles", dbSize);
};

webdb.onError = function(tx, e) {
    console.warn("There has been an error: " + e.message);
};

webdb.onSuccess = function(tx, r) {
    console.log("Successful Database tx");
};

webdb.createTablesIfRequired = function() {
    console.log("Creating DataBase Tables");
    var db = webdb.db;
    db.transaction(function(tx) {
        tx.executeSql("CREATE TABLE IF NOT EXISTS " +
                      "tiles(zoom_level INTEGER, tile_column INTEGER, tile_row INTEGER, tile_data TEXT, mapName TEXT)", [], webdb.onSuccess,
                      webdb.onError);
        tx.executeSql("CREATE UNIQUE INDEX IF NOT EXISTS" +
                      " tile_index on tiles(zoom_level, tile_column, tile_row, mapName)", [], webdb.onSuccess,
                      webdb.onError);
    });
};

function hexToBase64(str) {
    // trim the trailing space too, otherwise the split yields an empty
    // element that decodes to a stray NUL byte
    var hexString = str.replace(/([\da-fA-F]{2}) ?/g, "0x$1 ").replace(/ +$/, "");
    var hexArray = hexString.split(" ");
    var len = hexArray.length;
    var binary = '';
    for (var i = 0; i < len; i++) {
        binary += String.fromCharCode(hexArray[i]);
    }
    // getting a stack error on large images
    // var binary = String.fromCharCode.apply(null, hexArray);
    return window.btoa(binary);
}

webdb.getCachedTilePath = function(callback, scope, x, y, z, url) {
    var db = webdb.db;
    var resultsCallback = function(tx, rs) {
        console.log('resultsCallback *********************');
        console.log('rs.rows.length ' + rs.rows.length);

        if (callback) {
            if (rs.rows.length > 0) {
                var rowOutput = rs.rows.item(0);
                var tile_data = rowOutput['tile_data'];
                // strip off the hex prefix and hand the tile back as a base64 data URI
                tile_data = tile_data.substring(2);
      , "data:image/png;base64," + hexToBase64(tile_data));
            } else {
      , url); // not cached – fall back to the remote URL
            }
        }
    };
    db.transaction(function(tx) {
        tx.executeSql("SELECT quote(tile_data) as tile_data FROM tiles where zoom_level=? AND tile_column=? AND tile_row=?", [z, x, y], resultsCallback,
                      webdb.onError);
    });
};

When you have larger blobs in the database you can’t use the overloaded array version of String.fromCharCode, as I was getting stack memory issues on the device (iPhone).

So you have to loop through and build the string manually.

You have to use the quote function on the tile_data blob to turn it into a hex string:

SELECT quote(tile_data) as tile_data

Then trim the hex prefix X' off the hex string before base64-ing.
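Putting those two steps together in a standalone sketch (the function name is mine, and Node's Buffer stands in here for the browser-side manual hex loop and btoa):

```javascript
// SQLite's quote() returns a blob as X'48656C...'; strip the X'...' wrapper,
// then convert the hex payload to base64 for use in a data: URI.
function mbtilesHexToBase64(quoted) {
  var hex = quoted.replace(/^X'/, "").replace(/'$/, "");
  return Buffer.from(hex, "hex").toString("base64");
}
```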

Testing: if you just want to test the JavaScript/HTML5 with mbtiles, you can copy your mbtiles database to the correct folder.

/Users/murrayking/Library/Application Support/iPhone Simulator/6.1/Applications/667F70EF-D002-425D-86C9-5027C965C518/Library/WebKit/LocalStorage/file__0/0000000000000001.db on a mac

or for Chrome on a Mac as well:

Users/murrayking/Library/Application Support/Google/Chrome/Default/databases/http_localhost_8080/13


This approach is a bit convoluted, especially the conversion of the blob to base64, and performance is a bit poor on older devices. But on newer devices it is acceptable, and as devices become more powerful it will become less of an issue, as with all HTML5/JavaScript things.

I have not tried it yet on Android, but it should work – it worked in the Chrome browser on a Linux box.

It does allow you to use the rich OpenLayers framework cross-platform without having to invest in native versions.

Also you can debug and test using a desktop browser which is fast before doing proper testing on the actual device.

Example screenshot working on an iPhone 3G using PhoneGap and MBTiles.

Development version based on our Fieldtrip GB app, available on Android and iPhone.

The overlay is a historic map in mbtiles format from the National Library of Scotland.


Debugging on Chrome non-native

working on chrome

Posted by: benismobile | March 25, 2013

Fieldtrip GB App

First of all – apologies for this blog going quiet for so long. Due to resource issues it’s been hard to keep up with documenting our activities. All the same, we have been quietly busy continuing work on geo mobile activity, and I’m pleased to announce that we have now released our Fieldtrip GB app in the Google Play Store.


We expect the iOS version to go through the Apple App Store  in a few weeks.

Over the next few weeks I’ll be posting to the blog with details of how we implemented this app and why we chose certain technologies and solutions.

Hopefully this will prove a useful resource to the community out there trying to do similar things.

A brief summary: the app uses PhoneGap and OpenLayers, so it is largely built with HTML5 web technologies wrapped up in a native framework. The unique mapping uses OS Open data, including Strategi, Vector Map District and Land-Form PANORAMA, mashed together with path and cycleway data from OpenStreetMap and Natural England.


Posted by: benismobile | October 28, 2011

Fourth International Augmented Reality Standards Meeting

I’m just back from the Fourth International AR Standards Meeting that took place in Basel, Switzerland and trying hard to collect my thoughts after two days of intense and stimulating discussion. Apart from anything else, it was a great opportunity to finally meet some people I’ve known from email and discussion boards  on “the left hand side of the reality-virtuality continuum“.

Christine Perry, the driving spirit, inspiration and editor at large of the AR Standards Group, has done a fantastic job bringing so many stakeholders together: standards organisations such as the OGC, Khronos, Web3D Consortium, W3C, OMA and WHATWG; browser and SDK vendors such as Wikitude, Layar, Opera, ARGON and Qualcomm AR; hardware manufacturers (Canon, SonyEricsson, NVIDIA); several solution providers such as MOB Labs and mCrumbs – oh, and a light sprinkling of academics (Georgia Tech, Fraunhofer IGD).

I knew I’d be impressed and slightly awe struck by these highly accomplished people, but what did  surprise me was the lack of  any serious turf fighting. Instead, there was a real sense of pioneering spirit in the room.  Of course everyone had their own story to tell (which just happened to be a story that fitted nicely into their organizational interests), but it really was more about people trying to make some sense of a confusing landscape of technologies and thinking in good faith about what we can do to make it easier.  In particular, it seemed clear that the Standards Organizations felt they could separate the problem space fairly cleanly between their specialist area of interest (geospatial, 3d, hardware/firmware, AR content, web etc). The only area where these groups had significant overlap was on sensor APIs, and some actions were taken to link in with the various Working Groups working on sensors to reduce redundancies.

It seemed to me that there was some agreement about how things will look for AR content providers and developers (eventually). Most people appeared to favour the idea of a declarative content mark-up language working in combination with a scripting language (JavaScript), similar to the geolocation API model. Some were keen on the idea of this all being embedded into a standard web browser’s Document Object Model. Indeed, Rob Manson from MOB Labs has already achieved a prototype AR experience using various existing (pseudo) standards for web sensor and processing APIs. The two existing markup content proposals, ARML and KARML, are both based on the OGC’s KML, but even here the idea would be to eventually integrate a KML content and styling model into a generic HTML model, perhaps following the HTML/CSS paradigm.

This shared ambition to converge AR standards with generic web browser standards is a recognition that the convergence of hardware, sensors, 3D, computer vision and geolocation is a bigger phenomenon than AR browsers or augmented reality. AR is just the first manifestation of this convergence towards “anywhere, anytime” access to the virtual world, as discussed by Rob Manson on his blog.

To a certain extent, the work we have been discussing here on the geo mobile blog – using HTML5 to create web-based mapping applications – is a precursor to a much broader sensor-enabled web: one that uses devices such as the camera, GPS and compass not just to enable 2D mapping content but all kinds of applications that can exploit the sudden happenstance of millions of people carrying around dozens of sensors, cameras and powerful compute/graphics processors in their pockets.

Coming back from this meeting, I’m feeling pretty upbeat about the prospects for AR and the emerging sensor-augmented web. Let’s hope we are able to keep the momentum going for the next meeting in Austin.

Posted by: benismobile | July 7, 2011

App Ecosystem

Earlier this week I attended the Open Source Junction: Context Aware Mobile Technologies event organized by OSS Watch. Due to a prior engagement I missed the second day and had to leave early to catch a train. It was a pity, as the programme was excellent and there were some terrific networking opportunities, although it sounds like I was fortunate to miss the geocaching activity, which the Twitter feed suggested was very wet and involved an encounter with some bovine aggression.

During the first two sessions I did attend, quite a few people, including myself, talked about the mobile web approach to app development. I made the comment that the whole mobile web vs. native debate was fascinating and current, and that mobile web was losing. But everyone seemed to agree that apps are a pretty bad deal for developers and that making any money from them is about as likely as winning the lottery. This got me thinking on the train to Edinburgh about the “app ecosystem” and what that actually means. A very brief Google search did not enlighten me much, so I sketched my own app food chain, shown below.

It’s no surprise that the user is right at the bottom, as all the energy that flows through this ecosystem comes from the guy with the electronic wallet.

But I think it’s going to be a bit of a surprise for app developers (content providers) to see themselves at the top of this food chain (along with Apple and Google), as it doesn’t feel like you are king of the jungle when the app retail cut is so high and the prices paid by users are so low.

It will be interesting to see if Google, who are not happy with the number of paid apps in the Google Marketplace, cut the developer a better deal, or if the Microsoft apps built on top of Nokia try to gain market penetration by attracting more high quality content. My guess is not yet. The problem for developers is that the app retailers can grow at the moment just by the sheer number of new people buying smartphones. This is keeping prices artificially low and means app retailers are not competing all that much for content. But smartphone ownership is in fact growing so fast that pretty soon (approx. 2 years?) everyone who wants or can afford a smartphone is going to have one. How do app retailers grow then? They are going to have to get users to part with more money for apps and content, either by charging more or by attracting advertising revenue. Even though there are a lot of app developers out there, apps users will pay for are scarce, and retailers are going to have to either pay more to attract the best developers and content to their platform, or make life easier for content providers by adopting open standards. So maybe the mobile web might emerge triumphant after all.
