Working with maps can be amazing, but displaying data on a map is one of the trickiest parts of developing a platform with geospatial data. A first step toward displaying information on a map is the use of tiles. In this post we will create a small tile server that serves protobuf tiles using Node.js, Hapi, Redis and Postgres.

In this tutorial we will be using Node, Hapi, Redis and Postgres with Docker, creating a simple configuration with docker-compose.

This tutorial was heavily inspired by PostGIS to Protobuf (with vector tiles), so please check it out. Keep in mind this tutorial shows a really basic tile server implementation; if you want a far more developed and tested implementation you can try TileStache, and you may be able to find a lot of other great utilities here. That said, I hope you enjoy creating your own mini tile server.

Docker and Docker-Compose Setup

Docker will let us create containers and run our project anywhere (as long as the machine supports Docker), leaving aside all the issues of package versions and the cumbersome setup of a database for a simple tutorial like this one. The tutorial uses docker-compose 1.11.2 and Docker 17.03.0-ce; to check your versions run the following commands:

(docker-compose) ➜ dc-005-protobuf-tile-server git:(master) docker-compose --version
docker-compose version 1.11.2, build dfed245
(docker-compose) ➜ dc-005-protobuf-tile-server git:(master) docker --version
Docker version 17.03.0-ce, build 60ccb2265b

Let's start by creating two Dockerfiles in the root directory. The first will be used just to create our package.json and npm-shrinkwrap.json; the second will be in charge of starting the tile server.

Dockerfile-setup
FROM ubuntu:16.04
RUN apt-get update -y \
    && apt-get install -y --no-install-recommends \
    software-properties-common \
    python \
    curl \
    python-pip \
    python-software-properties \
    libstdc++-5-dev \
    zlib1g-dev \
    clang
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash
RUN apt-get install -y --no-install-recommends nodejs
RUN pip install --upgrade pip
RUN pip install mapnik
ENV HOME=/usr/src
WORKDIR $HOME/app
CMD /bin/bash
Dockerfile
FROM ubuntu:16.04
RUN apt-get update -y \
    && apt-get install -y --no-install-recommends \
    software-properties-common \
    python \
    curl \
    python-pip \
    python-software-properties \
    libstdc++-5-dev \
    zlib1g-dev \
    clang
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash
RUN apt-get install -y --no-install-recommends nodejs
RUN pip install --upgrade pip
RUN pip install mapnik
ENV HOME=/usr/src
COPY package.json npm-shrinkwrap.json $HOME/app/
WORKDIR $HOME/app
RUN npm install
COPY . $HOME/app
EXPOSE 8081
CMD npm run start

The Dockerfiles are mostly self-explanatory: we are creating images with the dependencies our tile server needs to run. There are a few gotchas within these dependencies, though:

1.- Mapnik: as the official website states, Mapnik combines pixel-perfect image output with lightning-fast cartographic algorithms, and exposes interfaces in C++, Python, and Node. We will use this library to create the tiles in our server.
2.- The missing package.json and npm-shrinkwrap.json: these files are not present yet because we want to use Docker and docker-compose to create them.

Now that we have the Dockerfiles, we need additional containers to make development easy. The first service we will declare in our docker-compose.yml is a node container, which we will use to create the package.json and npm-shrinkwrap.json files. Let's create a docker-compose.yml file in our root directory:

docker-compose.yml
version: "3"
services:
  node:
    build:
      context: .
      dockerfile: Dockerfile-setup
    volumes:
      - .:/usr/src/app

With this we can run a container that is able to run Node. Let's build our node container and run the desired commands to create our files:

docker-compose build node
docker-compose run --rm node npm init
# good, good-console, good-file and good-squeeze are log utilities;
# mapnik provides the mapnik bindings for node; pg-promise connects to postgres
docker-compose run --rm node npm install --save good \
  good-console \
  good-file \
  good-squeeze \
  hapi \
  mapnik \
  pg-promise \
  redis \
  sphericalmercator \
  nodemon \
  topojson
docker-compose run --rm node npm shrinkwrap

Now if we look in our root directory we will find the two newly generated files, package.json and npm-shrinkwrap.json. The files may have a different ownership, causing write problems; to fix this we can quickly change the ownership:

sudo chown $(whoami) package.json npm-shrinkwrap.json

Let's create a server.js file to test our server container:

server.js
console.log('testing our tile server');

Let's add a service for our server in docker-compose.yml:

docker-compose.yml
version: "3"
services:
  node:
    build:
      context: .
      dockerfile: Dockerfile-setup
    volumes:
      - .:/usr/src/app
    command: /bin/bash
  tile-server:
    build: .
    command: npm run start-development
    volumes:
      - .:/usr/src/app
    ports:
      - "8081:8081"

In the command option we set npm run start-development; our package.json currently lacks this script, so let's add it to the "scripts" field:

package.json
"scripts": {
  "start-development": "NODE_ENV=development nodemon server.js"
}

We are using nodemon to automatically reload our node server, so we avoid pressing ctrl + c and running docker-compose up tile-server for each change in our server files. When changing docker-compose.yml itself, we must still restart the docker-compose process with ctrl + c followed by docker-compose up tile-server.

We can finally test whether our setup works by running docker-compose up tile-server:

docker-compose up tile-server
# Starting dc005protobuftileserver_tile-server_1
# Attaching to dc005protobuftileserver_tile-server_1
# tile-server_1 |
# tile-server_1 | > dc-005-protobuf-tile-server@0.0.1 start-development /usr/src/app
# tile-server_1 | > NODE_ENV=development node server.js
# tile-server_1 |
# tile-server_1 | testing our tile server
# dc005protobuftileserver_tile-server_1 exited with code 0

Our container works, but our server.js is just a console.log. We need a server that receives HTTP requests; let's create one with the help of hapijs.

The server

We will be using hapi to create a simple server, which is pretty straightforward:

server.js
const Hapi = require('hapi'); // Require hapi
const server = new Hapi.Server();
const plugins = [];

server.connection({
  routes: { cors: true },
  host: '0.0.0.0', // If you use localhost Docker won't expose the app
  port: 8081 // Make sure this port is not being used by any other app
});

// Register plugins
server.register(plugins, err => {
  if (err) {
    console.log(['error'], 'There was an error loading plugins...');
    console.log(['error'], 'Terminating...');
    console.log(['error'], err);
    throw err;
  }
  // Default health check URL
  server.route({
    method: 'GET',
    path: '/',
    handler: (request, reply) => reply({ status: 'ok' })
  });
  // Start server
  server.start(err => {
    if (err) {
      console.log(['error'], 'There was an error at server start...');
      console.log(['error'], 'Terminating...');
      console.log(['error'], err);
      throw err;
    }
    console.log(['debug'], `Server running at: ${server.info.uri}`);
  });
});

Now we can run our server with docker-compose up tile-server:

docker-compose up tile-server
# Starting dc005protobuftileserver_tile-server_1
# Attaching to dc005protobuftileserver_tile-server_1
# tile-server_1 |
# tile-server_1 | > dc-005-protobuf-tile-server@0.0.1 start-development /usr/src/app
# tile-server_1 | > NODE_ENV=development node server.js
# tile-server_1 |
# tile-server_1 | [ 'debug' ] 'Server running at: http://0.0.0.0:8081'

We can test our working server by pointing our browser to http://0.0.0.0:8081:

{
  "status": "ok"
}

Amazing, our server works. Next we need to connect to a database; in this tutorial we will be using Postgres. Let's create a plugin to help us connect to it.

Postgres plugin

To connect from our server to a database we need to add another service to our docker-compose.yml; we will use a postgis image:

docker-compose.yml
version: "3"
services:
  db:
    image: mdillon/postgis
    ports:
      - "9432:5432"
  node:
    build:
      context: .
      dockerfile: Dockerfile-setup
    volumes:
      - .:/usr/src/app
    command: /bin/bash
  tile-server:
    build: .
    env_file: .env
    command: npm run start-development
    links:
      - db
    depends_on:
      - db
    volumes:
      - .:/usr/src/app
    ports:
      - "8081:8081"

The tile-server service has some new options. The links option links to containers in other services; we need it so we can communicate with our database, and it also expresses a dependency between services, which determines startup order. The depends_on option starts services in dependency order; in our case db will be started before tile-server. If we run docker-compose up tile-server it will also create and start db.

We now have our server and database, but as of now the database is empty. Let's insert some example data; in this tutorial we will use the states of Mexico. The states.sql file creates a schema and a table containing all the information we need; load it with psql:

psql -U postgres -h localhost -p 9432 -a -f states.sql
# Connect to database to see if data was inserted
psql -U postgres -h localhost -p 9432
# postgres=# select nom_ent from geoms.estados limit 5;
# nom_ent
# ----------------------
# Aguascalientes
# Baja California
# Baja California Sur
# Campeche
# Coahuila de Zaragoza
# (5 rows)

Our database has data and is running correctly; now let's connect it to our server by creating a hapi plugin. If you haven't used hapi plugins, please check the docs for more information on how a plugin works.

Create a plugins folder in the root directory and a postgres folder inside it; the postgres plugin will consist of two files, a package.json and an index.js. Let's create those files; you should have a structure like the one below:

dc-005-protobuf-tile-server/
▸ node_modules/
▾ plugins/postgres/
index.js
package.json
.env
.gitignore
docker-compose.yml
Dockerfile
Dockerfile-setup
npm-shrinkwrap.json
package.json
server.js

The index.js file will contain the logic to connect to our database and create a connection which will be available for later use in our handlers.

plugins/postgres/index.js
const pgp = require('pg-promise')({});

exports.register = function(server, options, next) {
  const { DB_HOST, DB_USER, DB_PORT, DB_PASS, DB_DATA } = options;
  server.app.conn = pgp({
    host: DB_HOST,
    port: DB_PORT,
    database: DB_DATA,
    user: DB_USER,
    password: DB_PASS
  });
  console.log(['debug'], 'Postgres plugin loaded...');
  next();
};

exports.register.attributes = {
  pkg: require('./package.json')
};
plugins/postgres/package.json
{
  "name": "postgres-db",
  "version": "1.0.0"
}

The plugins array in our server is empty at the moment; we need to add our newly created plugin to the array so hapi will load it.

const plugins = [
  {
    register: require('./plugins/postgres'),
    options: {
      DB_HOST: 'db', // Hostname resolved due to links option in docker-compose
      DB_USER: 'postgres',
      DB_PORT: '5432',
      DB_PASS: '',
      DB_DATA: 'postgres'
    }
  }
];

As soon as we change the plugins array we should see a reload in our server via nodemon:

tile-server_1 | [nodemon] restarting due to changes...
tile-server_1 | [nodemon] starting `node server.js`
tile-server_1 | [ 'debug' ] 'Postgres plugin loaded...'
tile-server_1 | [ 'debug' ] 'Server running at: http://0.0.0.0:8081'

Let's see if everything works; add a little route to test the connection to our database:

server.js
// ...
server.route({
  method: 'GET',
  path: '/postgres-test',
  handler: (request, reply) => {
    const db = request.server.app.conn;
    const query = 'SELECT nom_ent FROM geoms.estados';
    return db.any(query).then(data => reply(data));
  }
});
// ...

Go to http://0.0.0.0:8081/postgres-test and check the result, you should see something like this:

[
  {
    "nom_ent": "Aguascalientes"
  },
  {
    "nom_ent": "Baja California"
  },
  { ... }
]

Layer plugin

To display a tiled map in a browser usually requires the support of a web mapping framework, which handles tile retrieval, display, caching, and user navigation. Popular frameworks for tiled maps include the Google Maps API, OpenLayers and Leaflet; in this tutorial we will be using Leaflet.

Most tiled web maps follow certain Google Maps conventions:

  • Tiles are 256x256 pixels
  • At the outermost zoom level, 0, the entire world can be rendered in a single map tile.
  • Each zoom level doubles in both dimensions, so a single tile is replaced by 4 tiles when zooming in. This means that about 22 zoom levels are sufficient for most practical purposes.
  • The Web Mercator projection is used, with latitude limits of around 85 degrees.

The de facto OpenStreetMap standard, known as Slippy Map Tilenames or XYZ, follows these conventions and adds more:

  • An X and Y numbering scheme
  • PNG images for tiles
  • Images are served through a REST API, with a URL like http://.../Z/X/Y.png, where Z is the zoom level, and X and Y identify the tile.
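The conventions above boil down to a little arithmetic. As a sketch (standard slippy-map math, not part of this tutorial's server code), here is how a latitude/longitude pair and a zoom level map to the X/Y indices used in a .../Z/X/Y.png URL:

```javascript
// Standard slippy-map math (a sketch, not part of the tutorial's server code):
// convert a latitude/longitude pair and a zoom level into the X/Y indices of
// the tile that contains it, as used in URLs of the form .../Z/X/Y.png.
const latLonToTile = (lat, lon, z) => {
  const n = Math.pow(2, z); // tiles per axis at this zoom: 2^z
  const x = Math.floor(((lon + 180) / 360) * n);
  const latRad = (lat * Math.PI) / 180;
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n
  );
  return { x, y, z };
};

// At zoom 0 the whole world fits in tile 0/0; each zoom level quadruples the tile count.
console.log(latLonToTile(0, 0, 0)); // { x: 0, y: 0, z: 0 }
```

Note how the Web Mercator latitude limit of roughly 85 degrees falls out of the math: beyond it the projection stretches toward infinity, so it is simply cut off.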

Raster tile layers

Raster tile layers deliver basemaps to your client application as image files (for example, JPG or PNG format) that have been prerendered and stored on the server and are displayed as is by the client. Raster tile layers are most appropriate for basemaps that give your maps geographic context such as imagery (as in the World Imagery basemap) or feature-based maps such as in the Topographic, National Geographic, Oceans, and other basemaps. Raster tile layers can also be composed of static operational layers such as thematic maps of your data.

The tile layer format is fast to transmit over the Internet and is easily understood by most common mapping software applications, so these basemaps are compatible not only with ArcGIS and web apps built with the ArcGIS APIs, but also third-party apps that use OGC protocols such as WMS/WMTS. Other benefits of raster tile layers include the following:

  • Work well across a wide range of applications and devices (web, desktop, and mobile), including desktop applications like ArcMap and older versions of web browsers.
  • Provide high-end cartographic capabilities such as advanced label placement and symbology.
  • Support various raster data sources such as imagery and elevation data.
  • Can be printed from web mapping applications.

Vector tile layers

Vector tile layers deliver map data as vector files (for example, PBF format) and include one or more layers that are rendered on the client based on a style delivered with the layer. Vector tiles include similar data to that found in some (but not all) of the available raster tile basemaps, but they store a vector representation of the data; that is, geographic features are represented as points, lines, and polygons in a format understood by the client application. Unlike raster tile layers, vector tile layers can adapt to the resolution of their display device and be restyled for multiple uses. Vector tiles have a smaller file size than raster tiles, which translates to faster maps and better performance. The combination of tile access performance and vector drawing allows the tiles to adapt to any resolution of the display, which may vary across devices.

In the map viewer, client-side drawing of vector tiles allows you to customize the style of the vector tile layer and the contents of the map. Other advantages of vector tile layers include the following:

  • Can be used to generate many different map styles using a single set of vector tiles. You can customize vector tile layers—for example, hide their visibility, change symbols and fonts, change languages for labels, and so on—without having to regenerate tiles.
  • Look great on high-resolution displays (for example, retina devices) that offer much better resolution than low-resolution (96 dpi) raster tiles, without the need for generating separate, high-resolution versions. Vector tiles can be displayed at any scale level with clear symbology and labels in desktop applications such as ArcGIS Pro.
  • Can be generated much more quickly, and with fewer hardware resources, than corresponding raster tiles. This reduces the cost to generate the tiles and improves the speed at which data updates can be made available.
  • Vector tiles are much smaller in size than corresponding raster tiles, reducing the cost to store and serve the tiles.
  • Can be projected into various coordinate systems, using desktop applications like ArcGIS Pro, without distortion of labels and other symbols.

You can add vector tile layers as operational layers or basemaps to the map viewer or scene viewer. You can use maps and scenes with vector tile layers in web apps using a configurable app, Web AppBuilder, or ArcGIS API for JavaScript. Vector tile layers have the best performance on machines with newer hardware, and they can be displayed in Internet Explorer 11 and later and most other current versions of desktop browsers, including Chrome, Firefox, and Safari.

You can read more about vector tiles at https://www.mapbox.com/vector-tiles/.

Now that we understand a little more about vector tiles, let's look at how we will request tiles from our server. As mentioned before, displaying tiled maps requires a web mapping framework; we will use Leaflet to display our vector tiles. As of version 1.0.3, Leaflet's core API does not support loading and displaying vector tiles. This could be a problem, but thanks to the great effort of contributors and the Leaflet plugin system we will be able to render our vector tiles. We will be using the Leaflet.VectorGrid library.

Let's create an HTML file that will help us render our map. For hapi to serve static assets we need a plugin; inert fulfills this requirement.

docker-compose run --rm node npm install --save inert

Add inert to our plugins:

const plugins = [
  {
    register: require('./plugins/postgres'),
    options: {
      DB_HOST: 'db',
      DB_USER: 'postgres',
      DB_PORT: '5432',
      DB_PASS: '',
      DB_DATA: 'postgres'
    }
  },
  { register: require('inert') }
];

And create an index.html file in our root directory:

index.html
<!doctype html>
<html class="no-js" lang="en">
  <head>
    <meta charset="utf-8">
    <meta http-equiv="x-ua-compatible" content="ie=edge">
    <title>dc-005-protobuf-tile-server</title>
    <meta name="description" content="">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://unpkg.com/leaflet@1.0.3/dist/leaflet.css" />
    <style>
      html, body {
        height: 100%;
      }
      body {
        margin: 0;
      }
      #mapid { height: 100%; }
    </style>
  </head>
  <body>
    <div id="mapid"></div>
    <script src="https://unpkg.com/leaflet@1.0.3/dist/leaflet.js"></script>
    <script src="https://unpkg.com/leaflet.vectorgrid@latest/dist/Leaflet.VectorGrid.bundled.js"></script>
    <script>
      var mymap = L.map('mapid').setView([20.341783, -99.920149], 6);
      L.tileLayer('http://{s}.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png', {
        attribution: 'Map data &copy; <a href="http://openstreetmap.org">OpenStreetMap</a> contributors, <a href="http://creativecommons.org/licenses/by-sa/2.0/">CC-BY-SA</a>, Imagery © <a href="http://mapbox.com">Mapbox</a>',
        maxZoom: 18
      }).addTo(mymap);
    </script>
  </body>
</html>

We need a route to serve this HTML; let's add it to our server:

// ...
server.route({
  method: 'GET',
  path: '/test-vector-tiles',
  handler: {
    file: 'index.html'
  }
});
// ...
// ...

By going to http://0.0.0.0:8081/test-vector-tiles we should see our Leaflet map. If you use Google Chrome you can open the DevTools, go to the Network tab, and select the Img filter to see the requests to the Carto content delivery network:

Request URL:http://a.basemaps.cartocdn.com/light_all/7/28/56.png
Request Method:GET
Status Code:200 OK (from disk cache)

As you can see, a request was made with the {z}/{x}/{y} format; let's create a handler that follows this format.
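As a sketch of what the mapping library does on its side (this is illustrative, not part of our server code), every {z}/{x}/{y} placeholder in the URL template is replaced with the tile's coordinates before the HTTP request is issued:

```javascript
// A sketch of what the mapping library does with a tile URL template:
// every {z}/{x}/{y} placeholder is replaced with the tile's coordinates
// before the HTTP request is issued.
const fillTemplate = (template, coords) =>
  template.replace(/\{(z|x|y)\}/g, (match, key) => coords[key]);

const url = fillTemplate(
  'http://a.basemaps.cartocdn.com/light_all/{z}/{x}/{y}.png',
  { z: 7, x: 28, y: 56 }
);
console.log(url); // http://a.basemaps.cartocdn.com/light_all/7/28/56.png
```

Our handler will sit on the other end of such a URL and answer with a tile for the given coordinates.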

With a working database available, we can proceed to create our layers plugin. In our plugins folder create a layers folder; like the postgres plugin, the layers plugin will consist of two files, a package.json and an index.js. Let's create those files; you should have a structure like the one below:

dc-005-protobuf-tile-server/
▸ node_modules/
▾ plugins/
▾ layers/
index.js
package.json
▸ postgres/
.env
.gitignore
docker-compose.yml
Dockerfile
Dockerfile-setup
npm-shrinkwrap.json
package.json
server.js

At the moment our layers plugin will contain a simple console.log:

plugins/layers/index.js
exports.register = function(server, options, next) {
  console.log(['debug'], 'Layers plugin loaded...');
  next();
};

exports.register.attributes = {
  pkg: require('./package.json')
};
plugins/layers/package.json
{
  "name": "layers-plugin",
  "version": "1.0.0"
}

And include it in our plugins array:

const plugins = [
  {
    register: require('./plugins/postgres'),
    options: {
      DB_HOST: 'db',
      DB_USER: 'postgres',
      DB_PORT: '5432',
      DB_PASS: '',
      DB_DATA: 'postgres'
    }
  },
  { register: require('inert') },
  { register: require('./plugins/layers') }
];

As soon as we change the plugins array we should see a reload in our server via nodemon:

tile-server_1 | [nodemon] restarting due to changes...
tile-server_1 | [nodemon] starting `node server.js`
tile-server_1 | [ 'debug' ] 'Postgres plugin loaded...'
tile-server_1 | [ 'debug' ] 'Layers plugin loaded...'
tile-server_1 | [ 'debug' ] 'Server running at: http://0.0.0.0:8081'

Let's create a layer folder inside plugins/layers/ and create a states.js file:

plugins/layers/layer/states.js
const SphericalMercator = require('sphericalmercator');
const mercator = new SphericalMercator({ size: 256 });

const query = `
  SELECT c.topojson as feature, c.cve_ent as cve_ent
  FROM geoms.estados c
  WHERE (st_intersects(c.geom, ST_MakeEnvelope($1, $2, $3, $4, 4326)))
`;

module.exports = {
  handler: (request, reply) => {
    const { x, y, z } = request.params;
    const db = request.server.app.conn;
    const bbox = mercator.bbox(+x, +y, +z, false, 'WGS84');
    return db.any(query, [bbox[0], bbox[1], bbox[2], bbox[3]])
      .then(data => reply(data))
      .catch(err => {
        console.log(['error'], err);
        reply(err);
      });
  }
};

Let's try to figure out what is happening in the handler; keep in mind that this handler will run for each tile in the bounding box.

We are importing the sphericalmercator package, which will help us by providing projection math for converting between mercator meters, screen pixels (of 256x256 or configurable-size tiles), and latitude/longitude.

const SphericalMercator = require('sphericalmercator');
const mercator = new SphericalMercator({ size: 256 });

Next we declare a query that uses two PostGIS functions. The first is st_intersects, which returns TRUE if the geometries/geographies "spatially intersect in 2D". You may be tempted to use ST_Contains or ST_Within, but we must get all the polygons that intersect a given polygon.

The second function is ST_MakeEnvelope, which creates a rectangular polygon from the given minima and maxima. Remember that this handler runs for each tile in the bounding box, so the polygon we form is just the tile.

We are basically saying: get the topojson and cve_ent fields from the estados table where the geom field intersects the rectangular polygon we provide.

SELECT c.topojson as feature, c.cve_ent as cve_ent
FROM geoms.estados c
WHERE (st_intersects(c.geom, ST_MakeEnvelope($1, $2, $3, $4, 4326)))

Now let's get the request params (the x, y, z values of the requested tile) and use them to build a bbox of the form [w, s, e, n]. The bbox method has the signature bbox(x, y, zoom, tms_style, srs): the first three parameters come from the tile's XYZ coordinates, tms_style decides whether to compute using TMS-style y, and srs sets the projection for the resulting bbox (WGS84|900913).

const layername = 'estatal';
let { x , y , z } = request.params;
const db = request.server.app.conn;
// https://www.npmjs.com/package/sphericalmercator#bboxx-y-zoom-tms_style-srs
const bbox = mercator.bbox(+x, +y, +z, false, 'WGS84' );
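For reference, the WGS84 math that mercator.bbox performs can be sketched in a few lines; this mirrors the standard slippy-map inverse formulas and is shown only to demystify the call above (in real code, use the library):

```javascript
// Convert XYZ tile coordinates to a [w, s, e, n] bounding box in WGS84
// degrees: the standard slippy-map inverse math that mercator.bbox does
// for us.
const tileToBBox = (x, y, z) => {
  const n = Math.pow(2, z);
  const lon = tx => (tx / n) * 360 - 180;
  const lat = ty =>
    (Math.atan(Math.sinh(Math.PI * (1 - (2 * ty) / n))) * 180) / Math.PI;
  return [lon(x), lat(y + 1), lon(x + 1), lat(y)]; // west, south, east, north
};

// Tile 0/0 at zoom 0 covers the whole Web Mercator world
console.log(tileToBBox(0, 0, 0)); // [-180, -85.0511..., 180, 85.0511...]
```

The resulting box is exactly the envelope we feed to ST_MakeEnvelope with SRID 4326.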

Now that we have the values for the polygon ST_MakeEnvelope will create, let's run our query. The query result mask any has the signature db.any(query, values), so let's pass our query and the bbox values (created via the sphericalmercator package) as an array, and reply with the query result.

// https://github.com/vitaly-t/pg-promise#query-result-mask
return db.any(query, [bbox[0], bbox[1], bbox[2], bbox[3]])
  .then(data => reply(data))
  .catch(err => {
    console.log(['error'], err);
    reply(err);
  });

We have a working handler but we are still missing a route to execute it. Let's go to the index file in our layers plugin folder and register the route:

plugins/layers/index.js
exports.register = function(server, options, next) {
  server.route({
    method: 'GET',
    // Following the format :)
    path: '/layers/states/{z}/{x}/{y}.pbf',
    config: require('./layer/states')
  });
  console.log(['debug'], 'Layers plugin loaded...');
  next();
};

exports.register.attributes = {
  pkg: require('./package.json')
};

Now if we go to http://0.0.0.0:8081/layers/states/7/28/56.pbf we should get a response like the one below:

[
  {
    "feature": {},
    "cve_ent": "11"
  },
  {
    "feature": {},
    "cve_ent": "13"
  }
]

So what is happening? We are getting all the polygons intersected by the tile polygon; at zoom level 7, tile index 28/56, these are the states that intersect the tile polygon. This is a small step toward returning a vector tile, but at the moment we are just returning a JSON response. We will use mapnik to create our tile:

plugins/layers/layer/states.js
const path = require('path');
const mapnik = require('mapnik');
// http://mapnik.org/documentation/node-mapnik/3.5/#mapnik.registerDatasource
mapnik.registerDatasource(path.join(mapnik.settings.paths.input_plugins, 'geojson.input'));

We are requiring the path and mapnik modules. Mapnik supports a plugin architecture which allows access to a variety of formats, and we need to tell mapnik that we'll be using GeoJSON. Registering geojson.input via registerDatasource allows us to use GeoJSON as a format within the mapnik module. With the dependencies for tile creation declared, let's create a function that builds our tile:

const topojson = require('topojson');

const createTile = data => {
  const vtile = new mapnik.VectorTile(+z, +x, +y);
  const features = data.map(parseEstado);
  const geojson = { type: "FeatureCollection", features };
  vtile.addGeoJSON(JSON.stringify(geojson), layername, {});
  request.server.log(['debug'], 'creating tile');
  return vtile;
};

// helper fn
const parseEstado = estado => {
  const topo = estado.feature;
  const cve_ent = estado.cve_ent;
  // https://github.com/topojson/topojson-client/blob/master/README.md#feature
  const geometry = topojson.feature(topo, topo.objects[cve_ent]);
  return {
    type: "Feature",
    geometry: geometry.features[0].geometry,
    properties: {}
  };
};

First we create a new vector tile using mapnik's VectorTile constructor, which builds a tile according to the Mapbox Vector Tile specification for compressed and simplified tiled vector data. We request TopoJSON in our query to reduce the data transfer size, but the VectorTile API won't accept TopoJSON, so we need to transform each TopoJSON object into valid GeoJSON; we will use the topojson module for this task. Once we have a valid FeatureCollection we can add it to the tile we created using the addGeoJSON method. Amazing, a tile was created; we still need to compress the data:
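For clarity, the FeatureCollection we hand to addGeoJSON is plain GeoJSON. A minimal hand-built example of that shape looks like this (the polygon ring below is made up for illustration; in the tutorial the geometries come from topojson.feature):

```javascript
// Minimal GeoJSON FeatureCollection of the shape addGeoJSON expects.
// The ring below is a made-up rectangle, not real state geometry.
const feature = {
  type: "Feature",
  geometry: {
    type: "Polygon",
    // A single closed ring: the first and last positions must be identical
    coordinates: [[[-101, 20], [-100, 20], [-100, 21], [-101, 21], [-101, 20]]]
  },
  properties: {}
};
const geojson = { type: "FeatureCollection", features: [feature] };
// JSON.stringify(geojson) is the string we pass to vtile.addGeoJSON
```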

const zlib = require('zlib');

const deflateTile = vtile => {
  return new Promise((resolve, reject) => {
    vtile.getData({}, (err, pbf) => {
      if (err) return reject(err);
      zlib.deflate(pbf, (err, res) => {
        if (err) return reject(err);
        resolve(res);
      });
    });
  });
};

The getData method retrieves the data in the vector tile as a buffer; the signature is getData([options], callback). We can tweak the options a little, but the defaults work smoothly. The second parameter is a callback, so the task is done asynchronously. Once we have the tile's data as a buffer, we just need to compress it. zlib.deflate compresses the buffer we received from getData; its signature is deflate(buf[, options], callback), the first parameter being the buffer and the last the callback, so this method is also asynchronous. Since both methods are async we wrap them in a Promise. Finally we just need to set the headers on our reply interface:

const replyProtoBuf = binary => {
  reply(binary).type('application/x-protobuf')
    .header('Content-Type', 'application/x-protobuf')
    .header('Content-Encoding', 'deflate');
  return binary;
};

Our states.js file should look like this at the moment:

plugins/layers/layer/states.js
const path = require('path');
const zlib = require('zlib');
const mapnik = require('mapnik');
const SphericalMercator = require('sphericalmercator');
const topojson = require('topojson');

const mercator = new SphericalMercator({
  size: 256 // tile size
});

mapnik.registerDatasource(path.join(mapnik.settings.paths.input_plugins, 'geojson.input'));

const query = `
  SELECT c.topojson as feature, c.cve_ent as cve_ent
  FROM geoms.estados c
  WHERE (st_intersects(c.geom, ST_MakeEnvelope($1, $2, $3, $4, 4326)))
`;

module.exports = {
  handler: (request, reply) => {
    const cache = request.server.app.rc;
    const layername = 'estatal';
    const { x, y, z } = request.params;
    const createTile = data => {
      const vtile = new mapnik.VectorTile(+z, +x, +y);
      const features = data.map(parseEstado);
      const geojson = { type: "FeatureCollection", features };
      vtile.addGeoJSON(JSON.stringify(geojson), layername, {});
      request.server.log(['debug'], 'creating tile');
      return vtile;
    };
    const deflateTile = vtile => {
      return new Promise((resolve, reject) => {
        vtile.getData({}, (err, pbf) => {
          if (err) return reject(err);
          zlib.deflate(pbf, (err, res) => {
            if (err) return reject(err);
            resolve(res);
          });
        });
      });
    };
    const replyProtoBuf = binary => {
      reply(binary).type('application/x-protobuf')
        .header('Content-Type', 'application/x-protobuf')
        .header('Content-Encoding', 'deflate');
      return binary;
    };
    const db = request.server.app.conn;
    const bbox = mercator.bbox(+x, +y, +z, false, 'WGS84');
    db.any(query, [bbox[0], bbox[1], bbox[2], bbox[3]])
      .then(createTile)
      .then(deflateTile)
      .then(replyProtoBuf)
      .catch(err => {
        console.log(['error'], err);
        reply(err);
      });
  }
};

const parseEstado = estado => {
  const topo = estado.feature;
  const cve_ent = estado.cve_ent;
  const geometry = topojson.feature(topo, topo.objects[cve_ent]);
  return {
    type: "Feature",
    geometry: geometry.features[0].geometry,
    properties: {}
  };
};

With all this in place we can check whether our endpoint works with a simple curl:

curl -s -D - http://0.0.0.0:8081/layers/states/7/28/56.pbf -o /dev/null
# HTTP/1.1 200 OK
# content-type: application/x-protobuf
# content-encoding: deflate
# vary: origin
# cache-control: no-cache
# content-length: 5624
# accept-ranges: bytes
# Date: Sun, 26 Mar 2017 07:17:33 GMT
# Connection: keep-alive

The endpoint works and serves vector tiles as protobufs; now we just need to display them in our index.html file:

L.vectorGrid.protobuf("/layers/states/{z}/{x}/{y}.pbf", {
  vectorTileLayerStyles: {
    estatal: {
      weight: 1,
      fillColor: '#9bc2c4',
      fillOpacity: 0.6,
      fill: true
    },
  },
  maxNativeZoom: 14
}).addTo(mymap);

Let's go to http://0.0.0.0:8081/test-vector-tiles and we should see our tiles working in our map. Amazing! The styling can also be data-driven: include properties in your GeoJSON responses by adding a join to your query, something like this:

-- row_to_json packs the joined row into a single properties object
SELECT c.topojson as feature, c.cve_ent as cve_ent, row_to_json(s) as properties
FROM geoms.estados c
INNER JOIN my_schema.my_table s
  USING (cve_ent)
WHERE (st_intersects(c.geom, ST_MakeEnvelope($1, $2, $3, $4, 4326)))

In your parse function you would have to pass the fields you want into the properties object:

const parseEstado = estado => {
  const topo = estado.feature;
  const cve_ent = estado.cve_ent;
  const geometry = topojson.feature(topo, topo.objects[cve_ent]);
  return {
    type: "Feature",
    geometry: geometry.features[0].geometry,
    properties: estado.properties
  };
};

And define a function to style the features based on the desired properties:

L.vectorGrid.protobuf("/layers/states/{z}/{x}/{y}.pbf", {
  vectorTileLayerStyles: {
    // A function for styling features dynamically, depending on their
    // properties and the map's zoom level
    estatal: function(properties, zoom) {
      return {
        weight: 1,
        fillColor: '#9bc2c4',
        fillOpacity: properties.my_field > 1 ? 0 : 1,
        fill: true
      };
    },
  },
  maxNativeZoom: 14
}).addTo(mymap);

We now have working tiles with the ability to change styles easily. As we mentioned at the beginning of the tutorial, we are also going to use Redis, so let's create a Redis plugin to reduce the calls to our database.

Redis plugin

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. Redis is not a plain key-value store; it is actually a data structures server, supporting different kinds of values. What this means is that, while in traditional key-value stores you associate string keys to string values, in Redis the value is not limited to a simple string, but can also hold more complex data structures.

One of those structures is the binary-safe string, which we will use to avoid unnecessary database calls. In many use cases the tile information won't change for a certain period of time, so in order to save resources we can store the result of our handlers in Redis.
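As a quick sketch of why this is safe, here is the binary round trip our handler will rely on, with tileKey as a purely illustrative helper mirroring the layer/z/x/y key scheme we will use:

```javascript
// The protobuf buffer is stored in Redis as a 'binary'-encoded string and
// rebuilt into a Buffer on a cache hit; 'binary' (an alias for latin1) maps
// every byte value 0-255 to a character, so the round trip is lossless.
const tileKey = (layer, z, x, y) => `${layer}/${z}/${x}/${y}`;

const original = Buffer.from([0x1a, 0x2b, 0x3c, 0x00, 0xff]);
const stored = original.toString('binary');     // what we SET in Redis
const restored = Buffer.from(stored, 'binary'); // what we rebuild on GET

console.log(tileKey('estatal', 6, 13, 28)); // estatal/6/13/28
console.log(original.equals(restored));     // true
```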

A Redis server is not present in our current configuration, so let's add it to our docker-compose.yml:

docker-compose.yml
version: "3"
services:
  db:
    image: mdillon/postgis
    ports:
      - "9432:5432"
  node:
    build:
      context: .
      dockerfile: Dockerfile-setup
    volumes:
      - .:/usr/src/app
    command: /bin/bash
  redis:
    image: redis
    ports:
      - "6379:6379"
  tile-server:
    build: .
    command: npm run start-development
    links:
      - db
      - redis
    depends_on: # Express dependency between services
      - db
      - redis
    volumes:
      - .:/usr/src/app
    ports:
      - "8081:8081"

Create a redis folder within our plugins directory with an index.js and a package.json file.

plugins/redis/index.js
const redis = require('redis');
const bluebird = require('bluebird');

// Use bluebird to promisify the redis methods
bluebird.promisifyAll(redis.RedisClient.prototype);
bluebird.promisifyAll(redis.Multi.prototype);

exports.register = function(server, options, next) {
  const { REDIS_HOST, REDIS_PORT } = options;
  server.app.rc = redis.createClient({
    host: REDIS_HOST,
    port: REDIS_PORT,
    detect_buffers: true
  });
  // Create a flush route to clean our cache
  server.route({ method: 'GET', path: '/cache/flush', config: {
    handler: (request, reply) => {
      const cache = request.server.app.rc;
      cache.flushdb((err, succeeded) => {
        console.log(['debug'], 'Redis databases flushed');
        reply({ status: 'ok' });
      });
    }
  }});
  console.log(['debug'], 'Redis plugin loaded...');
  next();
};

exports.register.attributes = {
  pkg: require('./package.json')
};

We are using bluebird.promisifyAll on the redis module, which, as the documentation explains, promisifies the entire object by going through the object's properties and creating an async equivalent of each function on the object and its prototype chain. Using Promises will help us deal with Redis in a much cleaner way. We also included a way to clean our cache over HTTP; of course, this endpoint should only be exposed in development, or otherwise protected.
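To see what promisifyAll is doing for us, here is the equivalent written by hand for a single get method, against a fake client (fakeClient and its canned value are illustrative only):

```javascript
// A stand-in for the redis client: a callback-style get method.
const fakeClient = {
  get(key, cb) { process.nextTick(() => cb(null, `value-of-${key}`)); }
};

// What promisifyAll generates for each method: an Async-suffixed version
// that resolves or rejects instead of taking a callback.
const getAsync = key =>
  new Promise((resolve, reject) =>
    fakeClient.get(key, (err, res) => (err ? reject(err) : resolve(res))));

getAsync('estatal/6/13/28').then(v => console.log(v)); // value-of-estatal/6/13/28
```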

plugins/redis/package.json
{
  "name": "redis-plugin",
  "version": "1.0.0"
}

Don’t forget to register the plugin in our array of plugins:

const plugins = [
  {
    register: require('./plugins/postgres'),
    options: {
      DB_HOST: 'db',
      DB_USER: 'postgres',
      DB_PORT: '5432',
      DB_PASS: '',
      DB_DATA: 'postgres'
    }
  },
  {
    register: require('./plugins/redis'),
    options: { REDIS_HOST: 'redis', REDIS_PORT: '6379' }
  },
  { register: require('./plugins/layers') },
  { register: require('inert') }
];

Let's use our Redis server through our plugin:

const setTileInCache = tile => {
  // setAsync is already promisified, so no callback argument is needed
  cache.setAsync(`${layername}/${z}/${x}/${y}`, tile.toString('binary'));
  return tile;
};
cache.getAsync(`${layername}/${z}/${x}/${y}`)
  .then(cachedResult => {
    if (cachedResult !== null) {
      console.log(['debug'], 'hit in cache');
      replyProtoBuf(Buffer.from(cachedResult, 'binary'));
    } else {
      console.log(['debug'], 'cache miss');
      const db = request.server.app.conn;
      const bbox = mercator.bbox(+x, +y, +z, false, 'WGS84');
      // Return the chain so its errors propagate to the outer catch
      return db.any(query, [bbox[0], bbox[1], bbox[2], bbox[3]])
        .then(createTile)
        .then(deflateTile)
        .then(replyProtoBuf)
        .then(setTileInCache);
    }
  })
  .catch(err => {
    console.log(['error'], err);
    reply(err);
  });

We declare a new function, setTileInCache, which takes care of caching our binary using Redis. We first have to check whether the key already exists within our cache; getAsync takes care of that. If we find the key, we create a buffer from the stored binary-safe string and use our replyProtoBuf function to return the desired value. If we didn't find the key in Redis, we follow the usual procedure to create the tile, but include the additional setTileInCache function in the promise chain.
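The overall cache-first flow can be sketched with an in-memory stand-in for Redis (fakeCache, getTile and build are illustrative names, not part of the handler): a hit resolves from the cache, a miss builds the tile and stores it for next time.

```javascript
const fakeCache = new Map();

// Cache-first lookup: resolve from the cache on a hit, otherwise build
// the tile, store it, and resolve with the fresh result.
const getTile = (key, buildTile) => {
  if (fakeCache.has(key)) {
    return Promise.resolve(fakeCache.get(key)); // hit in cache
  }
  return buildTile().then(tile => {             // cache miss
    fakeCache.set(key, tile);
    return tile;
  });
};

const build = () => Promise.resolve(Buffer.from('tile-bytes'));
getTile('estatal/6/13/28', build)                    // miss: builds and stores
  .then(() => getTile('estatal/6/13/28', build))     // hit: served from cache
  .then(tile => console.log(tile.toString(), fakeCache.size)); // tile-bytes 1
```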

Let's check the results:

+----------------------------------------------------------------------+
| Results |
+----------------------------------------------------------------------+
| url | database-round-trip | redis |
+---------------------------+---------------------+--------------------+
| layers/states/6/13/28.pbf | 119.39999996684492 | 17.162000003736466 |
+---------------------------+---------------------+--------------------+
| layers/states/6/14/28.pbf | 119.57400001119822 | 22.809999994933605 |
+---------------------------+---------------------+--------------------+
| layers/states/6/13/27.pbf | 118.50199999753386 | 24.090000020805746 |
+---------------------------+---------------------+--------------------+
| layers/states/6/14/27.pbf | 113.42800001148134 | 16.60899998387322 |
+---------------------------+---------------------+--------------------+
| layers/states/6/13/29.pbf | 112.74800001410767 | 16.008000005967915 |
+---------------------------+---------------------+--------------------+
| layers/states/6/14/29.pbf | 110.79499998595566 | 19.1810000105761 |
+---------------------------+---------------------+--------------------+
| layers/states/6/13/26.pbf | 174.31500001112 | 64.97300002956763 |
+---------------------------+---------------------+--------------------+
| layers/states/6/14/26.pbf | 173.37799997767434 | 63.6740000336431 |
+---------------------------+---------------------+--------------------+
| layers/states/6/13/30.pbf | 173.7239999929443 | 69.87500004470348 |
+---------------------------+---------------------+--------------------+
| layers/states/6/14/30.pbf | 172.7309999987483 | 69.79099998716265 |
+---------------------------+---------------------+--------------------+
| Total | 1388.59499997 | 384.173000115 |
+---------------------------+---------------------+--------------------+

Using Redis reduces the response time from our server considerably: for these ten tiles, the total drops from roughly 1389 ms to 384 ms, about a 3.6x improvement.