
QGIS Planet

User history added to osm-reporter app

This weekend I implemented a new feature for my 'just for fun' project osm-reporter. The feature adds timeline reporting for OpenStreetMap contributors. It's probably easiest to explain with a screenshot:

OSM-Reporter with timelines

Here is another one showing a few charts together:

OSM-Reporter with timelines

I added the feature because I wanted to see how many person-days were involved in data gathering for a particular area and feature type. It does have some limitations - it ignores deletion and ownership transfer of features - but it provides a nice quick overview of effort. Try it on your own neighbourhood and see how much work went into the OSM coverage for your area!

I've also had some really awesome contributions from Yohan Bonifiace (who added Leaflet support, feature type switching and extent URLs) and Sun Ning (who added heatmap support). It's really great making a small, simple, limited-scope project and seeing it grow with random hacks of kindness from strangers! Here is a little screenshot of the heatmap feature:

Heatmaps

I hope you all enjoy the new version, and I look forward to more improvements and suggestions from the community. It's all freely available from my GitHub repository. You can test out the current version of the software by visiting http://osm.linfiniti.com.

Fun with GeoNames

The GeoNames dataset provides a list of global placenames, their locations and some additional information such as population. It is published under the Creative Commons Attribution 3.0 License, which allows you to use the data for your own purposes. You can download the data by country, or the entire global dataset. In this article, I will walk you through how I downloaded the entire dataset, loaded it into PostgreSQL and added a geometry column so that I could view it in QGIS. Note that you can easily adapt these instructions to work with a specific country's data.

First up, let's get the data from the geonames download page!

wget -c http://download.geonames.org/export/dump/allCountries.zip

Note the download is around 196MB, so if you live in an internet backwater like I do, expect it to take a little while. If the download gets disconnected, just rerun the same command - the '-c' option tells wget to continue where it left off last time.
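
If you would rather not fetch the global file, the same download directory also offers per-country archives named by ISO country code. For example (assuming you want South Africa's placenames; any country code works the same way):

wget -c http://download.geonames.org/export/dump/ZA.zip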

Once the data is downloaded, unzip it:

unzip allCountries.zip

You should now have a text file called allCountries.txt weighing in at just under 900MB. Next we can load it into PostgreSQL using a variation of this article. I highly recommend the use of schemas to partition your database into logical units. In the code listings that follow, it is assumed you have a schema called 'world'. If you need to create it, simply do:

create schema world;

from the psql prompt. Since I am only interested in the geoname table at the moment, I simply do this in my database:

create table world.geoname (
    geonameid      int,
    name           varchar(200),
    asciiname      varchar(200),
    alternatenames varchar(8000),
    latitude       float,
    longitude      float,
    fclass         char(1),
    fcode          varchar(10),
    country        varchar(2),
    cc2            varchar(60),
    admin1         varchar(20),
    admin2         varchar(80),
    admin3         varchar(20),
    admin4         varchar(20),
    population     bigint,
    elevation      int,
    gtopo30        int,
    timezone       varchar(40),
    moddate        date
);

You will notice that I extended the alternatenames field size from the original tutorial's 4000 characters to 8000 characters, in order to accommodate some longer entries that were causing my imports to fail at 4000 characters.

Next we can import the data (also from the psql prompt):

copy world.geoname (geonameid,name,asciiname,alternatenames,
latitude,longitude,fclass,fcode,country,cc2,
admin1,admin2,admin3,admin4,population,elevation,gtopo30,
timezone,moddate) from '/home/web/allCountries.txt' null as '';

Once again this is similar to the import line from the original article, except that I have used a full path to my allCountries.txt file. The import may take a little while depending on the speed of your computer.
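
One caveat worth knowing: the server-side copy command above reads the file with the privileges of the database server and typically requires superuser rights. If it fails with a permissions error, psql's client-side \copy meta-command should do the same job from an ordinary account. A minimal sketch, assuming a database named 'gis' (note that \copy must be given on a single line):

psql gis -c "\copy world.geoname (geonameid,name,asciiname,alternatenames,latitude,longitude,fclass,fcode,country,cc2,admin1,admin2,admin3,admin4,population,elevation,gtopo30,timezone,moddate) from '/home/web/allCountries.txt' null as ''"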

When it is completed, you should have a bunch of data (~7.5 million records) in your table:

gis=# select count(*) from world.geoname ;
  count
---------
 7664712
(1 row)

It is going to be useful to have a unique id for every record - QGIS for one would like to have it, so let's add one (in addition to the geonameid field):

alter table world.geoname add column id serial not null;
CREATE UNIQUE INDEX idx_geoname_id ON world.geoname (id);

Because I know I will be using some other fields as the basis for subset queries and so on, I added indexes on those too:

CREATE INDEX idx_geoname_population ON world.geoname (population);
CREATE INDEX idx_geoname_fcode ON world.geoname(fcode);
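
If you want to confirm that the planner will actually use these indexes for your subset queries, you can ask for a query plan. A quick sketch, assuming a database named 'gis' ('PPL' is the GeoNames feature code for a populated place):

# Show the query plan for a typical subset query (hypothetical example):
psql gis -c "explain select * from world.geoname where population >= 10000 and fcode = 'PPL';"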

OK, that's all great, but there is no geometry column yet, so we can't view this in QGIS easily. So let's GIS-enable the table! First we add a new geometry column:

alter table world.geoname add column the_geom geometry;

Now we populate the geometry column:

update world.geoname set the_geom = st_makepoint(longitude,latitude);

Next we add a constraint to ensure that the column always contains a point record or is null:

alter table world.geoname add constraint enforce_geotype_the_geom
CHECK (geometrytype(the_geom) = 'POINT'::text OR the_geom IS NULL);
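
As an aside: on a newer PostGIS (2.0 or later) you could instead declare a typed geometry column up front, which enforces the same POINT-only rule and bakes in the SRID (see the update at the end of this article). A sketch of that alternative, not what I ran here:

# PostGIS 2.0+ only - an alternative to the generic column added above:
psql gis -c "alter table world.geoname add column the_geom geometry(Point,4326);"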

Finally, let's index the table on the geometry field:

CREATE INDEX idx_geoname_the_geom ON world.geoname USING gist(the_geom);

OK, next we can connect to the database using QGIS and view our data! Note that you may want to filter the data or set up scale-dependent visibility so that QGIS doesn't try to render all 7.5 million points when you zoom out.

I added a query filter like this to the layer properties -> general tab -> subset:

"population" >= 10000 AND fcode != 'PPLA' and fcode != 'PCLI' AND fcode != 'ADM1'

You should experiment and read the geonames documentation in order to define a filter that is useful for your purposes.

Geonames layer loaded in QGIS

The Geonames dataset is a wonderful asset to the GIS community, particularly to those of us who value Free Software and Free Data.

Update 6 April 2011:

I encountered one issue with the above procedure: the SRID for the imported points is -1 (unknown) when loaded, rather than 4326, because st_makepoint does not assign one. This causes problems downstream; in MapServer, for example, you get an error like this:

ERROR:  Operation on two geometries with different SRIDs

To fix the issue I ran the following update query to assign all points a proper SRID:

update world.geoname set the_geom = ST_SETSRID(the_geom,4326);

Note that it takes quite a while to run.  When it completes, you can verify that the points are all in the correct CRS by running this little query:

gis=# select distinct ST_SRID(the_geom) from world.geoname;

Which should produce something like this:

 st_srid
---------
 4326
(1 row)

For good measure, I also added the geoname table to my geometry columns table:

insert into geometry_columns (f_table_catalog,f_table_schema,f_table_name,f_geometry_column,
coord_dimension,srid,type) values ('','world','geoname','the_geom',2,4326,'POINT');

Lastly I also gave select permissions to my readonly user (which I use when publishing in a web environment):

grant usage on schema world to readonly;
grant select on world.geoname to readonly;
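
On PostgreSQL 9.0 or newer, you can grant select on every table in the schema in one statement rather than table by table. A sketch, assuming the same readonly role and a database named 'gis':

psql gis -c "grant select on all tables in schema world to readonly;"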

I have also fixed the formatting of some of the SQL snippets above for those who struggled to read it within the width of this page.

A workflow for creating beautiful relief-shaded DEMs using GDAL

Sometimes I create hillshades using the QGIS hillshade plugin and then overlay the original DEM on top. I give the DEM a false colour palette and set it to be semi-transparent to produce something like this:

Typical usage of a hillshade with false colour overlay

That is all well and good, but a bit impractical. It would be much nicer to have the colour palette permanently assigned to the hillshade. Also, I want to be able to clip and mask the resulting raster to the boundary specified in a shapefile. Fortunately, GDAL provides all the tools you need to make this happen. There are already some nice articles (like this one) that describe parts of this workflow, but I am writing this because I wanted to note the additional steps that I took to make this work well for me.

Before you begin

Before you begin you should have:

  1. a raster containing digital elevation data (DEM) - in my case it's called 'alt.tif'
  2. a vector layer (typically a shapefile) containing the area of interest for your final product - in my case it's called 'Tanzania.shp'

Create the hillshade image

The first thing we need to do is generate a hillshade. There is a nice Python plugin for doing this in QGIS, and you can also do it in GRASS using the QGIS-GRASS plugin. But in this article I'm going for an all-GDAL approach, so we will be using the handy gdaldem command line application. I won't be explaining the parameters I have used here as they are well explained in the gdaldem manual pages.

So to create our hillshade we do something like this:

gdaldem hillshade alt.tif shade.tif -z 5 -s 111120 -az 90

Which will produce a greyscale hillshade image. Next you need to choose a suitable colour palette for the relief map; ColorBrewer would be a good place to start if you want to learn more. Another favourite site of mine is colourlovers.com, and for this tutorial I decided to use a palette from there to see how it would turn out.

Once you have selected a suitable colour palette (around 5 or 6 colour classes should do it), the next thing you need to do is get some information about the range of values contained in your DEM. Once again you can easily point and click your way to this in QGIS, but here is the way to get it with GDAL from the command line:

gdalinfo -mm alt.tif

The resulting output includes the computed min/max for Band 1 - which is what we are after:

Band 1 Block=1334x3 Type=UInt16, ColorInterp=Gray
  Computed Min/Max=1.000,5768.000
  NoData Value=65535

OK, so our minimum height is 1m and maximum is 5768m - Tanzania is the home of Kilimanjaro after all! So let's split that range up into five classes to match the 'Landcarpet Europe' colourlovers palette I selected. I set nearly white as an additional colour for the highest altitude range.

65535 255 255 255
5800 254 254 254
3000 121 117 10
1500 151 106 47
800 127 166 122
500 213 213 149
1 218 179 122

The value in the first column is the base of the scale range. The subsequent values are RGB triplets for that range. I saved this as a text file called 'ramp.txt' in the same directory as my 'alt.tif' dataset. You will notice that I made the value ranges closer together at lower altitudes and wider apart at higher altitudes. You may want to experiment a little to get pleasing results - especially if you have a relatively small number of high-lying terrain pixels and the rest confined to lower-lying areas.

Also note that I assigned the highest terrain 'nearly white' so that I could reserve white (RGB: 255 255 255) for the nodata value (65535) in this dataset. We will use the fact that white is only used for nodata to our advantage when we do a clip further on in these instructions.

OK, now we can use gdaldem to generate the terrain map:

gdaldem color-relief alt.tif ramp.txt relief.tif

This is what my relief map looked like:

The terrain colour map I produced

Don't worry about the fact that it does not resemble the colour palette you have chosen - it will do in the next step!

Merging the two rasters

The next step is to combine the two products. I used Frank Warmerdam's handy hsv_merge.py script for this purpose.

./hsv_merge.py relief.tif shade.tif colour_shade.tif

Which produced this:

The result of merging my hillshade and my terrain colour map

You may have noticed that it is only at this point that the colours of our product resemble the original palette we used.

One little gotcha with the hsv_merge.py script is that it throws away our nodata values, so what was sea before (and nodata in my original alt.tif dataset) is now white (RGB: 255 255 255).

Clipping and masking

You may have everything you need from the above steps. Alternatively you can also clip and mask the dataset using a shapefile.

gdalwarp -co compress=deflate -dstnodata 255 -cutline Tanzania.shp \
colour_shade.tif colour_shade_clipped.tif
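
Depending on your GDAL version, -cutline on its own may only mask pixels outside the polygon while leaving the output at the full extent of the input. If you see that, adding -crop_to_cutline (available in newer GDAL releases) should also shrink the output to the bounds of the cutline. A sketch of the variant command:

gdalwarp -co compress=deflate -dstnodata 255 -cutline Tanzania.shp \
-crop_to_cutline colour_shade.tif colour_shade_clipped.tif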

My final image is now a compressed tif with nodata outside of the country of Tanzania and looks like this:

Final result: A false coloured elevation map for Tanzania

A final note

One of the things I love about the command line is the repeatable and reusable workflows it allows for. I can distill everything in this blog post into a sequence of commands and replay them any time. If you are still stuck doing things in a GUI only way, give BASH a try - you can really do powerful geospatial data processing with it!
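
To illustrate, here is the whole workflow from this post distilled into one small script. A sketch, assuming alt.tif, ramp.txt, Tanzania.shp and hsv_merge.py all live in the current directory:

#!/bin/bash
# Relief-shaded DEM workflow, end to end (file names as used in this post).
gdaldem hillshade alt.tif shade.tif -z 5 -s 111120 -az 90
gdaldem color-relief alt.tif ramp.txt relief.tif
./hsv_merge.py relief.tif shade.tif colour_shade.tif
gdalwarp -co compress=deflate -dstnodata 255 -cutline Tanzania.shp \
colour_shade.tif colour_shade_clipped.tif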

QGIS & PostGIS Training at AIMS

This week we (Gavin Fleming, Sam Lee Pan and myself) are doing more training - a week of QGIS and PostGIS. It's a small group this time and they have GIS knowledge already, so we get to go much deeper into the nuts & bolts of QGIS than I normally would on a course (which is a real pleasure for me). Our attendees are a mix of people from industry, government and students - and we have one attendee from Botswana!

Also very interesting is the venue we are using to run the course in. AIMS (African Institute for Mathematical Sciences) is situated a stone's throw from the popular Muizenberg beach, Cape Town. Now this (AIMS) is one awesome place. When you walk in the door there is a statue of Stephen Hawking, who paid a visit to the center. And the place is brimming with maths students from all over Africa. They are all on fully paid-up scholarships to attend a residential diploma course in mathematics which will prepare them to go on to do masters and PhD courses in maths. So they get a room, full board and access to top-notch tutors, and they leave with excellent skills and a network of friends around Africa with whom they can collaborate in future years. The best part is that the whole center runs on Linux. They have labs full of Ubuntu machines where they beat out complicated formulas using SAGE and make Python jump through mathematical hoops.

Now the funny thing is that I have been dreaming for the last few years of having just such a center for FOSSGIS - where people from all over Africa can come and do a residential course in using and programming FOSSGIS. So it was with fascination, pleasure and yes, a little bit of envy that I was taken around the AIMS facilities. One day hopefully someone with an unbelievable fortune and no idea what to do with it will see this blog post and contact me so that we can start AIFS (African Institute for FOSSGIS Studies) :-).  We are really grateful to AIMS for hosting us this week in their excellent facility, and showing us what a little foresight and ingenuity can achieve.

I'll leave you with a couple of pics of our course attendees doing their thing...

Course attendees at work

Batch clipping with GDAL and bash

I know the little bash scripts I write and post here are popular, so here is another one. The script sequentially unzips WorldClim future climate scenario datasets and then clips them to the bounding box of Tanzania using GDAL. After clipping, it removes the extracted files again, so you are left with just your original downloaded zip files and the clipped files in compressed GeoTIFF format.

#!/bin/bash
# Unzip each WorldClim archive, clip the .bil rasters inside to the
# bounding box of Tanzania, and keep only the compressed GeoTIFF results.
mkdir -p clip
for FILE in *.zip
do
  unzip "$FILE"
  cd 30s/
  for BIL in *.bil
  do
    CLIP=../clip/$(basename "$BIL" .bil).tif
    # -projwin takes ulx uly lrx lry (here 29E 0N to 40E 11S)
    gdal_translate -of GTiff -co COMPRESS=LZW -co TILED=YES \
      -projwin 29 0 40 -11 "$BIL" "$CLIP"
    rm "$BIL"
  done
  cd ..
  rm -rf 30s
done
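
To use it, save the script into the directory containing your downloaded zip files (I am assuming a name like clip_worldclim.sh here) and run:

chmod +x clip_worldclim.sh
./clip_worldclim.sh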

PositionIT Article - A gentle introduction to GIS

After completing the 'Gentle Introduction to GIS' resource (available here) in 2009, Nwabisa Kakaza and I wrote an article about the training resource, which now appears in PositionIT (a local GIS industry publication). For your reading pleasure, the article is downloadable here.

Ubuntu Jaunty Post-Installation for FOSSGIS

It's quick (slow internet access aside) and easy to set up Ubuntu Jaunty 64-bit for use with the FOSSGIS stack. Here are the packages I install once the base system is set up (including a few non-FOSSGIS ones that are nice to have).

sudo apt-get install build-essential vim grass libqt4-core \
libqt4-debug libqt4-dev libqt4-gui libqt4-qt3support libqt4-sql \
lsb-qt4 qt4-designer qt4-dev-tools qt4-doc qt4-qtconfig uim-qt gcc \
libapt-pkg-perl resolvconf gdal-bin libgdal1-dev libgeos-dev proj \
libgdal-doc libhdf4g-dev libhdf4g-run python-dev python-gdal libgsl0-dev \
g++ libjasper-dev libtiff4-dev subversion libsqlite3-dev sqlite3 ccache \
make libpq-dev flex bison cmake txt2tags python-qt4 python-qt4-dev \
python-sip4 sip4 python-sip4-dev postgresql-8.3-postgis ctags \
vim-scripts python-django python-psycopg2 apache2 \
sun-java6-fonts python-geoip imagemagick grass-dev ia32-libs \
graphviz texlive-fonts-recommended \
texlive-fonts-recommended-doc texlive-latex-base texlive-latex-base-doc \
tipa tex-common texlive-base texlive-base-bin texlive-base-bin-doc \
texlive-common texlive-doc-base gv latex2html texlive-latex-extra \
libqwt5-qt4-dev pyqt4-dev-tools drivel lightning-extension thunderbird-gnome-support \
gwibber python-django-evolution python-django-dmigrations python-django-djblets \
python-django-debug-toolbar python-django-registration python-django-lint \
python-django-tagging awn-applets-c-extras awn-applets-python-extras \
gnome-do tomcat6-admin tomcat6 enigmail

You also need to get rid of this since it doesn't play nice with QGIS:

sudo apt-get remove uim-qt3

Note: ia32-libs is needed to run Google Earth under 64-bit Ubuntu.

