
QGIS Planet

WebGL 3D Globe, anyone?

Here is a really nice project for you FOSSGIS fans out there: ‘Godzi’. Created by Pelicanmapping.com (the creators of osgEarth), Godzi is a JavaScript and WebGL implementation of a 3D browsable earth. Take a look at their demo (screenshot below).

Godzi in action

Of course my QGIS self is now thinking ‘Oh cool, we can use this to publish maps to a 3D web globe using QGIS Mapserver!’ Something to add to the todo list for the fledgling QGIS web client project.


USB Recovery Script

What do you do when you are managing a remote server and you need to make some critical changes (like to the networking configs) and you feel uncomfortable about the possibility of losing access to the server and never getting it back? This was the situation we were in today. The server is a little esoteric – it’s a headless box, and even in the server center the engineers don’t have any way to log in interactively at the server itself. Luckily the server is running Debian Linux and has a USB port, so help is at hand via bash!

I wrote this little script which is designed to be run from a cron job, for example every minute.

#!/bin/bash

# This script is to rescue the system from usb while
# testing migration to the new vpn.

# It will mount the last partition of any inserted usb,
# cd to the mount point and try to run a script
# called 'rescue.sh'
# After the script is run it will be renamed to
# rescue.ok
#
# You should set this script to run as a cron job
# at minute intervals.
#
# e.g. # m h  dom mon dow   command
#      * * * * * /root/usbrescue.sh
#

RESCUEFILE=rescue.sh
OKFILE=rescue.ok
LOGFILE=rescue.log
MOUNTPOINT=/mnt/rescue
SCRIPTPATH=${MOUNTPOINT}/${RESCUEFILE}
OKPATH=${MOUNTPOINT}/${OKFILE}
LOGPATH=${MOUNTPOINT}/${LOGFILE}
# Note we ignore partitions on devices sda - sdd as they are internal disks
LASTPARTITION=$(cat /proc/partitions  | awk '{print $4}' | grep -v 'sd[a-d]' \
| grep -v name | grep -v '^$' |sort | tail -1)
if [ -n "$LASTPARTITION" ]
then
  if [ ! -b /dev/$LASTPARTITION ]
  then
    echo "Error /dev/$LASTPARTITION is not a block device"
    exit
  else
    echo "OK /dev/$LASTPARTITION is a block device"
  fi
  echo "Device found creating mount point"
  if [ ! -d "$MOUNTPOINT" ]
  then
    mkdir $MOUNTPOINT
  fi
  echo "Mounting...."
  mount /dev/$LASTPARTITION $MOUNTPOINT
  echo "Checking if rescue script exists"
  # Test the rescue script exists(-e) and is not 0 length (-s)
  if [ -e $SCRIPTPATH -a -s $SCRIPTPATH ]
  then
    echo "Making $SCRIPTPATH executable"
    chmod +x $SCRIPTPATH
    echo "Running script"
    $SCRIPTPATH > $LOGPATH 2>&1
    echo "Disabling script"
    mv $SCRIPTPATH $OKPATH
  else
    echo "No Rescue script found"
  fi
  echo "Unmounting.."
  cd /
  umount $MOUNTPOINT
else
  echo "No rescue device found"
fi
echo "done"

 

If you place the script in /root/usbrescue.sh and add a cron job as outlined in the comments, it will poll for devices regularly and mount the last partition available. If it finds a script on that partition labelled rescue.sh, it will run it, rename the script to rescue.ok, and write any stderr and stdout output to rescue.log on the partition. The script could perhaps be improved by adding a lock file so that it does not get run again if it is already running (if it takes longer than a minute to run, for example), but it’s a good starting point for a system rescue if things go wrong. Now the engineer on site can simply pop in their USB stick and any recovery commands will be run from it.
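One way to add the lock file mentioned above is with flock from util-linux (present on Debian by default). This is a sketch, not part of the original script; the lock path is an arbitrary example:

```shell
#!/bin/bash
# Sketch: wrap the body of usbrescue.sh in an flock guard so a slow run
# is not overlapped by the next cron invocation. Assumes util-linux's
# flock; /tmp/usbrescue.lock is an arbitrary example path.
LOCKFILE=/tmp/usbrescue.lock

(
  # Take an exclusive, non-blocking lock on fd 9; bail out quietly if
  # another instance of the script still holds it.
  flock -n 9 || { echo "usbrescue still running, skipping this run"; exit 0; }

  # ... body of usbrescue.sh goes here ...
  echo "rescue pass complete"
) 9>"$LOCKFILE"
```

Because the lock is tied to a file descriptor, it is released automatically when the subshell exits, even if the rescue script crashes.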


GDAL: efficiency of various compression algorithms

Let’s say you’ve got a lot of raster files you want to host on your map server, or even just one large backdrop. I needed to do the latter, but either way, you’re faced with the same problem: how can I reduce file size and access time? Whatever else I do – such as building pyramids – my raster file is going to be compressed some way or another. I can’t very well have visitors waiting for my server to run through an elaborate (de)compression method, but neither can I store images of the whole of South Africa and the world at several GB per file without perhaps running out of space.

Depending on the amount of space I have on the server, I might be prepared to sacrifice some speed to save space, or vice versa. So what’s the best compression method for my requirements?

I used GDAL’s translate utility (more info here) and went through several of its options to get a better idea of what’s on offer. For compression, my bash command basically looked like this:

  gdal_translate <compression options> -of GTiff input.tif output.tif

To test response times, I ran these commands to export the compressed file to thumbnails of various sizes:

  time gdal_translate -outsize 80 60 -of GTiff output.tif thumb-output.tif
  time gdal_translate -outsize 800 600 -of GTiff output.tif thumb-output.tif
  time gdal_translate -outsize 1280 1024 -of GTiff output.tif thumb-output.tif

(In retrospect, it may have been better to remove the thumbnail between each step rather than overwriting it, to better replicate conditions. Still, I don’t think it made much of a difference.)

My results are summarized in the graphs below, using the following labeling scheme:

Name: Compression options
N/A : -co COMPRESS=NONE
  A : -co COMPRESS=LZW -co PREDICTOR=1
  B : -co COMPRESS=LZW -co PREDICTOR=2
  C : -co COMPRESS=PACKBITS
  D : -co COMPRESS=DEFLATE -co PREDICTOR=1 -co ZLEVEL=1
  E : -co COMPRESS=DEFLATE -co PREDICTOR=2 -co ZLEVEL=1
  F : -co COMPRESS=DEFLATE -co PREDICTOR=1 -co ZLEVEL=6
  G : -co COMPRESS=DEFLATE -co PREDICTOR=2 -co ZLEVEL=6
  H : -co COMPRESS=DEFLATE -co PREDICTOR=1 -co ZLEVEL=9
  I : -co COMPRESS=DEFLATE -co PREDICTOR=2 -co ZLEVEL=9

Fig. 1 [image]

Fig. 1: File size relative to the uncompressed image. Shorter bars are better.

 

Fig. 2 [image]

Fig. 2: Access times in seconds. Shorter bars are better.

 

Going by these results, you can choose whichever compression best suits your requirements. For the shortest access times, you could use “packbits”; however, this also gave the largest (compressed) file size. For the smallest file size, use “deflate” with a predictor of 2 and zlevel of 9. For a compromise, you could consider “deflate” with a predictor of 1 and zlevel of either 6 or 9.

Obviously, there are more permutations of this; you could include pyramids as well (use gdaladdo), for example. Or you may be willing to consider JPEG compression if you don’t mind the quality. In any case, I hope this article gives you ideas on where to look!
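All of the permutations in the table above can be driven from one loop. The sketch below only prints the gdal_translate command lines (a dry run) so you can review them before spending CPU time; the option strings are those from the table, while input.tif and the output names are placeholders:

```shell
#!/bin/bash
# Dry-run sketch: build the gdal_translate command for each compression
# profile from the labeling table. Pipe the output to 'bash' (or drop
# the echo) to actually run them. input.tif is a placeholder file name.
declare -A OPTS=(
  [A]="-co COMPRESS=LZW -co PREDICTOR=1"
  [B]="-co COMPRESS=LZW -co PREDICTOR=2"
  [C]="-co COMPRESS=PACKBITS"
  [D]="-co COMPRESS=DEFLATE -co PREDICTOR=1 -co ZLEVEL=1"
  [E]="-co COMPRESS=DEFLATE -co PREDICTOR=2 -co ZLEVEL=1"
  [F]="-co COMPRESS=DEFLATE -co PREDICTOR=1 -co ZLEVEL=6"
  [G]="-co COMPRESS=DEFLATE -co PREDICTOR=2 -co ZLEVEL=6"
  [H]="-co COMPRESS=DEFLATE -co PREDICTOR=1 -co ZLEVEL=9"
  [I]="-co COMPRESS=DEFLATE -co PREDICTOR=2 -co ZLEVEL=9"
)

for name in "${!OPTS[@]}"; do
  echo "gdal_translate ${OPTS[$name]} -of GTiff input.tif out-$name.tif"
done
```

Keeping one output file per profile (out-A.tif, out-B.tif, …) also makes it easy to compare sizes afterwards with a plain ls -l.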

 


Adding spatial indexes to all your tables

I previously blogged about how to use tablespaces and showed a little script to automate updating your spatial indexes to use a given tablespace. A common problem with PostGIS is that people forget to build spatial indexes on their tables (I'm guilty of it myself as much as anyone). Assuming you use 'the_geom' as the geometry column for your tables as standard, here is a little automation for creating spatial indexes on all your tables in all your schemas (run after logging in to your database at the psql prompt):

 

\t
\o /tmp/createindexes.sql
select 'CREATE INDEX idx_' || tablename || '_geom ON ' || fullpath || ' 
USING GIST ( the_geom ) TABLESPACE optspace ;'
FROM (select tablename, schemaname || '.' || tablename as fullpath
 FROM pg_catalog.pg_tables where schemaname not in
('pg_catalog','information_schema')) as tablelist;
\o
\i /tmp/createindexes.sql

 

Some notes:

  • the \t psql option disables table headers and footers (it shows tuples only). It's a toggle, so enter it again to re-enable them.
  • the \o option writes all query results to a text file on the file system (under /tmp). It is disabled again by doing \o with no parameters.
  • The select query uses the system pg_catalog.pg_tables table to pull out all user-made tables and formulate a little SQL expression to create the index. Example output of this is shown below.
  • the second \o disables output recording as mentioned above.
  • the \i executes the commands in a file from the filesystem – in our case the file full of ‘create index’ commands we created using the select statement above.

Example output:

CREATE INDEX idx_a_munic_geom ON za.a_munic
USING GIST ( the_geom ) TABLESPACE optspace ;
CREATE INDEX idx_c_munic_geom ON za.c_munic
USING GIST ( the_geom ) TABLESPACE optspace ;
CREATE INDEX idx_d_councl_geom ON za.d_councl
USING GIST ( the_geom ) TABLESPACE optspace ;

Note: You can see a list of your tablespaces like this:

select * from pg_catalog.pg_tablespace;
spcname   | spcowner |   spclocation    | spcacl | spcoptions
------------+----------+------------------+--------+------------
pg_default |       10 |                  |        |
pg_global  |       10 |                  |        |
homespace  |    16384 | /home/pg9-data   |        |
optspace   |    16384 | /opt/pg9-indexes |        |
(4 rows)
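For scripting, the interactive \t / \o steps above can be collapsed into a single non-interactive command. This is a sketch only: it builds the same SQL and prints the psql invocations as a dry run rather than executing them (the database name gis is an example):

```shell
#!/bin/bash
# Sketch: the same index-generation query, driven from the shell.
# psql -A -t gives unaligned, tuples-only output (the equivalent of \t),
# and shell redirection replaces \o. 'gis' is an example database name.
SQL="SELECT 'CREATE INDEX idx_' || tablename || '_geom ON '
            || schemaname || '.' || tablename
            || ' USING GIST ( the_geom ) TABLESPACE optspace;'
     FROM pg_catalog.pg_tables
     WHERE schemaname NOT IN ('pg_catalog','information_schema');"

# Dry run: print the two commands instead of executing them.
echo "psql -A -t -d gis -c \"\$SQL\" > /tmp/createindexes.sql"
echo "psql -d gis -f /tmp/createindexes.sql"
```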

Working with tablespaces in PostGIS

The PostgreSQL tablespace system is a convenient way to manage storage space allocation in PostgreSQL and perhaps improve performance. Tablespaces allow you to spread your databases or parts of your databases around different parts of your filesystem, and more importantly onto different physical storage devices. This is important because under Ubuntu / Debian, the default is to put the database cluster into the /var directory under the root of the filesystem hierarchy. Of course you can always mount the postgres subdirectory under var on a different filesystem, but that doesn’t offer much flexibility. Here is a little example of how I spread my ‘gis’ PostGIS-enabled database across two tablespaces – one located on my /home directory (which is /dev/sdb1) and one on my /opt directory (which is /dev/sda1). I’ll put database indexes in the /opt subdir and actual data in the /home subdirectory.

First we create two subdirectories and change their ownership to postgres:

sudo mkdir /opt/pg9-indexes
sudo mkdir /home/pg9-data
sudo chown postgres /opt/pg9-indexes
sudo chown postgres /home/pg9-data

Now let’s go ahead and create the tablespaces in postgres (I used template1, but you can be in any database). You need superuser rights for this:

psql template1
CREATE TABLESPACE optspace LOCATION '/opt/pg9-indexes';
CREATE TABLESPACE homespace LOCATION '/home/pg9-data';

Now let’s create a database, storing its data in /home/pg9-data:

createdb --tablespace=homespace gis

If you look in the pg9-data directory, you will see a new subdirectory has been made:

sudo ls /home/pg9-data/PG_9.0_201008051
17332

Then go ahead and load your data into postgres. Now lets create a spatial index for a table in my ‘world’ schema, setting the tablespace to the /opt partition:

CREATE INDEX idx_cities_geom ON world.cities
USING GIST ( the_geom ) TABLESPACE optspace;

Now a quick look in our /opt directory should show you the index existing on the separate drive:

sudo ls /opt/pg9-indexes/PG_9.0_201008051
17332

If you want to automate the updating of all indexes for a given schema you can do it in a few steps like this (inspired by comments in this blog article):

psql gis
\o /tmp/updateindexes.sql
select 'ALTER INDEX world.'||indexname||' SET TABLESPACE optspace ;'
from pg_catalog.pg_indexes where schemaname = 'world' order by tablename;
\q

In this case I am generating a little sql that will update all the indexes in my ‘world’ schema to use the optspace tablespace, and writing it out to a file (/tmp/updateindexes.sql). Next we can run  this on our database (from bash):

cat /tmp/updateindexes.sql | grep ALTER | psql gis

That’s it! One thing to note: if you are a Windows user you probably can’t benefit from tablespaces, as the docs seem to indicate that you need a filesystem that supports symlinking… though I haven’t verified this one way or the other myself.
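To double-check where things actually landed without poking around the filesystem, you can ask the catalogs directly. A sketch (dry run, printing the psql command; note that objects in the cluster default tablespace have reltablespace = 0, hence the LEFT JOIN and COALESCE):

```shell
#!/bin/bash
# Sketch: ask the PostgreSQL catalogs which tablespace each index in the
# 'world' schema ended up in. relkind = 'i' selects indexes.
QUERY="SELECT c.relname,
              COALESCE(t.spcname, '<cluster default>') AS tablespace
       FROM pg_class c
       JOIN pg_namespace n ON n.oid = c.relnamespace
       LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
       WHERE c.relkind = 'i' AND n.nspname = 'world';"

# Dry run: show the command you would run against the 'gis' database.
echo "psql gis -c \"\$QUERY\""
```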

 

Reference: http://blog.lodeblomme.be/2008/03/15/move-a-postgresql-database-to-a-different-tablespace/

Running PostgreSQL 8.4 and 9.0 side by side on Ubuntu

Ok, so I’m feeling a bit left behind – everyone else seems to have jumped over to 9.0 while I have resolutely been sticking to the stable version shipped with Ubuntu or Debian (depending on the machine I am working on). However, a client has a bunch of GIS data in PostgreSQL 9.0 / PostGIS 1.5 and I needed to be able to replicate their environment. In this article I will describe how I got PostgreSQL 8.4 running side by side with PostgreSQL 9.0, both with PostGIS installed.

I will assume you already have postgresql 8.4 and postgis installed via Ubuntu apt…

First, install PostgreSQL 9.0 from the PPA

Add this to your /etc/apt/sources.list (as root):

# PPA for PostgreSQL 9.x
deb http://ppa.launchpad.net/pitti/postgresql/ubuntu natty main
deb-src http://ppa.launchpad.net/pitti/postgresql/ubuntu natty main

Then do:

sudo apt-get update

Note: Although pg 9.1 beta packages are available, PostGIS 1.5 won’t work with it, so use 9.0:

sudo apt-get install postgresql-contrib-9.0 postgresql-client-9.0 \
postgresql-server-dev-9.0 postgresql-9.0

You should check that you have the correct pg_config in your path:

pg_config --version

If it is not 9.0, link it:

cd /usr/bin
sudo mv pg_config pg_config.old
sudo ln -s /usr/lib/postgresql/9.0/bin/pg_config .

Download and install postgis

There are no packages available for postgresql 9 so we need to build from source.

cd ~/dev/cpp
wget -c http://postgis.refractions.net/download/postgis-1.5.2.tar.gz
tar xfz postgis-1.5.2.tar.gz
cd postgis-1.5.2
./configure

You should get something like this (note that it is building against PostgreSQL 9):

PostGIS is now configured for x86_64-unknown-linux-gnu
 -------------- Compiler Info -------------
  C compiler:           gcc -g -O2
  C++ compiler:         g++ -g -O2
 -------------- Dependencies --------------
  GEOS config:          /usr/bin/geos-config
  GEOS version:         3.2.0
  PostgreSQL config:    /usr/bin/pg_config
  PostgreSQL version:   PostgreSQL 9.0.4
  PROJ4 version:        47
  Libxml2 config:       /usr/bin/xml2-config
  Libxml2 version:      2.7.8
  PostGIS debug level:  0
 -------- Documentation Generation --------
  xsltproc:             /usr/bin/xsltproc
  xsl style sheets:
  dblatex:
  convert:              /usr/bin/convert

Now build and install

make
sudo make install

Using 8.4 and 9.0 side by side

Apt will install 9.0 side by side with 8.x without issue – it just sets it to run on a different port (5433 as opposed to the default 5432). This means you can comfortably use both (at the expense of some disk and processor load on your system). In order to make sure you are using the correct database, always add -p 5433 when you wish to connect to 9.0. Your 8.4 install should be running unchanged. The 9.0 install will be ‘blank’ to begin with (no user-created tables). I always make myself a superuser on the database and create an account that matches my unix login account so that I can use ident authentication for my day-to-day work:

sudo su - postgres
createuser -p 5433 -d -E -i -l -P -r -s timlinux

Note the -p option to specify that I am connecting to PostgreSQL 9.0! After you have entered your password twice, type exit to leave the postgres user account.
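To avoid forgetting the -p 5433, I find it handy to define tiny wrappers for the 9.0 cluster in my shell profile. A sketch (the names are my own invention, not standard tools):

```shell
#!/bin/bash
# Sketch: shell functions that pin the client tools to the 9.0 cluster's
# port. Put these in ~/.bashrc; the '9' suffix is an arbitrary convention.
psql9()     { psql -p 5433 "$@"; }
createdb9() { createdb -p 5433 "$@"; }
dropdb9()   { dropdb -p 5433 "$@"; }

# Usage would then look like:
#   createdb9 gis
#   psql9 gis
```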

Install postgis tables and functions into template1

You can create a new template, install the PostGIS functions into each database that needs them separately, or (as I prefer) install them to template1 so that every database you create will automatically be spatially enabled. I do this because I very seldom need a non-spatial database, and I like to have PostGIS there by default.

 

createlang -p 5433 plpgsql template1
psql -p 5433 template1 -f \
/usr/share/postgresql/9.0/contrib/postgis-1.5/postgis.sql
psql -p 5433 template1 -f \
/usr/share/postgresql/9.0/contrib/postgis-1.5/spatial_ref_sys.sql

 

Validating that postgis is installed

To do this we will connect to the template1 database and run a simple query:

psql -p 5433 template1
select postgis_lib_version();
\q

Should produce output like this:

WARNING: psql version 8.4, server version 9.0.
Some psql features might not work. Type "help" for help.
template1=# select postgis_lib_version();
 postgis_lib_version
----------------------
 1.5.2
(1 row)

Creating a new database

The last thing to do is to create a new database. As you may have noticed above, the 8.4 command line tools are still the default as they have PATH preference, so when working explicitly with pg 9.0 it’s good to do:

export PATH=/usr/lib/postgresql/9.0/bin:$PATH

in your shell to put PostgreSQL 9.0 first in your path. A quick test will confirm that it has worked:

createdb --version
createdb (PostgreSQL) 9.0.4

With that set up nicely, let’s create our new database:

createdb -p 5433 foo
psql -p 5433 -l

You can verify the database was created correctly by using psql -l (once again explicitly setting the port number).

That’s all there is to it! Now you can connect to the database from QGIS (just make sure to use the correct port number!), load spatial data into it using shp2pgsql, and so on. I for one am looking forward to trying out some of the new goodies in PostgreSQL 9.0.

 

Update: There are packages available for PostGIS (see comments below). Also, for convenience you can set your pg port to 5433 for your session by doing:

export PGPORT=5433
pixelstats trackingpixel

Running Posgresql 8.4 and 9.0 side by side on ubuntu

Ok so I'm feeling a bit left behind - everyone else seems to have jumped over to 9.0 and I have resolutely been sticking to the stable version shipped with ubuntu or debian (depending on the machine I am working on). However a client has a bunch of gis data in pg 9.0 / postgis 1.5 and I needed to be able to replicate their environment. In this article I will describe how I got Postgresql 8.4 running side by side with Postgresql 9.0, both with Postgis installed.

I will assume you already have postgresql 8.4 and postgis installed via Ubuntu apt...

First install postgres 9.0 from ppa

Add this to your /etc/apt/sources.list (as root):

# ppa for postgis 9.x
deb http://ppa.launchpad.net/pitti/postgresql/ubuntu natty main
deb-src http://ppa.launchpad.net/pitti/postgresql/ubuntu natty main

Then do:

sudo apt-get update

Note: Although pg9.1 beta packages are available, postgis 1.5  wont work with it so use 9.0

sudo apt-get install postgresql-contrib-9.0 postgresql-client-9.0 \
postgresql-server-dev-9.0 postgresql-9.0

You should check that you have the correct pg_config in your path

pg_config --version

If it is not 9.0, link it:

cd /usr/bin
sudo mv pg_config pg_config.old
sudo ln -s /usr/lib/postgresql/9.0/bin/pg_config .

Download and install postgis

There are no packages available for postgresql 9 so we need to build from source.

cd ~/dev/cpp
wget -c http://postgis.refractions.net/download/postgis-1.5.2.tar.gz
tar xfz postgis-1.5.2.tar.gz
cd postgis-1.5.2
./configure

You should get something like this (note building against postgresql 9)

PostGIS is now configured for x86_64-unknown-linux-gnu
 -------------- Compiler Info -------------
 C compiler:           gcc -g -O2
C++ compiler:         g++ -g -O2
 -------------- Dependencies --------------
GEOS config:          /usr/bin/geos-config
GEOS version:         3.2.0
PostgreSQL config:    /usr/bin/pg_config
PostgreSQL version:   PostgreSQL 9.0.4
PROJ4 version:        47
Libxml2 config:       /usr/bin/xml2-config
Libxml2 version:      2.7.8
PostGIS debug level:  0
 -------- Documentation Generation --------
xsltproc:             /usr/bin/xsltproc
xsl style sheets:
dblatex:
convert:              /usr/bin/convert

Now build and install

make
sudo make install

Using 8.4 and 9.0 side by side

Apt will install 9.0 side by side with 8.x without issue - it just sets it to run on a different port (5433 as opposed to the default 5432). This means you can comfortably use both (at the expense of some disk and processor load on your system). In order to make sure you are using the correct database, always add a -p 5433 when you wish to connect to 9.0. Your 8.4 should be running unchanged. The 9.0 install will be 'blank'. to begin with (no user created tables). I always make myself a superuser on the database and create and account that matches my unix login account so that I can use ident authentication for my day to day work:

sudo su - postgres
createuser -p 5433 -d -E -i -l -P -r -s timlinux

Note the -p option to specify that I am connecting to postgresql 9.0! After you have entered your password twice, type ```exit``` to exit the postgres user account.

Install postgis tables and functions into template1

You can create a new template, install postgis functions into each database that needs them separately, or (as I prefer) install them to template1 so that every database you create will automatically be spatially enabled. I do this because I very seldom need a non-spatial database, and I like to have postgis there as default.

createlang -p 5433 plpgsql template1
psql -p 5433 template1 -f \
/usr/share/postgresql/9.0/contrib/postgis-1.5/postgis.sql
psql -p 5433 template1 -f \
/usr/share/postgresql/9.0/contrib/postgis-1.5/spatial_ref_sys.sql

Validating that postgis is installed

To do this we will connect to the template1 database and run a simple query:

psql -p 5433 template1
select postgis_lib_version();
\q

Should produce output like this:

WARNING: psql version 8.4, server version 9.0.
         Some psql features might not work.
Type "help" for help.

template1=# select postgis_lib_version();
 postgis_lib_version
---------------------
 1.5.2
(1 row)

Creating a new database

The last thing to do is to create a new database. As you may have noticed above, the 8.4 command line tools are still the default as they have PATH preference, so when working explicitly with pg 9.0, it's good to do:

export PATH=/usr/lib/postgresql/9.0/bin:$PATH

in your shell to put postgresql first in your path. A quick test will confirm that it has worked:

createdb --version
createdb (PostgreSQL) 9.0.4

With that setup nicely, let's create our new database:

createdb -p 5433 foo
psql -p 5433 -l
You can verify the database was created correctly by using psql -l (once again also explicitly setting the port number).
That's all there is to it! Now you can connect to the database from QGIS (just make sure to use the correct port number!), load spatial data into it using shp2pgsql and so on. I for one am looking forward to trying out some of the new goodies in Postgresql 9.0.

Update: There are packages available for postgis (see comments below). Also, for convenience you can set your pg port to 5433 for your session by doing:

export PGPORT=5433
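PGPORT is one of the standard libpq environment variables, so exporting it changes the default port for psql, createdb, pg_dump and every other libpq client at once. A minimal sketch (the commented-out commands are illustrative only, since they need a running server):

```shell
# Export once per session; all libpq-based tools then default to 5433.
export PGPORT=5433
# createdb foo   # would now target the 9.0 cluster without -p
# psql -l        # ditto
echo "PGPORT=$PGPORT"
```

PGHOST, PGUSER and PGDATABASE work the same way if you need to pin more of the connection.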

Quantum GIS in volunteer fire stations

Here is a nice little article in PC World magazine that mentions the use of QGIS in computer centers in volunteer fire stations. If we have wormed our way into a PC World magazine, I guess we (the Quantum GIS project) have finally ‘made it’ as a popular desktop GIS eh? Thanks William Kinghorn for the pointer!


Building custom QGIS installers for Windows

A client wanted their own custom QGIS build which included some patches and a selection of additional python plugins that were to be available from point of installation. Additionally we wanted to brand the installer with their logo and the splash screen too so that users realise they are using a custom build. The actual build process under windows using msvc is fairly straightforward (see the INSTALL document in the root of the QGIS source tree) so I won't discuss that here. Rather I am going to show you an approach for building such a custom installer.

Creating a branch

Since we live in git land now, the first thing I am going to do is create a branch for my client’s customisations. If any changes in that branch are useful to the master branch,  I will cherry pick them across later using git cherry-pick.

git checkout -b sansa-branch

Now apply all your code level customisations to the sources in your branch and go ahead and compile everything to make sure it builds, links and installs.
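The cherry-pick step mentioned above can be rehearsed on a throwaway repo; everything here (paths, branch and commit names) is hypothetical:

```shell
# Miniature of the cherry-pick flow: a fix committed on the client
# branch is copied back onto the main line as its own commit.
DEMO=$(mktemp -d) && cd "$DEMO"
git init -q .
git config user.email demo@example.com
git config user.name demo
MAIN=$(git symbolic-ref --short HEAD)      # master or main, depending on git
echo base > file.txt
git add file.txt && git commit -qm "base"
git checkout -qb sansa-branch
echo fix >> file.txt
git add file.txt && git commit -qm "useful fix"
FIX=$(git rev-parse HEAD)
git checkout -q "$MAIN"
git cherry-pick "$FIX" >/dev/null          # copy just that one commit across
git log -1 --format=%s                     # -> useful fix
```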

Customising the installer sidebars

The installer sidebars can be altered in the gimp - simply edit:

ms-windows/Installer-Files/sidelogomaster.xcf.bz2

You can open it directly (no need to untar it first). Then use the gimp project to create new versions of:

ms-windows/Installer-Files/WelcomeFinishPage.bmp
ms-windows/Installer-Files/UnWelcomeFinishPage.bmp

Make sure to keep the image dimensions the same. Those can be committed to your local branch:

git commit -m "Customised sidebar images" \
ms-windows/Installer-Files/WelcomeFinishPage.bmp \
ms-windows/Installer-Files/UnWelcomeFinishPage.bmp
Our custom installer running with new sidebar images

Customising the Splash Screen

The splash screen master can be found here:

images/splash/splash.xcf.bz2

Once again, you can open this directly in the gimp and edit it according to your needs. When you are ready to save the final splash, resize it to 600×287 pixels first. Be careful to also preserve the
transparency in the corners properly or otherwise you will see artifacts when you start QGIS which won’t look nice. The saved splash should be saved as:

images/splash/splash.png
Making a splash: Our custom QGIS build's splash screen

Creating a package tree

There are two ways you can go here…

Way 1:

You can take an existing build of QGIS, extract it under windows, copy your build outputs from msvc over the apps/qgis or apps/qgis-dev  directory and then copy the whole lot over to linux. When it is in your linux box, place it under:

ms-windows/osgeo4w/unpacked

In this case the contents under unpacked should be what was originally under your c:\Program Files\Quantum GIS directory.

One important thing to note is that there are various batch files that are run after QGIS installs. After they are run they are renamed to have a .done extension after them. We need to remove this extension so that they get run after your custom QGIS installer has run. Here are a few lines of bash that do just that:

cd ms-windows/osgeo4w/unpacked
mv postinstall.bat.done postinstall.bat
cd etc
for FILE in *.done; do mv $FILE $(basename $FILE .done); done
cd ../../../..

Way 2:

You can run the creatensis.pl script like this, specifying a shortname, e.g. qgis or qgis-dev:

cd osgeo4w
./creatensis.pl --shortname=qgis

It will take a while to run if you have a slow connection like I do. After it is done, place the build outputs from msvc into the file tree at:

unpacked/apps/qgis

So in my case, msvc installed the QGIS I built into c:\program files\qgis1.7.0 – which now becomes the unpacked/apps/qgis directory.

Understanding the batch files

Another important thing to understand is how the various batch files are called within your package tree. Immediately after installation, the postinstall.bat program is run. Its main job is to set a bunch of environment variables and then call some subsidiary scripts living in the etc/postinstall directory. One common issue is that you create your installer, install it on a windows machine, and then nothing happens when you click the icon. Usually this will be caused by a mismatch between your qgis launcher script name and the shortcut that was created to call it. The naming convention used in osgeo4w is to install the current stable version of qgis under apps/qgis and the latest development build under apps/qgis-dev. Typically you will only want to ship one or the other to your users - that is, an installer based on the current stable release, or one based on the current QGIS trunk. Either way what is important is that everything gets named consistently. For this article, I will use a shortname of 'qgis' throughout. So you should check your postinstall.bat to make sure that it's calling an appropriate qgis postinstall script. For example mine looked like this:

echo Running postinstall qgis-dev.bat...
%COMSPEC% /c etc\postinstall\qgis-dev.bat>>postinstall.log 2>&1
ren etc\postinstall\qgis-dev.bat qgis-dev.bat.done>>postinstall.log 2>&1

But I am standardising on using apps/qgis for my qgis directory, so I need to change it to look like this:

echo Running postinstall qgis.bat...
%COMSPEC% /c etc\postinstall\qgis.bat>>postinstall.log 2>&1
ren etc\postinstall\qgis.bat qgis.bat.done>>postinstall.log 2>&1

Similarly you need to check in etc/postinstall to make sure that the qgis.bat script referred to here exists:

[unpacked] ls etc/postinstall/
grass.bat  msvcrt.bat  msys.bat  openssl.bat  pyqt4.bat  qgis-dev.bat  qt4-libs.bat  sip.bat

You can see once again the name doesn't gel, so I will rename mine to 'qgis.bat':

mv etc/postinstall/qgis-dev.bat etc/postinstall/qgis.bat

Lastly, are the actual contents of that batch file consistent?

textreplace -std -t bin\qgis-dev.bat
mkdir "%OSGEO4W_STARTMENU%"
xxmklink "%OSGEO4W_STARTMENU%\Quantum GIS (1.7.0).lnk" "%OSGEO4W_ROOT%\bin\qgis-dev.bat" " " "Quantum GIS - Desktop GIS (1.7.0)" 1 "%OSGEO4W_ROOT%\apps\qgis-dev\icons\QGIS.ico"
xxmklink "%ALLUSERSPROFILE%\Desktop\Quantum GIS (1.7.0).lnk" "%OSGEO4W_ROOT%\bin\qgis-dev.bat" " " "Quantum GIS - Desktop GIS (1.7.0)" 1 "%OSGEO4W_ROOT%\apps\qgis-dev\icons\QGIS.ico"
set O4W_ROOT=%OSGEO4W_ROOT%
set OSGEO4W_ROOT=%OSGEO4W_ROOT:\=\\%
textreplace -std -t "%O4W_ROOT%\apps\qgis-dev\bin\qgis.reg"
"%WINDIR%\regedit" /s "%O4W_ROOT%\apps\qgis-dev\bin\qgis.reg"

They are not, so I will update them to correct that:

textreplace -std -t bin\qgis.bat
mkdir "%OSGEO4W_STARTMENU%"
xxmklink "%OSGEO4W_STARTMENU%\Quantum GIS (1.7.0).lnk" "%OSGEO4W_ROOT%\bin\qgis.bat" " " "Quantum GIS - Desktop GIS (1.7.0)" 1 "%OSGEO4W_ROOT%\apps\qgis\icons\QGIS.ico"
xxmklink "%ALLUSERSPROFILE%\Desktop\Quantum GIS (1.7.0).lnk" "%OSGEO4W_ROOT%\bin\qgis.bat" " " "Quantum GIS - Desktop GIS (1.7.0)" 1 "%OSGEO4W_ROOT%\apps\qgis\icons\QGIS.ico"
set O4W_ROOT=%OSGEO4W_ROOT%
set OSGEO4W_ROOT=%OSGEO4W_ROOT:\=\\%
textreplace -std -t "%O4W_ROOT%\apps\qgis\bin\qgis.reg"
"%WINDIR%\regedit" /s "%O4W_ROOT%\apps\qgis\bin\qgis.reg"

Whether or not you actually need to tweak these scripts is largely dependent on your starting point. I can only say I had lots of issues because I had mismatches between references to qgis and qgis-dev, so it is useful to understand the sequence of events. The last script we edited will ultimately create the shortcut icon on the user's desktop and menu system.
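A cheap way to catch such mismatches is to grep the whole tree for the old shortname once you have standardised on 'qgis'. The snippet below demonstrates the idea on a tiny stand-in tree (the real check is just the grep, run from inside unpacked/):

```shell
# Build a tiny stand-in for the unpacked/ tree (hypothetical contents).
TREE=$(mktemp -d) && cd "$TREE"
mkdir -p etc/postinstall bin
printf '%s\n' 'etc\postinstall\qgis.bat' > postinstall.bat
touch etc/postinstall/qgis.bat bin/qgis.bat.templ
# The actual check: any surviving qgis-dev reference is a likely dead shortcut.
if grep -rq "qgis-dev" .; then
  echo "stray qgis-dev references found - fix them"
else
  echo "tree is consistent"
fi
```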

You also need to check that the preremove scripts are similarly consistent, so check these scripts are there and internally correct:

preremove.bat
etc/preremove/qgis-dev.bat
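Both the preremove rename and the launcher-template fix described next can be rehearsed in a scratch tree before touching the real one (the file names below mirror the unpacked/ layout but are created fresh here):

```shell
# Scratch tree standing in for unpacked/ (hypothetical layout).
SCRATCH=$(mktemp -d) && cd "$SCRATCH"
mkdir -p etc/preremove bin
touch etc/preremove/qgis-dev.bat bin/qgis-dev.bat.templ bin/qgis.bat
# Rename the preremove script to match the 'qgis' shortname.
mv etc/preremove/qgis-dev.bat etc/preremove/qgis.bat
# Remove the stale launcher and rename the template to match too.
rm -f bin/qgis.bat
if [ -f bin/qgis-dev.bat.templ ]; then
  mv bin/qgis-dev.bat.templ bin/qgis.bat.templ
fi
ls etc/preremove bin
```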

Lastly you should remove bin/qgis.bat if it is present and check that bin/qgis.bat.templ is there. If it is named qgis-dev.bat.templ rather than qgis.bat.templ, rename it accordingly. Now check inside this script to make sure everything is consistent there too. The following diagram describes these processes visually.

How batch files are used in QGIS windows edition

Customising what gets shipped

Here is the opportunity to add some customisations to the installer contents. For example I wanted to include ecw and sid support in my  installer, as well as a bunch of the nicer python plugins that are out there. Simply copy the files you need into the correct place in the unpacked directory tree. For example, I took a range of python plugins from my ~/.qgis/python/plugins/ directory and placed them in:

unpacked/apps/qgis/python/plugins/

e.g.


cd unpacked/apps/qgis/python/plugins/
cp -r ~/.qgis/python/plugins/bcccoltbl1/ .
cp -r ~/.qgis/python/plugins/cadtools/ .
cp -r ~/.qgis/python/plugins/metatools/ .
cp -r ~/.qgis/python/plugins/numericalDigitize/ .
cp -r ~/.qgis/python/plugins/numericalVertexEdit .
cp -r ~/.qgis/python/plugins/openlayers .
cp -r ~/.qgis/python/plugins/openlayersov .
cp -r ~/.qgis/python/plugins/postgis_manager .
cp -r ~/.qgis/python/plugins/QGISFileBrowser .
cp -r ~/.qgis/python/plugins/qgsAffine .
cp -r ~/.qgis/python/plugins/rawrasterfileimport .
cp -r ~/.qgis/python/plugins/shadedrelief .
cp -r ~/.qgis/python/plugins/tablemanager .
cp -r ~/.qgis/python/plugins/topocolour .
cp -r ~/.qgis/python/plugins/valuetool .

Building an installer

Now we get to the part you have been waiting for – building an installer. There is a convenience script here that will do most of the work for you:

<qgis source tree root>/ms-windows/quickpackage.sh

Before you run the script, you should check that the shortname inside this file matches the one we used in the steps above (in this case 'qgis' rather than 'qgis-dev').

makensis \
-DVERSION_NUMBER='1.7.0' \
-DVERSION_NAME='Wroclaw' \
-DSVN_REVISION='0' \
-DQGIS_BASE='Quantum GIS' \
-DINSTALLER_NAME='QGIS-1-7-0-Sansa-Edition-Setup.exe' \
-DDISPLAYED_NAME='Quantum GIS 1.7.0' \
-DBINARY_REVISION=1 \
-DINSTALLER_TYPE=OSGeo4W \
-DPACKAGE_FOLDER=osgeo4w/unpacked \
-DSHORTNAME=qgis \
QGIS-Installer.nsi

You can also use the opportunity to customise the name of the installer and the name of the version and release number of QGIS if you like (though I don't recommend the latter two much as they provide a good point of reference to users). When you are finished, simply run the quickpackage script and wait a while:

 ./quickpackage.sh

After a few minutes the script will come to an end and you should have a nice shiny new QGIS-1-7-0-Sansa-Edition-Setup.exe (or whatever you called it) created in the ms-windows directory. Now it's simply a matter of installing it on a clean windows machine to test it, running it through a virus checker and then putting it online for people to enjoy.

 

Our custom QGIS showing the modified version name in the title bar

Ethics

With all the flexibility that QGIS offers, being open source and 'out there' for anyone to hack on, it's easy to take what the project provides and rebrand it substantially:

  • you can create a custom icon theme
  • you can remove unwanted user interface elements
  • you can replace help text, about text etc
  • you can rename the application itself to something else
  • you can change the application icon

and so on and so on. One thing you may not do is claim the fundamental work to be your own. Any time you make a publicly available custom version of QGIS, be sure to remember that you need to redistribute the full source code (including all your customisations) with it in order to comply with the letter and spirit of the GPL.

One final thing to mention is that at the Lisbon hackfest, Martin Dobias and Radim Blazek demonstrated a really slick and easy to use framework for creating stripped down versions of the QGIS user interface. It introduces the idea of profiles that you can nominate at application start up. So a school teacher may for example provide her students with just the few icons and tools they need to complete a particular exercise. Look out for that in a future version of QGIS I guess...

If anyone is interested in having custom branded versions of QGIS for use within their organisation, we (linfiniti) will be happy to help - just pop us an email.


QGIS at the University of Cape Town (UCT)

One of the things I have always strongly believed is that if FOSSGIS is to become pervasive, we need to make inroads into educational institutions - schools and universities - to try to make FOSS GIS part of the standard curriculum so that students realise there are FOSS alternatives to the range of proprietary software they typically get trained to use. If you read my blog regularly, you will have noticed that we quite often present courses, workshops etc at educational institutions to meet this end. This year we added a new partner to the list - the University of Cape Town. It's been a slightly different approach this time, where Siddique Motala has included QGIS as part of the list of required assignments that students doing entry level GIS need to complete. My involvement consisted of two classroom sessions and the creation of a practical assignment. In the first classroom session, I gave various presentations on FOSS and its place in society, the QGIS project and so on. Last friday we had our practical session where I walked the students through the assignment that they need to complete.

For the assignment (which you can download here) the students need to identify possible areas where a species may occur based on various parameters such as slope steepness, aspect, rainfall and so on. When we initially planned the assignment, the idea was to show the students how to use the GRASS plugin in QGIS and let them use that to complete the analysis steps. However there are so many improvements in the upcoming QGIS 1.7 release that I was able to work out a workflow which does not need GRASS at all (and makes heavy use of the GDAL Tools plugin). Because of this I created a QGIS 1.7 preview windows installer for the students to do their practical with.

Siddique, the course lecturer, has a great approach to his work and is very open minded about FOSS GIS and the introduction of FOSSGIS to his students. I am looking forward to many constructive collaborations with him in the future, and to the prospect of seeing FOSSGIS form a greater part of their GIS curriculum. The real boon is that when his students go off to their workplace, they will at least be informed about alternative options that are out there and be able to apply QGIS to their work when needed.

I'm new here ...

And so it begins.

Hey, my name is Rudi, I just started here at Linfiniti. The ink is still wet on my degree certificate, so the idea is for me to learn a few things and hopefully also be useful to other people along the way. I'll be around for 3 months (or longer, wish me luck).

Lots of stuff to do already, so I won't be bored ;) and I'll post again with anything interesting ...

Some notes on the great migration : QGIS svn to git

If you follow the QGIS developer list, you will probably be aware that we are migrating our source code management (SCM) over from SVN to GIT. This is the second great migration we are doing. Early in the project we used to use CVS and then migrated to SVN when Sourceforge introduced SVN into their hosting offering. That SVN repository was then moved over to svn.osgeo.org who have been kindly and reliably providing SVN hosting services to us for the last few years. Our project is growing in the number of developers and contributors and SCM tools are also developing at an exciting pace these days. GIT is going to make our lives easier in many respects and make all who want to dabble in QGIS development first class citizens, with the ability to create their own repo and do their development in a proper SCM environment before contributing their changes back to the main QGIS repository via a pull request if they so desire. I have previously written some thoughts on why it might be nice to migrate to GIT here, so you can get more context for the change there if you like.

In this article I am going to describe in gory detail everything I did to migrate our project to GIT while preserving branches and tags and cleaning things up a little along the way. If you are not interested in the technical details, there is no need to read further; rather, pay a visit to our new GITHUB Quantum-GIS repository :-).

The first step was to get the complete git port of svn from Gary's server. This is a git-svn clone from the top level of QGIS trunk.

wget -c http://spatialserver.net/qgis_repo.zip
unzip qgis_repo.zip
cd qgis_all/

Now update it so that it matches the current state of svn head:

git svn fetch
git svn rebase

Now make a local tracking branch for every remote branch in svn:

for remote in `git branch -r `; do \
git branch --track local-${remote} $remote; git checkout local-${remote};\
git svn rebase; git checkout master; done
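Run in isolation, the same loop behaves like this. The sketch below is self-contained: since it needs remote-tracking branches to iterate over, the git-svn remotes are simulated with an ordinary clone of a scratch 'upstream' repository, and every path and name is made up for the demo.

```shell
#!/bin/sh
# Self-contained sketch of the tracking-branch loop (toy repositories).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q upstream && cd upstream
git config user.email demo@example.com && git config user.name Demo
echo hello > file && git add file && git commit -qm "initial"
git branch topic                      # a second "remote" branch
cd .. && git clone -q upstream work && cd work
# One local branch per remote branch, prefixed with 'local-' so that
# local and remote branch names can never collide:
for remote in $(git branch -r | grep -v HEAD); do
  git branch -q --track "local-${remote}" "${remote}"
done
git branch                            # lists local-origin/topic etc.
```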

Note that the above one-liner prepends local- to each branch so that there are no overlaps between local and remote branch names. Now we need to clone this multiple times, each time making a new repository from one of the old subdirectories under svn's trunk directory:

articles  code_examples  design  documentation_source  external_plugins  qgis  qgiswebclient  tools

In this article we only show the process followed for creating a 'qgis' repository:

git clone --no-hardlinks qgis_all qgis
cd qgis

Now make sure to bring all the branches over:

for remote in `git branch -r `; do git branch --track `echo $remote | sed 's/^local-//g'` $remote; done

Ok now cut out just the subdir we want and turn it into the top level dir:

git filter-branch -f --subdirectory-filter qgis HEAD -- --all

The above step may take a while and removes all the other subdirectories from this clone. When it is done you should see something like:

Rewrite be48e38daa9da6ea875836b1c3bedb3bbc362646 (11499/11499)
Ref 'refs/heads/master' was rewritten

The -- --all part at the end is important - it will ensure that tags and branches are preserved as the new head is defined. If you do an ls, you will see that qgis/* now forms the top-level directory:

qgis$ ls
BUGS             CMakeLists.txt   COPYING    Doxyfile
images           ms-windows       qgis.1     resources    tests
ChangeLog        cmake_templates  debian     Exception_to_GPL_for_Qt.txt
INSTALL          PROVENANCE       qgis.dtd   scripts      TODO
cmake            CODING           doc        i18n
mac              python           README     src          tools
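The effect of --subdirectory-filter with -- --all can be seen on a toy repository. Everything in the sketch below (paths, file names, the tag name) is invented for the demonstration; note that without --tag-name-filter the tag ref survives but is not itself rewritten, which matches the tag issue mentioned in the addenda.

```shell
#!/bin/sh
# Toy demonstration of git filter-branch --subdirectory-filter.
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1   # silence newer git's warning
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com && git config user.name Demo
mkdir qgis docs
echo 'int main(){}' > qgis/main.cpp
echo notes > docs/readme.txt
git add . && git commit -qm "monolithic layout"
git tag snapshot
# Rewrite history so qgis/ becomes the repository root; '-- --all'
# makes filter-branch consider every ref, not just the current branch:
git filter-branch -f --subdirectory-filter qgis -- --all >/dev/null 2>&1
ls        # only main.cpp remains at the top level
git tag   # the tag ref is still present
```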

Now we do a garbage collection and prune:

git gc --aggressive
git prune
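You can watch the cleanup take effect with git count-objects. The sketch below uses a throwaway toy repository; on the real QGIS clone the numbers are of course much larger.

```shell
#!/bin/sh
# Sketch: loose objects before gc, everything packed after.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git config user.email demo@example.com && git config user.name Demo
echo data > file && git add file && git commit -qm "one commit"
git count-objects -v | grep '^count:'   # a few loose objects
git gc --quiet --aggressive
git prune
git count-objects -v | grep '^count:'   # count: 0 - all packed now
```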

I wanted to do a little more cleanup to the branch names, so I ran this little one-liner to strip out the origin/local- prefix:

for BRANCH in `git branch`; \
do NEWBRANCH=`echo $BRANCH | sed 's/origin\/local-//g'`; \
echo $NEWBRANCH; git branch -m $BRANCH $NEWBRANCH; done

During the migration, svn tags got turned into branches, so let's get rid of them by turning them back into tags:

for TAG in `git branch | grep tags`; \
do NEWTAG=$(echo $TAG| sed 's/tags\///g'); echo $NEWTAG; \
git tag $NEWTAG $TAG; git branch -D $TAG; done
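On a toy repository the branch-to-tag conversion looks like this (a self-contained sketch; the 'tags/release-1_0' branch name is invented to mimic what git-svn makes of an svn tag):

```shell
#!/bin/sh
# Toy version of converting svn-style 'tags/...' branches into real tags.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git config user.email demo@example.com && git config user.name Demo
echo x > f && git add f && git commit -qm "initial"
git branch tags/release-1_0          # what git-svn makes of an svn tag
for TAG in $(git branch | grep tags); do
  NEWTAG=$(echo "$TAG" | sed 's/tags\///g')
  git tag "$NEWTAG" "$TAG"           # real tag at the same commit
  git branch -D "$TAG" >/dev/null    # drop the pseudo-tag branch
done
git tag                              # prints: release-1_0
```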

There are also a bunch of branches created with @ symbols in them e.g.:

table_join_branch  table_join_branch@13932  threading-branch  threading-branch@13644

We don't need the second of each, so let's clear those away:

for BRANCH in `git branch | grep @`; do git branch -D $BRANCH; done

Branch names have not been assigned very consistently in the past, so I did a little tidy-up work in the process:

git branch -D bdworld
git branch -D local-Release-1_6_0
git branch -D local-Release-1_7_0
for BRANCH in `git branch | grep '\-branch'`; do \
NEWBRANCH=$(echo $BRANCH | sed 's/-branch//g'); \
git branch -m $BRANCH $NEWBRANCH; done
for BRANCH in `git branch | grep '_branch'`; do \
NEWBRANCH=$(echo $BRANCH | sed 's/_branch//g'); \
git branch -m $BRANCH $NEWBRANCH; done
for BRANCH in `git branch | grep -v Release | grep -v origin`; do \
NEWBRANCH=$(echo $BRANCH | sed 's/_/-/g'); \
git branch -m $BRANCH $NEWBRANCH; done
for BRANCH in `git branch | grep -v origin`; do \
NEWBRANCH=$(echo $BRANCH | tr '[A-Z]' '[a-z]'); \
git branch -m $BRANCH $NEWBRANCH; done

Finally, I prepend 'dev-' to all the non-release branches:

for BRANCH in `git branch | grep -iv release | grep -v origin | grep -v master`; do \
NEWBRANCH=dev-$BRANCH; \
git branch -m $BRANCH $NEWBRANCH; done

Here is what our final list of branches looks like:

dev-advanced-editing
dev-advanced-printing
dev-advanced-printing2
dev-analysis
dev-composer-redesign
dev-dataprovider-overhaul
dev-datasource
dev-diagram
dev-drag-and-drop
dev-gdalogr-capi
dev-generic-raster-datatype
dev-grass-tidyup
dev-lib-refactoring
dev-mapcanvas
dev-maptips
dev-multiple-layers
dev-ogr-plugin
dev-projections-branch
dev-provider0-9
dev-qgis
dev-qgis-rev-up
dev-qgsproject
dev-qt4-ui-conversion
dev-raster-providers
dev-raster-transparency
dev-rendercontext
dev-renderer
dev-sdts
dev-start
dev-symbology-ng
dev-table-join
dev-threading
dev-trunk
dev-ubuntu-dapper
dev-vector-overlay
dev-vendor
dev-version-1-0
dev-wfs
master
origin/HEAD
origin/master
release-0.10.0
release-0.9.0
release-0.9.2rc1
release-0_11_0
release-0_5_candidate
release-0_6-candidate
release-0_7-candidate
release-0_8_0
release-0_8_1
release-0_9_0
release-0_9_1
release-1_0_0
release-1_0_1
release-1_0_2
release-1_1_0
release-1_2_0
release-1_3_0
release-1_4_0
release-1_5_0
release-1_6_0
release-1_7_0

As you can see from the above list there were a few strays which I manually cleaned up with:

git branch -m  release-0.10.0 release-0_10_0
git branch -D release-0.9.0
git branch -m  release-0.9.2rc1 release-0_9_2rc1
git branch -m  release-0_5_candidate release-0_5rc1
git branch -m  release-0_6-candidate release-0_6rc1
git branch -m  release-0_7-candidate release-0_7rc1

Now push the repo over to github. The two git config lines below ensure that all branches and tags are pushed over too.

git config --global user.name "Tim Sutton"
git config --global user.email tim@linfiniti.com
git remote add github git@github.com:qgis/Quantum-GIS.git
git config --add remote.github.push '+refs/heads/*:refs/heads/*'
git config --add remote.github.push '+refs/tags/*:refs/tags/*'
git push github
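The refspec configuration can be exercised end to end without touching github. In the self-contained sketch below a local bare repository stands in for github, and the branch and tag names are made up for the demo:

```shell
#!/bin/sh
# Sketch of the push-refspec trick with a local bare repo as the remote.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare hub.git
git init -q work && cd work
git config user.email demo@example.com && git config user.name Demo
echo x > f && git add f && git commit -qm "initial"
git branch dev-feature && git tag release-1_0
git remote add github ../hub.git
# Mirror every branch and every tag in one push:
git config --add remote.github.push '+refs/heads/*:refs/heads/*'
git config --add remote.github.push '+refs/tags/*:refs/tags/*'
git push -q github
git ls-remote --heads ../hub.git   # both branches arrived
git ls-remote --tags  ../hub.git   # the tag arrived
```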

Now you can check out the github repo like this:

git clone git@github.com:qgis/Quantum-GIS.git qgis-github-clone

Please note we are still testing the above repo and we may still drop and recreate it a few times before it can be considered final. We will notify folks on the qgis-devel mailing list when we are ready to go. If you are a developer, I do encourage you to check out the repo, kick the tires and let me know if there is anything amiss.

That's the process in a nutshell. The latter parts need to be repeated for each of the underlying subdirectories of trunk. Let's hope you all enjoy the migration to GIT - I know I am looking forward to saying goodbye to SVN.

Addenda:

One thing we noticed is that the tags did not properly contain only the pruned qgis directory - still working to resolve that.

Also, the github repo had a couple of extra branches called origin/master and origin/HEAD (besides the actual master and HEAD). From our github clone we removed them like this:

git push origin :origin/HEAD
git push origin :origin/master

The commands are a bit cryptic - they delete a branch on a remote repository: an empty source ref before the colon means "delete the destination ref".
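The same deletion can be tried safely against a throwaway local remote that really does carry a stray branch literally named 'origin/master' (all paths and names below are invented for the sketch):

```shell
#!/bin/sh
# Sketch: deleting a remote branch by pushing an empty source ref.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare hub.git
git init -q work && cd work
git config user.email demo@example.com && git config user.name Demo
echo x > f && git add f && git commit -qm "initial"
git remote add origin ../hub.git
git push -q origin HEAD:refs/heads/master
git push -q origin HEAD:refs/heads/origin/master   # the stray branch
git ls-remote --heads ../hub.git                   # two branches
# Nothing before the colon means: delete the ref on the remote.
git push -q origin :refs/heads/origin/master
git ls-remote --heads ../hub.git                   # only master left
```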

Wrapping up the QGIS Meeting in Lisbon, April 2011

The meeting was hosted by IGOT (Instituto de Geografia e Ordenamento do Território), a research institute within the 'Faculdade de Letras de Universidade de Lisboa'. We would like to extend our heartfelt thanks to the Institute and University for hosting us, and in particular the following academic staff members who supported our meeting:

  • Prof. Doutor Diogo Abreu (Director of Centro de Estudos Geográficos)
  • Prof. Nuno Marques Da Costa
  • Prof. Paulo Morgado
  • Prof. Nelson Mileu
  • Dr. Fernando Benedito

We would also like to thank Manuel Ordas (Director) and the staff of the Faculty Informatics Department (Unidade de Informática e Telecomunicações) for the excellent job they did in supporting the meeting with our Internet and IT needs!

The meeting had several sponsors, and we are extremely grateful for their sponsorship.

The meeting was organised by Giovanni Manghi and Vânia Neves of Faunalia.PT. They did an incredible job of organising the meeting, printing t-shirts, keeping everyone fed and dealing with all the behind-the-scenes logistics so well that everything in the venue felt seamless and we could really focus on what we were here for. A big personal thank you from me too to Giovanni and Vânia for hosting me at their house in the lovely town of Evora when I arrived in Portugal.

Vânia Neves and Giovanni Manghi - meeting organisers and all-round superheroes!

Town Hall Meeting

On the Sunday we had a town hall meeting to discuss various project-wide topics. First on the agenda was the QGIS Certification Programme. Plans for the programme are still under development and there has been some progress and discussion on the QGIS-edu mailing list. We still have a lot more to do to make this a reality and will try to get some more movement on this topic by the next hackfest.

GIT and Redmine. The topic of GIT and Redmine took up most of the discussion. I spent some time explaining the benefits of moving to GIT as I see them and some of the implications it will have on the project and our working practices. There was some discussion as to whether it was a good idea to migrate to Redmine (a new ticket management system) at the same time. The motivation for moving to GIT is summarised somewhat here. It was agreed that we will move to GIT after we have branched for the 1.7 release, but before we make the release announcements. It is proposed that the project master repository will be hosted on github under our organisation area (where the current test QGIS git repository exists). A lot of discussion centered around how the social dynamics would work - and what working practices we would convey to users and developers. In particular there was some question about whether it was a good idea to use the github pull request system and the github ticket queue - the central concern being that we would like the provenance of who is submitting changes, and from what repositories they were pulled, to be preserved within our ticket system. As such, the approach we will take will be to ask contributors to submit their pull requests via our own ticketing system rather than the github pull request mechanism, and to ask users not to use the github ticketing system.

A parallel discussion took place about migrating from trac to redmine. The central argument is that in order to provide a platform for users to submit tickets relating to third party plugins, and to provide a platform for plugin writers to easily create an issue tracking system, we would like to use the new redmine instance that has been set up on QGIS.org servers. However, this would result in us having two disparate ticketing systems, and we would prefer to unify and simplify our web offerings. In addition, a migration to GIT will also entail some substantive updates to our trac instance, so it was suggested to use the period of change introduced by GIT to also update our issue tracking at the same time.

More details will be thrashed out via the mailing list, but the take-home message is this: we will move to GIT and Redmine after we have branched for the QGIS 1.7 release, but before we make the release announcements. In the release announcements, we will simultaneously inform our user and developer community that they should use GIT and Redmine henceforth.

The next item on the agenda was the finance report from Paolo Cavallini. It was extremely encouraging to see that the flow of donations has been increasing and has been able to sustain our QGIS meetings thus far. As our developer community grows, more are taking advantage of the funding opportunities provided by the QGIS treasury. We are looking to bring in around 2500 euros in order to support the next meeting (more on that below), so continued support via the donation button on QGIS.org will always be appreciated. There was a collective 'thank you' from those present to the kind folks who have donated to the project thus far! We also had some discussion on how to deal with 'targeted' donations - following the kind donation of 740 euros by Ramon Andinach. In making the donation, the request was made that it should be used towards bug fixing and improving QGIS. It was generally agreed that we are ok with such a donation targeted towards a general area of activity in the project. A further part of the discussion centered around how we could use the money to create some form of bounty system whereby developers or contributors would receive a small amount for each bug they close. The exact system that will be used was discussed at length but remained unresolved. Paolo is going to draw up a proposal which we will discuss further via the mailing list. In general we hope to structure the bug hunting exercise in a way that will minimise the supervision required from those with commit access and at the same time attract new developer talent into our community.

Following this, we discussed the venue for the next QGIS Meeting. Marco Hugentobler offered to convene the meeting in Zurich in November, which we will all look forward to attending. More details on the venue, dates, accommodation etc. will be made available via our mailing lists in the coming months.

We took the opportunity after the town hall meeting to take a group photo (below).

QGIS Meeting attendees, Lisbon, April 2011 (not all attendees present in photo)

In the next sections I will try to summarise other interesting discussions that took place over the course of the meeting.

Web Infrastructure

A number of us held a meeting to discuss the web infrastructure on QGIS.org. We are in the process of rolling out some new web applications, with Alessandro Pasotti leading the effort. We discussed at length the requirements for the new plugin repository. The new repository will make it easier to administer plugins, prevent plugins becoming orphaned, and will form a platform in the future for building more social capabilities around the plugin repository. These will include a tagging system, ratings, download stats to reflect popularity, and so on. There are further plans for the web site, with the general idea of minimising the diversity of different online applications we need to maintain (the new batch all forming part of a single django project), making maintenance easier, and providing richer community based resources to our users in the future.

Borys espousing his ideas at the web infrastructure meeting

QGIS Mapserver

A meeting was held to discuss the QGIS Mapserver project (spearheaded by Marco Hugentobler). Marco is planning various improvements to rendering performance, some of which he worked on during the meeting, including support now for on-the-fly reprojection of rasters via the QGIS Mapserver.

User Interface

A break out meeting was held to discuss the user interface for QGIS and ways in which it can be improved. Unlike the discussion on the same topic we held in Wroclaw, where we got a little lost in the enormity of reviewing every dialog in QGIS, this time we took a different approach. The talk centered around identifying cross cutting user interface issues that can be addressed by the creation of generic Qt widgets. Several different areas were discussed.

First up, we discussed the need for a generic managed tree widget (which will probably be implemented as QgsManagedTreeWidget). The idea here is that there are many places in the user interface where functionality is required to manage lists. In each case this functionality is implemented as separate, standalone logic with different user interface elements. An example of this can be seen in the vector layer properties dialog on the actions tab. In this case a list of actions is provided with a row of buttons underneath. In another instance, the new generation symbology symbol layer editor also implements a managed list for the symbol layers, but with completely different buttons, icons, etc. There are numerous other places in the QGIS application that have different, diverging implementations of managed lists, which we will seek to unify via the implementation of a custom widget.

Similar discussions were held about the legend areas, where we plan to introduce additional functionality to support:

  • a table of contents view (as currently available)
  • a providers view -- a tree with top level nodes for each provider and leaves representing each resource from that provider that can be added to the map (via drag and drop). This will allow you to, for example, browse your file system tree and add one or more vector or raster layers, then from within the WMS provider node drag in a WMS resource, and so on.
  • a plugins view - a tree which can be filtered and which lists each plugin action available. Plugin actions will be able to be dragged onto the user interface - for example to allow user to create their own toolbars with a mix and match of plugins contained therein.

We discussed other potential views that we could implement in the future.

The last area of the interface that received detailed discussion was the vector NG symbol selector and designer. After a bit of brainstorming we mocked up a more ergonomic approach to the selection, customisation and creation of symbols. Once again this will be implemented as a generic widget when we have the needed resources available.

Orfeo Toolbox Integration

I took the opportunity to have a long one-on-one chat with Julien Malik from the Orfeo Toolbox project. He kindly walked me step by step through the setup and configuration process on my own laptop. The following day Julien spent an hour or so presenting the OTB project to the entire group. The OTB project covers a huge range of functionality of particular interest to those involved in remote sensing activities. It can carry out tasks such as atmospheric correction, image classification, resampling, feathering, mosaicking, pan sharpening and many more to boot. At the end of the more formal part of his presentation, Julien demonstrated live some of the tools and showed the state of integration of OTB within QGIS. To summarise, the OTB project is principally a library (making heavy use of template programming) for C++ programmers to build remote sensing tool chains. On top of this library they have created a number of applications. The approach they have taken to build these applications is to use application generators. Thus in order to create an application for, say, resampling an image, they create a short application that calls the needed library functions and that implements a specific set of API calls that they have defined. This could be considered a metaprogram because it is compiled into a library and then various user interfaces are automatically generated for it. For example a command line app, a Qt widget, a Qt standalone application, a QGIS plugin, and so on. This approach means they can rapidly and easily produce a huge range of tools based on their library, and have them generated into applications useful in different environments. One of the disadvantages is that currently this approach means that when creating an image processing chain, the result of each step in the toolchain needs to be written to disk - which can consume huge amounts of space.
However, internally the OTB library has the capability to chain tools in a very efficient way whereby only the final output gets written to disk and all intermediate products are pipelined from one tool to the next and never written to disk. Although the current implementation uses the formerly described approach of writing each output to disk, there was some discussion about how we can implement the more efficient model in QGIS. Martin Dobias also made some suggestions to create python bindings from the OTB-Applications and then generate QGIS python plugins using the same mechanism as described above. This has three potential benefits:

  • it will remove a bug in OTB when it is linked to QGIS caused by the Ubuntu GDAL & GTIFF library packages
  • it will possibly allow end users to contribute more readily to the python plugins for OTB
  • it will allow the user interface component of the OTB integration to be updated and pushed out to users via the standard QGIS repository mechanism.

I have probably done the OTB project and its team a disservice by giving it such scant coverage above - I believe it can potentially be a huge boost to the QGIS project, bringing it into the hands of remote sensing specialists in addition to just GIS professionals and users.

The Orfeo Toolbox built from the OTB-Applications project

New Plugin In a Weekend

Raymond Nijssen gave a cool little presentation at the end of Sunday showing the plugin he drummed up over the weekend. The plugin implements a vector attribute calculator where the formula used to calculate the new field values can be any evaluated python statement, as opposed to the restricted list of operators allowed by the implementation in QGIS core. In addition you can access the geometry column so the computed value can include geometry operations via the python geos bindings. It's really impressive to see just how quickly significant new features and tools can be created for QGIS now that we have a rich library as foundation. Raymond's plugin should make its way into the python plugin repositories soon. I bumped into Raymond again in the airport on my way home and he kindly stood me a cup of tea and we had an enjoyable chat for half an hour while waiting for our respective flights - thanks Raymond!

Documentation

Jean Roc gave an interesting talk where he showed off the work they had done getting the QGIS documentation (French version) into a print-ready state and publishing it on Lulu.com (an on-demand online print service). He had an exemplar copy with him and the product looks great - really professional. He briefly mentioned some of the steps they needed to take to get the documentation ready for printing. At some point we will aim to get an English version onto lulu.com and see if it is able to help plump out the income stream to the QGIS project. If you are a French speaker, you can buy the bound user manual here.

Mobile

At our last developer meeting I didn't notice it, but this time around virtually every developer was toting a nice Android phone of some sort. I saw some really crazy things (yes I'm talking about you Saber - running a debian install in a chroot on your phone and then vnc'ing into it with a mobile vnc client is pretty nuts), and we had some quite hilarious incidents (imagine 30 GIS geeks ambling around the streets of Lisbon staring at our phones trying to find a restaurant and taking an hour-long route for what should be a ten minute walk). There were also a few folks running other smartphone platforms like Maemo and so on.

One of the offshoots of all this smartphone mania is that there was quite a lot of interest in seeing QGIS as a mobile platform. Now when I say QGIS, I'm not talking about the whole desktop environment, but rather the core QGIS library, some raster and vector providers and some custom mobile-oriented user interface on top of them. Porting QGIS is not entirely trivial since all the dependencies (GDAL, GEOS, PROJ4 etc.) need to be recompiled for the platform. Qt4 is already available on android (search for Ministro on the app store) so that's at least one thing we don't have to worry about too much.

At the meeting there were a handful of folks who seem to be well versed in building apps for mobile platforms and they set to work towards getting a basic QGIS port underway. I spent a very informative and interesting hour or two with Gabriele Franch. He demonstrated QtQuick to me and we built a simple hello world app and mixed in native QtWidgets with QtQuick declarative derived user interface components. We also spent some time discussing what kind of user interface and functionality we would like to see make it onto the mobile platform. For me I can see an environment something like:

  • a full screen map canvas
  • tools that appear after swiping your finger from offscreen to onscreen
  • offline editing capability
  • local viewing of QGS projects (perhaps migrated to sqlite3 with sqlite raster support)
  • simple forms and data capture capabilities
  • reading and capturing data from GPS

Many of these were shared by (or originated from) those at the meeting. Although there is nothing concrete available yet, I get the feeling a mobile version of QGIS won't be long in coming one way or the other - something I will look forward to!

QGIS - PgAdmin3 Integration

Vincent Picavet gave an interesting talk on some work he has done to integrate QGIS and PGAdmin3.  Basically he has implemented a macro in PGAdmin3 that places a query you have created into a notification area in the database. On the QGIS side, he implemented a plugin that observes the database for these notifications, and when one is received, runs the query and displays the results on the QGIS map. This approach means that you can use the comfortable and powerful tools in PGAdmin3 to create queries, and then quickly visualise the results directly in QGIS. There are some existing python plugins that let you create and run arbitrary queries with QGIS and view the results directly, but none will give the wealth of capabilities that PGAdmin3 offers.

Some General Thoughts

This QGIS Meeting was a little different in that it was the first time we have held the meeting during feature freeze, so there wasn't the same 'feature fever' that has accompanied our past meetings (something I have always enjoyed). Also we seemed to spend a lot more time engaged in talks and discussions and less time coding - perhaps ironic given the change in moniker from 'hackfest' to 'meeting' for our six-monthly get-togethers. We still conducted very much an 'unmeeting' with little in the way of formal agenda or session planning beforehand, and I think the format works well, although my feeling is that next time we should try to constrain the non-coding portions of the event to a single day (or even better, a morning session).

There was still quite a bit of work going on in the bug fixing front and some long standing and critical map rendering problems were resolved.

Bug #7 - one of the oldest bugs in our bug queue (5 years old) killed by Juergen Fischer

There were a fair share of new faces at the meeting which is always great, and I had the pleasure of meeting Maxim Dubinin, Alessandro Pasotti, Giovanni Manghi and Vânia Neves in person, all of whom I have known virtually up until now.

In summary, it was a great meeting, personally I had a great time and left enthused more than ever about my favourite FOSSGIS Desktop software!
