QGIS Planet

OpenStreetMap (OSM) cartography using QGIS (part 2)

In my previous blog post I mentioned that I would show how to download OSM data, load it into Postgres, connect the database to QGIS and view the OSM data in QGIS. I will also explain how I created the styles in my swatch board using the QGIS new symbology... but let's first download the data.

OSM test dataset

The OSM project provides free spatial data which you can download and use for your own projects without any restrictions. Visit http://wiki.openstreetmap.org to learn more about OSM. My test dataset was Bayern in Germany, so I downloaded OSM data for this region from the CloudMade downloads site. You can find data for various places around the world on that site, or alternatively use OSM extracts from Geofabrik. To download the data do:

wget -c http://downloads.cloudmade.com/europe/germany/bayern.osm.bz2

Loading data into Postgres

I used osm2pgsql to import the OSM data into Postgres. This article has general information on osm2pgsql: http://wiki.openstreetmap.org/wiki/Osm2pgsql. You can also use Osmosis to load OSM data into Postgres. To install osm2pgsql do:

sudo apt-get install osm2pgsql
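
Before running osm2pgsql, the target database needs to exist and have PostGIS enabled. As a minimal sketch (assuming a PostGIS version new enough to support CREATE EXTENSION; older installs load the postgis.sql scripts instead), you can prepare it in psql:

-- create the target database and enable the extensions osm2pgsql relies on
CREATE DATABASE osm;
\c osm
CREATE EXTENSION postgis;
CREATE EXTENSION hstore;  -- only needed if you plan to import tags with --hstore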

then,

time osm2pgsql -s -v -d osm -C 3000 bayern.osm.bz2

to load the data into Postgres, where osm is the name of the database and bayern.osm.bz2 is the file to be imported. If the import is successful, connect the database to QGIS.

Connecting PostGIS to QGIS

There are various GIS packages which you can use to connect to PostGIS, but I personally prefer QGIS, mostly because of its clean, user-friendly interface. To connect to PostGIS from QGIS, choose Add PostGIS Layer from the Layer menu in the menu bar. Alternatively, click the Add PostGIS Layer icon in the toolbar. A dialog where you must enter your database credentials will pop up.

After entering your database information, click Test Connect. If the connection is successful, click OK to close the dialog and then Connect to view the PostGIS tables. The tables should appear as shown below, with names starting with planet_osm. The Type column in the dialog shows the geometry type of each table.

Note that there are two linestring tables. In my case, one (planet_osm_roads) contained roads only and the other (planet_osm_line) contained both roads and other line features such as rivers. I noticed that some roads appeared in the line table but not in the roads table, so you should check both tables to avoid omissions. Click the tables in the list and then Add to view the features in QGIS.
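
To check for that kind of omission without clicking through attribute tables, you can run a rough query in psql (a sketch assuming the default osm2pgsql schema, where both tables carry osm_id and highway columns):

-- highway values present in the full line table but with no matching way in the roads table
SELECT l.highway, count(*) AS missing_ways
FROM planet_osm_line l
LEFT JOIN planet_osm_roads r ON r.osm_id = l.osm_id
WHERE l.highway IS NOT NULL
  AND r.osm_id IS NULL
GROUP BY l.highway
ORDER BY missing_ways DESC;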

After connecting the OSM database to QGIS, I opened the attribute tables to analyse them. Unlike a set of ordinary shapefiles, OSM stores everything in one huge database. Features are made up of nodes (the basic unit, lat-long coordinates), ways (series of nodes) and relations. Since I had never used OSM data before, I read up on the format of OSM data and how attributes are defined on the OSM wiki: http://wiki.openstreetmap.org/wiki/. For more detailed information on using OSM data in QGIS you can read this article: http://www.qgis.org/wiki/Using_OpenStreetMap_data
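
Because the import above was run in slim mode (-s), osm2pgsql also keeps its intermediate node, way and relation tables in the database, which makes this data model easy to inspect for yourself. A quick sketch (assuming the default planet_osm_nodes, planet_osm_ways and planet_osm_rels table names):

-- rough counts of the raw OSM primitives kept by slim mode
SELECT 'nodes' AS primitive, count(*) FROM planet_osm_nodes
UNION ALL
SELECT 'ways', count(*) FROM planet_osm_ways
UNION ALL
SELECT 'relations', count(*) FROM planet_osm_rels;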

OSM External Data - Coastlines

One of the problems I faced was finding good quality, large scale, global coastline datasets. I decided to look for external coastline data after realizing that the coastlines I had downloaded from OSM were not accurate (in most cases my roads would run over the coast). To make sure my map wouldn't be spoiled by incorrect coastlines, I thought a set of coastlines at various scales (1:10 mil, 1:2 mil, 1:1 mil, 1:250 000 and 1:70 000) would be best. But this turned out to be an impossible mission, since I only found 1:10 mil and 1:1 mil (VMap) data. The naturalearthdata site, with coastlines at 1:10 mil, 1:50 mil and 1:110 mil, can be helpful, but there are no 1:2 mil, 1:250 000 or 1:70 000 coastlines. I also found the NOAA coastline extractor, but its 1:70 000 medium resolution coastline only covers the US and the rest of the world is omitted. I asked online but still didn't get the coastlines I wanted, and ended up using administrative boundaries (http://gadm.org/world) at small scales.

Applying QGIS Styles to OSM data

To apply the swatch board styles to OSM roads in QGIS, right click the layer name (in my case planet_osm_roads) and click Properties. Click the New Symbology button on the Symbols tab and click OK when asked if you wish to use the new symbology implementation for the layer. The Properties dialog will change slightly. The advantage of the new symbology is that you can create your own styles, save them within QGIS and quickly re-use them.

Click the drop-down next to Single Symbol and select Categorised. The Categorised renderer renders all the features from a layer using a single user-defined symbol whose colour reflects the value of a selected attribute. Click the drop-down next to Column and choose highway as the attribute; the highway tag in OSM is the primary tag used for any kind of road, street or way. Click Classify and you will get a list of all the road classes, though they will initially all share the same symbol.
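
If you want to see the same list of classes outside QGIS, the query below (a sketch against the default osm2pgsql roads table) shows roughly what Classify does:

-- the road classes the Categorised renderer will pick up, busiest first
SELECT highway, count(*) AS ways
FROM planet_osm_roads
WHERE highway IS NOT NULL
GROUP BY highway
ORDER BY ways DESC;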

Using Categorized Symbology in QGIS

To apply individual styles to the road classes, double click a symbol to open the Style Selector and click Change. A Symbol Properties dialog which you can use to create your own symbol will appear.

Symbol properties dialog-box

To create a symbol with a fill and an outline colour, select Simple Line as your Symbol Type. Use Color to set the colour, Pen width to set the width of the road and Pen style to set the style of the line (solid, dash or dot). Click the Add symbol layer button at the left of the Symbol Properties dialog to add the outline colour for the road. A new Simple Line will be added to the Symbol layers list. Use Color to set its colour and increase the Pen width to a value slightly higher than the fill. Click the blue arrow to move the outline colour below the fill colour. You will be able to see the symbol you created in the symbol preview.

Road Symbol Preview

If you're happy, click OK and then the Save as Style button to save the style you created. Create as many styles as you like in the same manner and save them. The Style Manager stores all your styles and allows you to add, edit, remove, export and import them. After creating and saving styles you should have a list of all your styles.

Saved QGIS styles

My Layer Properties dialog after categorizing and styling the roads is shown below.

Roads Styles

Part of my roads rendered using QGIS new symbology with categorized classes.

Roads with styles created using QGIS new symbology (click to enlarge)


GDAL Native Raster Stats in QGIS

During the Lisbon hackfest (between all those meetings!) I worked on updating the raster providers to be able to generate stats themselves. The raster provider base class has a default (and historically inefficient, as we all know) implementation for collecting stats by walking over every cell in the raster twice. This method remains, but the providers can now supply stats themselves (hopefully using more efficient mechanisms). I have implemented GDAL native stats already. Over the last few weeks I did some further testing and cleanups of this work. From my testing, some of my worst case files opened in ~1 minute rather than ~8+ minutes. It would be great if interested parties could test (details below) before I put it into master.

git remote add timlinux git://github.com/timlinux/Quantum-GIS.git
git fetch timlinux
git branch --track raster-stats timlinux/raster-stats
git checkout raster-stats

I look forward to your feedback.

 


OpenStreetMap (OSM) cartography using QGIS (part 1)

Hi, my name is Petty. I worked as an intern at Linfiniti after completing my honours degree in GIS last year.

Petronella Tizora - Linfiniti member

A few months ago I did a project with Linfiniti, so I'll be sharing the steps I took and a few cartography hints and tips which might come in handy if you're interested in making beautiful, useful maps. The idea was to create an original, visually attractive and consistent cartographic scheme for representing OSM datasets using QGIS. The first step I took was to build a swatch board.

Swatch Board

The purpose of the swatch board was to help me plan and organize color schemes and feature scales. The advantage of planning your work before you start a project is that you have a basic picture of your output before you begin, and you save time by referring to your initial plan and keeping on track with predefined symbology. My swatch board was a table in OpenOffice with descriptions of features, colour values and classification fields.

I defined the symbology bearing in mind that the map background may be raster (e.g. NASA Blue Marble or a global DEM) or vector (like areas from OSM). I also looked at Google, Bing, OSM and Yahoo maps, and at websites such as ColourLovers (www.colourlovers.com, a cool site), to get inspiration on colours to use in my map. All attractive maps seem to have one thing in common... contrasting colours. A normal user should be able to distinguish between freeways and main roads without taking a second look at the map. The colours you choose for your maps influence the message you send to the viewer. Deciding which colours to use on a map can be difficult, especially if you want to create a unique map. However, there are various sites which can help you create themes for your map. Below are some of the sites which I found very useful.

Color Schemer

Color Schemer allows you to pick a color or enter its RGB or hex value, and generates matching colors based on your first choice. You can also lighten or darken the scheme. Color Scheme Designer works in a similar manner and further allows you to adjust, export and even view colors as a color blind person would see them.

Color Brewer

Color Brewer lets you view various color schemes based on the number of classes and the type of scheme you choose for your map. It also provides some advice through its "learn more" articles.

ColorLovers

ColourLovers is a site where people create and share palettes, patterns and colors. You can also search for palettes and patterns and use them in your own projects.

I generated colors using Color Schemer and used some of them to make a palette on the ColourLovers site. Click on the image to see the RGB values for the roads in my map.

Roads colour palette generated using ColourLovers

Below is a screenshot of part of my swatch board.

Swatch board

I also created another table which defined scale breaks and line thicknesses for each feature.

Scale breaks and line thicknesses

After all colours, scales and road thicknesses were defined, it was time to get it working on the actual data. Read Part 2 of this post to learn how to download OSM data, load it into a Postgres database and apply the styles in QGIS.


Using QtCreator with QGIS

Getting up and running with QtCreator and QGIS

Firstly, let me preface this article by saying that I am not a huge fan of IDEs - I've been using VIM for more than a decade and it's a tough habit to break. That said, QtCreator has a lot going for it (including a special concession to VIM users) and it is a lot easier to use for debugging than crusty old gdb on the command line. So in the article below I am going to run through an optimal (from my point of view, anyway) setup for using QtCreator to build, develop and debug QGIS.

QtCreator is a newish IDE from the makers of the Qt library. With QtCreator you can build any C++ project, but it's really optimised for people working on Qt(4) based applications (including mobile apps). Everything I describe below assumes you are running Ubuntu 11.04 'Natty'.

Installing QtCreator

This part is easy:

sudo apt-get install qtcreator qtcreator-doc

After installing you should find it in your gnome menu.

Setting up your project


I'm assuming you have already got a local Quantum-GIS clone containing the source code, and have installed all needed build dependencies etc. There are detailed instructions on doing that here:
http://github.com/qgis/Quantum-GIS/blob/master/CODING

On my system I have checked out the code into $HOME/dev/cpp/Quantum-GIS and the rest of the article is written assuming that; you should update these paths as appropriate for your local system.
On launching QtCreator do:

File->Open File or Project

Then use the resulting file selection dialog to browse to and open this file:

$HOME/dev/cpp/Quantum-GIS/CMakeLists.txt

 

Open the top level CMakeLists.txt

Next you will be prompted for a build location. I create a specific build dir for QtCreator to work in under:

$HOME/dev/cpp/Quantum-GIS/build-master-qtcreator

It's probably a good idea to create separate build directories for different branches if you can afford the disk space.

Build location

Next you will be asked if you have any CMake build options to pass to CMake. We will tell CMake that we want a debug build by adding this option:

-DCMAKE_BUILD_TYPE=Debug

CMake Parameters

That's the basics of it. When you complete the Wizard, QtCreator will start scanning the source tree for autocompletion support and do some other housekeeping stuff in the background. We want to tweak a few things before we start to build though.

Setting up your build environment

Click on the 'Projects' icon on the left of the QtCreator window.

Project properties

Select the build settings tab (normally active by default).

Build settings

We now want to add a custom process step. Why? Because QGIS can currently only run from an install directory, not its build directory, so we need to ensure that it is installed whenever we build it. Under 'Build Steps', click on the 'Add Build Step' combo button and choose 'Custom Process Step'.

Add custom step

Now we set the following details:

Enable custom process step [yes]
Command: make
Working directory: $HOME/dev/cpp/Quantum-GIS/build-master-qtcreator
Command arguments: install

Creating a custom process step

You are almost ready to build. Just one note: QtCreator will need write permissions on the install prefix. By default (which I am using here) QGIS is going to get installed to /usr/local. For my purposes on my development machine, I just gave myself write permissions to the /usr/local directory.

To start the build, click that big hammer icon on the bottom left of the window.

The build icon

Setting your run environment

As mentioned above, we cannot run QGIS directly from the build directory, so we need to create a custom run target to tell QtCreator to run QGIS from the install dir (in my case /usr/local/). To do that, return to the projects configuration screen.

Project properties

Now select the 'Run Settings' tab

The run settings tab

We need to update the default run settings from using the 'qgis' run configuration (shown below) to using a custom one.

Default executable which we want to override

To do that, click the 'Add v' combo button next to the Run configuration combo and choose 'Custom Executable' from the top of the list.

Add a custom executable

Now in the properties area set the following details:

Executable: /usr/local/bin/qgis
Arguments :
Working directory:
$HOME
Run in terminal: [no]
Debugger: C++ [yes]
          Qml [no]

Then click the 'Rename' button and give your custom executable a meaningful name, e.g. 'Installed QGIS'.

Custom executable properties

Running and debugging

Now you are ready to run and debug QGIS. To set a break point, simply open a source file and click in the left column.

Adding a breakpoint

Now launch QGIS under the debugger by clicking the icon with a bug on it in the bottom left of the window.

Run QGIS in debug mode

Conclusion

There are a few other handy tips and tricks you can use in QtCreator which I will try to cover in a future article. With the information presented above you should at least be on your way to having QGIS set up in QtCreator. Having an interactive debugger at your fingertips is a great way to learn to code in QGIS as it is easy to follow the logic flow from one class to the next. It also makes it easy for casual developers to dive in and find a bug that is irritating them and either send us a comprehensive bug report or send us a fix :-).


Frank Warmerdam to become a Google guy

If you follow the OSGeo planet blog aggregator, you may have noticed Frank Warmerdam's recent blog post mentioning that he is off to work for Google. If I think back over the blog posts I have written, GDAL features in a great many of them - it's a mainstay tool for me and barely a day goes by that I don't gdal_translate some file or what-have-you. Hell, I just finished running a batch job that ran for about 3 weeks and comprised nothing more than bash scripts and GDAL commands.

In QGIS too, GDAL is one of the key underpinnings of our project - it is only now, some 8 years into the project, that we are starting to abstract away the use of GDAL for reading rasters a little, to make space for other raster provider types. And of course OGR is one of the main ways that our users interact with vector data from within QGIS.

Personally, Frank has also been extremely helpful whenever I have wandered into the #gdal IRC channel looking for help. Reading his blog post made me think that Google doesn't quite realise yet just how lucky they are to get him. If Google has any heart, they will make Frank a 'minister without portfolio' and just tell him to keep working on GDAL and enjoy the free food in their cafeterias. Here's hoping, at least, that he still gets to spend a little time on the software that so many of us love using and depend on.


QGIS Users Map Updates

One of the things we recently added to qgis.org is a new, Django-based QGIS community map. So far we have 60 users registered and growing. If I get a chance, I will import the old users from the community map I used to run on QGIS.org some years ago. Today I made a few tweaks, including a 'meet a random user' option. It's fun to see the faces of our users appear on the map - some of which are now becoming familiar faces. In the future, I will write a little Joomla module to put random users on the qgis.org home page too.

Our new random user feature - Hello Andreas! :-) Click for larger view.

By the way, all of the code for the new web apps we have written is freely available here. I look forward to pull requests with customisations etc. - and to see others making use of these apps for their own projects.

 


Listing the number of records in all PostgreSQL tables

I love bash and the GNU tools provided on a Linux system. I am always writing little one-liners that help me do my work. Today I was generating some fixtures for Django (yes, I am finally properly learning to use unit testing in Django). I wanted to know how many records were in each table in my database. Here is the little one-liner I came up with:

 

for TABLE in $(psql foo-test -c "\dt" | awk '{print $3}'); do \
psql foo-test -c "select '$TABLE', count(*) from $TABLE;" | grep -A1 "\-\-\-\-" | tail -1; done

 

It produces output that looks something like this:

auth_group |     0
auth_group_permissions |     0
auth_message |     0
auth_permission |   273
auth_user |   366
auth_user_groups |     0
auth_user_user_permissions |     0
etc.
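
If approximate figures are good enough, much the same listing can be had from a single query against the statistics views instead of looping in bash. A sketch (note that n_live_tup is an estimate maintained by PostgreSQL, not an exact count):

-- approximate row counts for every user table, straight from the statistics collector
SELECT relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY relname;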


Some thoughts on the future of QGIS

Today we (the QGIS project) sent out the release announcement for QGIS 1.7. This release took much longer than we would have liked – it included many new features and we implemented many changes to our project infrastructure – all of which consumed time and slowed the release process. There was quite a bit of debate around the release, roughly divided into 3 camps:

- let's get the release out now, it's overdue and we need to move on
- let's hold up until there are no more critical bugs
- let's put the release out but mark it as beta, release candidate or similar

It's always difficult to keep everyone happy, and some comments, like this one from Markus Neteler (of GRASS fame), are especially resonant:

If QGIS crashes several times in a row, I will tend to abandon it as a user.

 Recently I had to downgrade (like other colleagues I know) to QGIS 1.5
 since 1.6 was unusable for me. And if QGIS 1.7 is even worse (I stopped
 trying trunk which I used to do elsewhere), I will not even try to install it.
 Sorry if that sounds harsh but I start to no agree much with the apparent
 QGIS' policy of "we-release-with-most-features-possible-but-with-low-stability"
 which impacts more and more the recent QGIS releases.
 Please note that I (happily) follow QGIS since V 0.x.

 A plea: Please don't overload QGIS with new features at the cost of even
 destabilizing the existing good material.

Markus raises some good points and generally I would agree that we do chase features over stability. Personally I don’t experience too many stability issues with QGIS, but it could be that I am in the habit of subconsciously working around the issues that may cause it to throw a wobbly. It’s a difficult trade-off though – QGIS in many respects is still an immature GIS, with many features that people expect from a GIS missing. This is especially true if you consider the ‘core’ QGIS application without the GRASS plugin (which introduces many tools from the excellent GRASS toolset). In order to meet the mainstream GIS user’s needs, we need to close this feature gap – and we have been making rapid strides in this direction over the last few years. But closing this feature gap means that we don’t tend to have bug-fix-only releases and a strong focus on code stabilisation.

Many of these advances in functionality have been due to features directly funded by users (typically corporate or local government). Usually they will contract a developer independently, the developer will build their requirements and, if the features are of general interest, incorporate them into the general QGIS source code base. This is a great model and has really allowed the QGIS feature set to ramp up quickly – and to build up a small cottage industry of code development work around QGIS.

The downside to this model is that these same clients typically don’t exhibit much interest in funding some of the less sexy aspects of the project – things like writing unit tests, maintaining API documentation, and maintaining overall GUI consistency and user experience often fall by the wayside. We have soft policies for these things – new features to QGIS core should be accompanied by unit tests, all code should be documented properly and adhere to our coding standards, we have a basic Human Interface Guide (HIG), and so on. However, really following through on these QA aspects of the project is probably a full-time job for multiple developers, and at the moment it is achieved as a patchy effort as and when volunteer developers have time.

In the short term we plan to try to address the concerns of Markus and others by backporting fixes from the master branch into the 1.7 branch and putting out maintenance releases (i.e. 1.7.1, 1.7.2 and so on), which will hopefully provide a more stable platform into the future. Even this can be difficult, though, given the points I made above about feature ramp-up. This is especially true of the upcoming development cycle, where we are going to be paving the way for 2.0 – which will include removing redundant elements of the application. Redundant, you may ask? Currently we have two labelling systems, two rendering systems, a growing number of deprecated API calls and so on. For 2.0 we will spring-clean a lot of this away. This is good, but it also means that the delta between the master branch and the release branch will become rather large, and backporting fixes may not always prove to be very easy.

In doing a bit of navel gazing around the 1.7 release, I updated the download stats for the 1.6 release. Here is a quick overview  (click the image for full size view):

Cumulative download chart for QGIS 1.6

Download decay chart for QGIS 1.6

As you can see from the charts, QGIS is becoming more and more popular; we are just shy of 190,000 downloads of the Windows standalone installer for QGIS 1.6 (based on our awstats logs). These figures may not totally accurately reflect our user base though. On the one hand, a download does not necessarily represent a user, since the internet is full of bots and spiders crawling web sites and so on. On the other hand, a single download may be used by multiple, tens or even hundreds of users. Now the interesting thing is that, given that user base, the donations and sponsorship the project receives is probably equivalent to the purchase price of only one or two ESRI Arc* software licenses. When you think about the sheer number of people we are enabling to have access to GIS software at no cost, it is quite a stunning revelation.

At the same time, there are 190,000 users out there whose expectations need to be met by a very small and relatively unfunded group of active developers and community members. The QGIS developer mailing list has had quite a few active discussions about funding models, sponsored bug fixing and the like over the last few months. In the past we have had problems with ‘directed’ funding for features or bug fixes not materialising, or placing unreasonable expectations on a project that is essentially volunteer driven. It has long been my dream that we can create an undirected fund for QGIS that is there to fully sponsor several of us developers and community members to work on those ‘unsexy’ things I listed above. I think directed funding to target bugs and specific features can be a valuable source of improvement to the project, but I think we will only see real gains when we have sufficient undirected funding to afford us the luxury of not having day jobs that interfere with our QGIS hobby, so we can really focus on the things that we often don’t get time to do in our capacity as hobbyists.

Hopefully, as our user base grows and the software reaches critical mass in government and the private sector, my dream of working on QGIS full time, and that of others who share it, will become a reality. In the meantime we will continue to try to make the best software that we can and improve the quality of the work we do, so that everyone who tries QGIS enjoys the experience and never feels the need to shell out for a commercial software vendor’s license.

This turned into a bit of an essay….but it is probably useful to put these things out there so that folks get a better feel for how the project ticks, and where they can meaningfully contribute help. Many of the very active developers and community members of the QGIS project don’t ‘blog’ and otherwise keep a low profile, but I want to just say: “you guys and girls are awesome, I am really going to enjoy using your work in QGIS 1.7!”.

 


Scale-dependent generalization in PostGIS and QGIS

If you’re using QGIS and PostGIS, this article is for you. If not, it could still show you a useful approach, so don’t close the tab yet!

Consider this scenario: you have a large vector dataset which you have combined from several mapsheets (for example, the standard 1:50 000 topo sheets issued by the Surveyor-General). You have loaded it into a PostGIS database, some sort of mapserver is implemented, and all is well. You only need to set this up to render in a timely fashion when queried by a user.

Obviously, you will need to generalize the data to cut down on processing / data retrieval time. Imagine having land use data at 1:50 000 for the whole KZN province – if you host this on a server and someone decides to look at the whole lot at once, it will never render (and nothing else will happen on the server until that one user gets bored and quits).

But not to worry, I only need to generalize the data, right? For example:

SELECT
ST_SimplifyPreserveTopology(the_geom,1000) AS the_geom, landuse
INTO landuse_generalized_1000 FROM "1_50000_landuse_Union";

Chances are, however, that this won’t work as well as it should (if at all) straight off the bat. In my case, I encountered three problems:

  1. The data had lots of tiny internal rings which would still get generalized, but would thereby become invisible polygons with two vertices and zero area (or very thin triangles, but either way). They obviously still need to be loaded, with all their attribute data, making the generalization less useful than it could have been.
  2. The join boundaries from the earlier union (which turned umpteen map sheets into a single dataset) were still around and would need to be dissolved away before generalization, otherwise there’d be a bunch of slivers / gaps in between polygons from different map tiles.
  3. ST_SimplifyPreserveTopology() only accepts Polygon geometries, but once you aggregate the polygons by some attribute to get rid of the boundaries between map tiles, you will have a Multipolygon geometry type.

Hunting around (mainly on http://postgis.refractions.net/) for functions that could be useful, and with many suggestions from Tim (who owns this blog), I eventually came up with a nice procedure.

First apply a small amount of generalization (case-dependent; leave it out if you need the data at full detail), then remove the interior rings. Taking only the exterior ring turns your polygons into LineStrings, but you can re-polygonize them in the same step:

SELECT
ST_MakePolygon(ST_ExteriorRing(ST_SimplifyPreserveTopology(the_geom,10)))
AS the_geom, "LAND_USE" AS usetype INTO landuse1
FROM "1_50000_landuse_Union";

This bit is fairly elementary: to get rid of the tile joins, dissolve the polygons by land use type using ST_Union() with a GROUP BY:

SELECT ST_Union(the_geom) AS the_geom, usetype
INTO landuse2 FROM landuse1
GROUP BY usetype;

(Note that this gets rid of all your other unmentioned attributes, so be sure to list them if you want to keep them!)

Last but not least, to get the data back into a type that ST_SimplifyPreserveTopology will accept as input, you need to run a multipart-to-singlepart conversion. PostGIS does not have a one-step, built-in multipart-to-singlepart function, but here's how to fake one:

SELECT (ST_Dump(the_geom)).geom AS the_geom, usetype
INTO landuse3 FROM landuse2;

It's important to note that you have to specify (ST_Dump(the_geom)).geom AS the_geom, because the output data type of ST_Dump() is not a geometry. Instead, it's a special data type called "geometry_dump" which is specific to the "Dump" family of functions. It contains extra data which in this case we don't need, but the "geom" field of the geometry_dump structure is equivalent to plain old geometry, so just use that.
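
If you want to double-check that the dump really produced single-part polygons before moving on (a quick sanity check, not part of the original workflow), you could inspect the geometry types, for example:

-- should return only POLYGON rows once the dump has run
SELECT GeometryType(the_geom) AS geomtype, count(*) AS features
FROM landuse3
GROUP BY GeometryType(the_geom);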

Now at last you’re free to generalize the data:

SELECT ST_SimplifyPreserveTopology(the_geom,250) AS the_geom, usetype
INTO landuse_generalized_250 FROM landuse3;

SELECT ST_SimplifyPreserveTopology(the_geom,500) AS the_geom, usetype
INTO landuse_generalized_500 FROM landuse_generalized_250;

SELECT ST_SimplifyPreserveTopology(the_geom,1000) AS the_geom, usetype
INTO landuse_generalized_1000 FROM landuse_generalized_500;

Now you've got four datasets: the slightly simplified one you've been working from, and the three you generalized. Come to think of it, "landuse3" is a bad name, so just rename it to, say, "landuse_full". Also, if your new datasets came out well, you can probably get rid of your intermediate tables at this point (landuse1 and landuse2, in this case).
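
Assuming the table names used above, that clean-up might look something like this (a sketch; only drop the intermediates once you are sure you no longer need them):

ALTER TABLE landuse3 RENAME TO landuse_full;
DROP TABLE landuse1;
DROP TABLE landuse2;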

image: generalized data compared to original

Compared to the original (green), the generalized version (orange outlines) is highly simplified ...

image: zoomed out

... but at the appropriate zoom level, the differences are minor.

You can now delete those tiny, invisible polygons, too. Keep in mind at what scale you'd like to display each layer, and view that layer in QGIS at that scale to judge what can safely go. For safety's sake, create a test dataset first:

SELECT * INTO test1000 FROM landuse_generalized_1000;

Calculate the area in a new column created for the purpose:

ALTER TABLE test1000 ADD area float;
UPDATE test1000 SET area = ST_Area(the_geom);

And get rid of polygons by area:

DELETE FROM test1000 WHERE area < 10;

The maximum area of polygon that you can remove without making a visual or practical difference to the final map is something you'll need to determine by trial and error. In the case of my most generalized dataset, I could get rid of about 70 000 entries by throwing out every polygon with an area smaller than 50 000 square map units, but obviously this will be highly case-dependent. Still, needing to render about 120 000 rather than 190 000 entries for a visually identical result did speed things up a bit.
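
A simple way to experiment with the cutoff before deleting anything is to count how many features a candidate threshold would remove; for example, using the 50 000 value that happened to suit my data:

SELECT count(*) AS candidates_for_removal
FROM test1000
WHERE area < 50000;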

Once you’re sure, run those same Alter, Update and Delete statements on the real dataset and remove the test. Even if something horrible happens, you can rest assured knowing that everything was done with SQL statements and getting it all back is as simple as rerunning some commands.
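
Applied to the most generalized table from the earlier steps, that final pass might look like this (a sketch reusing the 50 000 threshold from my data; adjust to taste):

ALTER TABLE landuse_generalized_1000 ADD area float;
UPDATE landuse_generalized_1000 SET area = ST_Area(the_geom);
DELETE FROM landuse_generalized_1000 WHERE area < 50000;
DROP TABLE test1000;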

image: deployed example

In its final form together with other map layers. Although the data is simplified, it's not noticeable.

As for the practical application of these various levels of generalized data, QGIS has a handy bit of functionality: scale dependent rendering. In your layer properties (once you’ve loaded all needed datasets into your map), go to the “General” tab and check the “Use scale dependent rendering” box, then set whatever scales seem appropriate. Have your layers switch out at the applicable scales for your map, and the user will never even realize that he’s looking at different datasets with different levels of detail. So in the end, the user gets to look at land use for the whole KZN, and your server gets to not crash. Everybody’s happy!

Recovering root user access for a mysql server

So this weekend I was trying to get into a MySQL server I co-administer and nobody knew the root password anymore. Here is a short procedure to regain admin privileges to the database. I assign the privileges to myself (rather than to the root user) so that I don't interfere with any cron jobs etc. that may have been set up to use the root account.

As root, edit the /etc/init.d/mysql script and change the listing in the 'start' section from this:

case "${1:-''}" in
 'start')
 sanity_checks;
 # Start daemon
 log_daemon_msg "Starting MySQL database server" "mysqld"
 if mysqld_status check_alive nowarn; then
 log_progress_msg "already running"
 log_end_msg 0
 else
 /usr/bin/mysqld_safe > /dev/null 2>&1 &

Now change the last line listed above to look like this:

/usr/bin/mysqld_safe --init-file=/tmp/mysql-pwd-reset.sql > /dev/null 2>&1 &

The /tmp/mysql-pwd-reset.sql file referenced above should contain something like this:

CREATE USER 'foouser'@'localhost' IDENTIFIED BY 'somepassword';
GRANT ALL PRIVILEGES ON *.* TO 'foouser'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Next, start MySQL again:

sudo /etc/init.d/mysql start

You should now have admin access to the database as the new user:

mysqlshow -u foouser -p
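
If you want to confirm exactly which privileges the new account ended up with, a quick check from the mysql client might look like this (assuming the foouser account created above):

SHOW GRANTS FOR 'foouser'@'localhost';
-- should list: GRANT ALL PRIVILEGES ON *.* TO 'foouser'@'localhost' WITH GRANT OPTION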

Finally, shut down MySQL, revert your changes to /etc/init.d/mysql, remove /tmp/mysql-pwd-reset.sql (it contains a password in plain text), and then restart the database.
