QGIS Planet

QGIS Mobile GSoC has started

Yesterday, we had our first meeting with Marco Bernasocchi, who just started his Google Summer of Code project. The project goals are:

  • porting QGIS to the Android platform
  • adapting the QGIS GUI for tablet computers
  • writing a driver for the built-in GPS
  • creating a QGIS “mini” application for mobile phones

Marco Hugentobler is mentoring the project, and up-to-date information will be available on a QGIS Wiki page. We wish Marco good luck and are looking forward to a portable QGIS this year!

extending RedCloth markup

There are various approaches to extending the Textile markup that RedCloth understands with your own tags or syntax. Some approaches documented on the net have changed or no longer work, since RedCloth was rewritten in version 4.

Below is a fairly robust approach that is based on the assumption that RedCloth leaves HTML tags inside the markup untouched and passes them on to the application consuming the translated markup.

The code below is used in the Madek project. Madek, a Ruby on Rails application, uses irwi, which provides Madek with a wiki. irwi in turn uses RedCloth to render Textile to HTML. That's where we hook in.

We want a few special tags that make the life of the Madek wiki admin simpler by letting him write the following inside the Textile markup:

[media=210      | Das Huhn]
[screenshot=210 | Das Huhn]
[video=210      | Das Huhn]

That will finally produce this HTML output:

<a href="/media_entries/210">Das Huhn</a>
<img src="/media_entries/210/image" title="Das Huhn"/>
<video src="/media_entries/210/image" title="Das Huhn">
  <a href='/media_entries/210'>(see video)</a>
</video>

Here’s the implementation:

class RedClothMadek

  include ERB::Util  # provides the h() HTML escaping helper used below

  ActionView::Base.sanitized_allowed_tags << 'video'

  def initialize
    require 'redcloth'
  end

  def format( text )
    ::RedCloth.new( replace_madek_tags(text) ).to_html
  end

  # Transforms the following Textile markups:
  #
  #   [media=210      | Das Huhn] -> <a href="/media_entries/210">Das Huhn</a>
  #   [screenshot=210 | Das Huhn] -> <img src="/media_entries/210/image" title="Das Huhn"/>
  #   [video=210      | Das Huhn] -> <video src="/media_entries/210/image" title="Das Huhn">
  #                                    <a href='/media_entries/210'>(see video)</a>
  #                                  </video>
  #
  def replace_madek_tags( text )

    # gsub's block only receives the whole match, therefore we fall back
    # to $1 and $2 for the captured groups
    #
    text.gsub(/\[\s*media\s*=\s*(\d+)\s*\|\s*([^\]]+)\s*\]/) { |match|

           "<a href='/media_entries/#{$1}'>#{h($2)}</a>"                 }.

         gsub(/\[\s*screenshot\s*=\s*(\d+)\s*\|\s*([^\]]+)\s*\]/) { |match|

           "<img src='/media_entries/#{$1}/image' title='#{h($2)}'/>"    }.

         gsub(/\[\s*video\s*=\s*(\d+)\s*\|\s*([^\]]+)\s*\]/) { |match|
           "<video src='/media_entries/#{$1}/image' title='#{h($2)}'>" +
             "<a href='/media_entries/#{$1}'>(see video)</a>" +
           "</video>"  }
  end

end

And finally, irwi needs to be told to use that formatter instead of RedCloth directly. From config/environment.rb:

require "#{Rails.root}/lib/red_cloth_madek.rb"
Irwi.config.formatter = RedClothMadek.new
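
To sanity-check the formatter, it can be exercised directly, e.g. from a Rails console. A minimal sketch, assuming the class above is loaded (the exact output depends on RedCloth's paragraph handling):

formatter = RedClothMadek.new
formatter.format("[media=210 | Das Huhn]")
# => something like "<p><a href='/media_entries/210'>Das Huhn</a></p>"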

Tomáš Pospíšek

annotating third party web pages

Problem: I have a web site/page that I visit regularly and want to annotate with my notes.

More specifically, I was regularly searching through the Homegate real estate hub looking for a new home. It goes without saying that I was again and again forgetting which objects I had already looked at and which objects were really interesting and worth checking out more closely.

Therefore the need to annotate search results.

The approach presented here should be applicable to annotating other web pages as well. It is based on the observation that RESTful web applications need to operate with asset IDs. These asset IDs can be reused to enrich the asset locally with additional data, such as notes. So we look for those IDs in specific elements of the page and add a bit of HTML markup in those places.

Of course that approach only works as long as the web site doesn't heavily change its markup and doesn't randomly change asset IDs.

Here’s a screenshot of regular Homegate search results:

And here’s the same page after scripting it with Greasemonkey:

You’ll notice the input field with the comment in it.

The idea is simple: add an input box to each search result, where you write your comment. When the focus leaves the input box, the comment is stored to localStorage.

Starting with Greasemonkey is quite easy. There are howtos and templates to start from, such as this one.
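
For orientation, a minimal userscript skeleton looks roughly like the sketch below; the @namespace URL and the @include pattern are placeholders and not taken from the original script:

    // ==UserScript==
    // @name        Homegate annotations
    // @namespace   http://example.org/userscripts
    // @description Adds a comment box to Homegate search results
    // @include     http://www.homegate.ch/*
    // ==/UserScript==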

However, regular JavaScript and Greasemonkey JavaScript do not work in exactly the same way:

One of the differences is that Greasemonkey creates a separate JavaScript environment in which the Greasemonkey scripts are executed. That is, calling JavaScript contained in the page from Greasemonkey, and the inverse, calling Greasemonkey scripts from the page, is not possible by default. This is on purpose: the web page should neither be able to detect nor to interfere with the Greasemonkey scripts, because you want your Greasemonkey scripts to always work on a page, whether that page likes it or not.

Greasemonkey however provides a standard way to access the web page’s scripts, and that’s through the “unsafeWindow” object, which is a reference to the web page’s environment.

There were two mechanisms I had to make accessible through the “unsafeWindow” handle:

  • the first was accessing jQuery, which is included by default by the Homegate page. Since I needed to use jQuery functions in my Greasemonkey script, I got a reference to it via the standard Greasemonkey procedure:
    var jQuery = unsafeWindow['jQuery'];
  • the second mechanism that needed to cross the boundary between the web page and Greasemonkey was callbacks from the web page to my Greasemonkey script. This is necessary because I'm attaching “input” elements to each search result, which contain a note and which, “onblur”, need to call a function that saves the content of the input box. Here's the part that constructs the input element:
    jQuery("<input onclick='event.cancelBubble = true;'" +
                 " onblur='saveComment(this, immoID);'>").insertAfter(immoElement);

And this is the function that gets called back by “onblur”:

    unsafeWindow.saveComment = function(element, immoID) {
      unsafeWindow.localStorage.setItem(immoID, element.value);
    };
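
Restoring a previously saved note works the other way around. A hedged sketch, where inputElement and immoID stand for the input element and asset ID handled in the snippets above:

    // prefill the input with a comment saved in an earlier session
    var savedComment = unsafeWindow.localStorage.getItem(immoID);
    if (savedComment) {
      inputElement.value = savedComment;
    }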

The next interesting thing you'll notice is the use of ‘localStorage’. Support for it in browsers does not seem mature yet. One problem I encountered when developing under Firefox 3.6 was that saving to ‘localStorage’ was not possible when cookies were disabled (see this report). Thus you'll need to permanently enable cookies for Homegate in order for the script to be able to save its data.

Finally, a major problem while developing was that Firefox did not show me errors in the Greasemonkey scripts. The script would either work, or fail completely silently. That made debugging a bit painful.

So now, here’s the script.

Tomáš Pospíšek

PS: This script also lives at userscripts.org

Visually sorting images under Linux

Visually sorting images under Linux doesn't seem to be a trivial task. I duckduckgo‘ed for a long time and had a look at various image and file management applications before finding gthumb. And even there, you first need to create a “catalog” and within the catalog a “library”, which finally allows you to manually sort your images. None of which is documented.

Once you've sorted your images, you might want to export the sorting. Again, no trace of any help or documentation: gthumb catalogs are saved under $HOME/.local/share/gthumb/catalogs/foobar.catalog.
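
A pragmatic way to export or back up the sorting is therefore to copy the catalog file itself; a sketch, where “foobar” stands for whatever you named your catalog:

cp "$HOME/.local/share/gthumb/catalogs/foobar.catalog" /path/to/backup/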

Tomáš Pospíšek

Randall Munroe's Radiation Dose Data

Below is data from Randall Munroe’s Radiation Dose chart normalized to a year:

The spreadsheet with the data is available here in OpenDocument format (OpenOffice) or in Excel format (I don't know whether it renders well in Excel and other applications, since it was exported from OpenOffice).

Tomáš Pospíšek

Ubuntu PostGIS package for PostgreSQL 9.0

A while ago I compiled PostGIS 1.5.2 for PostgreSQL 9.0.

To install it on Ubuntu the following steps are required:

apt-get install python-software-properties

add-apt-repository ppa:ubuntugis/ubuntugis-unstable
add-apt-repository ppa:pitti/postgresql
add-apt-repository ppa:pi-deb/gis

apt-get update

apt-get install postgresql-9.0-postgis

A basic template database can be created with the following commands:

sudo su - postgres

createdb template_postgis
psql -q -d template_postgis -f /usr/share/postgresql/9.0/contrib/postgis-1.5/postgis.sql
psql -q -d template_postgis -f /usr/share/postgresql/9.0/contrib/postgis-1.5/spatial_ref_sys.sql
psql -q -d template_postgis -f /usr/share/postgresql/9.0/contrib/postgis_comments.sql
cat <<EOS | psql -d template_postgis
UPDATE pg_database SET datistemplate = TRUE WHERE datname = 'template_postgis';
REVOKE ALL ON SCHEMA public FROM public;
GRANT USAGE ON SCHEMA public TO public;
GRANT ALL ON SCHEMA public TO postgres;
GRANT SELECT, UPDATE, INSERT, DELETE
  ON TABLE public.geometry_columns TO PUBLIC;
GRANT SELECT, UPDATE, INSERT, DELETE
  ON TABLE public.spatial_ref_sys TO PUBLIC;
EOS

To test database creation you can do the following:

createdb --template template_postgis test_gis
psql -d test_gis -c "select postgis_lib_version();"

Ubuntu supports parallel installations of different PostgreSQL versions. So if you already have PostgreSQL 8.x installed, PostgreSQL 9.0 will probably be configured to listen on port 5433. In that case you have to add the option -p 5433 to each command and specify the full path to the executables. For example:

/usr/lib/postgresql/9.0/bin/psql -p 5433 -d test_gis -c "select postgis_lib_version();"

OpenLayers plugin visits code sprint

A short visit and a 7-hour train ride to the OpenLayers code sprint, mainly for a presentation at the Swiss MapFish user group meeting in Lausanne, resulted in a new release of the QGIS OpenLayers plugin. The OpenLayers plugin adds WebKit-based layers to QGIS and ships with OpenStreetMap, Google and Yahoo layers.

Changes in this release:

  • Update to OpenLayers trunk
  • Google Layers using API V3 (no API key necessary)
  • Code refactoring for adding new layer types with one line of code (and some HTML…)

The next planned step is integrating this plugin with the very nice OpenLayers Overview plugin from Luiz Motta.

Information for adding your own layers and a bug tracker are now available at hub.qgis.org/projects/openlayers.

Web based printing with QGIS server

QGIS server is already known as a full featured, WMS 1.3 compliant map server (see e.g. ETHZ, Linfiniti or 3LIZ).

For the city of Uster, Switzerland, Sourcepole recently extended QGIS server with the possibility to use the print composer via WMS in order to offer printing functionality for web maps. A very nice GeoExt-based client can be found at http://gis.uster.ch/webgis/. Andreas Neumann used and extended the GeoExt PrintProvider and PrintExtent classes, which allow the user to intuitively select a layout, extent, scale, rotation and resolution for printing. The widget then sends a GetPrint request to QGIS server, and a printable PDF is returned.

Now to the technical details: in order to use the print capability of QGIS server, the published project needs to contain one or more print compositions. QGIS server includes information about the available composer templates in the GetCapabilities response:

<WMS_Capabilities>
...
<ComposerTemplates xsi:type="wms:_ExtendedCapabilities">
<ComposerTemplate width="297" height="210" name="Composer 1">
<ComposerMap width="102" height="95" name="map0"/>
<ComposerMap width="133" height="79.87912087912088" name="map1"/>
<ComposerLabel name="hello_world"/>
</ComposerTemplate>
</ComposerTemplates>
...

With a 'GetPrint' request, the client can now request printable output in PNG or PDF format. The client needs to select an available composer template and pass the selected extents for the composer maps. Additionally, there is the possibility to replace the text of composer labels. An example of a 'GetPrint' request is (note the new 'TEMPLATE' and 'map0:extent' parameters):

http://localhost/fcgi-bin/qgis_mapserver/qgis_mapserv.fcgi?MAP=/home/marco/geodaten/projekte/composertest.qgs&SERVICE=WMS&VERSION=1.3.0&REQUEST=GetPrint&TEMPLATE=Composer 1&map0:extent=693457.466131,227122.338236,700476.845177,230609.807051&BBOX=693457.466131,227122.338236,700476.845177,230609.807051&CRS=EPSG:21781&WIDTH=1467&HEIGHT=729&LAYERS=layer0,layer1&STYLES=,&FORMAT=pdf&DPI=300&TRANSPARENT=true

In detail, the following parameters can be used to set properties for composer maps (an example request fragment follows the list):
  • <mapname>:EXTENT=<xmin,ymin,xmax, ymax> //mandatory
  • <mapname>:ROTATION=<double> //optional, defaults to 0
  • <mapname>:SCALE=<double> //optional. Forces scale denominator as server and client may have different scale calculations
  • <mapname>:LAYERS=<comma separated list with layer names> //optional. Defaults to all layer in the WMS request
  • <mapname>:STYLES=<comma separated list with style names> //optional
  • <mapname>:GRID_INTERVAL_X=<double> //set the grid interval in x-direction for composer grids
  • <mapname>:GRID_INTERVAL_Y=<double> //set the grid interval in y-direction for composer grids
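
For example, the optional composer map parameters could be appended to the GetPrint request above as follows (the values are purely illustrative):

&map0:ROTATION=45&map0:SCALE=25000&map0:GRID_INTERVAL_X=1000&map0:GRID_INTERVAL_Y=1000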

To replace text in composer labels, the label needs to have a text id (it can be assigned in the composer label's properties in the print composer user interface). The text to insert into the label is then passed in the GetPrint request as an additional parameter of the form <label id>=<text> (of course, the label id should be different from the standard WMS parameters...).
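
For instance, with the 'hello_world' label from the GetCapabilities excerpt above, its text could be replaced by appending a parameter like the following (the caption text is just an example):

&hello_world=Map of Uster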

Besides printing, an additional nice feature in the most recent developer version is the possibility to use relative paths in published project files. This makes it possible to conveniently export a project file recursively together with the corresponding data.

Finally, I'd like to thank the city of Uster and Andreas Neumann for the generous support and the creative ideas during the implementation of the printing functionality.
