
QGIS Planet

(Fr) Searching for an address with QGIS

Sorry, this entry is only available in French.

QGIS Versioning now supports foreign keys!

QGIS-versioning is a QGIS and PostGIS plugin dedicated to data versioning and history management. It supports:

  • Keeping a full table history with all modifications
  • Transparent access to current data
  • Versioning tables with branches
  • Working offline
  • Working on a data subset
  • Conflict management with a GUI

QGIS versioning conflict management

In a previous blog article we detailed how QGIS versioning can manage data history, branches, and work offline with PostGIS-stored data and QGIS. We recently added foreign key support to QGIS versioning so you can now historize any complex database schema.

This QGIS plugin is available in the official QGIS plugin repository, and you can fork it on GitHub too!

Foreign key support

TL;DR

When a user decides to historize their PostgreSQL database with QGIS-versioning, the plugin alters the existing database schema and adds new fields in order to track the different versions of each table row. Every access to these versioned tables is subsequently made through updatable views in order to automatically fill in the new versioning fields.

Up to now, it was not possible to deal with primary keys and foreign keys: the original tables had to be constraint-free. This limitation has been lifted thanks to this contribution.

To put it simply, the solution is to remove all constraints from the original database and transform them into a set of SQL check triggers installed on the working copy databases (SQLite or PostgreSQL). As the verifications are made on the client side, it is impossible to propagate invalid modifications to your central server when you “commit” updates.
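
As an illustration, here is a minimal sketch (with hypothetical table names, not the plugin's actual SQL) of the kind of check trigger that can emulate a dropped foreign key constraint in a SQLite working copy:

import sqlite3

conn = sqlite3.connect('working_copy.sqlite')  # hypothetical working copy
conn.executescript("""
CREATE TABLE IF NOT EXISTS city (id INTEGER, name TEXT);
CREATE TABLE IF NOT EXISTS street (id INTEGER, city_id INTEGER);

-- Reject inserts whose city_id does not reference an existing city,
-- i.e. the guarantee the dropped FOREIGN KEY used to provide server-side
CREATE TRIGGER IF NOT EXISTS street_city_fk_check
BEFORE INSERT ON street
WHEN NOT EXISTS (SELECT 1 FROM city WHERE id = NEW.city_id)
BEGIN
    SELECT RAISE(ABORT, 'foreign key violation: street.city_id');
END;
""")
conn.close()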

Behind the curtains

When you choose to historize an existing database, a few fields are added to the existing tables. Among these fields, versioning_id identifies one specific version of a row. For one original row there are several versions of that row, each with a different versioning_id but with the same original primary key field. As a consequence, that field can no longer satisfy the unique constraint, so it cannot be a key, and therefore cannot be referenced by a foreign key either.

We therefore have to drop the primary key and foreign key constraints when historizing the table. Before removing them, the constraint definitions are stored in a dedicated table so that they can still be checked later.

When the user checks out a specific table on a specific branch, QGIS-versioning uses that constraint table to build constraint checking triggers in the working copy. The way constraints are built depends on the checkout type (you can checkout in a SQLite file, in the master PostgreSQL database or in another PostgreSQL database).

What do we check?

That’s where the fun begins! The first things we have to check on insert or update are key uniqueness and that each foreign key references an existing key. Remember that there are no primary keys or foreign keys anymore, since we dropped them when activating historization; we keep the terms for better understanding.

You also have to deal with deleting or updating a referenced row and the different ways of propagating the modification: cascade, set default, set null, or simply failure, as explained in the PostgreSQL foreign keys documentation.

Never mind all that: this problem has been solved for you, and everything is done automatically in QGIS-versioning. Before you ask: yes, foreign keys spanning multiple fields are also supported.

What’s new in QGIS?

When you try to commit an invalid modification to the master database, you will get an error message you probably already know:

Error when foreign key constraint is violated

Partial checkout

One existing QGIS-versioning feature is partial checkout. It allows a user to select a subset of data to check out in their working copy, which avoids downloading gigabytes of data you do not care about. You can, for instance, check out features within a given spatial extent.

So far, so good. But if you have only part of your data, you cannot ensure that modifying a field that acts as a primary key preserves uniqueness. In this particular case, QGIS-versioning will trigger errors on commit, pointing out the invalid rows you have to modify so the unique constraint remains valid.

Error when committing non unique key after a partial checkout

Tests

There is a lot to check when you intend to replace the existing constraint system with your own trigger-based one. In order to ensure QGIS-versioning's stability and reliability, we put special effort into building a test set that covers all use cases and possible exceptions.

What’s next

There are now no known limitations on using QGIS-versioning on any of your databases. If you think of a missing feature or just want to know more about QGIS and QGIS-versioning, feel free to contact us at [email protected]. And please have a look at our support offering for QGIS.

Many thanks to eHealth Africa who helped us develop these new features. eHealth Africa is a non-governmental organization based in Nigeria. Their mission is to build stronger health systems through the design and implementation of data-driven solutions.

QGIS 3 and performance analysis

Context

Since last year we (the QGIS community) have been using QGIS-Server-PerfSuite to run performance tests on a daily basis. This way, we’re able to monitor and avoid regressions according to some test scenarios for several QGIS Server releases (currently the 2.18, 3.4, 3.6 and master branches). However, there are still many questions about performance from a general point of view:

  • What is the performance of QGIS Server compared to QGIS Desktop?
  • What are the implications of feature simplification for polygons and lines?
  • Does the symbology have a strong impact on performance and in which proportion?

Of course, it’s a broad and complex topic because of the numerous possibilities offered by the rendering engine of QGIS. In this article we’ll look at typical use cases with geometries coming from a PostgreSQL database.

Methodology

The first way to monitor performance is to measure the rendering time. To do so, Map canvas refresh is activated in the Settings of QGIS Desktop. This way we can get the rendering time from the Rendering tab of the log messages in QGIS Desktop, as well as from the log messages written by QGIS Server.

The rendering time retrieved with this method gives the total amount of time spent rendering each layer (see the source code).

But in the case of QGIS Server, another interesting measure is the total time spent on a specific request, which may be read from the log messages too. Indeed, more operations are performed for a single WMS request than for a simple rendering in QGIS Desktop:

The rendering time extracted from QGIS Desktop corresponds to the core rendering time displayed in the sequence diagram above. Moreover, to be perfectly comparable, the rendering engine must be configured in the same way in both cases. Thanks to the PyQGIS API, we can retrieve the necessary information from the Python console in QGIS Desktop, like the extent or the canvas size, in order to configure the GetMap WMS request with the appropriate WIDTH, HEIGHT, and BBOX parameters.
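
For instance, here is a hedged sketch of what can be run in the QGIS Desktop Python console (iface is the console's built-in object; the server URL and layer name are placeholders):

canvas = iface.mapCanvas()
extent = canvas.extent()
bbox = ','.join(str(v) for v in (extent.xMinimum(), extent.yMinimum(),
                                 extent.xMaximum(), extent.yMaximum()))
# Build a GetMap request reproducing the canvas rendering server-side
url = ('http://myserver/qgisserver?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetMap'
       '&LAYERS=mylayer&CRS={}&WIDTH={}&HEIGHT={}&BBOX={}').format(
          canvas.mapSettings().destinationCrs().authid(),
          canvas.width(), canvas.height(), bbox)
print(url)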

Another way to examine the performance is to use a profiler to inspect stack traces. These traces may be represented as a FlameGraph. In this case, debug symbols are necessary, meaning QGIS has to be compiled in Debug mode and the measured rendering times are no longer representative.

Polygons

For these tests we use the same dataset as that for the daily performance tests, which is a layer of polygons with 282,776 features.

Feature simplification deactivated

Let’s first have a look at the rendering time and the FlameGraph when simplification is deactivated. In QGIS Desktop, the mean rendering time is 2591 ms. Using the PyQGIS API, we are able to get the extent and the size of the map and render the map again, this time using a GetMap WMS request.

In this case, the rendering time is 2469 ms and the total request time is 2540 ms. For the record, the first GetMap request is ignored because the whole QGIS project is read and cached during it, meaning that its total request time is much higher. According to these results, the rendering times for QGIS Desktop and QGIS Server are very similar, which makes sense considering that the same rendering engine is used, but it is still very reassuring :).

Now, let’s take a look at the FlameGraph to detect where most of the time is spent.


Undoubtedly the FlameGraphs are similar in both cases, meaning that if we want to improve the performance of QGIS Server, we need to improve the performance of the core rendering engine also used in QGIS Desktop. In our case, the main method is QgsMapRendererParallelJob::renderLayerStatic, where most of the time is spent in:

Method                               Desktop %   Server %
QgsExpressionContext::setFeature     6.39        6.82
QgsFeatureIterator::nextFeature      28.77       28.41
QgsFeatureRenderer::renderFeature    29.01       27.05

Basically, it may be simplified like:

Clearly, the rendering takes about 30% of the total amount of time. In this case geometry simplification could potentially help.

Feature simplification activated

Geometry simplification, available for both polygon and line layers, may be activated and configured through the layer's Properties in the Rendering tab. Several parameters may be set (a PyQGIS sketch for setting them in code follows the list):

  • Simplification may be deactivated
  • Threshold for a more drastic simplification
  • Algorithm
  • Provider simplification
  • Scale
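
Here is the PyQGIS sketch mentioned above (layer is assumed to be an already loaded vector layer; enum and method names come from the QgsVectorSimplifyMethod class):

from qgis.core import QgsVectorSimplifyMethod

method = QgsVectorSimplifyMethod()
method.setSimplifyHints(QgsVectorSimplifyMethod.GeometrySimplification)
method.setSimplifyAlgorithm(QgsVectorSimplifyMethod.Distance)
method.setThreshold(1.05)  # the threshold value tested below
layer.setSimplifyMethod(method)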

Once simplification is activated, we varied the threshold as well as the algorithm in order to detect performance jumps:

The following conclusions can be drawn:

  • The Visvalingam algorithm should be avoided, because it only becomes efficient with a high threshold, which implies a significant loss of precision in geometries
  • The ideal threshold for the Snap To Grid and Distance algorithms seems to be 1.05: considering that it's a very low threshold, the precision of geometries stays pretty good while rendering time improves significantly

For now, these tests have been run on the full extent of the layer. However, we still have a Maximum scale parameter to test, so we’ve decreased the scale of the layer:

And in this case, results are pretty interesting too:

Several conclusions can be drawn:

  • The Visvalingam algorithm should be avoided at low scales too
  • Snap To Grid seems counter-productive at low scales
  • The Distance algorithm seems to be a good option

Lines

For these tests we also use the same dataset as that for daily performance tests, which is a layer of lines with 125,782 features.

Feature simplification activated

In the same way as for polygons we have tested the effect of the geometric simplification on the rendering time, as well as algorithms and thresholds:

In this case we have exactly the same conclusion as for polygons: the Distance algorithm should be preferred with a threshold of 1.05.

For QGIS Server the mean rendering time is about 1180 ms with geometry simplification compared to 1108 ms for QGIS Desktop, which is totally consistent. And looking at the FlameGraph we note that once again most of the time is spent in accessing the PostgreSQL database (about 30%) and rendering features (about 40%).


Symbology

Another parameter which has an obvious impact on performance is the symbology used to draw the layers. Some features are known to be time consuming, but we felt that a thorough study was necessary to verify it.


Firstly, we’ve studied the influence of the line width as well as of the Single Symbol type on the rendering time.

Some points are noteworthy:

  • Simple Line is clearly the least time consuming
  • Beyond the default 0.26 line width, rendering time begins to rise considerably, with a clear performance jump


Another interesting feature is the Draw effects option, which allows adding some fancy effects (shadow, glow, …).

However, this feature is known to be particularly CPU consuming. Actually, rendering all 125,782 lines took so long that we had to change to a lower scale, with just a few dozen lines. The results are unequivocal:


The last thing we wanted to test for symbology is the effect of the Categorized classification. Here are the results for some classifications with geometry simplification activated:

  • No classification: 1108 ms
  • A simple classification using the column “classification” (8 symbols): 1148 ms
  • A classification based on a stupid expression “classification x 3” (8 symbols): 1261 ms
  • A classification based on a string comparison “toponyme like 'Ruisseau*'” (2 symbols): 1380 ms
  • A classification with a specific line width for each category (8 symbols): 1850 ms

Considering that a simple classification does not add an excessive extra cost, it seems that the classification process itself is not very time consuming. However, as soon as an expression is used, we can observe a clear jump in rendering time.

Labeling

Another important aspect to study regarding performance is labeling and the underlying label placement. For this test we decreased the scale and varied the Placement parameter without tuning anything else.

Clearly, parallel labeling is much more time consuming than the other placements. However, as previously stated, we used the default parameters for each placement, meaning that the number of labels actually drawn on the map differs from one placement to another.

Points

The last kind of geometry we have to study is points. Similarly to polygons and lines, we used the same dataset as that of the daily performance tests, that is, a layer with 435,588 points.

In the case of point geometries, geometry simplification is of course not available, so we are going to focus on symbology and the impact of marker size.

Obviously, Font Marker must be used carefully because of the underlying jump in performance, as must SVG Symbols. Moreover, contrary to Simple Marker, an increase in size implies a drastic increase in rendering time.

General conclusion

Based on this factual study, several conclusions can be drawn.

Globally, the FlameGraphs for QGIS Desktop and QGIS Server are completely similar, and so are the rendering times.

It means that if we want to improve the performance of QGIS Server, we have to work on the rendering engine of the QGIS core library, which is shared with QGIS Desktop.

Extracting generic conclusions from our tests is very difficult, because the results clearly depend on the underlying data. But let's try to suggest some recommendations :).

Firstly, geometry simplification seems pretty efficient with lines and polygons, as long as the algorithm is chosen cautiously and your features include many vertices. The Distance algorithm with a 1.05 threshold seems to be a good choice at both high and low scales. However, it's not a magic solution!

Secondly, special care is needed with regard to symbology. Indeed, in some cases a clear jump in performance is notable. For example, fancy effects, Font Marker and SVG Symbols have to be used with caution if you're picky about rendering time.

Thirdly, we have to be aware of the extra cost caused by labeling, especially the Parallel placement for line geometries. On this subject, a not very well-known parameter allows drastically reducing labeling time: the PAL candidates option. We may decrease the labeling time by reducing the number of candidates. For an explicit use case, you can take a look at the daily reports.

In any case, improving server performance in a substantial way means improving the QGIS core library directly.

In particular, we noticed thanks to the FlameGraphs that most of the time is spent in drawing features and managing the data from the PostgreSQL database. By the way, a legitimate question is: “How much time do we spend waiting for the database?”. To be continued 😉

If you hit performance issues on your specific configuration or want to improve QGIS awesomeness, we provide a unique QGIS support offer at http://qgis.oslandia.com/ thanks to our team of specialists!

Funding for selective masking in QGIS is now complete!

A few months ago, Oslandia launched QGIS lab’s, a place to advertise our new ideas for QGIS, but also a place to help you find co-funders to make dreams become reality.

The first initiative is about label selective masking, a feature that will allow us to achieve even more professional rendering for our maps.

Selective masking


Thanks to the commitment of the Swiss QGIS user group and local authorities, this work is now funded!

We can now start working hard to deliver this great feature for QGIS 3.10.

Thanks again to our funders

One last word: this is not a classical crowdfunding initiative, but a classical contract for each funder.

No more reason not to contribute to free and open source software!

Stand-alone PyQGIS scripts with OSGeo4W

PyQGIS scripts are great to automate spatial processing workflows. It’s easy to run these scripts inside QGIS, but it can be even more convenient to run PyQGIS scripts without even having to launch QGIS. To create a so-called “stand-alone” PyQGIS script, there are a few things that need to be taken care of. The following steps show how to set up PyCharm for stand-alone PyQGIS development on Windows 10 with OSGeo4W.

An essential first step is to ensure that all environment variables are set correctly. The most reliable approach is to go to C:\OSGeo4W64\bin (or wherever OSGeo4W is installed on your machine), make a copy of qgis-dev-g7.bat (or any other QGIS version that you have installed) and rename it to pycharm.bat:

Instead of launching QGIS, we want pycharm.bat to launch PyCharm. Therefore, we edit the final line in the .bat file to start pycharm64.exe:
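
The replacement line might look like this (the PyCharm installation path is machine-specific and given here only as an example):

    start "PyCharm aware of QGIS" /B "C:\Program Files\JetBrains\PyCharm\bin\pycharm64.exe" %*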

In PyCharm itself, the main task to finish our setup is configuring the project interpreter:

First, we add a new “system interpreter” for Python 3.7 using the corresponding OSGeo4W Python installation.

To finish the interpreter config, we need to add two additional paths pointing to QGIS\python and QGIS\python\plugins:

That’s it! Now we can start developing our stand-alone PyQGIS script.

The following example shows the necessary steps, particularly:

  1. Initializing QGIS
  2. Initializing Processing
  3. Running a Processing algorithm
import sys

from qgis.core import QgsApplication, QgsProcessingFeedback
from qgis.analysis import QgsNativeAlgorithms

QgsApplication.setPrefixPath(r'C:\OSGeo4W64\apps\qgis-dev', True)
qgs = QgsApplication([], False)
qgs.initQgis()

# Add the path to processing so we can import it next
sys.path.append(r'C:\OSGeo4W64\apps\qgis-dev\python\plugins')
# Imports usually should be at the top of a script but this unconventional 
# order is necessary here because QGIS has to be initialized first
import processing
from processing.core.Processing import Processing

Processing.initialize()
QgsApplication.processingRegistry().addProvider(QgsNativeAlgorithms())
feedback = QgsProcessingFeedback()

rivers = r'D:\Documents\Geodata\NaturalEarthData\Natural_Earth_quick_start\10m_physical\ne_10m_rivers_lake_centerlines.shp'
output = r'D:\Documents\Geodata\temp\danube3.shp'
expression = "name LIKE '%Danube%'"

danube = processing.run(
    'native:extractbyexpression',
    {'INPUT': rivers, 'EXPRESSION': expression, 'OUTPUT': output},
    feedback=feedback
    )['OUTPUT']

print(danube)
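
Once the script has finished, it is good practice to clean up the QGIS application object by calling qgs.exitQgis() at the end of the script.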

Call for testing: GRASS GIS with Python 3

Please help us test the Python 3 support in the not yet released GRASS GIS trunk (i.e., version “grass77”, which will be released as “grass78” in the near future).

1. Why Python 3?

Python 2 is end-of-life (EOL); the current Python 2.7 will retire 11 months from today (see https://pythonclock.org). We want to follow the “Moving to require Python 3” plan and complete the change to Python 3. And we need broader community testing.

2. Download and test!

Packages are available at this time:

3. Instructions for testing

4. Problems found? Please report them to us

Problems and bugs can be reported in the GRASS GIS trac. Code changes are welcome!

Thanks for testing grass77!

The post Call for testing: GRASS GIS with Python 3 appeared first on GFOSS Blog | GRASS GIS and OSGeo News.

Dealing with delayed measurements in (Geo)Pandas

Yesterday, I learned about a cool use case in data-driven agriculture that requires dealing with delayed measurements. As Bert mentions, for example, potatoes end up in the machines and are counted a few seconds after they’re actually taken out of the ground:

Therefore, in order to accurately map yield, we need to take this temporal offset into account.

We need to make sure that time and location stay untouched, but need to shift the potato count value. To support this use case, I’ve implemented apply_offset_seconds() for trajectories in movingpandas:

    def apply_offset_seconds(self, column, offset):
        # Shift the column's datetime index by `offset` seconds; assigning the
        # shifted series back re-aligns it on the unchanged index, so only the
        # values move in time while geometries and timestamps stay untouched.
        self.df[column] = self.df[column].shift(offset, freq='1s')

The following test illustrates its use: you can see how the value column is shifted by 120 seconds. Geometry and time remain unchanged, but the value column is shifted accordingly. In this test, we look at the row with index 2, which we access using iloc[2]:

    # Assumes the usual test imports: pandas as pd, datetime, shapely's Point,
    # geopandas' GeoDataFrame and movingpandas' Trajectory.
    def test_offset_seconds(self):
        df = pd.DataFrame([
            {'geometry': Point(0, 0), 't': datetime(2018, 1, 1, 12, 0, 0), 'value': 1},
            {'geometry': Point(-6, 10), 't': datetime(2018, 1, 1, 12, 1, 0), 'value': 2},
            {'geometry': Point(6, 6), 't': datetime(2018, 1, 1, 12, 2, 0), 'value': 3},
            {'geometry': Point(6, 12), 't': datetime(2018, 1, 1, 12, 3, 0), 'value': 4},
            {'geometry': Point(6, 18), 't': datetime(2018, 1, 1, 12, 4, 0), 'value': 5}
        ]).set_index('t')
        geo_df = GeoDataFrame(df, crs={'init': 'epsg:31256'})
        traj = Trajectory(1, geo_df)
        traj.apply_offset_seconds('value', -120)
        self.assertEqual(traj.df.iloc[2].value, 5)
        self.assertEqual(traj.df.iloc[2].geometry, Point(6, 6))

From CSV to GeoDataFrame in two lines

Pandas is great for data munging and with the help of GeoPandas, these capabilities expand into the spatial realm.

With just two lines, it’s quick and easy to transform a plain headerless CSV file into a GeoDataFrame. (If your CSV is nice and already contains a header, you can skip the header=None and names=FILE_HEADER parameters.)

usecols=USE_COLS is also optional and allows us to specify that we only want to use a subset of the columns available in the CSV.

After the obligatory imports and setting of variables, all we need to do is read the CSV into a regular DataFrame and then construct a GeoDataFrame.

import pandas as pd
from geopandas import GeoDataFrame
from shapely.geometry import Point

FILE_NAME = "/temp/your.csv"
FILE_HEADER = ['a', 'b', 'c', 'd', 'e', 'x', 'y']
USE_COLS = ['a', 'x', 'y']

df = pd.read_csv(
    FILE_NAME, delimiter=";", header=None,
    names=FILE_HEADER, usecols=USE_COLS)
gdf = GeoDataFrame(
    df.drop(['x', 'y'], axis=1),
    crs={'init': 'epsg:4326'},
    geometry=[Point(xy) for xy in zip(df.x, df.y)])

It’s also possible to create the point objects using a lambda function as shown by weiji14 on GIS.SE.

Visualization of borehole logs with QGIS

At Oslandia, we have been working on a component based on the QGIS API for the visualization of well and borehole logs.

This component aims at displaying data collected vertically along wells dug underground. It mainly focuses on data organized in series of contiguous samples, and is generic enough to be used both for vertical data (where the Y axis corresponds to a depth underground) and horizontal data (where the X axis corresponds to time, for example). One of the main concerns was to ensure good display performance with an important volume of sample points (usually hundreds of thousands of sample points).

We have already worked on plot components in the past, but for specific QGIS plugins (both for time series and stratigraphic logs), and thought we would put these past experiences to good use by creating a more generic library.

We decided to represent sample points as geometry features in order to be able to reuse the rich symbology engine of QGIS. This allows users to represent their data however they like, without us having to rewrite an entire symbology engine. We also benefit from all the performance optimizations that have been added and polished over the years (on-the-fly geometry simplification, for example).

Albeit written in Python, the component achieves good display performance. The key trick was to avoid copying data between the sample points read by QGIS and the Python graphing component. This is achieved thanks to the fact that the PyQGIS API has some functions that respect the buffer protocol.
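
The following generic snippet (not the component's actual code) illustrates the zero-copy idea behind the buffer protocol:

import numpy as np

# Any object exposing the buffer protocol (bytes, bytearray, Qt's
# QByteArray, ...) can be wrapped by numpy without copying:
raw = bytearray(8 * 100000)                    # stand-in for data from QGIS
values = np.frombuffer(raw, dtype=np.float64)  # a view on raw, not a copy
assert values.base is not None                 # the array shares raw's memory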

You can have a look at the following video to see this component integrated in an existing plugin.

Visualization of borehole logs within QGIS

It is distributed as usual under an open source license and the code repository can be found on GitHub.

Do not hesitate to contact us ( [email protected] ) if you are interested in any enhancements around this component.

PyQGIS101 part 10 published!

PyQGIS 101: Introduction to QGIS Python programming for non-programmers has now reached the part 10 milestone!

Beyond the obligatory Hello world! example, the contents so far include:

If you’ve been thinking about learning Python programming, but never got around to actually start doing it, give PyQGIS101 a try.

I’d like to thank everyone who has already provided feedback to the exercises. Every comment is important to help me understand the pain points of learning Python for QGIS.

I recently read an article – unfortunately I forgot to bookmark it and cannot locate it anymore – that described the problems of learning to program very well: in the beginning it's rather slow going; you don't know the right terminology and therefore don't know what to google when you run into issues. But there comes this point when you finally get it, when the terminology becomes clearer, when you start thinking “that might work” and it actually does! I hope that PyQGIS101 will be a help along the way.

QGIS Server 3 : OGC Certification work for WFS 1.1.0

QGIS Server is an open source OGC data server which uses the QGIS engine as its backend. What makes it really awesome is that a simple QGIS desktop project file can be served as web services with exactly the same rendering, and without any mapfile or XML coding by hand.

QGIS Server provides a way to serve OGC web services like WMS, WCS, WFS and WMTS resources from a QGIS project, but also extends them with requests like GetPrint, which takes advantage of the QGIS map composer's power to generate high quality PDF outputs.

Since version 3.0.2, QGIS Server has been certified as an official OGC reference implementation for WMS 1.3.0, and reports are generated daily in continuous integration to avoid regressions.


Thus, the next step was naturally to take a look at WFS 1.1.0, thanks to the support of the QGIS Grant Program.

As a side note, this Grant Program is made possible thanks to your direct sponsorship and micro-donations to QGIS.org.

TEAM Engine test suite for WFS 1.1.0

We use a tool provided by the OGC Compliance Program to run dedicated tests on the server: TEAM Engine (Test, Evaluation, And Measurement Engine).

Test suites are available through a web interface. However, for the needs of continuous integration, these tests have to be run without user interaction. In the case of WMS 1.3.0, nothing is easier than using the REST API:

$ curl "http://localhost:8081/teamengine/rest/suites/wms/1.20/run?queryable=queryable&basic=basic&capabilities-url=http://172.17.0.2/qgisserver?REQUEST=GetCapabilities%26SERVICE=WMS%26VERSION=1.3.0%26MAP=/data/teamengine_wms_130.qgs"


However, the WFS 1.1.0 test suite does not provide a REST API and makes the situation less straightforward. We switched to using TEAM Engine directly from command line:

$ cd te_base
$ ./bin/unix/test.sh -source=wfs/1.1.0/ctl/main.ctl -form=params.xml

The params.xml file allows configuring the underlying tests; in this particular case, it contains the GetCapabilities URL of the QGIS Server instance to test. Results are available thanks to the viewlog.sh shell script:

$ ./bin/unix/viewlog.sh -logdir=te_base/users/root/ -session=s0001
Test wfs:wfs-main type Mandatory (s0001) Failed (InheritedFailure)
   Test wfs:readiness-tests type Mandatory (s0001/d68e38807_1) Failed (InheritedFailure)
      Test ctl:SchematronValidatingParser type Mandatory (s0001/d68e38807_1/d68e588_1) Failed
      Test wfs:basic-main type Mandatory (s0001/d68e38807_1/d68e636_1) Failed (InheritedFailure)
         Test wfs:run-GetCapabilities-basic-cc-GET type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1) Failed (InheritedFailure)
            Test wfs:wfs-1.1.0-Basic-GetCapabilities-tc1 type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1095_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1095_1/d68e1234_1) Passed
            Test wfs:wfs-1.1.0-Basic-GetCapabilities-tc2 type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1100_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1100_1/d68e1305_1) Passed
            Test wfs:wfs-1.1.0-Basic-GetCapabilities-tc3 type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1105_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1105_1/d68e1558_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1105_1/d68e1582_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1105_1/d68e1606_1) Passed
...
...

Finally, a Python script has been written to read these logs and generate an easily readable HTML report. Thanks to our QGIS-Server-CertifSuite, which provides the continuous integration infrastructure with Docker images, these reports are also generated daily.

Bugfix and Conclusion

The first results were clear: a lot of work was necessary to get QGIS Server certified for WFS 1.1.0!

We started fixing the issues one by one:

And now we have much better support than six months ago.

However, some work still needs to be done to finally obtain the OGC certification for WFS 1.1.0. To be continued!

Please contact us if you want QGIS Server to become a reference implementation for all OGC services!

Geocoding with Geopy

Need to geocode some addresses? Here’s a five-lines-of-code solution based on “An A-Z of useful Python tricks” by Peter Gleeson:

from geopy import GoogleV3
place = "Krems an der Donau"
location = GoogleV3().geocode(place)
print(location.address)
# WKT points are "POINT(x y)", i.e. longitude first, space-separated
print("POINT({} {})".format(location.longitude, location.latitude))
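
As an alternative that does not need an API key, here is a hedged variant using the Nominatim geocoder (recent geopy versions require a custom user_agent string for it):

from geopy.geocoders import Nominatim
location = Nominatim(user_agent="my-geocoding-script").geocode(place)
print(location.address)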

For more info, check out geopy:

geopy is a Python 2 and 3 client for several popular geocoding web services.
geopy includes geocoder classes for the OpenStreetMap Nominatim, ESRI ArcGIS, Google Geocoding API (V3), Baidu Maps, Bing Maps API, Yandex, IGN France, GeoNames, Pelias, geocode.earth, OpenMapQuest, PickPoint, What3Words, OpenCage, SmartyStreets, GeocodeFarm, and Here geocoder services.

Plotting GPS Trajectories with error ellipses using Time Manager

This is a guest post by Time Manager collaborator and Python expert, Ariadni-Karolina Alexiou.

Today we’re going to look at how to visualize the error bounds of a GPS trace in time. The goal is to do an in-depth visual exploration using QGIS and Time Manager in order to learn more about the data we have.

The Data

We have a file that contains GPS locations of an object in time, which has been created by a GPS tracker. The tracker also keeps track of the error covariance matrix for each point in time, that is, what confidence it has in the measurements it gives. Here is what the file looks like:

data.png

Error Covariance Matrix

What are those sd* fields? According to the manual: “The estimated standard deviations of the solution assuming a priori error model and error parameters by the positioning options.” What it basically means is that the real GPS location will be located no further than three standard deviations across north and east from the measured location most (99.7%) of the time. A way to represent this visually is to create an ellipse that maps the area where the real location can be.

ellipse_ab

An ellipse can be uniquely defined by the lengths of the segments a and b and its rotation angle. For more details on how to get those ellipse parameters from the covariance matrix, please see the footnote.
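
For the impatient, here is a sketch (under the covariance definition given in the footnote) of how the ellipse parameters can be derived with an eigendecomposition:

import numpy as np

def error_ellipse(sde, sdn, sdne, n_std=3.0):
    # 2x2 covariance matrix as defined in the footnote
    cov = np.array([[sde**2, np.sign(sdne) * sdne**2],
                    [np.sign(sdne) * sdne**2, sdn**2]])
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    b, a = n_std * np.sqrt(eigvals)         # semi-minor and semi-major axes
    # rotation: angle of the eigenvector of the largest eigenvalue
    angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
    return a, b, angle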

Ground truth data

We also happen to have a file with the actual locations of the object (also in longitudes and latitudes) for the same time frame as the GPS data (also in seconds), provided through another tracking method which is more accurate in this case.

actual_data

This is because the object was me, running on a rooftop in Zürich wearing several tracking devices (not just GPS), and I knew exactly which floor tiles I was hitting.

The goal is to explore, visually, the relationship between the GPS data and the actual locations in time. I hope to get an idea of the accuracy, and what can influence it.

First look

Loading the GPS data into QGIS and Time Manager, we can indeed see the GPS locations vis-a-vis the actual locations in time.

actual_vs_gps

Let’s see if the actual locations that were measured independently fall inside the ellipse coverage area. To do this, we need to use the covariance data to render ellipses.

Creating the ellipses

I considered using the ellipse marker from QGIS.

ellipse_marker.png

It is possible to switch from Millimeter to Map Unit and edit a data defined override for symbol width, height and rotation. Symbol width would be the a parameter of the ellipse, symbol height the b parameter, and rotation simply the angle. The thing is, we haven't computed any of these values yet; we just have the error covariance values in our dataset.

Because of the re-projections and matrix calculations inherent in extracting the a, b and angle of the error ellipse at each point in time, I decided to do this calculation offline using Python and the relevant libraries, and then simply add a WKT text field with a polygon representation of the ellipse to the file I had. That way, the augmented data can be re-used outside QGIS, for example to visualize using Leaflet or similar. I could have done a hybrid solution, where I calculated a, b and the angle offline and then used the dynamic rendering capabilities of QGIS, as well.

I also decided to dump the CSV into an SQLite database with an index on the time column, to make the time range queries (which Time Manager does) run faster.

Putting it all together

The code for transforming the initial GPS data CSV file into an SQLite database can be found on my GitHub, along with a small sample of the file containing the GPS data.

I created three ellipses per timestamp, to represent the three standard deviations. Opening QGIS (I used version 2.12, Las Palmas) and going to Layer > Add Layer > Add SpatiaLite Layer, we see the following dialog:

add_spatialite2.png

After adding the layer (say, for the second standard deviation ellipse), we can add it to Time Manager like so:

add_to_tm

We repeat the process three times to add the three types of ellipses, taking care to style each ellipse differently. I used a transparent fill for the second and third standard deviation ellipses.

I also added the data of my actual positions.

Here is an exported video of the trace (at a place in time where I go forward, backwards and forward again and then stay still).

gps

Conclusions

Looking at the relationship between the actual data and the GPS data, we can see the following:

  • Although the actual position differs from the measured one, the actual position always lies within one or two standard deviations of the measured position (so, inside the purple and golden ellipses).
  • The direction of movement has greater uncertainty (the ellipse is elongated across the line I am running on).
  • When I am standing still, the GPS position is still moving, and unfortunately does not converge to my actual stationary position, but drifts. More research is needed regarding what happens with the GPS data when the tracker is actually still.
  • The GPS position doesn't jump erratically, which is good; however, it seems to have trouble ‘catching up’ with the actual position. This means that if we're looking to measure velocity in particular, the GPS tracker might underestimate it.

These findings are empirical, since they are extracted from a single visualization, but we have already learned some new things. We have some new ideas for what questions to ask on a large scale in the data, what additional experiments to run in the future and what limitations we may need to be aware of.

Thanks for reading!

Footnote: Error Covariance Matrix calculations

The error covariance matrix is (according to the definitions of the sd* columns in the manual):

    | sde * sde                  sign(sdne) * sdne * sdne |
    | sign(sdne) * sdne * sdne   sdn * sdn                |

It is not a diagonal matrix, which means that the errors across the ‘north’ dimension and the ‘east’ dimension are not exactly independent.

An important detail is that, while the position is given in longitudes and latitudes, the sdn, sde and sdne fields are in meters. To address this in the code, we convert the longitude and latitudes using UTM projection, so that they are also in meters (northings and eastings).

For more details on the mathematics used to plot the ellipses check out this article by Robert Eisele and the implementation of the ellipse calculations on my github.

Celebrating 35 years of GRASS GIS!

Today marks 35 years of GRASS GIS development – with frequent releases the project keeps pushing the limits in terms of geospatial data processing quality and performance.

GRASS (Geographic Resources Analysis Support System) is a free and open source Geographic Information System (GIS) software suite used for geospatial data management and analysis, image processing, graphics and map production, spatial modeling, and 3D visualization. Since the major GRASS GIS 7 version, it also comes with a feature rich engine for space-time cubes useful for time series processing of Landsat and Copernicus Sentinel satellite data and more. GRASS GIS can be either used as a desktop application or as a backend for other software packages such as QGIS and R. Furthermore, it is frequently used on HPC and cloud infrastructures for massive parallelized data processing.

Brief history
In 1982, under the direction of Bill Goran at the U.S. Army Corps of Engineers Construction Engineering Research Laboratory (CERL), two GIS development efforts were undertaken. First, Lloyd Van Warren, a University of Illinois engineering student, began development on a new computer program that allowed analysis of mapped data.  Second, Jim Westervelt (CERL) developed a GIS package called “LAGRID – the Landscape Architecture Gridcell analysis system” as his master’s thesis. Thirty five years ago, on 29 July 1983, the user manual for this new system titled “GIS Version 1 Reference Manual” was first published by J. Westervelt and M. O’Shea. With the technical guidance of Michael Shapiro (CERL), the software continued its development at the U.S. Army Corps of Engineers Construction Engineering Research Laboratory (USA/CERL) in Champaign, Illinois; and after further expansion version 1.0 was released in 1985 under the name Geographic Resources Analysis Support System (GRASS). The GRASS GIS community was established the same year with the first annual user meeting and the launch of GRASSnet, one of the internet’s early mailing lists. The user community expanded to a larger audience in 1991 with the “Grasshopper” mailing list and the introduction of the World Wide Web. The users’ and programmers’ mailing lists archives for these early years are still available online.
In the mid 1990s the development transferred from USA/CERL to The Open GRASS Consortium (a group who would later generalize to become today’s Open Geospatial Consortium — the OGC). The project coordination eventually shifted to the international development team made up of governmental and academic researchers and university scientists. Reflecting this shift to a project run by the users, for the users, in 1999 GRASS GIS was released under the terms of the GNU General Public License (GPL). A detailed history of GRASS GIS can be found at https://grass.osgeo.org/history/.

Where to next?
The development on GRASS GIS continues with more energy and interest than ever. Parallel to the long-term maintenance of the GRASS 7.4 stable series, effort is well underway on the new upcoming cutting-edge 7.6 release, which will bring many new features, enhancements, and cleanups. As in the past, the GRASS GIS community is open to any contribution, be it in the form of programming, documentation, testing, and financial sponsorship. Please contact us!

About GRASS GIS

The Geographic Resources Analysis Support System (https://grass.osgeo.org/), commonly referred to as GRASS GIS, is an Open Source Geographic Information System providing powerful raster, vector and geospatial processing capabilities in a single integrated software suite. GRASS GIS includes tools for spatial modeling, visualization of raster and vector data, management and analysis of geospatial data, and the processing of satellite and aerial imagery. It also provides the capability to produce sophisticated presentation graphics and hardcopy maps. GRASS GIS has been translated into about twenty languages and supports a huge array of data formats. It can be used either as a stand-alone application or as backend for other software packages such as QGIS and R geostatistics. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).

The GRASS Development Team, July 2018

The post Celebrating 35 years of GRASS GIS! appeared first on GFOSS Blog | GRASS GIS and OSGeo News.

Create a QGIS vector data provider in Python is now possible


Why python data providers?

My main reasons for having a Python data provider were:

  • quick prototyping
  • web services
  • why not?


This topic had been floating in my head for a while when I decided to give it a second look; I finally implemented it and merged it for the upcoming 3.2 release.

 
How it’s been done

To make this possible I had to:

  • create a public API for registering the providers
  • create the Python bindings (the hard part)
  • create a sample Python vector data provider (the boring part)
  • make all the tests pass


First, let me say that it wasn't a walk in the park: the Python bindings part is always like diving into voodoo and black magic recipes before I can get it to work properly.

For the Python provider sample implementation I decided to re-implement the memory (aka: scratch layers) provider because that’s one of the simplest providers and it does not depend on any external storage or backend.


How to and examples

For now, the main sources of information are the API and the tests:

To register your own provider (PyProvider in the snippet below) these are the basic steps:

metadata = QgsProviderMetadata(PyProvider.providerKey(), PyProvider.description(), PyProvider.createProvider)
QgsProviderRegistry.instance().registerProvider(metadata)

To create your own provider you will need at least the following components:

  • the provider class itself (subclass of QgsVectorDataProvider)
  • a feature source (subclass of QgsAbstractFeatureSource)
  • a feature iterator (subclass of QgsAbstractFeatureIterator)

Be aware that the implementation of a data provider is not easy and you will need to write a lot of code, but at least you can get some inspiration from the existing example and the skeleton below.
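
A bare-bones skeleton of those three components might look like this (class names and method bodies are illustrative, not the actual sample provider's code):

from qgis.core import (QgsVectorDataProvider, QgsAbstractFeatureSource,
                       QgsAbstractFeatureIterator)

class PyFeatureIterator(QgsAbstractFeatureIterator):
    # must implement fetchFeature(), rewind() and close()
    ...

class PyFeatureSource(QgsAbstractFeatureSource):
    # must implement getFeatures(request), returning a PyFeatureIterator
    ...

class PyProvider(QgsVectorDataProvider):
    @classmethod
    def providerKey(cls):
        return 'pyprovider'  # unique key used at registration time

    @classmethod
    def description(cls):
        return 'Python sample provider'

    @classmethod
    def createProvider(cls, uri, providerOptions):
        return PyProvider(uri, providerOptions)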


Enjoy writing data providers in Python, and please let me know if you've found this implementation useful!

The post Create a QGIS vector data provider in Python is now possible first appeared on Open Web Solutions, GIS & Python Development.

“QGZ” – A new default project file format for QGIS

Last year we had the opportunity to implement ‘.QGZ’ as a new variant of the QGIS 3 project file format.

This is simply a zipped container for the QGS XML file. We took advantage of that container to store the auxiliary storage database in it – only if users choose that optional format, though.

In classical mode, the auxiliary database is saved as a .qgd file alongside the XML file.

qgd auxiliary storage file

 

qgz file format saving

When choosing the zipped container, the qgd file goes inside the qgz, and this becomes totally transparent for end users. No more issues: you can't delete it or forget to copy it when sharing project files!
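
Since a .qgz file is just a zip archive, you can peek inside it with a few lines of Python (the project name here is hypothetical):

import zipfile

with zipfile.ZipFile('myproject.qgz') as z:
    print(z.namelist())  # e.g. ['myproject.qgs', 'myproject.qgd']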

The great news today is that for QGIS 3.2, qgz will be the default format!

This will offer many possibilities, like embedding:

  • Resources like fonts, SVG symbols, color ramps, and all styling information
  • A unified container for offline editing, embedding data in gpkg format
  • Plugins, scripts, processing algorithms and models

If you are interested in going further this, please contact us!

Docker images for QGIS

With support from Orange we’ve created Docker images for QGIS. All the material for building and running these images is open-source and freely available on GitHub: https://github.com/Oslandia/docker-qgis.

The docker-qgis repository includes two Docker images: qgis-build and qgis-exec.

The qgis-build image includes all the libraries and tools necessary for building QGIS from source. Building QGIS requires installing a lot of dependencies, so it might not be easy depending on the operating system you use. The qgis-build image makes the build easy and repeatable. The result of a QGIS build is a set of Debian Stretch packages that can be used for building the qgis-exec image, as explained below.

The qgis-exec image includes a runtime environment for QGIS Server. The image makes it easy to deploy and run QGIS Server. It is meant to be generic and usable in various contexts, including production. There are two ways to build the qgis-exec image: it can be built from the official QGIS 3 Debian packages (http://qgis.org/debian/), or from local QGIS Debian Stretch packages that were built using the qgis-build image.

We encourage you to go test and use our images! We haven’t pushed the images to the Docker Hub yet, but this is in the plans.

Feel free to contact us on GitHub or by email for any question or request!

Movement data in GIS #12: why you should be using PostGIS trajectories

In short: both writing trajectory queries as well as executing them is considerably faster using PostGIS trajectories (as LinestringM) rather than the commonly used point-based approach.

Here are a couple of examples to give you an impression of the differences.

Spoiler alert! Trajectory queries are up to 500 times faster than comparable point-based queries.

A quick look at indexing

In both cases, we have indexed the tracker id, geometry, and time columns to speed up query processing.

The trajectory table has 3 indexes:

  • gist (time_range)
  • gist (track gist_geometry_ops_nd)
  • btree (tracker)

The point-based table has 4 indexes:

  • gist (pt)
  • btree (trajectory_id)
  • btree (tracker)
  • btree (t)

Length

First, let’s see how to determine trajectory length for all observed moving objects (identified by a tracker id).

Using the point-based approach, we first need to ensure that the points are in the correct temporal order, create the lines, and finally sum up their length:

WITH ordered AS (
 SELECT trajectory_id, tracker, t, pt
 FROM geolife.trajectory_pt
 ORDER BY t
), tmp AS (
 SELECT trajectory_id, tracker, st_makeline(pt) traj
 FROM ordered 
 GROUP BY trajectory_id, tracker
)
SELECT tracker, round(sum(ST_Length(traj::geography)))
FROM tmp
GROUP BY tracker 
ORDER BY tracker

With trajectories, we can go right to computing lengths:

SELECT tracker, round(sum(ST_Length(track::geography)))
FROM geolife.trajectory_ext
GROUP BY tracker
ORDER BY tracker

On my test system, the trajectory query run time is 22.7 sec instead of 43.0 sec for the point-based approach:

Duration

Compared to trajectory length, duration is less complicated in the point-based approach:

WITH tmp AS (
 SELECT trajectory_id, tracker, min(t) start_time, max(t) end_time
 FROM geolife.trajectory_pt
 GROUP BY trajectory_id, tracker
)
SELECT tracker, sum(end_time - start_time)
FROM tmp
GROUP BY tracker
ORDER BY tracker

Still, the trajectory query is less complex and much faster at 31 ms instead of 6.0 sec:

SELECT tracker, sum(upper(time_range) - lower(time_range))
FROM geolife.trajectory_ext
GROUP BY tracker
ORDER BY tracker

Temporal filter

Extracting trajectories that occurred during a certain time frame is another common use case:

WITH tmp AS (
 SELECT trajectory_id, tracker, min(t) start_time, max(t) end_time
 FROM geolife.trajectory_pt
 GROUP BY trajectory_id, tracker
)
SELECT trajectory_id, tracker, start_time, end_time
FROM tmp
WHERE end_time > '2008-11-26 11:00'
AND start_time < '2008-11-26 15:00'
ORDER BY tracker

This point-based query takes 6.0 sec while the shorter trajectory query finishes in 12 ms:

SELECT id, tracker, time_range
FROM geolife.trajectory_ext
WHERE time_range && '[2008-11-26 11:00+1,2008-11-26 15:00+01]'::tstzrange

or equally fast (12 ms) by making use of the n-dimensional index:

WHERE track &&&	ST_Collect(
 ST_MakePointM(-180, -90, extract(epoch from '2008-11-26 11:00'::timestamptz)),
 ST_MakePointM(180, 90, extract(epoch from '2008-11-26 15:00'::timestamptz))
)

Spatial filter

Finally, of course, let’s have a look at spatial filters, for example, trajectories that start in a certain area:

WITH my AS ( 
 SELECT ST_Buffer(ST_SetSRID(ST_MakePoint(116.31894,39.97472),4326),0.0005) areaA
), tmp AS (
 SELECT trajectory_id, tracker, min(t) t 
 FROM geolife.trajectory_pt
 GROUP BY trajectory_id, tracker
)
SELECT distinct traj.tracker, traj.trajectory_id 
FROM tmp
JOIN geolife.trajectory_pt traj
ON tmp.trajectory_id = traj.trajectory_id AND traj.t = tmp.t
JOIN my
ON ST_Within(traj.pt, my.areaA)

This point-based query takes 6.0 sec while the shorter trajectory query finishes in 488 ms:

WITH my AS ( 
 SELECT ST_Buffer(ST_SetSRID(ST_MakePoint(116.31894, 39.97472),4326),0.0005) areaA
)
SELECT id, tracker, ST_AsText(track)
FROM geolife.trajectory_ext
JOIN my
ON areaA && track
AND ST_Within(ST_StartPoint(track), areaA)

For the more generic “does this trajectory intersect another geometry” case, the points can also be aggregated into a linestring on the fly, but that takes 21.9 sec:

I’ll be presenting more work on PostGIS trajectories at GI_Forum in Salzburg in July. In the talk, I’ll also have a look at the custom PG-Trajectory datatype. Here’s the full open-access paper:

Graser, A. (2018) Evaluating Spatio-temporal Data Models for Trajectories in PostGIS Databases. GI_Forum ‒ Journal of Geographic Information Science, 1-2018, 16-33. DOI: 10.1553/giscience2018_01_s16.

You can find my fork of the PG-Trajectory project – including all necessary fixes – on Bitbucket.


This post is part of a series. Read more about movement data in GIS.

Welcome QGIS 3 and bye bye Madeira

Last week I was in Madeira at the hackfest, and like all the past events, this was an amazing happening. For those of you who have never been there, a QGIS hackfest is typically an event where QGIS developers and other passionate contributors, like documentation writers, translators etc., gather together to discuss the future of their beloved QGIS software. QGIS hackfests are informal events where meetings are scheduled freely and any topic relevant to the project can be discussed. This time we brought to the table some interesting topics like:

  • the future of processing providers: should they be part of QGIS code or handled independently as plugins?
  • the road forward to a better bug reporting system and CI platform: move to gitlab?
  • the certification program for QGIS training courses: how (and how much) training companies should give back to the project?
  • SWOT analysis of current QGIS project: very interesting discussion about the status of the project.
  • QGIS Qt Quick modules for mobile QGIS app
There were also some mentoring sessions where I presented:

  • How to set up a development environment and make your first pull request
  • How to write tests for QGIS (in both Python and C++)

All the video recordings of the sessions are available here: https://github.com/qgis/QGIS/wiki/DeveloperMeetingMadeira2018

Here is a link to the Vagrant QGIS developer VM I prepared for the sessions: https://github.com/elpaso/qgis-dev-vagrant/

I got good feedback from other devs about my sessions and I'm really happy that somebody found them useful; one of the main goals of a QGIS hackfest should really be to help other developers ramp up quickly into the project. Other than that, I also found the time to update some of my old plugins, like GeoCoding and QuickWKT, to QGIS 3.0.

Thanks to Giovanni Manghi and to the Madeira Government for the organization, and thanks to all QGIS sponsors and donors!

About me: I started as a QGIS plugin author, continued as the developer of the official plugin repository at https://plugins.qgis.org, and I'm now one of the top 5 QGIS core contributors. After almost 10 years in the QGIS project, I'm not only a proud member of the QGIS community but also an advocate for the open source GIS software movement.

The post Welcome QGIS 3 and bye bye Madeira first appeared on Open Web Solutions, GIS & Python Development.

QGIS 3.0 has been released

We are very pleased to relay the announcement of the QGIS 3.0 major release, called “Girona”.

The whole QGIS community has been working hard on so many changes for the last two years. This version is a major step in the evolution of QGIS. There are a lot of features, and many changes to the underlying code.

At Oslandia, we pushed some great new features, a lot of bugfixes and made our best to help in synchronizing efforts with the community.

Please note that the installers and binaries are still being built for all platforms; Ubuntu and Windows packages are already there, and Mac packages are still building.

The ChangeLog and the documentation are still being worked on so please start testing that brand new version and let’s make it stronger and stronger together. The more contributors, the better!

While QGIS 3.0 represents a lot of work, note that this version is not a “Long Term Release” and may not be as stable as required for production work.

We would like to thank all the contributors who helped making QGIS 3 a reality.

Oslandia contributors should be acknowledged too: Hugo Mercier, Paul Blottière, Régis Haubourg, Vincent Mora and Loïc Bartoletti.

We also want to thank those who directly supported important features of QGIS 3:

Orange

The QWAT / QGEP organization

The French Ministry for an Ecological and Inclusive Transition

ESG

and also Grenoble Alpes Métropole
