Related Plugins and Tags

QGIS Planet

Movement data in GIS #29: power your web apps with movement data using mobilitydb-sqlalchemy

This is a guest post by Bommakanti Krishna Chaitanya @chaitan94

Introduction

This post introduces mobilitydb-sqlalchemy, a tool I’m developing to make it easier for developers to use movement data in web applications. Many web developers use Object Relational Mappers such as SQLAlchemy to read/write Python objects from/to a database.

Mobilitydb-sqlalchemy integrates the moving objects database MobilityDB into SQLAlchemy and Flask. This is an important step towards dealing with trajectory data using appropriate spatiotemporal data structures rather than plain spatial points or polylines.

To make it even better, mobilitydb-sqlalchemy also supports MovingPandas. This makes it possible to write MovingPandas trajectory objects directly to MobilityDB.

For this post, I have made a demo application which you can find live at https://mobilitydb-sqlalchemy-demo.adonmo.com/. The code for this demo app is open source and available on GitHub. Feel free to explore both the demo app and code!

In the following sections, I will explain the most important parts of this demo app, to show how to use mobilitydb-sqlalchemy in your own webapp. If you want to reproduce this demo, you can clone the demo repository and run "docker-compose up --build", which automatically sets up the Docker image for you along with running the backend and frontend. Just follow the instructions in README.md for more details.

Declaring your models

For the demo, we use a very simple table with just two columns: an id and a tgeompoint column for the trip data. Using mobilitydb-sqlalchemy, this is as simple as defining any regular table:

from flask_sqlalchemy import SQLAlchemy
from mobilitydb_sqlalchemy import TGeomPoint

db = SQLAlchemy()

class Trips(db.Model):
   __tablename__ = "trips"
   trip_id = db.Column(db.Integer, primary_key=True)
   trip = db.Column(TGeomPoint)

Note: The library also allows you to use the Trajectory class from MovingPandas. More about this is explained later in this tutorial.

Populating data

When adding data to the table, mobilitydb-sqlalchemy expects data in the tgeompoint column to be a time-indexed pandas DataFrame with two columns: one for the spatial data, called "geometry", holding Shapely Point objects, and one for the temporal data, "t", holding regular Python datetime objects.

from datetime import datetime

import pandas as pd
from shapely.geometry import Point

# Prepare and insert the data.
# Typically it won't be hardcoded like this, but it might come from
# other data sources like a different database or CSV files
df = pd.DataFrame(
   [
       {"geometry": Point(0, 0), "t": datetime(2012, 1, 1, 8, 0, 0),},
       {"geometry": Point(2, 0), "t": datetime(2012, 1, 1, 8, 10, 0),},
       {"geometry": Point(2, -1.9), "t": datetime(2012, 1, 1, 8, 15, 0),},
   ]
).set_index("t")

trip = Trips(trip_id=1, trip=df)
db.session.add(trip)
db.session.commit()

Writing queries

In the demo, you see two modes, both designed specifically to show how functions defined within MobilityDB can be leveraged by our webapp.

1. All trips mode – In this mode, we extract all trip data, along with distance travelled within each trip, and the average speed in that trip, both computed by MobilityDB itself using the ‘length’, ‘speed’ and ‘twAvg’ functions. This example also shows that MobilityDB functions can be chained to form more complicated queries.


from sqlalchemy import func

trips = db.session.query(
   Trips.trip_id,
   Trips.trip,
   func.length(Trips.trip),
   func.twAvg(func.speed(Trips.trip))
).all()

2. Spatial query mode – In this mode, we extract only selective trip data, filtered by a user-selected region of interest. We then make a query to MobilityDB to extract only the trips which pass through the specified region. We use MobilityDB’s ‘intersects’ function to achieve this filtering at the database level itself.


trips = db.session.query(
   Trips.trip_id,
   Trips.trip,
   func.length(Trips.trip),
   func.twAvg(func.speed(Trips.trip))
).filter(
   func.intersects(Point(lat, lng).buffer(0.01).wkb, Trips.trip),
).all()

Using MovingPandas Trajectory objects

Mobilitydb-sqlalchemy also provides first-class support for MovingPandas Trajectory objects; MovingPandas can be installed as an optional dependency of this library. Using this Trajectory class instead of plain DataFrames gives us much richer functionality over trajectory data, like speed analysis, interpolation, splitting and simplification of trajectories, calculating bounding boxes, etc. To make use of this feature, you have to set the use_movingpandas flag to True while declaring your model, as shown in the code snippet below.

class TripsWithMovingPandas(db.Model):
   __tablename__ = "trips"
   trip_id = db.Column(db.Integer, primary_key=True)
   trip = db.Column(TGeomPoint(use_movingpandas=True))

Now when you query this table, you automatically get the data parsed into Trajectory objects without having to do anything else. This also works during insertion: you can directly assign your MovingPandas Trajectory objects to the trip column. The code snippet below shows how inserting and querying work in MovingPandas mode.

from datetime import datetime

import movingpandas as mpd
import pandas as pd
from geopandas import GeoDataFrame
from shapely.geometry import Point

# Prepare and insert the data.
# Typically it won't be hardcoded like this, but it might come from
# other data sources like a different database or CSV files
df = pd.DataFrame(
   [
       {"geometry": Point(0, 0), "t": datetime(2012, 1, 1, 8, 0, 0),},
       {"geometry": Point(2, 0), "t": datetime(2012, 1, 1, 8, 10, 0),},
       {"geometry": Point(2, -1.9), "t": datetime(2012, 1, 1, 8, 15, 0),},
   ]
).set_index("t")

geo_df = GeoDataFrame(df)
traj = mpd.Trajectory(geo_df, 1)

trip = Trips(trip_id=1, trip=traj)
db.session.add(trip)
db.session.commit()

# Querying over this table would automatically map the resulting tgeompoint 
# column to movingpandas’ Trajectory class
result = db.session.query(TripsWithMovingPandas).filter(
   TripsWithMovingPandas.trip_id == 1
).first()

print(result.trip.__class__)
# <class 'movingpandas.trajectory.Trajectory'>

Bonus: trajectory data serialization

Along with mobilitydb-sqlalchemy, I have also recently released trajectory data serialization/compression libraries based on Google's Encoded Polyline Format Algorithm, for Python and JavaScript, called trajectory and trajectory.js respectively. These libraries let you send trajectory data in a compressed format, resulting in smaller payloads when sending your data through human-readable serialization formats like JSON. In some of the internal APIs we use at Adonmo, we have seen this reduce our response sizes by more than half (>50%), sometimes up to 90%.

Want to learn more about mobilitydb-sqlalchemy? Check out the quick start & documentation.


This post is part of a series. Read more about movement data in GIS.

Who is behind QGIS at Oslandia?

Are you using QGIS and looking for support services to improve your experience and solve problems? Oslandia is here to help you with our full range of QGIS services! Discover our team members below.

You will probably interact first with our pre-sales engineer Bertrand Parpoil. He has led Geographical Information System projects for 15 years for large corporations, public administrations and hi-tech SMEs. Bertrand will listen to your needs and explore your use cases to offer you the best set of services.

Régis Haubourg also takes part in the first steps of projects to analyze your usages and improve them. A GIS expert, he knows QGIS by heart and will make the most of its capabilities. As QGIS Community Manager at Oslandia, he is very active in the QGIS community of developers and contributors. He is president of the francophone OSGeo local chapter (OSGeo-fr), a QGIS voting member, organizes the French QGIS day conference in Montpellier, and participates in QGIS community meetings. Before joining Oslandia, he led the migration to QGIS and PostGIS at the Adour-Garonne Water Agency, and now guides our clients through their GIS migrations to open source solutions. Régis is also a great asset when working on water, hydrology and other specific thematic subjects.

Loïc Bartoletti develops QGIS, specializing in features matching his fields of interest: network management, topography, urbanism, architecture… You will find him contributing to advanced vector editing in QGIS and writing Python plugins, notably for DICT management. Pushing CAD, and migrations from CAD tools to GIS and QGIS, is one of his major goals. He will develop your custom applications, combining technical expertise and functional competence. When bored, Loïc packages software on FreeBSD.

Vincent Mora is a senior developer in Python and C++, as well as a PostGIS expert. He has strong experience in numerical simulation and likes coupling GIS (PostGIS, QGIS) with 3D numerical computing for risk management or production optimization. Vincent is an official QGIS committer and can integrate your needs directly into the core of the software. He is also a GDAL committer and optimizes the low-level layers of your applications. Among numerous activities, Vincent is the lead developer of Hydra Software, a tool dedicated to unified hydraulics and hydrology modelling and simulation based on QGIS.

Hugo Mercier has also been an official QGIS committer for several years. He regularly speaks at international conferences about PostGIS, QGIS and other open source GIS software. He will implement your needs as new QGIS features, develop innovative plugins (like QGeoloGIS), and design and build your new custom applications, solving all kinds of technological challenges.

Paul Blottière completes our team of QGIS committers: very active in core development, Paul refactored the QGIS Server component to bring it to industry-grade quality. He also designed and implemented the infrastructure that guarantees QGIS Server performance, and dedicated himself to the OGC certification of QGIS Server, especially for WMS 1.3.0. Thanks to this work, QGIS is now a reference OGC implementation.

Julien Cabièces recently joined Oslandia and quickly dived into QGIS: he contributes to the core of this desktop GIS, to the server component, and to applications linked to numerical simulation. Coming from a satellite imagery company with industrial applications, he uses his flexibility to answer all your needs, bringing quality and professionalism to your projects and minimizing risks for large production deployments.

You may also meet Vincent Picavet. Oslandia’s founder is a QGIS.org voting member, and is involved in the project’s evolution and the organization of the community.

Aside from these core contributors, all other Oslandia members also master QGIS and integrate this tool into their day-to-day projects.

Bertrand, Régis, Loïc, Vincent (x2), Hugo, Paul and Julien are listening and will be happy to work with you on your migrations, application development, and any contributions you wish to make to the QGIS ecosystem. Do not hesitate to contact us!

QGIS Snapping improvements

A few months ago, we proposed improvements to the snap cache in QGIS to the QGIS grant program. The community vote selected our project, which was funded by QGIS.org. Development is now mostly finished.

In short, snapping is crucial for editing geospatial features. It is the only way to ensure they are topologically related, i.e., that connected vertices have exactly the same coordinates, even though manual digitizing on screen is imprecise by nature. Snapping correctly requires QGIS to keep an indexed cache of the geometries to snap to in memory. And maintaining this cache when data is modified, sometimes by another user or by database logic, can be a real challenge. This is exactly what this work addresses.

The proposal was divided into two different tasks:

  • Manage circular dependencies
  • Relax the snap cache index build

Manage circular data dependencies

Data dependencies

Data dependency is an existing feature that allows you to configure QGIS to reload layers (and their snapping cache) when a layer is modified.

It is useful when you store your data in a database and you set up triggers to maintain consistency between the different tables of your data model.

For instance, say you have topological information consisting of lines and nodes: nodes are part of lines, and lines go through nodes. Then you move a node in QGIS and save your modifications to the database. In order to keep the data consistent, a trigger updates the geometry of the line going through the modified node.

Node 2 is modified, Line 1 is updated accordingly

QGIS, as a database client, has no way of knowing that the line layer currently displayed in the canvas needs to be refreshed after the trigger. The map canvas will be up to date, because QGIS fetches data for display without any caching system, but the snapping cache is not, and you'll end up with ghost snapping highlight issues.

Snapping highlights (light red) differ from real line (orange)

Defining a dependency between nodes and lines layers tells QGIS that it has to refresh the line layer when a node is modified.

Dependencies configuration: Lines layer will be refreshed whenever Nodes layer is modified
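For completeness, here is a minimal PyQGIS sketch of how such a dependency might be configured programmatically; the layer names are hypothetical and both layers are assumed to be loaded in the current project:

from qgis.core import QgsProject, QgsMapLayerDependency

nodes = QgsProject.instance().mapLayersByName('nodes')[0]
lines = QgsProject.instance().mapLayersByName('lines')[0]

# refresh the lines layer (and its snapping cache) whenever nodes is modified
lines.setDependencies([QgsMapLayerDependency(nodes.id())])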

It also has to work the other way: modifying a line should update the nodes to ensure they still lie on the line.

Circular data dependencies

So here we are, lines depend on nodes which depend on lines which depend on nodes which…

That's what circular dependencies are about. This specific configuration was previously forbidden and required a special workaround. Thanks to this recent development, it is now possible.

It's also possible to add the layer itself as one of its own dependencies. This helps deal with cases where modifying one feature could lead to the modification of another feature in the same layer (to keep consistency in road networks, for instance).

Road 2 is modified, Road 1 is updated accordingly

This feature is available in the next QGIS LTR version 3.10.

Relax the snapping cache index build

If you work in QGIS with huge projects displaying a lot of vector data, and you enable snapping while editing these data, you probably already met this dialog:

Snap indexing dialog

This dialog informs you that data is currently being indexed so you can snap to it while editing feature geometries. For big projects, this dialog can stay up for a really long time. Let's work on speeding it up!

What’s a snap index?

Let's say you want to move a line and snap it onto another one. While you drag your line with the mouse, QGIS looks for an existing geometry beneath the mouse cursor (with a certain pixel tolerance) every time you move the mouse. Without a spatial index, QGIS would have to go through every geometry in your layer to check whether it lies beneath the cursor position. This would be very inefficient.

In order to prevent this, QGIS keeps an index in which vector data is stored in a way that lets it quickly find out what geometry lies beneath the mouse cursor. Building this data structure takes time, and that is what the progress dialog is about.
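To illustrate the general idea (this is not the actual snapping code), the PyQGIS API exposes a similar structure, QgsSpatialIndex, where building is the expensive step and lookups are cheap; layer, x and y are assumed to be defined:

from qgis.core import QgsSpatialIndex, QgsRectangle

index = QgsSpatialIndex(layer.getFeatures())  # the costly build step

tolerance = 0.001  # search tolerance in map units
rect = QgsRectangle(x - tolerance, y - tolerance,
                    x + tolerance, y + tolerance)
candidate_ids = index.intersects(rect)  # fast lookup of nearby feature ids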

Firstly: Parallelize snap index build

If you want to be able to snap to all layers in your project, QGIS has to build one snap index per layer. This operation used to be done sequentially, meaning that if you have, for instance, 20 layers and index building lasts approximately 3 seconds for each, the whole build lasts 1 minute. We modified QGIS so that index building can be done in parallel. As a result, the total index building time could theoretically be 3 seconds!

4 layers snap index being built in parallel

However, parallel operations are limited by the number of CPU cores on your machine: with 4 cores (a Core i7, for instance), the total time will be at most 4 times shorter than the sequential build (so 15 seconds in our example).

Secondly: relax the snap build

For big projects, parallelizing index building is not enough and still takes too much time. Furthermore, to reduce snap index building time, an existing optimisation builds the spatial index only for a specific area of interest (determined from the displayed area and layer size). As a consequence, when you're done waiting for an index to build and you move the map or zoom in/out, you may well trigger another snap index build and have to wait again.

So, the idea was to avoid waiting at all. The snap index is now built whenever it needs to be (when you first enable snapping, when you move or zoom), but the user doesn't have to wait for the build to be over and can continue what they were doing (creating features, moving them…). Snapping highlights will be missing while the index is being built and will appear gradually as soon as it is ready. That's what we call the relaxed mode.

No waiting dialog, snapping highlights appears as soon as snap index is ready

This feature has been merged into current QGIS master and will be part of the future QGIS 3.12 release. We keep working on it to make it more stable and efficient.

What’s next

We'll continue to improve this feature in the coming days. If you have the chance to test it and encounter issues, please let us know on the QGIS tracker. If you think of a missing feature or just want to know more about QGIS, feel free to contact us at [email protected]. And please have a look at our support offering for QGIS.

Many thanks to QGIS grant program for funding these new features. Thanks also to all the people involved in reviewing the code and helping to better understand the existing mechanism.


GRASS GIS 7.8.2 released

GRASS GIS 7.8.2 released with updated PROJ 6 and GDAL 3 support

What’s new in a nutshell

As a follow-up to the recent GRASS GIS 7.8.1, we have published the new stable release GRASS GIS 7.8.2.
Besides other improvements, the release contains important PROJ 4/5/6 related datum handling fixes, wxGUI fixes and a fix for the vector import from PostGIS databases.

An overview of the new features in the 7.8 release series is available at new features in GRASS GIS 7.8.

Binaries/Installer download:

Source code download:

See also our detailed announcement:

First time users may explore the first steps tutorial after installation.

About GRASS GIS

The Geographic Resources Analysis Support System (https://grass.osgeo.org/), commonly referred to as GRASS GIS, is an Open Source Geographic Information System providing powerful raster, vector and geospatial processing capabilities in a single integrated software suite. GRASS GIS includes tools for spatial modeling, visualization of raster and vector data, management and analysis of geospatial data, and the processing of satellite and aerial imagery. It also provides the capability to produce sophisticated presentation graphics and hardcopy maps. GRASS GIS has been translated into about twenty languages and supports a huge array of data formats. It can be used either as a stand-alone application or as backend for other software packages such as QGIS and R geostatistics. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).

The GRASS Development Team, December 2019

The post GRASS GIS 7.8.2 released appeared first on GFOSS Blog | GRASS GIS and OSGeo News.

Folium vs. hvplot for interactive maps of Point GeoDataFrames

In the previous post, I showed how Folium can be used to create interactive maps of GeoPandas GeoDataFrames. Today’s post continues this theme. Specifically, it compares Folium to another dataviz library called hvplot. hvplot also recently added support for GeoDataFrames, so it’s interesting to see how these different solutions compare.

Minimum viable

The following snippets show the minimum code I found to put a GeoDataFrame of Points onto a map with either Folium or hvplot.

Folium does not automatically zoom to the data extent and I didn’t find a way to add the whole GeoDataFrame of Points without looping through the rows individually:
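A minimal Folium version might look roughly like this (a sketch, assuming a point GeoDataFrame gdf):

import folium

m = folium.Map()
for _, row in gdf.iterrows():
    # Leaflet expects (lat, lon); shapely points store x=lon, y=lat
    folium.Marker([row.geometry.y, row.geometry.x]).add_to(m)
m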

Hvplot on the other hand registers the hvplot function directly with the GeoDataFrame. This makes it as convenient to use as the original GeoPandas plot function. It also zooms to the data extent:
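The hvplot counterpart can be as short as this; importing hvplot.pandas registers the hvplot accessor:

import hvplot.pandas  # registers .hvplot on (Geo)DataFrames

gdf.hvplot(geo=True)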

Standard interaction and zoom to area of interest

The following snippets ensure that the map is set to a useful extent and the map tools enable panning and zooming.

With Folium, we have to set the map center and the zoom. The map tools are Leaflet defaults, so panning and zooming work as expected:
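Presumably something like this, with center coordinates and zoom level chosen to fit the data:

m = folium.Map(location=[48.2, 16.4], zoom_start=10)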

Since hvplot does not come with mouse wheel zoom enabled by default, we need to set that:
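Since hvplot returns HoloViews objects, one way to do this should be the active_tools option of the Bokeh backend (treat the exact option name as an assumption):

gdf.hvplot(geo=True).opts(active_tools=['wheel_zoom'])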

Color by attribute

Finally, for many maps, we want to show the point location as well as an attribute value.

To create a continuous color ramp for a numeric value, we can use branca.colormap to define the marker fill color:
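Here is a sketch of how this can be wired up, assuming a numeric column called value in the GeoDataFrame (the column name is hypothetical):

import branca.colormap as cm
import folium

colormap = cm.LinearColormap(['blue', 'red'],
                             vmin=gdf['value'].min(),
                             vmax=gdf['value'].max())

m = folium.Map(location=[48.2, 16.4], zoom_start=10)
for _, row in gdf.iterrows():
    folium.CircleMarker(
        location=[row.geometry.y, row.geometry.x],
        radius=5,
        fill=True,
        fill_color=colormap(row['value']),  # hex color from the ramp
        fill_opacity=0.8,
        weight=0,  # no stroke
    ).add_to(m)
m.add_child(colormap)  # draws the color ramp as a legend
m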

In hvplot, it is sufficient to specify the attribute of interest:
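With the same hypothetical value column, this reduces to a single argument:

gdf.hvplot(geo=True, c='value')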

I’m pretty impressed with hvplot. The integration with GeoPandas is very smooth. Just don’t forget to set the geo=True parameter if you want to plot lat/lon geometries.

Folium seems less straightforward for this use case. Maybe I missed some option similar to the Choropleth function that I showed in the previous post.

Interactive plots for GeoPandas GeoDataFrames of LineStrings

GeoPandas makes it easy to create basic visualizations of GeoDataFrames:
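For reference, that basic (static) visualization is a one-liner for any GeoDataFrame gdf:

gdf.plot()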

However, if we want interactive plots, we need additional libraries. Folium (which is built on Leaflet) is a great option. However, all examples for plotting GeoDataFrames that I found focused on point or polygon data. So here is what I found to work for GeoDataFrames of LineStrings:

First, some imports:

import pandas as pd
import geopandas
import folium

Loading the data:

graph = geopandas.read_file('data/population_test-routes-geom.csv')
graph.crs = {'init': 'epsg:4326'}

Creating the map using folium.Choropleth:

m = folium.Map([48.2, 16.4], zoom_start=10)

folium.Choropleth(
    graph[graph.geometry.length>0.001],
    line_weight=3,
    line_color='blue'
).add_to(m)

m

I also tried using folium.PolyLine, which seemed like the more obvious choice, but it does not seem to accept GeoDataFrames as input. Instead, it expects a list of coordinate pairs, and of course it expects them in the opposite order to the one shapely's LineString.coords provides … Oh the joys of geodata!
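If you do want to try folium.PolyLine, the coordinate flip for a single row could look like this sketch (reusing m and graph from above):

line = graph.geometry.iloc[0]
# shapely yields (lon, lat) pairs, Leaflet wants (lat, lon)
locations = [(lat, lon) for lon, lat in line.coords]
folium.PolyLine(locations, weight=3, color='blue').add_to(m)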

In any case, I had to limit the number of features that get plotted because Folium refuses to plot all 8778 features at once. I decided to filter by line length because drawing really short lines is pointless for my overview visualization anyway.

(Fr) Searching for an address with QGIS

Sorry, this entry is only available in French.

QGIS Versioning now supports foreign keys!

QGIS-versioning is a QGIS and PostGIS plugin dedicated to data versioning and history management. It supports:

  • Keeping full table history with all modifications
  • Transparent access to current data
  • Versioning tables with branches
  • Working offline
  • Working on a data subset
  • Conflict management with a GUI

QGIS versioning conflict management

In a previous blog article we detailed how QGIS-versioning can manage data history and branches, and work offline with PostGIS-stored data and QGIS. We recently added foreign key support to QGIS-versioning, so you can now historize any complex database schema.

This QGIS plugin is available in the official QGIS plugin repository, and you can fork it on GitHub too!

Foreign key support

TL;DR

When a user decides to historize their PostgreSQL database with QGIS-versioning, the plugin alters the existing database schema and adds new fields in order to track the different versions of a single table row. Every access to these versioned tables is subsequently made through updatable views in order to automatically fill in the new versioning fields.

Up to now, it was not possible to deal with primary keys and foreign keys: the original tables had to be constraint-free. This limitation has been lifted thanks to this contribution.

To put it simply, the solution is to remove all constraints from the original database and transform them into a set of SQL check triggers installed on the working copy databases (SQLite or PostgreSQL). As the verifications are made on the client side, it's impossible to propagate invalid modifications to your base server when you "commit" updates.

Behind the curtains

When you choose to historize an existing database, a few fields are added to the existing tables. Among these fields, versioning_id identifies one specific version of a row. For one existing row, there are several versions of this row, each with a different versioning_id but with the same original primary key field. As a consequence, that field can no longer satisfy a unique constraint, so it cannot be a key, and therefore cannot be the target of a foreign key either.

We therefore have to drop the primary key and foreign key constraints when historizing the table. Before removing them, the constraint definitions are stored in a dedicated table so that these constraints can be checked later.

When the user checks out a specific table on a specific branch, QGIS-versioning uses that constraint table to build constraint checking triggers in the working copy. The way constraints are built depends on the checkout type (you can checkout in a SQLite file, in the master PostgreSQL database or in another PostgreSQL database).

What do we check ?

That's where the fun begins! The first things we have to check on insert or update are key uniqueness and that foreign keys reference an existing key. Remember that there are no primary keys and foreign keys anymore; we dropped them when activating historization. We keep the terms for better understanding.

You also have to deal with deleting or updating a referenced row and the different ways of propagating the modification: cascade, set default, set null, or simply failure, as explained in the PostgreSQL foreign keys documentation.

Never mind all that: this problem has been solved for you, and everything is done automatically in QGIS-versioning. And before you ask: yes, foreign keys spanning multiple fields are also supported.

What’s new in QGIS ?

When you try to commit an invalid modification to the master database, you will get a message you probably already know:

Error when foreign key constraint is violated

Partial checkout

One existing QGIS-versioning feature is partial checkout. It allows a user to select a subset of data to check out into their working copy, avoiding the download of gigabytes of data you do not care about. You can, for instance, check out only the features within a given spatial extent.

So far, so good. But if you have only part of your data, you cannot ensure that modifying a primary key field will preserve uniqueness. In this particular case, QGIS-versioning will trigger errors on commit, pointing out the invalid rows you have to modify so the unique constraint remains valid.

Error when committing non unique key after a partial checkout

Tests

There is a lot to check when you intend to replace the existing constraint system with your own trigger-based one. In order to ensure QGIS-versioning's stability and reliability, we put special effort into building a test set that covers all use cases and possible exceptions.

What’s next

There are now no known limitations on using QGIS-versioning with any of your databases. If you think of a missing feature or just want to know more about QGIS and QGIS-versioning, feel free to contact us at [email protected]. And please have a look at our support offering for QGIS.

Many thanks to eHealth Africa who helped us develop these new features. eHealth Africa is a non-governmental organization based in Nigeria. Their mission is to build stronger health systems through the design and implementation of data-driven solutions.

QGIS 3 and performance analysis

Context

Since last year we (the QGIS community) have been using QGIS-Server-PerfSuite to run performance tests on a daily basis. This way, we're able to monitor and avoid regressions according to some test scenarios for several QGIS Server releases (currently the 2.18, 3.4, 3.6 and master branches). However, there are still many questions about performance from a general point of view:

  • What is the performance of QGIS Server compared to QGIS Desktop?
  • What are the implications of feature simplification for polygons and lines?
  • Does the symbology have a strong impact on performance and in which proportion?

Of course, it’s a broad and complex topic because of the numerous possibilities offered by the rendering engine of QGIS. In this article we’ll look at typical use cases with geometries coming from a PostgreSQL database.

Methodology

The first way to monitor performance is to measure the rendering time. To do so, the Map canvas refresh option is activated in the Settings of QGIS Desktop. This way we can get the rendering time from the Rendering tab of the log messages in QGIS Desktop, as well as from the log messages written by QGIS Server.

The rendering time retrieved with this method gives the total amount of time spent rendering each layer (see the source code).

But in the case of QGIS Server, another interesting measure is the total time spent on a specific request, which may be read from the log messages too. Indeed, a single WMS request involves more operations than a simple rendering in QGIS Desktop:

The rendering time extracted from QGIS Desktop corresponds to the core rendering time displayed in the sequence diagram above. Moreover, to be perfectly comparable, the rendering engine must be configured in the same way in both cases. Thanks to the PyQGIS API, we can retrieve the necessary information from the Python console in QGIS Desktop, like the extent or the canvas size, in order to configure the GetMap WMS request with the appropriate WIDTH, HEIGHT and BBOX parameters.
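As an illustration, these values can be collected in the QGIS Python console roughly as follows (iface is predefined there; note that the BBOX axis order may need adjusting depending on the CRS):

canvas = iface.mapCanvas()
extent = canvas.extent()
size = canvas.size()

print('WIDTH={}&HEIGHT={}&BBOX={},{},{},{}'.format(
    size.width(), size.height(),
    extent.xMinimum(), extent.yMinimum(),
    extent.xMaximum(), extent.yMaximum()))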

Another way to examine performance is to use a profiler to inspect stack traces. These traces may be represented as a FlameGraph. In this case, debug symbols are necessary, meaning that the rendering time is no longer representative; indeed, QGIS has to be compiled in Debug mode.

Polygons

For these tests we use the same dataset as for the daily performance tests: a layer of polygons with 282,776 features.

Feature simplification deactivated

Let's first have a look at the rendering time and the FlameGraph when simplification is deactivated. In QGIS Desktop, the mean rendering time is 2591 ms. Using the PyQGIS API we are able to get the extent and the size of the map, and to render the map again, this time using a GetMap WMS request.

In this case, the rendering time is 2469 ms and the total request time is 2540 ms. For the record, the first GetMap request is ignored because the whole QGIS project is read and cached during that request, meaning that its total request time is much higher. According to these results, the rendering times for QGIS Desktop and QGIS Server are remarkably similar, which makes sense considering that the same rendering engine is used, but it is still very reassuring :).

Now, let's take a look at the FlameGraph to see where most of the time is spent.

Undoubtedly the FlameGraphs are similar in both cases, meaning that if we want to improve the performance of QGIS Server, we need to improve the performance of the core rendering engine, which is also used by QGIS Desktop. In our case the main method is QgsMapRendererParallelJob::renderLayerStatic, where most of the time is spent in:

Method                              Desktop %   Server %
QgsExpressionContext::setFeature         6.39       6.82
QgsFeatureIterator::nextFeature         28.77      28.41
QgsFeatureRenderer::renderFeature       29.01      27.05

Basically, it may be simplified like:

Clearly, the rendering takes about 30% of the total amount of time. In this case geometry simplification could potentially help.

Feature simplification activated

Geometry simplification, available for both polygon and line layers, may be activated and configured through the layer's Properties in the Rendering tab. Several parameters may be set:

  • Simplification may be deactivated
  • Threshold for a more drastic simplification
  • Algorithm
  • Provider simplification
  • Scale

Once simplification is activated, we varied the threshold as well as the algorithm in order to detect performance jumps:

The following conclusions can be drawn:

  • The Visvalingam algorithm should be avoided, because it only becomes efficient at high thresholds, meaning a significant loss of precision in geometries
  • The ideal threshold for the Snap To Grid and Distance algorithms seems to be 1.05. Considering that this is a very low threshold, the precision of geometries stays pretty good while rendering time improves substantially

For now, these tests have been run on the full extent of the layer. However, we still have a Maximum scale parameter to test, so we’ve decreased the scale of the layer:

And in this case, results are pretty interesting too:

Several conclusions can be drawn:

  • The Visvalingam algorithm should be avoided at low scales too
  • Snap To Grid seems counter-productive at low scales
  • The Distance algorithm seems to be a good option

Lines

For these tests we also use the same dataset as for the daily performance tests: a layer of lines with 125,782 features.

Feature simplification activated

In the same way as for polygons, we tested the effect of geometry simplification on the rendering time, as well as the algorithms and thresholds:

In this case we have exactly the same conclusion as for polygons: the Distance algorithm should be preferred with a threshold of 1.05.

For QGIS Server the mean rendering time is about 1180 ms with geometry simplification, compared to 1108 ms for QGIS Desktop, which is totally consistent. Looking at the FlameGraph, we note that once again most of the time is spent accessing the PostgreSQL database (about 30%) and rendering features (about 40%).


Symbology

Another parameter with an obvious impact on performance is the symbology used to draw the layers. Some features are known to be time consuming, but we felt that a thorough study was necessary to verify it.


Firstly, we studied the influence of the line width as well as the Single Symbol type on the rendering time.

Some points are noteworthy:

  • Simple Line is clearly the least time consuming
  • Beyond the default 0.26 line width, rendering time begins to rise considerably, with a clear jump in performance


Another interesting feature is the Draw effects option, which allows adding some fancy effects (shadow, glow, …).

However, this feature is known to be particularly CPU consuming. Actually, rendering all 125,782 lines took so long that we had to switch to a lower scale, with just a few dozen lines. The results are unequivocal:


The last thing we wanted to test for symbology is the effect of the Categorized classification. Here are the results for some classifications with geometry simplification activated:

  • No classification: 1108 ms
  • A simple classification using the column “classification” (8 symbols): 1148 ms
  • A classification based on a stupid expression "classification x 3" (8 symbols): 1261 ms
  • A classification based on string comparison "toponyme like 'Ruisseau*'" (2 symbols): 1380 ms
  • A classification with a specific width line for each category (8 symbols): 1850 ms

Considering that a simple classification does not add an excessive extra cost, it seems that the classification process itself is not very time consuming. However, as soon as an expression is used, we can observe a slight jump in rendering time.

Labeling

Another important aspect to study regarding performance is labeling and the underlying placement. For this test we decreased the scale and varied the Placement parameter without tuning anything else.

Clearly, parallel labeling is much more time consuming than the other placements. However, as previously stated, we used the default parameters for each placement, meaning that the number of labels actually drawn on the map differs from one placement to another.

Points

The last kind of geometry we have to study is points. Similarly to polygons and lines, we used the same dataset as for the performance tests: a layer with 435,588 points.

In the case of point geometries, geometry simplification is of course not available, so we are going to focus on symbology and the impact of marker size.

Obviously Font Marker must be used carefully because of the underlying jump in performance, as must SVG Symbols. Moreover, contrary to Simple Marker, an increase in size implies a drastic increase in rendering time.

General conclusion

Based on this factual study, several conclusions can be drawn.

Globally, the FlameGraphs for QGIS Desktop and QGIS Server are very similar, as are the rendering times.

It means that if we want to improve the performance of QGIS Server, we have to work on the desktop configuration and the rendering engine of the QGIS core library.

Extracting generic conclusions from our tests is very difficult, because it clearly depends on the underlying data. But let’s try to suggest some recommendations :).

Firstly, geometry simplification seems pretty efficient for lines and polygons, as long as the algorithm is chosen cautiously and your features include many vertices. The Distance algorithm with a 1.05 threshold seems to be a good choice, at both high and low scales. However, it's not a magic solution!

Secondly, special care is needed with regard to symbology. Indeed, in some cases a clear jump in performance is notable. For example, fancy effects, Font Marker and SVG Symbols have to be used with caution if you're picky about rendering time.

Thirdly, we have to be aware of the extra cost caused by labeling, especially the Parallel placement for line geometries. On this subject, a little-known parameter can drastically reduce labeling time: the PAL candidates option. Indeed, we may decrease the labeling time by reducing the number of candidates. For an explicit use case, you can take a look at the daily reports.

In any case, improving server performance in a substantial way means improving the QGIS core library directly.

In particular, we noticed thanks to the FlameGraphs that most of the time is spent drawing features and managing the data from the PostgreSQL database. By the way, a legitimate question is: "How much time do we spend waiting for the database?". To be continued 😉

If you hit performance issues on your specific configuration or want to improve QGIS awesomeness, we provide a unique QGIS support offer at http://qgis.oslandia.com/ thanks to our team of specialists!

Funding for selective masking in QGIS is now complete!

A few months ago, Oslandia launched QGIS lab's, a place to advertise our new ideas for QGIS, but also a place to help you find co-funders to make dreams become reality.

The first initiative is about label selective masking, a feature that will allow us to achieve even more professional rendering for our maps.

Selective masking


Thanks to the commitment of the Swiss QGIS user group and local authorities, this work is now funded!

We can now start working hard to deliver this great feature for QGIS 3.10.

Thanks again to our funders!

One last word: this is not a classical crowdfunding initiative, but a classical contract for each funder.

No more reason not to contribute to free and open source software!

Stand-alone PyQGIS scripts with OSGeo4W

PyQGIS scripts are great for automating spatial processing workflows. It's easy to run these scripts inside QGIS, but it can be even more convenient to run PyQGIS scripts without having to launch QGIS at all. To create a so-called "stand-alone" PyQGIS script, there are a few things that need to be taken care of. The following steps show how to set up PyCharm for stand-alone PyQGIS development on Windows 10 with OSGeo4W.

An essential first step is to ensure that all environment variables are set correctly. The most reliable approach is to go to C:\OSGeo4W64\bin (or wherever OSGeo4W is installed on your machine), make a copy of qgis-dev-g7.bat (or any other QGIS version that you have installed) and rename it to pycharm.bat:

Instead of launching QGIS, we want pycharm.bat to launch PyCharm. Therefore, we edit the final line of the .bat file to start pycharm64.exe:
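The replacement line might look roughly like the following (the PyCharm installation path is an assumption and will differ on your machine):

start "PyCharm aware of QGIS" /B "C:\Program Files\JetBrains\PyCharm\bin\pycharm64.exe" %*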

In PyCharm itself, the main task to finish our setup is configuring the project interpreter:

First, we add a new “system interpreter” for Python 3.7 using the corresponding OSGeo4W Python installation.

To finish the interpreter config, we need to add two additional paths pointing to QGIS\python and QGIS\python\plugins:

That’s it! Now we can start developing our stand-alone PyQGIS script.

The following example shows the necessary steps, particularly:

  1. Initializing QGIS
  2. Initializing Processing
  3. Running a Processing algorithm

import sys

from qgis.core import QgsApplication, QgsProcessingFeedback
from qgis.analysis import QgsNativeAlgorithms

QgsApplication.setPrefixPath(r'C:\OSGeo4W64\apps\qgis-dev', True)
qgs = QgsApplication([], False)
qgs.initQgis()

# Add the path to processing so we can import it next
sys.path.append(r'C:\OSGeo4W64\apps\qgis-dev\python\plugins')
# Imports usually should be at the top of a script but this unconventional 
# order is necessary here because QGIS has to be initialized first
import processing
from processing.core.Processing import Processing

Processing.initialize()
QgsApplication.processingRegistry().addProvider(QgsNativeAlgorithms())
feedback = QgsProcessingFeedback()

rivers = r'D:\Documents\Geodata\NaturalEarthData\Natural_Earth_quick_start\10m_physical\ne_10m_rivers_lake_centerlines.shp'
output = r'D:\Documents\Geodata\temp\danube3.shp'
expression = "name LIKE '%Danube%'"

danube = processing.run(
    'native:extractbyexpression',
    {'INPUT': rivers, 'EXPRESSION': expression, 'OUTPUT': output},
    feedback=feedback
    )['OUTPUT']

print(danube)

Call for testing: GRASS GIS with Python 3

Please help us test the Python 3 support in the yet-unreleased GRASS GIS trunk (i.e., version "grass77", which will be released as "grass78" in the near future).

1. Why Python 3?

Python 2 is reaching end-of-life (EOL); the current Python 2.7 will retire in 11 months from today (see https://pythonclock.org). We want to follow the "Moving to require Python 3" plan and complete the change to Python 3. And we need broader community testing.

2. Download and test!

Packages are currently available at:

3. Instructions for testing

4. Problems found? Please report them to us

Problems and bugs can be reported in the GRASS GIS trac. Code changes are welcome!

Thanks for testing grass77!

The post Call for testing: GRASS GIS with Python 3 appeared first on GFOSS Blog | GRASS GIS and OSGeo News.

Dealing with delayed measurements in (Geo)Pandas

Yesterday, I learned about a cool use case in data-driven agriculture that requires dealing with delayed measurements. As Bert mentions, for example, potatoes end up in the machines and are counted a few seconds after they’re actually taken out of the ground:

Therefore, in order to accurately map yield, we need to take this temporal offset into account.

We need to make sure that time and location stay untouched, but need to shift the potato count value. To support this use case, I’ve implemented apply_offset_seconds() for trajectories in movingpandas:

    def apply_offset_seconds(self, column, offset):
        # shift the column's values along the time index by `offset` seconds
        self.df[column] = self.df[column].shift(offset, freq='1s')

The following test illustrates its use: you can see how the value column is shifted by 120 seconds. Geometry and time remain unchanged, but the value column is shifted accordingly. In this test, we look at the row with index 2, which we access using iloc[2]:

    def test_offset_seconds(self):
        df = pd.DataFrame([
            {'geometry': Point(0, 0), 't': datetime(2018, 1, 1, 12, 0, 0), 'value': 1},
            {'geometry': Point(-6, 10), 't': datetime(2018, 1, 1, 12, 1, 0), 'value': 2},
            {'geometry': Point(6, 6), 't': datetime(2018, 1, 1, 12, 2, 0), 'value': 3},
            {'geometry': Point(6, 12), 't': datetime(2018, 1, 1, 12, 3, 0), 'value':4},
            {'geometry': Point(6, 18), 't': datetime(2018, 1, 1, 12, 4, 0), 'value':5}
        ]).set_index('t')
        geo_df = GeoDataFrame(df, crs={'init': 'epsg:31256'})
        traj = Trajectory(1, geo_df)
        traj.apply_offset_seconds('value', -120)
        self.assertEqual(traj.df.iloc[2].value, 5)
        self.assertEqual(traj.df.iloc[2].geometry, Point(6, 6))

From CSV to GeoDataFrame in two lines

Pandas is great for data munging and with the help of GeoPandas, these capabilities expand into the spatial realm.

With just two lines, it’s quick and easy to transform a plain headerless CSV file into a GeoDataFrame. (If your CSV is nice and already contains a header, you can skip the header=None and names=FILE_HEADER parameters.)

usecols=USE_COLS is also optional and allows us to specify that we only want to use a subset of the columns available in the CSV.

After the obligatory imports and setting of variables, all we need to do is read the CSV into a regular DataFrame and then construct a GeoDataFrame.

import pandas as pd
from geopandas import GeoDataFrame
from shapely.geometry import Point

FILE_NAME = "/temp/your.csv"
FILE_HEADER = ['a', 'b', 'c', 'd', 'e', 'x', 'y']
USE_COLS = ['a', 'x', 'y']

df = pd.read_csv(
    FILE_NAME, delimiter=";", header=None,
    names=FILE_HEADER, usecols=USE_COLS)
gdf = GeoDataFrame(
    df.drop(['x', 'y'], axis=1),
    crs={'init': 'epsg:4326'},
    geometry=[Point(xy) for xy in zip(df.x, df.y)])

It’s also possible to create the point objects using a lambda function as shown by weiji14 on GIS.SE.

Visualization of borehole logs with QGIS

At Oslandia, we have been working on a component based on the QGIS API for the visualization of well and borehole logs.

This component aims at displaying data collected vertically along wells dug underground. It mainly focuses on data organized in series of contiguous samples, and is generic enough to be used both for vertical data (where the Y axis corresponds to a depth underground) and for horizontal data (where the X axis corresponds to time, for example). One of the main concerns was to ensure good display performance with an important volume of sample points (usually hundreds of thousands of sample points).

We have already worked on plot components in the past, but for specific QGIS plugins (for time series and stratigraphic logs), and thought we would put these past experiences to good use by creating a more generic library.

We decided to represent sample points as geometry features in order to be able to reuse the rich symbology engine of QGIS. This allows users to represent their data however they like without us having to rewrite an entire symbology engine. We also benefit from all the performance optimizations that have been added and polished over the years (on-the-fly geometry simplification, for example).

Although it is written in Python, we achieved good display performance. The key trick was to avoid copying data between the sample points read by QGIS and the Python graphing component. This is achieved thanks to the fact that the PyQGIS API has some functions that respect the buffer protocol.
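As a generic illustration of that zero-copy principle (not the component's actual code), numpy can wrap an existing buffer instead of copying it:

import numpy as np

raw = bytearray(8 * 1000)                    # stand-in for a buffer handed over by an API
view = np.frombuffer(raw, dtype=np.float64)  # wraps the buffer, no copy
copy = np.array(view)                        # an explicit copy, which we want to avoid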

You can have a look at the following video to see this component integrated in an existing plugin.

Visualization of borehole logs within QGIS

It is distributed as usual under an open source license and the code repository can be found on GitHub.

Do not hesitate to contact us ( [email protected] ) if you are interested in any enhancements around this component.

PyQGIS101 part 10 published!

PyQGIS 101: Introduction to QGIS Python programming for non-programmers has now reached the part 10 milestone!

Beyond the obligatory Hello world! example, the contents so far include:

If you’ve been thinking about learning Python programming, but never got around to actually start doing it, give PyQGIS101 a try.

I’d like to thank everyone who has already provided feedback to the exercises. Every comment is important to help me understand the pain points of learning Python for QGIS.

I recently read an article – unfortunately I forgot to bookmark it and cannot locate it anymore – that described the problems with learning to program very well: in the beginning, it’s rather slow going, you don’t know the right terminology and therefore don’t know what to google for when you run into issues. But there comes this point, when you finally get it, when the terminology becomes clearer, when you start thinking “that might work” and it actually does! I hope that PyQGIS101 will be a help along the way.

QGIS Server 3 : OGC Certification work for WFS 1.1.0

QGIS Server is an open source OGC data server which uses the QGIS engine as its backend. It is really awesome, because a simple QGIS desktop project file can be served as web services with exactly the same rendering, without writing any mapfile or XML by hand.

QGIS Server provides a way to serve OGC web services like WMS, WCS, WFS and WMTS resources from a QGIS project, but can also extend them with services like GetPrint, which takes advantage of the QGIS map composer's power to generate high quality PDF outputs.

Since version 3.0.2, QGIS Server has been certified as an official OGC reference implementation for WMS 1.3.0, and reports are generated in a daily continuous integration to avoid regressions.


Thus, the next step was naturally to take a look at WFS 1.1.0, thanks to the support of the QGIS Grant Program.

As a side note, this grant program is made possible thanks to your direct sponsoring and micro-donations to QGIS.org.

TEAM Engine test suite for WFS 1.1.0

We use a tool provided by the OGC Compliance Program to run dedicated tests on the server: TEAM Engine (Test, Evaluation, And Measurement Engine).

Test suites are available through a web interface. However, for the needs of continuous integration, these tests have to be run without user interaction. In the case of WMS 1.3.0, nothing is easier than using the REST API:

$ curl "http://localhost:8081/teamengine/rest/suites/wms/1.20/run?queryable=queryable&basic=basic&capabilities-url=http://172.17.0.2/qgisserver?REQUEST=GetCapabilities%26SERVICE=WMS%26VERSION=1.3.0%26MAP=/data/teamengine_wms_130.qgs"


However, the WFS 1.1.0 test suite does not provide a REST API, which makes the situation less straightforward. We switched to using TEAM Engine directly from the command line:

$ cd te_base
$ ./bin/unix/test.sh -source=wfs/1.1.0/ctl/main.ctl -form=params.xml

The params.xml file allows configuring the underlying tests. In this particular case, it contains the GetCapabilities URL of the QGIS Server instance to test. Results are available thanks to the viewlog.sh shell script:

$ ./bin/unix/viewlog.sh -logdir=te_base/users/root/ -session=s0001
Test wfs:wfs-main type Mandatory (s0001) Failed (InheritedFailure)
   Test wfs:readiness-tests type Mandatory (s0001/d68e38807_1) Failed (InheritedFailure)
      Test ctl:SchematronValidatingParser type Mandatory (s0001/d68e38807_1/d68e588_1) Failed
      Test wfs:basic-main type Mandatory (s0001/d68e38807_1/d68e636_1) Failed (InheritedFailure)
         Test wfs:run-GetCapabilities-basic-cc-GET type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1) Failed (InheritedFailure)
            Test wfs:wfs-1.1.0-Basic-GetCapabilities-tc1 type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1095_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1095_1/d68e1234_1) Passed
            Test wfs:wfs-1.1.0-Basic-GetCapabilities-tc2 type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1100_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1100_1/d68e1305_1) Passed
            Test wfs:wfs-1.1.0-Basic-GetCapabilities-tc3 type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1105_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1105_1/d68e1558_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1105_1/d68e1582_1) Passed
               Test ctl:assert-xpath type Mandatory (s0001/d68e38807_1/d68e636_1/d68e28810_1/d68e1105_1/d68e1606_1) Passed
...
...

Finally, a Python script has been written to read these logs and generate an easily readable HTML report. Thanks to our QGIS-Server-CertifSuite, which provides the continuous integration infrastructure with Docker images, these reports are also generated daily.

Bugfix and Conclusion

The first results were clear: a lot of work was necessary to get QGIS Server certified for WFS 1.1.0!

We started fixing the issues one by one:

And now we have much better support than 6 months ago.

However, some work still needs to be done to finally obtain the OGC certification for WFS 1.1.0. To be continued!

Please contact us if you want QGIS Server to become a reference implementation for all OGC services!

Geocoding with Geopy

Need to geocode some addresses? Here’s a five-lines-of-code solution based on “An A-Z of useful Python tricks” by Peter Gleeson:

from geopy import GoogleV3
place = "Krems an der Donau"
location = GoogleV3().geocode(place)
print(location.address)
print("POINT({},{})".format(location.latitude,location.longitude))

For more info, check out geopy:

geopy is a Python 2 and 3 client for several popular geocoding web services.
geopy includes geocoder classes for the OpenStreetMap Nominatim, ESRI ArcGIS, Google Geocoding API (V3), Baidu Maps, Bing Maps API, Yandex, IGN France, GeoNames, Pelias, geocode.earth, OpenMapQuest, PickPoint, What3Words, OpenCage, SmartyStreets, GeocodeFarm, and Here geocoder services.

Plotting GPS Trajectories with error ellipses using Time Manager

This is a guest post by Time Manager collaborator and Python expert, Ariadni-Karolina Alexiou.

Today we’re going to look at how to visualize the error bounds of a GPS trace in time. The goal is to do an in-depth visual exploration using QGIS and Time Manager in order to learn more about the data we have.

The Data

We have a file that contains GPS locations of an object in time, which has been created by a GPS tracker. The tracker also keeps track of the error covariance matrix for each point in time, that is, what confidence it has in the measurements it gives. Here is what the file looks like:

data.png

Error Covariance Matrix

What are those sd* fields? According to the manual: The estimated standard deviations of the solution assuming a priori error model and error parameters by the positioning options. What it basically means is that the real GPS location will be located no further than three standard deviations across north and east from the measured location, most (99.7%) of the time. A way to represent this visually is to create an ellipse that maps the area where the real location can be.

An ellipse can be uniquely defined by the lengths of the segments a and b and its rotation angle. For more details on how to get those ellipse parameters from the covariance matrix, please see the footnote.

Ground truth data

We also happen to have a file with the actual locations (also in longitudes and latitudes) of the object for the same time frame as the GPS (also in seconds), provided through another tracking method which is more accurate in this case.


This is because the object was me running on a rooftop in Zürich wearing several tracking devices (not just GPS), and I knew exactly which floor tiles I was hitting.

The goal is to explore, visually, the relationship between the GPS data and the actual locations in time. I hope to get an idea of the accuracy, and what can influence it.

First look

Loading the GPS data into QGIS and Time Manager, we can indeed see the GPS locations vis-a-vis the actual locations in time.


Let’s see if the actual locations that were measured independently fall inside the ellipse coverage area. To do this, we need to use the covariance data to render ellipses.

Creating the ellipses

I considered using the ellipse marker from QGIS.

ellipse_marker.png

It is possible to switch from Millimeter to Map Unit and to set data defined overrides for symbol width, height and rotation. Symbol width would be the a parameter of the ellipse, symbol height the b parameter, and rotation simply the angle. The problem is that we haven’t computed any of these values yet; we only have the raw error covariance values in our dataset.

Because of the re-projections and matrix calculations inherent in extracting the a, b and angle parameters of the error ellipse at each point in time, I decided to do this calculation offline using Python and the relevant libraries, and then simply add a WKT text field with a polygon representation of the ellipse to the file I had. That way, the augmented data can be re-used outside QGIS, for example to visualize with Leaflet or similar. I could also have done a hybrid solution, where I calculated a, b and the angle offline and then used the dynamic rendering capabilities of QGIS.
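
As a rough illustration of that offline step (not the exact code from the repository), the ellipse parameters can be derived from the covariance matrix via an eigendecomposition, and the polygon built with Shapely. The function name and the k parameter (which picks the 1-, 2- or 3-standard-deviation contour) are made up for this sketch:

import numpy as np
from shapely import affinity
from shapely.geometry import Point

def error_ellipse_wkt(east, north, sde, sdn, sdne, k=2):
    # Covariance matrix as defined in the footnote (all values in metres)
    cov = np.array([
        [sde**2, np.sign(sdne) * sdne**2],
        [np.sign(sdne) * sdne**2, sdn**2],
    ])
    # Eigenvalues are the variances along the ellipse axes;
    # the eigenvectors give the orientation (eigh sorts ascending)
    eigvals, eigvecs = np.linalg.eigh(cov)
    a = k * np.sqrt(eigvals[1])  # semi-major axis
    b = k * np.sqrt(eigvals[0])  # semi-minor axis
    angle = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))
    # Approximate the ellipse by scaling and rotating a unit circle
    circle = Point(east, north).buffer(1.0, resolution=32)
    ellipse = affinity.rotate(affinity.scale(circle, a, b), angle)
    return ellipse.wkt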

I also decided to dump the csv into an sqlite database with an index on the time column, to make time range queries (which Time Manager does) run faster.
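
That step can be as simple as something like the following sketch (the file and column names here are assumptions, not the ones from the actual repository):

import sqlite3
import pandas as pd

df = pd.read_csv("gps_data.csv")  # hypothetical input file
con = sqlite3.connect("gps_data.sqlite")
df.to_sql("gps", con, if_exists="replace", index=False)
# Index the time column so Time Manager's time-range queries avoid full scans
con.execute("CREATE INDEX IF NOT EXISTS idx_gps_time ON gps (time)")
con.commit()
con.close()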

Putting it all together

The code for transforming the initial GPS data csv file into an sqlite database can be found on my GitHub, along with a small sample of the GPS data file.

I created three ellipses per timestamp, one for each of the three standard deviations. Opening QGIS (I used version 2.12 ‘Las Palmas’) and going to Layer > Add Layer > Add SpatiaLite Layer, we see the following dialog:

add_spatialite2.png

After adding the layer (say, for the second standard deviation ellipse), we can add it to Time Manager like so:

add_to_tm

We repeat the process three times to add the three types of ellipses, taking care to style each one differently. I used a transparent fill for the second and third standard deviation ellipses.

I also added my actual position data.

Here is an exported video of the trace (at a point in time where I go forward, backwards, forward again and then stand still).

gps

Conclusions

Looking at the relationship between the actual data and the GPS data, we can see the following:

  • Although the actual position differs from the measured one, the actual position always lies within one or two standard deviations of the measured position (so, inside the purple and golden ellipses).
  • The direction of movement has greater uncertainty (the ellipse is elongated across the line I am running on).
  • When I am standing still, the GPS position is still moving, and unfortunately does not converge to my actual stationary position, but drifts. More research is needed regarding what happens with the GPS data when the tracker is actually still.
  • The GPS position doesn’t jump around erratically, which is good; however, it seems to have trouble ‘catching up’ with the actual position. This means that if we’re looking to measure velocity in particular, the GPS tracker might underestimate it.

These findings are empirical, since they are extracted from a single visualization, but we have already learned some new things. We have some new ideas for what questions to ask on a large scale in the data, what additional experiments to run in the future and what limitations we may need to be aware of.

Thanks for reading!

Footnote: Error Covariance Matrix calculations

The error covariance matrix is (according to the definitions of the sd* columns in the manual):

[ sde^2                 sign(sdne) * sdne^2 ]
[ sign(sdne) * sdne^2   sdn^2               ]

It is not a diagonal matrix, which means that the errors across the ‘north’ dimension and the ‘east’ dimension are not exactly independent.

An important detail is that, while the position is given in longitudes and latitudes, the sdn, sde and sdne fields are in meters. To address this in the code, we convert the longitudes and latitudes using a UTM projection, so that they are also in meters (northings and eastings).
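
With pyproj, that conversion is essentially a one-liner; a minimal sketch, assuming EPSG:32632 (the UTM zone covering Zürich) and example coordinates:

from pyproj import Transformer

# WGS84 lon/lat -> UTM zone 32N; always_xy keeps (lon, lat) argument order
transformer = Transformer.from_crs("EPSG:4326", "EPSG:32632", always_xy=True)
east, north = transformer.transform(8.5417, 47.3769)  # illustrative point in Zürich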

For more details on the mathematics used to plot the ellipses check out this article by Robert Eisele and the implementation of the ellipse calculations on my github.

Celebrating 35 years of GRASS GIS!

Today marks 35 years of GRASS GIS development – with frequent releases the project keeps pushing the limits in terms of geospatial data processing quality and performance.

GRASS (Geographic Resources Analysis Support System) is a free and open source Geographic Information System (GIS) software suite used for geospatial data management and analysis, image processing, graphics and map production, spatial modeling, and 3D visualization. Since the major GRASS GIS 7 version, it also comes with a feature rich engine for space-time cubes useful for time series processing of Landsat and Copernicus Sentinel satellite data and more. GRASS GIS can be either used as a desktop application or as a backend for other software packages such as QGIS and R. Furthermore, it is frequently used on HPC and cloud infrastructures for massive parallelized data processing.

Brief history
In 1982, under the direction of Bill Goran at the U.S. Army Corps of Engineers Construction Engineering Research Laboratory (CERL), two GIS development efforts were undertaken. First, Lloyd Van Warren, a University of Illinois engineering student, began development on a new computer program that allowed analysis of mapped data. Second, Jim Westervelt (CERL) developed a GIS package called “LAGRID – the Landscape Architecture Gridcell analysis system” as his master’s thesis.

Thirty-five years ago, on 29 July 1983, the user manual for this new system, titled “GIS Version 1 Reference Manual”, was first published by J. Westervelt and M. O’Shea. With the technical guidance of Michael Shapiro (CERL), the software continued its development at USA/CERL in Champaign, Illinois, and after further expansion version 1.0 was released in 1985 under the name Geographic Resources Analysis Support System (GRASS).

The GRASS GIS community was established the same year with the first annual user meeting and the launch of GRASSnet, one of the internet’s early mailing lists. The user community expanded to a larger audience in 1991 with the “Grasshopper” mailing list and the introduction of the World Wide Web. The users’ and programmers’ mailing list archives for these early years are still available online.
In the mid 1990s the development transferred from USA/CERL to The Open GRASS Consortium (a group who would later generalize to become today’s Open Geospatial Consortium — the OGC). The project coordination eventually shifted to the international development team made up of governmental and academic researchers and university scientists. Reflecting this shift to a project run by the users, for the users, in 1999 GRASS GIS was released under the terms of the GNU General Public License (GPL). A detailed history of GRASS GIS can be found at https://grass.osgeo.org/history/.

Where to next?
Development of GRASS GIS continues with more energy and interest than ever. In parallel to the long-term maintenance of the GRASS 7.4 stable series, work is well underway on the upcoming cutting-edge 7.6 release, which will bring many new features, enhancements, and cleanups. As in the past, the GRASS GIS community is open to any contribution, be it in the form of programming, documentation, testing, or financial sponsorship. Please contact us!

About GRASS GIS

The Geographic Resources Analysis Support System (https://grass.osgeo.org/), commonly referred to as GRASS GIS, is an Open Source Geographic Information System providing powerful raster, vector and geospatial processing capabilities in a single integrated software suite. GRASS GIS includes tools for spatial modeling, visualization of raster and vector data, management and analysis of geospatial data, and the processing of satellite and aerial imagery. It also provides the capability to produce sophisticated presentation graphics and hardcopy maps. GRASS GIS has been translated into about twenty languages and supports a huge array of data formats. It can be used either as a stand-alone application or as backend for other software packages such as QGIS and R geostatistics. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).

The GRASS Development Team, July 2018

