
QGIS Planet

Getting started with pygeoapi processes

Today’s post is a quick introduction to pygeoapi, a Python server implementation of the OGC API suite of standards. OGC API provides many different standards but I’m particularly interested in OGC API – Processes which standardizes geospatial data processing functionality. pygeoapi implements this standard by providing a plugin architecture, thereby allowing developers to implement custom processing workflows in Python.
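
To give a feel for what such a plugin looks like, here is a schematic sketch, loosely modeled on the hello-world processor that ships with pygeoapi. The class and file names are made up and the metadata dict is heavily abbreviated, so consult the bundled hello_world.py for the full structure and the exact method signatures of your pygeoapi version:

from pygeoapi.process.base import BaseProcessor

# heavily abbreviated metadata; the real dict needs full OGC API - Processes
# descriptions of inputs, outputs, keywords, examples, etc.
PROCESS_METADATA = {
    'version': '0.1.0',
    'id': 'my-echo',
    'title': {'en': 'My echo process'},
    'description': {'en': 'Returns a greeting for a given name'},
    'jobControlOptions': ['sync-execute'],
    'keywords': ['example'],
    'inputs': {},
    'outputs': {},
    'example': {},
}

class MyEchoProcessor(BaseProcessor):
    """Minimal process plugin sketch (hypothetical, for illustration only)."""

    def __init__(self, processor_def):
        super().__init__(processor_def, PROCESS_METADATA)

    def execute(self, data):
        # 'data' holds the "inputs" dict from the execution request
        name = data.get('name', 'world')
        outputs = {'id': 'echo', 'value': f'Hello {name}'}
        return 'application/json', outputs

A processor like this is then registered in the pygeoapi configuration file (under resources, with type: process) so that it shows up under /processes.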

I’ll provide instructions for setting up and running pygeoapi on Windows using PowerShell. The official docs show how to do this on Linux systems. The pygeoapi homepage prominently features instructions for installing the dev version. For first experiments, however, I’d recommend using a release version instead. So that’s what we’ll do here.

As a first step, let’s install the latest release (0.16.1 at the time of writing) from conda-forge:

conda create -n pygeoapi python=3.10
conda activate pygeoapi
mamba install -c conda-forge pygeoapi

Next, we’ll clone the GitHub repo to get the example config and datasets:

cd C:\Users\anita\Documents\GitHub\
git clone https://github.com/geopython/pygeoapi.git
cd pygeoapi\

To finish the setup, we need some configurations:

cp pygeoapi-config.yml example-config.yml  
# There is a known issue in pygeoapi 0.16.1: https://github.com/geopython/pygeoapi/issues/1597
# To fix it, edit the example-config.yml: uncomment the TinyDB option in the server settings (lines 51-54)

$Env:PYGEOAPI_CONFIG = "F:/Documents/GitHub/pygeoapi/example-config.yml"
$Env:PYGEOAPI_OPENAPI = "F:/Documents/GitHub/pygeoapi/example-openapi.yml"
pygeoapi openapi generate $Env:PYGEOAPI_CONFIG --output-file $Env:PYGEOAPI_OPENAPI

Now we can start the server:

pygeoapi serve

And once the server is running, we can send requests, e.g. to get the list of processes:

curl.exe http://localhost:5000/processes

And, of course, execute the example “hello-world” process:

curl.exe --% -X POST http://localhost:5000/processes/hello-world/execution -H "Content-Type: application/json" -d "{\"inputs\":{\"name\": \"hi there\"}}"

As you can see, writing JSON content for curl is a pain. Luckily, pygeoapi comes with a nice web GUI, including a Swagger UI for playing with all the functionality, such as the hello-world process:

It’s not really a geospatial hello-world example, but it’s a first step.
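
If you would rather script these requests than fight with curl quoting (or click through the web GUI), a small sketch using the requests library does the same thing, assuming requests is installed and the server is still running on localhost:5000:

import requests

# execute the hello-world process, same payload as the curl call above
response = requests.post(
    "http://localhost:5000/processes/hello-world/execution",
    json={"inputs": {"name": "hi there"}},
)
print(response.status_code)
print(response.json())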

Finally, I want to leave you with a teaser since there are more interesting things going on in this space, including work on OGC API – Moving Features, as shared by the pygeoapi team recently:

So, stay tuned.

Software quality in QGIS

According to the definition of software quality given by the French Wikipedia:

An overall assessment of quality takes into account external factors, directly observable by the user, as well as internal factors, observable by engineers during code reviews or maintenance work.

In this article, I have chosen to talk only about the latter. The quality of software, and of QGIS in particular, is therefore not limited to what is described here. There is still much to say about:

  • taking user feedback into account,
  • the documentation writing process,
  • translation management,
  • interoperability through the implementation of standards,
  • extensibility through APIs,
  • the reversibility and resilience of the open source model…

These are subjects we care a lot about, and they deserve their own article.

I will focus here on the following issue: QGIS is free software and allows anyone with the necessary skills to modify the software. But how can we ensure that the multiple proposals for modifications to the software contribute to its improvement and do not harm its future maintenance?

Self-discipline

The developers contributing to QGIS code don’t all belong to the same organization. They don’t all live in the same country, don’t necessarily have the same culture and don’t necessarily share the same interests or ambitions for the software. However, they share the awareness of modifying a common good and the desire to take care of it.

This awareness goes beyond professional conscience: a developer has a responsibility not only towards their employer, but also towards the entire community of users and contributors to the software.

This self-discipline is the foundation of the quality of the contributions of software like QGIS.

However, to err is human and it is essential to carry out checks for each modification proposal.

Automatic checks

With each modification proposal (called a Pull Request or Merge Request), the QGIS GitHub platform launches a set of automatic checks.

Example of proposed modification

Result of automatic checks on a modification proposal

The first of these checks is to build QGIS on the different systems on which it is distributed (Linux, Windows, macOS) with the proposed modification integrated. It is inconceivable to integrate a modification that would prevent the application from being built on one of these systems.

The tests

The first question raised by a proposed modification is the following: “How can we be sure that what is going to be introduced does not break what already exists?”

To answer it, we rely on automated tests: a set of micro-programs whose only purpose is to validate that a part of the application behaves as expected. For example, there is a test which validates that when the user adds an entry to a data layer, this entry is then present in the data layer. If a modification were to break this behavior, the test would fail and the proposal would be rejected (or, more likely, corrected).

In particular, this makes it possible to avoid regressions (such tests are often called non-regression tests) and also to document the expected behavior.
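
To make this more concrete, here is a strongly simplified sketch of what such a test can look like in PyQGIS. It is not an actual QGIS test (those live in the project's tests/ tree and use shared helpers); it just illustrates the add-a-feature-and-check-it idea, assuming a working standalone QGIS Python environment:

import unittest
from qgis.core import (QgsApplication, QgsFeature, QgsGeometry,
                       QgsPointXY, QgsVectorLayer)

# initialize QGIS without a GUI so that data providers are available
qgs = QgsApplication([], False)
qgs.initQgis()

class TestAddFeature(unittest.TestCase):

    def test_added_feature_is_present(self):
        # in-memory point layer used as a stand-in for a real data layer
        layer = QgsVectorLayer("Point?crs=EPSG:4326", "test_layer", "memory")
        self.assertTrue(layer.isValid())

        feature = QgsFeature()
        feature.setGeometry(QgsGeometry.fromPointXY(QgsPointXY(16.3, 48.2)))
        layer.dataProvider().addFeatures([feature])

        # the behavior under test: the added entry must now be in the layer
        self.assertEqual(layer.featureCount(), 1)

if __name__ == "__main__":
    unittest.main()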

There are approximately 1.3 million lines of code for the QGIS application and 420K lines of test code, a ratio of roughly 1 to 3. The presence of tests is mandatory for adding functionality, so the quantity of test code increases with the quantity of application code.

In blue the number of lines of code in QGIS, in red the number of lines of tests

There are currently over 900 groups of automatic tests in QGIS, most of which run in less than 2 seconds, for a total execution time of around 30 minutes.

We also see that certain parts of the QGIS code – the most recent – are better covered by the tests than other older ones. Developers are gradually working to improve this situation to reduce technical debt.

Code checks

Analogous to using a spell checker when writing a document, we carry out a set of quality checks on the source code. We check, for example, that the proposed modification does not contain misspelled words or “banned” words, that the API documentation has been correctly written or that the modified code respects certain formal rules of the programming language.

We recently had the opportunity to add a check based on the clang-tidy tool, which relies on the Clang compiler and is capable of detecting programming errors through static analysis of the code.

Clang-tidy is, for example, capable of detecting “narrowing conversions”.

Example of detecting “narrowing conversions”

In the example above, Clang-tidy detects that there has been a “narrowing conversion” and that the value of the port used in the network proxy configuration “may” be corrupted. In this case, this problem was reported on the QGIS issues platform and had to be corrected.

At that time, clang-tidy was not in place. Its use would have avoided this anomaly and all the steps which led to its correction (exhaustive description of the issue, multiple exchanges to reproduce it, investigation, correction, review of the modification), saving a significant amount of human time.

Peer review

A proposed modification that passes all of the automatic checks described above is not necessarily integrated into the QGIS code automatically. Its code may be poorly designed or the modification poorly thought out; the relevance of the functionality may be doubtful, or it may duplicate another one. Integrating the modification could therefore become a burden for the people in charge of the corrective or evolutionary maintenance of the software.

It is therefore essential to include a human review in the process of accepting a modification.

This is a review of the substance of the proposal more than of its form; for the latter, we favor the automatic checks described above in order to simplify the review process.

Even so, human review takes time, and this effort grows with the quantity of modifications proposed to the QGIS code. The question of its funding arises, and discussions are in progress. The QGIS.org association notably dedicates a significant part of its budget to fund code reviews.

More than 100 modification proposals were reviewed and integrated during the month of December 2023. More than 30 different people contributed. More than 2000 files have been modified.

As a result, the wait for a review can sometimes be long. It is also often the moment when disagreements are expressed. This phase can prove frustrating for contributors, but it is an important and rich moment in the community life of a free project.

To be continued !

As core QGIS developers, and as a pure-play open source company, we believe it is fundamental to be involved in each step of the contribution process.

We are investing in the review process, in improving automatic checks, and in the QGIS quality process in general. And we will continue to invest in these topics to help make QGIS long-lasting and stable software.

If you would like to contribute or simply learn more about QGIS, do not hesitate to contact us at [email protected] and consult our QGIS support proposal.

Finding geospatial accounts on Mastodon

Besides following hashtags, such as #GISChat, #QGIS, #OpenStreetMap, #FOSS4G, and #OSGeo, curating good lists is probably the best way to stay up to date with geospatial developments.

To get you started (or to potentially enrich your existing lists), I thought I’d share my Geospatial and SpatialDataScience lists with you. And the best thing: you don’t need to go through all the >150 entries manually! Instead, go to your Mastodon account settings and under “Import and export” you’ll find a tool to import and merge my list.csv with your lists:

And if you are not following the geospatial hashtags yet, you can search or click on the hashtags you’re interested in and start following to get all tagged posts into your timeline:

Snappy QField 3.1 “Borneo” has arrived

The launch of QField 3.0 was a big deal, but now we’re back to focusing on smaller, more frequent updates. Don’t let the shorter change log for 3.1 trick you – there are lots of cool new features in this update!

Main highlights

One of the main improvements in this release is the brand-new functionality to enable snapping to common angles while digitizing. When enabled, the coordinate cursor will snap to configured angles alongside a visual guideline. This comes in handy when adding new geometries while surveying features with regular angles (e.g. buildings, parking lots, etc.). As QField gets more digitizing functionalities, we’ve taken the time to implement a nifty UI that collapses digitizing toggle buttons into a drawer, leaving extra space for the map canvas to shine through.

In addition, the vertex editor – one of QField’s most advanced geometry tools – received tons of love during this development cycle, focusing on improving its usability. Changes worth mentioning include:

  • A new undo button allows users to revert individual vertex manipulations in case of mistaken adjustment, which can save you from having to cancel a large set of ongoing manipulations;
  • The possibility to select vertices using finger tapping on the screen, dramatically improving the user experience;
  • Clearer on-screen markers to represent vertices and
  • Tons of bug fixes to the vertex editor itself, as well as the broader set of geometry tools.

It is now possible to lock the geometry of individual features within a single vector layer. While QField has long supported the concept of a locked geometry state for vector layers, that was until now a layer-wide toggle. With the new version of QField, a data-defined property can dictate whether a given feature geometry can be edited. Interested in geofenced geometry editing? We’ve got you covered 😉 This functionality requires the latest version of QFieldSync, which is available through QGIS’ plugin manager.

Noticeable improvements

Permission handling has been improved across all platforms. On Android, QField now delays the permission request for camera, microphone, location, and Bluetooth access until needed. This makes for a much friendlier user experience.

QField 3.0 was one of the largest releases, with major changes in its underlying libraries, including a migration to Qt 6. With the community’s help, we have spent countless hours testing before release. However, it is never a bulletproof process, and that version came with a few noticeable regressions. In particular, camera handling on Android suffered from upstream issues with Qt. We’ve tracked as many of those as possible, making this new version much more stable. One lingering camera issue remains and will be fixed upstream in the next three weeks; we’ll update as soon as it is available.

Finally, long-time users of QField will notice improvements in how geometry highlights and digitizing rubber bands are drawn. We’ve doubled down on efforts to ensure that the digitized geometries and the coordinate cursor itself are always clearly visible, whether overlaid against the canvas’s light or dark parts.

We want to extend a heartfelt thank you to our sponsors for their generous support. If you’re inspired by the developments in QField and want to contribute, please consider donating. Your support will help us continue to innovate and improve this tool for everyone’s benefit.

GRASS GIS Annual Report 2023

The GRASS GIS Annual Report for 2023 highlights a year of significant achievements and developments in the GRASS GIS project, which celebrated its 40th anniversary.

GRASS GIS people on hilltop 2023

Here’s a summary of the report:

  1. Community Meeting: The GRASS GIS Community Meeting was held in June at the Czech Technical University in Prague, bringing together a diverse group of participants from various countries. Additionally, community members participated in the OSGeo Community Sprint in Vienna in November.
  2. Development Activity: The year saw the release of the GRASS GIS 8.3.0 feature release and three maintenance releases. There were 418 pull requests created, 387 merged, and 103 issues resolved. Nine new addons were added by several contributors. The top five contributors were acknowledged, and a new core contributor was welcomed. Also a GRASS Student Grant was awarded.
  3. Conferences in 2023: GRASS GIS was represented at several conferences, including the North Carolina GIS conference, FOSS4G in Kosovo and North America, and the OpenGeoHub Summer School. Topics covered ranged from parallelization in GRASS GIS to computational notebooks for geospatial computation.
  4. GRASS GIS in Industry: OpenPlains Inc., a startup utilizing GRASS GIS, was founded. Additionally, a subaward was granted to research software engineers at North Carolina State University to enhance GRASS GIS, funded by an NSF grant awarded to Natrx.
  5. GRASS GIS in Academia: A significant NSF grant was awarded to a team from four U.S. universities to support the GRASS GIS community. Various academic activities, including workshops, courses, and lectures, were conducted throughout the year, emphasizing the application of GRASS GIS in different fields.
  6. GRASS GIS in Government: The U.S. Geological Survey released a training video on using GRASS GIS with 3D Elevation Program data. The NC State University team received research support from the US Department of Agriculture for developing GRASS modules for surface water modeling.

GRASS GIS Desktop showing maps

Congratulations to all contributors!

The full report is available at https://grass.osgeo.org/news/2023_12_19_annual_report/


Offline Vector Tile Package .vtpk in QGIS

Starting from 3.26, QGIS now supports .vtpk (Vector Tile Package) files out of the box! From the changelog:

ESRI vector tile packages (VTPK files) can now be opened directly as vector tile layers via drag and drop, including support for style translation.

This is great news, particularly for users from Austria, since this makes it possible to use the open government basemap.at vector tiles directly, without any fuss:

1. Download the 2GB offline vector basemap from https://www.data.gv.at/katalog/de/dataset/basemap-at-verwaltungsgrundkarte-vektor-offline-osterreich


2. Add the .vtpk as a layer using the Data Source Manager or via drag-and-drop from the file explorer


3. All done and ready, including the basemap styling and labeling — which we can customize as well:
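
Loading the .vtpk from PyQGIS should work as well for scripted workflows. The following is only a hedged sketch: the type=vtpk key in the data source URI is an assumption based on how other vector tile sources (e.g. type=xyz) are specified, so double-check it against the documentation of your QGIS version:

from qgis.core import QgsProject, QgsVectorTileLayer

# placeholder path; point it at the downloaded basemap.at package
uri = "type=vtpk&url=C:/data/basemap.vtpk"
layer = QgsVectorTileLayer(uri, "basemap.at Vektor Offline")
if layer.isValid():
    # loading the embedded (translated) style is an assumption as well
    layer.loadDefaultStyle()
    QgsProject.instance().addMapLayer(layer)
else:
    print("Could not load the vector tile package")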

Kudos to https://wien.rocks/@DieterKomendera/111568809248327077 for bringing this new feature to my attention.

PS: An interesting tidbit from the developer of this feature, Nyall Dawson:

Hi ‘Geocomputation with Python’

Today, I want to point out a blog post over at

https://geocompx.org/post/2023/geocompy-bp1/

In this post, Jakub Nowosad introduces our book “Geocomputation with Python”, also known as geocompy. It is an open-source book on geographic data analysis with Python, written by Michael Dorman, Jakub Nowosad, Robin Lovelace, and me with contributions from others. You can find it online at https://py.geocompx.org/

Mapping relationships between Neo4j spatial nodes with GeoPandas

Previously, we mapped neo4j spatial nodes. This time, we want to take it one step further and map relationships.

A prime example are the relationships between GTFS StopTime and Trip nodes. For example, this is the Cypher query to get all StopTime nodes of Trip 17:

MATCH 
    (t:Trip  {id: "17"})
    <-[:BELONGS_TO]-
    (st:StopTime) 
RETURN st

To get the stop locations, we also need to get the stop nodes:

MATCH 
    (t:Trip {id: "17"})
    <-[:BELONGS_TO]-
    (st:StopTime)
    -[:STOPS_AT]->
    (s:Stop)
RETURN st ,s

Adapting our code from the previous post, we can plot the stops:

from shapely.geometry import Point

QUERY = """MATCH (
    t:Trip {id: "17"})
    <-[:BELONGS_TO]-
    (st:StopTime)
    -[:STOPS_AT]->
    (s:Stop)
RETURN st ,s
ORDER BY st.stopSequence
"""

with driver.session(database="neo4j") as session:
    tx = session.begin_transaction()
    results = tx.run(QUERY)
    df = results.to_df(expand=True)
    gdf = gpd.GeoDataFrame(
        df[['s().prop.name']], crs=4326,
        geometry=df["s().prop.location"].apply(Point)
    )

tx.close() 
m = gdf.explore()
m
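
As a quick aside (not part of the original post): since the stops are ordered by stop sequence, stringing them into a single route line only takes a couple of lines with Shapely:

from shapely.geometry import LineString

# connect the ordered stop points into one route linestring
route = LineString(gdf.geometry.tolist())
gpd.GeoSeries([route], crs=4326).explore(m=m)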

Ordering by stop sequence is not strictly needed for the point plot, but it lets us aggregate the sorted points into a linestring to plot the route, as sketched above. Here, however, I want to try something different: we’ll use the NEXT_STOP relationships to get a DataFrame of the start and end stops for each segment:

QUERY = """
MATCH (t:Trip {id: "17"})
   <-[:BELONGS_TO]-
   (st1:StopTime)
   -[:NEXT_STOP]->
   (st2:StopTime)
MATCH (st1)-[:STOPS_AT]->(s1:Stop)
MATCH (st2)-[:STOPS_AT]->(s2:Stop)
RETURN st1, st2, s1, s2
"""

from shapely.geometry import Point, LineString

def make_line(row):
    s1 = Point(row["s1().prop.location"])
    s2 = Point(row["s2().prop.location"])
    return LineString([s1,s2])

with driver.session(database="neo4j") as session:
    tx = session.begin_transaction()
    results = tx.run(QUERY)
    df = results.to_df(expand=True)
    gdf = gpd.GeoDataFrame(
        df[['s1().prop.name']], crs=4326,
        geometry=df.apply(make_line, axis=1)
    )

tx.close() 
gdf.explore(m=m)

Finally, we can also use Cypher to calculate the travel time between two stops:

MATCH (t:Trip {id: "17"})
   <-[:BELONGS_TO]-
   (st1:StopTime)
   -[:NEXT_STOP]->
   (st2:StopTime)
MATCH (st1)-[:STOPS_AT]->(s1:Stop)
MATCH (st2)-[:STOPS_AT]->(s2:Stop)
RETURN st1.departureTime AS time1, 
   st2.arrivalTime AS time2, 
   s1.location AS geom1, 
   s2.location AS geom2, 
   duration.inSeconds(
      time(st1.departureTime), 
      time(st2.arrivalTime)
   ).seconds AS traveltime
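
To put those travel times on a map, the session pattern from above can be reused. The following is a sketch rather than part of the original post: it assumes the Cypher above is stored in a Python string called QUERY and that the expanded column names follow the same geom[].0 / geom[].1 pattern we saw when mapping the stops:

from shapely.geometry import LineString

with driver.session(database="neo4j") as session:
    tx = session.begin_transaction()
    results = tx.run(QUERY)
    df = results.to_df(expand=True)
    # build one line per segment from the expanded point coordinates
    geometry = [
        LineString([(row["geom1[].0"], row["geom1[].1"]),
                    (row["geom2[].0"], row["geom2[].1"])])
        for _, row in df.iterrows()
    ]
    gdf = gpd.GeoDataFrame(
        df[["time1", "time2", "traveltime"]], crs=4326, geometry=geometry
    )

tx.close()
gdf.explore(column="traveltime")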

As always, here’s the notebook: https://github.com/anitagraser/QGIS-resources/blob/master/qgis3/notebooks/neo4j.ipynb

Mapping Neo4j spatial nodes with GeoPandas

In the recent post Setting up a graph db using GTFS data & Neo4J, we noted that — unfortunately — Neomap is not an option to visualize spatial nodes anymore.

GeoPandas to the rescue!

But first we need the neo4j Python driver:

pip install neo4j

Then we can connect to our database. The default user name is neo4j and you get to pick the password when creating the database:

from neo4j import GraphDatabase

URI = "neo4j://localhost"
AUTH = ("neo4j", "password")

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    driver.verify_connectivity()

Once we have confirmed that the connection works as expected, we can run a query:

QUERY = "MATCH (p:Stop) RETURN p.name AS name, p.location AS geom"

records, summary, keys = driver.execute_query(
    QUERY, database_="neo4j",
)

for rec in records:
    print(rec)

Nice. There we have our GTFS stops, their names and their locations. But how to put them on a map?

Conveniently, there is a to_df() function in the Neo4j driver:

import geopandas as gpd
import numpy as np

with driver.session(database="neo4j") as session:
    tx = session.begin_transaction()
    results = tx.run(QUERY)
    df = results.to_df(expand=True)
    df = df[df["geom[].0"]>0]
    gdf = gpd.GeoDataFrame(
        df['name'], crs=4326,
        geometry=gpd.points_from_xy(df['geom[].0'], df['geom[].1']))
    print(gdf)

tx.close() 

Since some of the nodes lack geometries, I added a quick and dirty hack to get rid of these nodes because — otherwise — gdf.explore() will complain about None geometries.

You can find this notebook at: https://github.com/anitagraser/QGIS-resources/blob/1e4ea435c9b1795ba5b170ddb176aa83689112eb/qgis3/notebooks/neo4j.ipynb

The next step will have to be the relationships. Stay posted.

Setting up a graph db using GTFS data & Neo4J

In a recent post, we looked into a graph-based model for maritime mobility data and how it may be represented in Neo4J. Today, I want to look into another type of mobility data: public transport schedules in GTFS format.

In this post, I’ll be using the public GTFS data for Riga since Riga is one of the demo sites for our current EMERALDS research project.

The workflow is heavily inspired by Bert Radke’s post “Loading the UK GTFS data feed” from 2021 and his import Cypher script, which I used as a template, adjusted to the requirements of the Riga dataset, and updated to recent Neo4J changes.

Here we go.

Since a GTFS export is basically a ZIP archive full of CSVs, we will be making good use of Neo4j’s CSV loading capabilities. The basic script for importing the stops file and creating point geometries from lat and lon values would be:

LOAD CSV with headers 
FROM "file:///stops.txt" 
AS row 
CREATE (:Stop {
   stop_id: row["stop_id"],
   name: row["stop_name"], 
   location: point({
    longitude: toFloat(row["stop_lon"]),
    latitude: toFloat(row["stop_lat"])
    })
})

This requires stops.txt to be located in the import directory of your Neo4J database. If we run the above script while the file is missing, Neo4J will tell us where it tried to look for it. In my case, the directory ended up being:

C:\Users\Anita\.Neo4jDesktop\relate-data\dbmss\dbms-72882d24-bf91-4031-84e9-abd24624b760\import

So, let’s put all GTFS CSVs into that directory and we should be good to go.

Let’s start with the agency file:

load csv with headers from
'file:///agency.txt' as row
create (a:Agency {
   id: row.agency_id, 
   name: row.agency_name, 
   url: row.agency_url, 
   timezone: row.agency_timezone, 
   lang: row.agency_lang
});

… Added 1 label, created 1 node, set 5 properties, completed after 31 ms.

The routes file does not include agency info but, luckily, there is only one agency, so we can hard-code it:

load csv with headers from
'file:///routes.txt' as row
match (a:Agency {id: "rigassatiksme"})
create (a)-[:OPERATES]->(r:Route {
   id: row.route_id, 
   shortName: row.route_short_name,
   longName: row.route_long_name, 
   type: toInteger(row.route_type)
});

… Added 81 labels, created 81 nodes, set 324 properties, created 81 relationships, completed after 28 ms.

From stops, I’m removing non-existent or empty columns:

load csv with headers from
'file:///stops.txt' as row
create (s:Stop {
   id: row.stop_id, 
   name: row.stop_name, 
   location: point({
      latitude: toFloat(row.stop_lat), 
      longitude: toFloat(row.stop_lon)
   }),
   code: row.stop_code
});

… Added 1671 labels, created 1671 nodes, set 5013 properties, completed after 71 ms.

From trips, I’m also removing non-existent or empty columns:

load csv with headers from
'file:///trips.txt' as row
match (r:Route {id: row.route_id})
create (r)<-[:USES]-(t:Trip {
   id: row.trip_id, 
   serviceId: row.service_id,
   headSign: row.trip_headsign, 
   direction_id: toInteger(row.direction_id),
   blockId: row.block_id,
   shapeId: row.shape_id
});

… Added 14427 labels, created 14427 nodes, set 86562 properties, created 14427 relationships, completed after 875 ms.

Slowly getting there. We now have around 16k nodes in our graph:

Finally, it’s stop times time. This is where the serious information is. This file is much larger than all previous ones, with over 300k lines (i.e. times when a PT vehicle stops).

This requires another tweak to Bert’s script since using periodic commit is not supported anymore: “The PERIODIC COMMIT query hint is no longer supported. Please use CALL { … } IN TRANSACTIONS instead.” So I ended up using the following, based on https://community.neo4j.com/t/best-practice-for-replacement-of-using-periodic-commit-to-call-in-transactions/48636/2:

:auto
load csv with headers from
'file:///stop_times.txt' as row
CALL { with row
match (t:Trip {id: row.trip_id}), (s:Stop {id: row.stop_id})
create (t)<-[:BELONGS_TO]-(st:StopTime {
   arrivalTime: row.arrival_time, 
   departureTime: row.departure_time,
   stopSequence: toInteger(row.stop_sequence)})-[:STOPS_AT]->(s)
} IN TRANSACTIONS OF 10 ROWS;

… Added 351388 labels, created 351388 nodes, set 1054164 properties, created 702776 relationships, completed after 1364220 ms.

As you can see, this took a while. But now we have all nodes in place:

The final statement adds additional relationships between consecutive stop times:

call apoc.periodic.iterate('match (t:Trip) return t',
'match (t)<-[:BELONGS_TO]-(st) with st order by st.stopSequence asc
with collect(st) as stops
unwind range(0, size(stops)-2) as i
with stops[i] as curr, stops[i+1] as next
merge (curr)-[:NEXT_STOP]->(next)', {batchmode: "BATCH", parallel:true, batchSize:1});

This fails with: There is no procedure with the name apoc.periodic.iterate registered for this database instance. Please ensure you've spelled the procedure name correctly and that the procedure is properly deployed.

So, let’s install APOC. That’s a plugin which we can install into our database from within Neo4J Desktop:

After restarting the db, we can run the query:

No errors. Sounds good.

Let’s have a look at what we ended up with. Here are 25 random Trips. I expanded one of them to show its associated StopTimes. We can see the relations between consecutive StopTimes and I’ve expanded the final five StopTimes to show their linked Stops:

I also wanted to visualize the stops on a map. There used to be a neat app called Neomap which could be installed easily:

However, Neomap does not seem to be compatible with the latest Neo4J:

So this final step will have to wait for another time.

Translating Open Source Software with Weblate: A GRASS GIS Case Study

Open source software projects thrive on the contributions of the community, not only for the code, but also for making the software accessible to a global audience. One of the critical aspects of this accessibility is the localization or translation of the software’s messages and interfaces. In this context, Weblate (https://weblate.org/) has proven to be a powerful tool for managing these translations, especially for projects such as GRASS GIS, which is part of OSGeo (Open Source Geospatial Foundation).

Weblate and GRASS GIS logos

What is Weblate?

Weblate is an open source translation management system designed to simplify the translation process of software projects. It provides an intuitive web interface that allows translators to work without deep technical knowledge. This ease of use combined with robust integration capabilities makes Weblate a popular choice for open source projects.

GRASS GIS and Localization

GRASS GIS (https://grass.osgeo.org/), a software suite for managing and analyzing geospatial data, is used worldwide and therefore needs to be available in many languages. The project uses Weblate, hosted by OSGeo, to manage and facilitate its translation work (see OSGeo-Weblate portal).

Marking messages for translation

Before translation work can begin, the messages to be translated must be marked for translation in the GRASS GIS source code. This is done with the gettext macro _(“…”). GNU gettext is a library for the internationalization of software. Here is a simplified overview of the process:

  1. Identify the strings to be translated: The developers identify the strings in the source code that need to be translated. These are usually user messages, while debug messages are not marked for translation.
  2. Use the gettext macro: The identified strings are wrapped in a gettext macro. For example, the string “Welcome to GRASS GIS” in the source code would be changed to _(“Welcome to GRASS GIS”). This change indicates that the string should be used for translation (a minimal sketch follows this list).
  3. Extraction and template generation: Tools such as xgettext are used to extract these marked strings from the source code and create a POT (Portable Object Template) file. This file is used as a template for all translations. In the GRASS GIS project the template language is English.
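
As a minimal illustration of steps 2 and 3, here is a generic Python gettext sketch (not actual GRASS GIS source code):

import gettext

# _() marks user-facing strings for translation; debug output stays unmarked
_ = gettext.gettext

def greet():
    print(_("Welcome to GRASS GIS"))

# extraction into the POT template happens on the command line, e.g.:
#   xgettext --output=messages.pot my_module.py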

There are three template files in the GRASS GIS project: one for the graphical user interface (GUI) messages, one for the library functions (libs) and one for the modules (mods).

Connecting the software project to Weblate

While the POT files could be transferred to Weblate manually, we chose the automated option. The OSGeo Weblate instance is directly connected to the GRASS GIS project via git (GitHub) using the Weblate version control integration.

How it works in practice:

  1. Developer makes a commit to the GRASS GIS repo on GitHub
  2. A GitHub webhook makes a call to weblate.osgeo.org – note that it has its own local git repo for GRASS GIS, as it does for other OSGeo projects, with translations being managed in this Weblate instance. This local git repo is updated when the webhook is fired.
  3. As messages are translated in OSGeo-Weblate, they are eventually pushed to the Weblate GitHub fork of GRASS GIS (the push frequency is set to 24 hours by default, i.e., new translations are collected over a day), and Weblate then triggers a pull request to the main GRASS GIS repo on GitHub.

For technical background on the OSGeo Weblate installation, see the related OSGeo-SAC Weblate page.

Translation process in Weblate

Here is what the typical translation process looks like:

  • Translator registration: Registration (via OSGeo-ID) and login to the Weblate instance.
  • Language selection: Select the language to be translated. If a language does not exist yet, it can be added with the approval of the project managers.
  • Translation interface: Weblate provides an easy-to-use web interface where translators can view the original texts and enter their translations. If activated, machine translation can also be used here (DeepL, Google Translate, etc.). The Weblate translation memory helps to quickly translate identical and similar sentences.

GRASS GIS messages in Weblate

  • Together we are better: translators can discuss translations, resolve conflicts and suggest improvements. Weblate also offers quality checks to ensure consistency and accuracy. Translations in different languages can be compared in tabular form.

Message translation comparison in Weblate (GRASS GIS project example)

  • Integration with source code: Once translations are completed and checked, they are written back into the GRASS GIS source code (see above). Weblate supports automatic synchronization with source code repositories.
  • Continuous updates: As the source code evolves, new strings can be marked for translation and Weblate is automatically updated to reflect these changes.

Pull request with new translations opened by Weblate in the GRASS GIS GitHub repository

Benefits for the GRASS GIS project

By using Weblate, GRASS GIS benefits from the following advantages:

  • Streamlined translation workflow: The process from tagging strings to integrating translations is efficient and manageable.
  • Community engagement: Weblate’s ease of use encourages more community members to participate in the translation process.
  • Quality and Consistency: Weblate ensures high quality translations through integrated quality checks and collaboration tools.
  • Up-to-date localization: Continuous synchronization with the source code repository ensures that translations are always up-to-date.

Conclusion

The integration of Weblate into the GRASS GIS development workflow underlines the importance of localization in open source software. By using tools such as gettext for message tagging and Weblate for translation management, GRASS GIS ensures that it remains accessible and usable for a global community, embodying the true spirit of open source software.

Thanks

Thanks to Regina Obe from OSGeo-SAC for her support in setting up and maintaining the OSGeo-Weblate instance and for her explanations of how things work in terms of Weblate/GitHub server communication.


Bringing QGIS maps into Jupyter notebooks

Earlier this year, we explored how to use PyQGIS in Jupyter notebooks to run QGIS Processing tools from a notebook and visualize the Processing results using GeoPandas plots.

Today, we’ll go a step further and replace the GeoPandas plots with maps rendered by QGIS.

The following script presents a minimum solution to this challenge: initializing a QGIS application, canvas, and project; then loading a GeoJSON and displaying it:

from IPython.display import Image

from PyQt5.QtGui import QColor
from PyQt5.QtWidgets import QApplication

from qgis.core import QgsApplication, QgsVectorLayer, QgsProject, QgsSymbol, \
    QgsRendererRange, QgsGraduatedSymbolRenderer, \
    QgsArrowSymbolLayer, QgsLineSymbol, QgsSingleSymbolRenderer, \
    QgsSymbolLayer, QgsProperty
from qgis.gui import QgsMapCanvas

app = QApplication([])
qgs = QgsApplication([], False)
canvas = QgsMapCanvas()
project = QgsProject.instance()

vlayer = QgsVectorLayer("./data/traj.geojson", "My trajectory")
if not vlayer.isValid():
    print("Layer failed to load!")

def saveImage(path, show=True): 
    canvas.saveAsImage(path)
    if show: return Image(path)

project.addMapLayer(vlayer)
canvas.setExtent(vlayer.extent())
canvas.setLayers([vlayer])
canvas.show()
app.exec_()

saveImage("my-traj.png")

When this code is executed, it opens a separate window that displays the map canvas. And in this window, we can even pan and zoom to adjust the map. The line color, however, is assigned randomly (like when we open a new layer in QGIS):

To specify a specific color, we can use:

vlayer.renderer().symbol().setColor(QColor("red"))

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")

But regular lines are boring. We could easily create those with GeoPandas plots.

Things get way more interesting when we use QGIS’ custom symbols and renderers. For example, to draw arrows using a QgsArrowSymbolLayer, we can write:

vlayer.renderer().symbol().appendSymbolLayer(QgsArrowSymbolLayer())

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")

We can also create a QgsGraduatedSymbolRenderer:

geom_type = vlayer.geometryType()
myRangeList = []

symbol = QgsSymbol.defaultSymbol(geom_type)
symbol.setColor(QColor("#3333ff"))
myRange = QgsRendererRange(0, 1, symbol, 'Group 1')
myRangeList.append(myRange)

symbol = QgsSymbol.defaultSymbol(geom_type)
symbol.setColor(QColor("#33ff33"))
myRange = QgsRendererRange(1, 3, symbol, 'Group 2')
myRangeList.append(myRange)

myRenderer = QgsGraduatedSymbolRenderer('speed', myRangeList)
vlayer.setRenderer(myRenderer)

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")

And we can combine both QgsGraduatedSymbolRenderer and QgsArrowSymbolLayer:

geom_type = vlayer.geometryType()
myRangeList = []

symbol = QgsSymbol.defaultSymbol(geom_type)
symbol.appendSymbolLayer(QgsArrowSymbolLayer())
symbol.setColor(QColor("#3333ff"))
myRange = QgsRendererRange(0, 1, symbol, 'Group 1')
myRangeList.append(myRange)

symbol = QgsSymbol.defaultSymbol(geom_type)
symbol.appendSymbolLayer(QgsArrowSymbolLayer())
symbol.setColor(QColor("#33ff33"))
myRange = QgsRendererRange(1, 3, symbol, 'Group 2')
myRangeList.append(myRange)

myRenderer = QgsGraduatedSymbolRenderer('speed', myRangeList)
vlayer.setRenderer(myRenderer)

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")

Maybe the most powerful option is to use data-defined symbology. For example, to control line width and color:

renderer = QgsSingleSymbolRenderer(QgsSymbol.defaultSymbol(geom_type))

exp_width = 'scale_linear("speed", 0, 3, 0, 7)'
exp_color = "coalesce(ramp_color('Viridis',scale_linear(\"speed\", 0, 3, 0, 1)), '#000000')"

# https://qgis.org/pyqgis/3.0/core/Symbol/QgsSymbolLayer.html?highlight=property#qgis.core.QgsSymbolLayer.PropertySize
renderer.symbol().symbolLayer(0).setDataDefinedProperty(
    QgsSymbolLayer.PropertyStrokeWidth, QgsProperty.fromExpression(exp_width))
renderer.symbol().symbolLayer(0).setDataDefinedProperty(
    QgsSymbolLayer.PropertyStrokeColor, QgsProperty.fromExpression(exp_color))
renderer.symbol().symbolLayer(0).setDataDefinedProperty(
    QgsSymbolLayer.PropertyCapStyle, QgsProperty.fromExpression("'round'"))

vlayer.setRenderer(renderer)

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")
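
One closing note that goes beyond the original post: if you want to skip the pop-up canvas window and the app.exec_() calls altogether, QGIS can also render the map off-screen. A sketch using QgsMapRendererParallelJob from the standard PyQGIS API looks like this:

from PyQt5.QtCore import QSize
from qgis.core import QgsMapRendererParallelJob, QgsMapSettings

settings = QgsMapSettings()
settings.setLayers([vlayer])
settings.setExtent(vlayer.extent())
settings.setOutputSize(QSize(800, 600))

# render to an image without showing any window
job = QgsMapRendererParallelJob(settings)
job.start()
job.waitForFinished()
job.renderedImage().save("my-traj.png")
Image("my-traj.png")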

Find the full notebook at: https://github.com/anitagraser/QGIS-resources/blob/master/qgis3/notebooks/layer-styling.ipynb

Exploring a hierarchical graph-based model for mobility data representation and analysis

Today’s post is a first quick dive into Neo4J (really just getting my toes wet). It’s based on a publicly available Neo4J dump containing mobility data, ship trajectories to be specific. You can find this data and the setup instructions at:

Maryam Maslek ELayam, Cyril Ray, & Christophe Claramunt. (2022). A hierarchical graph-based model for mobility data representation and analysis [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6405212

I was made aware of this work since they cited MovingPandas in their paper in Data & Knowledge Engineering: “The implementation combines several open source tools such as Python, MovingPandas library, Uber H3 index, Neo4j graph database management system.”

Once set up, this gives us a database with three hierarchical levels:

Neo4j comes with a nice graphical browser that lets us explore the data. We can switch between levels and click on individual node labels to get a quick preview:

Level 2 is a generalization / aggregation of level 1. Expanding the graph of one of the level 2 nodes shows its connection to level 1. For example, the level 2 port node “Audierne” actually refers to two level 1 nodes:

Every “road” level 1 relationship between ports provides information about the ship, its arrival, departure, travel time, and speed. We can see that these two level 1 ports must be pretty close since travel times are only 5 minutes:

Further expanding one of the port level 1 nodes shows its connection to waypoints of level 1:

Switching to level 2, we gain access to nodes of type Traj(ectory). Additionally, the road level 2 relationships represent aggregations of the trajectories. For example, here’s a relationship with only one associated trajectory:

There are also some odd relationships. For example, trajectory 43 has two “ends” and “begins” relationships, and there are also two road relationships referencing this trajectory (with identical information, only differing in their automatic <id>). I’m not yet sure if that is a feature or a bug:

On level 1, we also have access to ship nodes. They are connected to ports and waypoints. However, exploring them visually is challenging. Things look fine at first:

But after a while, once all relationships have loaded, we have it: the MIGHTY BALL OF YARN ™:

I guess this is the point where it becomes necessary to get accustomed to the query language. And no, it’s not SQL, it is Cypher. For example, selecting a specific trajectory with id 0, looks like this:

 MATCH (t1 {traj_id: 0}) RETURN t1
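
For completeness (and not part of the original post): the same query can also be sent from Python with the official neo4j driver, here with placeholder connection details:

from neo4j import GraphDatabase

URI = "neo4j://localhost"
AUTH = ("neo4j", "password")  # placeholders

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    records, summary, keys = driver.execute_query(
        "MATCH (t1 {traj_id: 0}) RETURN t1", database_="neo4j",
    )
    for rec in records:
        print(rec)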

But more on this another time.


This post is part of a series. Read more about movement data in GIS.

GRASS GIS 8.3.1 released

What’s new in a nutshell

The GRASS GIS 8.3.1 maintenance release provides more than 60 changes compared to 8.3.0. This new patch release brings in important fixes and improvements in GRASS GIS modules and the graphical user interface (GUI) which stabilizes the new single window layout active by default.

Some of the most relevant changes include: fixes for r.watershed which got partially broken in the 8.3.0 release; and a fix for installing addons on MS Windows with g.extension.

Translations continue in Weblate, which automatically creates pull requests with the translated chunks. We’d like to thank the translators of all languages for their ongoing support!

GRASS GIS 8.3.1 graphical user interface

Full list of changes and contributors

For all 60+ changes, see our detailed announcement with the full list of features and bugs fixed at GitHub / Releases / 8.3.1.

Thanks to all contributors!

Software downloads

Binaries/Installers download

Further binary packages for other platforms and distributions will follow shortly, please check at software downloads.

Source code download

First time users may explore the first steps tutorial after installation.

About GRASS GIS

The Geographic Resources Analysis Support System (https://grass.osgeo.org/), commonly referred to as GRASS GIS, is an Open Source Geographic Information System providing powerful raster, vector and geospatial processing capabilities. It can be used either as a stand-alone application, as backend for other software packages such as QGIS and R, or in the cloud. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).

The GRASS Dev Team


QField 3.0 release : field mapping app, based on QGIS

We are very happy and enthusiastic at Oslandia to forward the QField 3.0 release announcement, the new major update of this mobile GIS application based on QGIS.

Oslandia is a strategic partner of OPENGIS.ch, the company at the heart of QField development, as well as the QFieldCloud associated SaaS offering. We join OPENGIS.ch to announce all the new features of QField 3.0.

Get QField 3.0 now!

QField 3.0 screenshots


 

Shipped with many new features and built with the latest generation of Qt’s cross-platform framework, this new chapter marks an important milestone for the most powerful open-source field GIS solution.

Main highlights

Upon launching this new version of QField, users will be greeted by a revamped recent projects list featuring shiny map canvas thumbnails. While this is one of the most obvious UI improvements, countless interface tweaks and harmonization have occurred. From the refreshed dark theme to the further polishing of countless widgets, QField has never looked and felt better.

The top search bar has a new functionality that allows users to look for features within the currently active vector layer by matching any of its attributes against a given search term. Users can also refine their searches by specifying a specific attribute. The new functionality can be triggered by typing the ‘f’ prefix in the search bar followed by a string or number to retrieve a list of matching features. When expanding it, a new list of functionalities appears to help users discover all of the tools available within the search bar.

QField’s tracking has also received some love. A new erroneous distance safeguard setting has been added, which, when enabled, will dictate the tracker not to add a new vertex if the distance between it and the previously added vertex is greater than a user-specified value. This aims at preventing “spikes” of poor position readings during a tracking session. QField is now also capable of resuming a tracking session after being stopped. When resuming, tracking will reuse the last feature used when first starting, allowing sessions interrupted by battery loss or momentary pause to be continued on a single line or polygon geometry.

On the feature form front, QField has gained support for feature form text widgets, a new read-only type introduced in QGIS 3.30, which allows users to create expression-based text labels within complex feature form configurations. In addition, relationship-related form widgets now allow for zooming to children/parent features within the form itself.

To enhance digitizing work in the field, QField now makes it possible to turn snapping on and off through a new snapping button on top of the map canvas when in digitizing mode. When a project has enabled advanced snapping, the dashboard’s legend item now showcases snapping badges, allowing users to toggle snapping for individual vector layers.

In addition, digitizing lines and polygons by using the volume up/down hardware keys on devices such as smartphones is now possible. This can come in handy when digitizing data in harsh conditions where gloves can make it harder to use a touch screen.

While we had to play favorites in describing some of the new functionalities in QField, we’ve barely touched the surface of this feature-packed release. Other major additions include support for Near-Field Communication (NFC) text tag reading and a new geometry editor’s eraser tool to delete part of lines and polygons as you would with a pencil sketch using an eraser.

Thanks to Deutsches Archäologisches Institut, Groupements forestiers Québec, Amsa, and Kanton Luzern for sponsoring these enhancements.

Quality of life improvements

Starting with this new version, the scale bar overlay will now respect projects’ distance measurement units, allowing for scale bars in imperial and nautical units.

QField now offers a rendering quality setting which, at the cost of a slightly reduced visual quality, results in faster rendering speeds and lower memory usage. This can be a lifesaver for older devices having difficulty handling large projects and helps save battery life.

Vector tile layer support has been improved with the automated download of missing fonts and the possibility of toggling label visibility. This pair of changes makes this resolution-independent layer type much more appealing.

On iOS, layouts are now printed by QField as PDF documents instead of images. While this was the case for other platforms, it only became possible on iOS recently after work done by one of our ninjas in QGIS itself.

Many thanks to DB Fahrwgdienste for sponsoring stabilization efforts and fixes during this development cycle.

Qt 6, the latest generation of the cross-platform framework powering QField

Last but not least, QField 3.0 is now built against Qt 6. This is a significant technological milestone for the project as this means we can fully leverage the latest technological innovations into this cross-platform framework that has been powering QField since day one.

On top of the new possibilities, QField benefited from years of fixes and improvements, including better integration with Android and iOS platforms. In addition, the positioning framework in Qt 6 has been improved with awareness of the newer GNSS constellations that have emerged over the last decade.

Forest-themed release names

Forests are critical in climate regulation, biodiversity preservation, and economic sustainability. Beginning with QField 3.0 “Amazonia” and throughout the 3.X’s life cycle, we will choose forest names to underscore the importance of and advocate for global forest conservation.

Software with service

OPENGIS.ch and Oslandia provide the full range of services around QField and QGIS: training, consulting, adaptation, specific development and core development, maintenance and assistance. Do not hesitate to contact us with details about your needs; we will be happy to collaborate: [email protected]

As always, we hope you enjoy this new release. Happy field mapping!

GRASS GIS 8.3.0 released

What’s new in a nutshell

The GRASS GIS 8.3.0 release provides more than 360 changes compared to the 8.2 branch. This new minor release brings in many fixes and improvements in GRASS GIS modules and the graphical user interface (GUI) which now has the single window layout by default. Some of the most relevant changes include: support for parallelization in three raster modules, new options added to several temporal modules, and substantial clean-up of g.extension, the module that allows the installation of add-ons. The GUI also received a lot of attention with many fixes and items reorganised. We have also adopted the Clang format and indented most of the C code accordingly. A lot of effort was put into cleaning up the C/C++ code to fix almost all compiler warnings.

Translations have been moved from Transifex to Weblate, which automatically creates pull requests with the translated chunks. We’d like to thank the translators of all languages for their long term support!

GRASS GIS 8.3

Also, docker images have been updated and moved from the mundialis to the OSGeo organization at https://hub.docker.com/r/osgeo/grass-gis/.

We have carried out quite some work in the GitHub Actions: we added support for “pre-commit” in order to reduce unnecessary runs of the automated checks, there were notable improvements in the code checking section and we have activated renovatebot to automatically maintain GitHub Actions.

Last but not least, we have significantly improved the automated release creation to reduce maintainer workload and we have gained nine new contributors! Welcome all!!

Full list of changes and contributors

For all 360+ changes, see our detailed announcement with the full list of features and bugs fixed at GitHub / Releases / 8.3.0.

Thank you all contributors!!

Download and test!

Binaries/Installers download

Further binary packages for other platforms and distributions will follow shortly, please check at software downloads.

Source code download

First time users may explore the first steps tutorial after installation.

About GRASS GIS

The Geographic Resources Analysis Support System (https://grass.osgeo.org/), commonly referred to as GRASS GIS, is an Open Source Geographic Information System providing powerful raster, vector and geospatial processing capabilities. It can be used either as a stand-alone application, as backend for other software packages such as QGIS and R, or in the cloud. It is distributed freely under the terms of the GNU General Public License (GPL). GRASS GIS is a founding member of the Open Source Geospatial Foundation (OSGeo).

The GRASS Dev Team


Strategic partnership agreement between Oslandia and OpenGIS.ch on QField

Who are we?

🤔 For those unfamiliar with Oslandia, OpenGIS.ch, or even QGIS, let’s refresh your memory:

👉 Oslandia is a French company specializing in open-source Geographic Information Systems (GIS). Since our establishment in 2009, we have been providing consulting, development, and training services in GIS, with renowned expertise. Oslandia is a dedicated open-source player and the largest contributor to the QGIS solution in France.

👉 As for OPENGIS.ch, they are a Swiss company specializing in the development of open-source GIS software. Founded in 2011, OPENGIS.ch is the largest Swiss contributor to QGIS. OPENGIS.ch is the creator of QField, the most widely used open-source mobile GIS solution for geomatics professionals.

OPENGIS.ch also offers QFieldCloud as a SaaS or on-premise solution for collaborative field project management.

😲 Some may still be unfamiliar with #QGIS?

It is a free and open-source Geographic Information System that allows creating, editing, visualizing, analyzing, and publishing geospatial data. QGIS is a cross-platform software that can be used on desktops, servers, as a web application, or as a development library.

QGIS is open-source software developed by multiple contributors worldwide. It is an official project of the Open Source Geospatial Foundation (OSGeo) and is supported by the QGIS.org association. See https://qgis.org

A Partnership?

🎉 Today, we are delighted to announce our strategic partnership aimed at strengthening and promoting QField, the mobile application companion of QGIS Desktop.

🌟 This partnership between Oslandia and OPENGIS.ch is a significant step for QField and open-source mobile GIS solutions. It will consolidate the platform, providing users worldwide with simplified access to effective tools for collecting, managing, and analyzing geospatial data in the field.

📱 QField, developed by OPENGIS.ch, is an advanced open-source mobile application that enables GIS professionals to work efficiently in the field, using interactive maps, collecting real-time data, and managing complex geospatial projects on Android, iOS, or Windows mobile devices.

↔ QField is cross-platform, based on the QGIS engine, facilitating seamless project sharing between desktop, mobile, and web applications.

🕸 QFieldCloud (https://qfield.cloud), the collaborative web platform for QField project management, will also benefit from this partnership and will be enhanced to complement the range of tools within the QGIS platform.

Reactions

❤ At Oslandia, we are thrilled to collaborate with OPENGIS.ch on QGIS technologies. Oslandia shares with OPENGIS.ch a common vision of open-source software development: strong involvement in development communities, work that respects the ecosystem, highly skilled expertise, and a commitment to industrial-quality, robust, and sustainable software development.

👩‍💻 With this partnership, we aim to offer our clients the highest expertise across all software components of the QGIS platform, from data capture to dissemination.

🤝 On the OpenGIS.ch side, Marco Bernasocchi adds:

The partnership with Oslandia represents a crucial step in our mission to provide leading mobile GIS tools with a genuine OpenSource credo. The complementarity of our skills will accelerate the development of QField and QFieldCloud and meet the growing needs of our users.

Commitment to open source

🙏 Both companies are committed to continuing to support and improve QField and QFieldCloud as open-source projects, ensuring universal access to this high-quality mobile GIS solution without vendor dependencies.

Ready for field mapping ?

🌏 And now, are you ready for the field?

So, download QField (https://qfield.org/get), create projects in QGIS, and share them on QFieldCloud!

✉ If you need training, support, maintenance, deployment, or specific feature development on these platforms, don’t hesitate to contact us. You will have access to the best experts available: [email protected].

 

Analyzing and visualizing large-scale fire events using QGIS processing with ST-DBSCAN

A while back, one of our ninjas added a new algorithm to QGIS’ processing toolbox named ST-DBSCAN Clustering, short for spatiotemporal density-based spatial clustering of applications with noise. The algorithm groups features that fall within user-defined maximum distance and time duration thresholds.

This post will walk you through one practical use for the algorithm: analyzing and visualizing large-scale fire events based on remotely sensed fire detections. More specifically, we will be looking into one of the larger fire events that occurred in Canada’s Quebec province in June 2023.

Fetching and preparing FIRMS data

NASA’s Fire Information for Resource Management System (FIRMS) offers a fantastic worldwide archive of all fires detected through three spaceborne sources: MODIS C6.1, with a resolution of roughly 1 kilometer, as well as VIIRS S-NPP and VIIRS NOAA-20, with a resolution of 375 meters. Each detected fire is represented by a point that sits at the center of the source’s resolution grid cell.

Each source will cover the whole world several times per day. Since detection is impacted by atmospheric conditions, a given pass by one source might not be able to register an ongoing fire event. It’s therefore advisable to rely on more than one source.

To look into our fire event, we chose the two higher-resolution fire detection sources – VIIRS S-NPP and VIIRS NOAA-20 – covering the whole month of June 2023. The datasets were downloaded from FIRMS’ archive download page.

After downloading the two separate datasets, we combined them into one merged geopackage dataset using QGIS processing toolbox’s Merge Vector Layers algorithm. The merged dataset will be used to conduct the clustering analysis.
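
If you prefer scripting to the toolbox GUI, the same merge can be run from the QGIS Python console with processing.run(). Below is a minimal sketch; the layer names and the output path are hypothetical placeholders:

import processing
from qgis.core import QgsProject

# Grab the two VIIRS layers already loaded in the project
# (layer names are placeholders; adjust them to your project)
snpp = QgsProject.instance().mapLayersByName('viirs_snpp')[0]
noaa20 = QgsProject.instance().mapLayersByName('viirs_noaa20')[0]

# Merge both detection sources into a single GeoPackage layer
result = processing.run('native:mergevectorlayers', {
    'LAYERS': [snpp, noaa20],
    'CRS': 'EPSG:4326',                     # FIRMS points are delivered in WGS84
    'OUTPUT': 'C:/temp/firms_merged.gpkg',  # hypothetical output path
})
print(result['OUTPUT'])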

In addition, we will use QGIS’s field calculator to create a new Date & Time field named ACQ_DATE_TIME using the following expression:

to_datetime("ACQ_DATE" || "ACQ_TIME", 'yyyy-MM-ddhhmm')

This will allow us to calculate precise time differences between two dates.
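
If you want to sanity-check the expression before committing the new field, you can evaluate it on a single feature from the QGIS Python console. A minimal sketch, assuming the merged layer is currently the active layer:

from qgis.core import QgsExpression, QgsExpressionContext, QgsExpressionContextUtils
from qgis.utils import iface

layer = iface.activeLayer()  # the merged FIRMS layer
expr = QgsExpression('to_datetime("ACQ_DATE" || "ACQ_TIME", \'yyyy-MM-ddhhmm\')')

context = QgsExpressionContext()
context.appendScopes(QgsExpressionContextUtils.globalProjectLayerScopes(layer))

# Evaluate on the first feature only, just to verify the format string
feature = next(layer.getFeatures())
context.setFeature(feature)
print(expr.evaluate(context))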

Modeling and running the analysis

The large-scale fire event analysis requires running two distinct algorithms:

  • a spatiotemporal clustering of points to regroup fires into a series of events confined in space and time; and
  • an aggregation of the points within the identified clusters to provide additional information such as the beginning and end date of regrouped events.

This can be achieved with QGIS’ graphical modeler, which sequentially executes the ST-DBSCAN Clustering algorithm and then the Aggregate algorithm against the clustering output.

The model described above outputs two datasets. The first dataset contains single-part points of detected fires with attributes from the original VIIRS products as well as a pair of new attributes: CLUSTER_ID provides a unique cluster identifier for each point, and CLUSTER_SIZE represents the number of points forming each unique cluster. The second dataset contains multi-part point clusters representing fire events with four attributes: CLUSTER_ID and CLUSTER_SIZE, which were discussed above, as well as DATE_START and DATE_END to identify the beginning and end time of a fire event.

In our specific example, we will run the model using the merged dataset we created above as the “fire points layer” and select ACQ_DATE_TIME as the “date field”. The outputs will be saved as separate layers within a geopackage file.

Note that the maximum distance (0.025 degrees) and duration (72 hours) settings to form clusters have been set in the model itself. This can be tweaked by editing the model.
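
For readers who prefer to skip the modeler, the clustering step can also be called directly via processing.run(). The sketch below is only an outline under assumptions: the algorithm id native:stdbscanclustering and the parameter keys reflect recent QGIS versions and may differ in yours, and the time duration has to be supplied in whatever unit the parameter expects, so check processing.algorithmHelp() first:

import processing

# Inspect the exact parameter names and expected units in your QGIS version
processing.algorithmHelp('native:stdbscanclustering')

# Assumed parameter keys; verify them against the help output above
params = {
    'INPUT': 'C:/temp/firms_merged.gpkg',  # merged VIIRS detections (hypothetical path)
    'DATETIME_FIELD': 'ACQ_DATE_TIME',     # the field created with the field calculator
    'MIN_SIZE': 5,                         # minimum number of points per cluster
    'EPS': 0.025,                          # maximum distance, in layer units (degrees here)
    'EPS2': 72,                            # maximum duration (72 hours; confirm the expected unit)
    'OUTPUT': 'C:/temp/fire_clusters.gpkg',
}
result = processing.run('native:stdbscanclustering', params)
print(result['OUTPUT'])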

Visualizing a specific fire event progression on a map

Once the model has provided its outputs, we are ready to start visualizing a fire event on a map. In this practical example, we will focus on detected fires around latitude 53.0960 and longitude -75.3395.

Using the multi-part points dataset, we can identify two clustered events (CLUSTER_ID 109 and 1285) within the month of June 2023. To keep the map canvas responsive, we can filter both of our output layers to only show features with those two cluster identifiers, using the following SQL syntax: CLUSTER_ID IN (109, 1285).
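
The same filter can be applied programmatically with setSubsetString(), the PyQGIS equivalent of the layer filter dialog (the layer names below are placeholders):

from qgis.core import QgsProject

# Restrict both output layers to the two clusters of interest
for name in ('fire_points_clustered', 'fire_events_aggregated'):  # placeholder layer names
    layer = QgsProject.instance().mapLayersByName(name)[0]
    layer.setSubsetString('CLUSTER_ID IN (109, 1285)')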

To show the progression of the fire event over time, we can use a data-defined property to graduate the marker fill of the single-part points dataset along a color ramp. To do so, open the layer’s styling panel, select the simple marker symbol layer, click on the data-defined property button next to the fill color and pick the Assistant menu item.

In the assistant panel, set the source expression to the following: day(age(to_date('2023-07-01'), "ACQ_DATE_TIME")). This will give us the number of days between a given point and an arbitrary reference date (2023-07-01 here). Set the value range from 0 to 30 and pick a color ramp of your choice.
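
For reference, a roughly equivalent data-defined override can also be set from the Python console by combining scale_linear() with ramp_color(). This is a hedged sketch rather than exactly what the assistant generates; it assumes a simple marker symbol, a color ramp named 'Viridis' in your style library, and that the enum spelling matches your QGIS version:

from qgis.core import QgsProperty, QgsSymbolLayer
from qgis.utils import iface

layer = iface.activeLayer()  # the filtered single-part points layer

# Map the day difference (0 to 30) onto the 'Viridis' color ramp
expression = (
    "ramp_color('Viridis', "
    "scale_linear(day(age(to_date('2023-07-01'), \"ACQ_DATE_TIME\")), 0, 30, 0, 1))"
)

symbol_layer = layer.renderer().symbol().symbolLayers()[0]
symbol_layer.setDataDefinedProperty(
    QgsSymbolLayer.PropertyFillColor,
    QgsProperty.fromExpression(expression),
)
layer.triggerRepaint()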

When applying this style, the resulting map will provide a visual representation of the spread of the fire event over time.

Having identified a fire event via clustering makes it easy to pinpoint the “starting point” of a fire by searching for the earliest detection amongst the thousands of points. This crucial bit of analysis can help better understand the cause of the fire and, alongside the color grading of neighboring points, its directionality as it expanded over time.

Analyzing a fire event through histogram

Through QGIS’ DataPlotly plugin, it is possible to create a histogram of fire events. After installing the plugin, we can open the DataPlotly panel and configure our histogram.

Set the plot type to histogram and pick the model’s single-part points dataset as the layer to gather data from. Make sure that the layer has been filtered to only show a single fire event. Then, set the X field to the following layer attribute: “ACQ_DATE”.

You can then hit the Create Plot button, go grab a coffee, and enjoy the resulting histogram which will appear after a minute or so.

While not perfect, a histogram can quickly provide a good sense of a fire event’s “peak” over a period of time.
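
If you would rather script the plot than click through the DataPlotly panel, a plain matplotlib bar chart of detections per acquisition day gives a similar picture. This is not DataPlotly’s API, just a generic sketch, and it assumes matplotlib is available in your QGIS Python environment:

from collections import Counter

import matplotlib.pyplot as plt
from qgis.utils import iface

layer = iface.activeLayer()  # the filtered single-part points layer

def as_day(value):
    # QGIS may return a QDate, so normalize to an ISO string for labelling
    return value.toString('yyyy-MM-dd') if hasattr(value, 'toString') else str(value)

# Count detections per acquisition day
counts = Counter(as_day(f['ACQ_DATE']) for f in layer.getFeatures())
days = sorted(counts)

plt.bar(days, [counts[d] for d in days])
plt.xlabel('Acquisition date')
plt.ylabel('Number of detections')
plt.title('Fire detections per day')
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()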

Comparing geographic data analysis in R and Python

Today, I want to point out a blog post over at

https://geocompx.org/post/2023/ogh23/

written together with my fellow “Geocomputation with Python” co-authors Robin Lovelace, Michael Dorman, and Jakub Nowosad.

In this blog post, we talk about our experience teaching R and Python for geocomputation. The context of this blog post is the OpenGeoHub Summer School 2023 which has courses on R, Python and Julia. The focus of the blog post is on geographic vector data, meaning points, lines, polygons (and their ‘multi’ variants) and the attributes associated with them. We plan to cover raster data in a future post.

I’ve archived my Tweets: Goodbye Twitter, Hello Mastodon

Today, Jeff Sikes (@[email protected]) alerted me to the fact that “Twitter has removed all media attachments from 2014 and prior” (source: https://firefish.social/notes/9imgvtckzqffboxt). So far, it seems unclear whether this was intentional or a system failure (source: https://mas.to/@carnage4life/110922114407553901).

Since I’ve been on Twitter since 2011, this means that some media files are now lost. While the loss of a few low-res images is probably not a major loss for humanity, I would prefer to have some control over when and how content I created vanishes. So, to avoid losing more content, I have followed Jeff’s recommendation to create a proper archival page:

https://anitagraser.github.io/twitter-archive/

It is based on an export I pulled in October 2022 when I started to use Mastodon as my primary social media account. Unfortunately, this export did not include media files.

To follow me in the future, find me on:

https://fosstodon.org/@underdarkGIS

Btw, a recent study featured in Nature News shows that Mastodon is the top-ranking Twitter replacement for scientists.

To find other interesting people on Mastodon, there are many useful tools and lists, including, for example:


Update 2023-11-04: I’ve completely deleted my X / Twitter account. If you find any account pretending to be me on that platform, it’s an impostor.
