
Hi ‘Geocomputation with Python’

Today, I want to point out a blog post over at

https://geocompx.org/post/2023/geocompy-bp1/

In this post, Jakub Nowosad introduces our book “Geocomputation with Python”, also known as geocompy. It is an open-source book on geographic data analysis with Python, written by Michael Dorman, Jakub Nowosad, Robin Lovelace, and me with contributions from others. You can find it online at https://py.geocompx.org/

Mapping relationships between Neo4j spatial nodes with GeoPandas

Previously, we mapped Neo4j spatial nodes. This time, we want to take it one step further and map relationships.

A prime example are the relationships between GTFS StopTime and Trip nodes. For example, this is the Cypher query to get all StopTime nodes of Trip 17:

MATCH 
    (t:Trip  {id: "17"})
    <-[:BELONGS_TO]-
    (st:StopTime) 
RETURN st

To get the stop locations, we also need to get the stop nodes:

MATCH 
    (t:Trip {id: "17"})
    <-[:BELONGS_TO]-
    (st:StopTime)
    -[:STOPS_AT]->
    (s:Stop)
RETURN st, s

Adapting our code from the previous post, we can plot the stops:

import geopandas as gpd
from shapely.geometry import Point

QUERY = """MATCH
    (t:Trip {id: "17"})
    <-[:BELONGS_TO]-
    (st:StopTime)
    -[:STOPS_AT]->
    (s:Stop)
RETURN st, s
ORDER BY st.stopSequence
"""

with driver.session(database="neo4j") as session:
    tx = session.begin_transaction()
    results = tx.run(QUERY)
    df = results.to_df(expand=True)
    gdf = gpd.GeoDataFrame(
        df[['s().prop.name']], crs=4326,
        geometry=df["s().prop.location"].apply(Point)
    )
    tx.close()
m = gdf.explore()
m

Ordering by stop sequence is optional for plotting the stops as points. But since the GeoDataFrame is sorted, we could aggregate all the points into a LineString to plot the route, as sketched below.
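Here is a minimal sketch of that aggregation (my addition, reusing the gdf from above):

from shapely.geometry import LineString

route = gpd.GeoDataFrame(geometry=[LineString(gdf.geometry.tolist())], crs=4326)
route.explore()

But I want to try something different: we’ll use the NEXT_STOP relationships to get a DataFrame of the start and end stops for each segment: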

QUERY = """
MATCH (t:Trip {id: "17"})
   <-[:BELONGS_TO]-
   (st1:StopTime)
   -[:NEXT_STOP]->
   (st2:StopTime)
MATCH (st1)-[:STOPS_AT]->(s1:Stop)
MATCH (st2)-[:STOPS_AT]->(s2:Stop)
RETURN st1, st2, s1, s2
"""

from shapely.geometry import Point, LineString

def make_line(row):
    s1 = Point(row["s1().prop.location"])
    s2 = Point(row["s2().prop.location"])
    return LineString([s1, s2])

with driver.session(database="neo4j") as session:
    tx = session.begin_transaction()
    results = tx.run(QUERY)
    df = results.to_df(expand=True)
    gdf = gpd.GeoDataFrame(
        df[['s1().prop.name']], crs=4326,
        geometry=df.apply(make_line, axis=1)
    )
    tx.close()
gdf.explore(m=m)

Finally, we can also use Cypher to calculate the travel time between two stops:

MATCH (t:Trip {id: "17"})
   <-[:BELONGS_TO]-
   (st1:StopTime)
   -[:NEXT_STOP]->
   (st2:StopTime)
MATCH (st1)-[:STOPS_AT]->(s1:Stop)
MATCH (st2)-[:STOPS_AT]->(s2:Stop)
RETURN st1.departureTime AS time1, 
   st2.arrivalTime AS time2, 
   s1.location AS geom1, 
   s2.location AS geom2, 
   duration.inSeconds(
      time(st1.departureTime), 
      time(st2.arrivalTime)
   ).seconds AS traveltime
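To put these travel times on the map as well, we could build line segments from geom1 and geom2, similar to the make_line approach above. A rough sketch (my addition; it assumes the Cypher above is stored in a QUERY string and that the returned locations are neo4j spatial points with x/y attributes):

import pandas as pd
import geopandas as gpd
from shapely.geometry import LineString

records, summary, keys = driver.execute_query(QUERY, database_="neo4j")

df = pd.DataFrame(
    [(r["time1"], r["time2"], r["traveltime"]) for r in records],
    columns=["time1", "time2", "traveltime"],
)
gdf = gpd.GeoDataFrame(
    df, crs=4326,
    geometry=[
        LineString([(r["geom1"].x, r["geom1"].y), (r["geom2"].x, r["geom2"].y)])
        for r in records
    ],
)
gdf.explore(column="traveltime")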

As always, here’s the notebook: https://github.com/anitagraser/QGIS-resources/blob/master/qgis3/notebooks/neo4j.ipynb

Mapping Neo4j spatial nodes with GeoPandas

In the recent post Setting up a graph db using GTFS data & Neo4J, we noted that — unfortunately — Neomap is not an option to visualize spatial nodes anymore.

GeoPandas to the rescue!

But first we need the neo4j Python driver:

pip install neo4j

Then we can connect to our database. The default user name is neo4j and you get to pick the password when creating the database:

from neo4j import GraphDatabase

URI = "neo4j://localhost"
AUTH = ("neo4j", "password")

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    driver.verify_connectivity()

Once we have confirmed that the connection works as expected, we can run a query:

QUERY = "MATCH (p:Stop) RETURN p.name AS name, p.location AS geom"

records, summary, keys = driver.execute_query(
    QUERY, database_="neo4j",
)

for rec in records:
    print(rec)

Nice. There we have our GTFS stops, their names and their locations. But how to put them on a map?

Conveniently, there is a to_df() function on the Neo4j driver’s query result:

import geopandas as gpd

with driver.session(database="neo4j") as session:
    tx = session.begin_transaction()
    results = tx.run(QUERY)
    df = results.to_df(expand=True)
    df = df[df["geom[].0"] > 0]
    gdf = gpd.GeoDataFrame(
        df['name'], crs=4326,
        geometry=gpd.points_from_xy(df['geom[].0'], df['geom[].1']))
    print(gdf)
    tx.close()

Since some of the nodes lack geometries, I added a quick and dirty hack to get rid of these nodes because — otherwise — gdf.explore() will complain about None geometries.
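Since the missing geometries show up as NaN values in the expanded columns, an equivalent, slightly cleaner variant would be:

df = df.dropna(subset=["geom[].0", "geom[].1"])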

You can find this notebook at: https://github.com/anitagraser/QGIS-resources/blob/1e4ea435c9b1795ba5b170ddb176aa83689112eb/qgis3/notebooks/neo4j.ipynb

Next step will have to be the relationships. Stay posted.

Analyzing mobility hotspots with MovingPandas & CARTO

Today, I want to point out a blog post over at

https://carto.com/blog/analyzing-mobility-hotspots-with-movingpandas

written together with my fellow co-authors and EMERALDS project team members Argyrios Kyrgiazos and Helen McKenzie.

In this blog post, we walk you through a trajectory hotspot analysis using open taxi trajectory data from Kaggle, combining data preparation with MovingPandas (including the new OutlierCleaner) and spatiotemporal hotspot analysis from CARTO.

Setting up a graph db using GTFS data & Neo4J

In a recent post, we looked into a graph-based model for maritime mobility data and how it may be represented in Neo4J. Today, I want to look into another type of mobility data: public transport schedules in GTFS format.

In this post, I’ll be using the public GTFS data for Riga since Riga is one of the demo sites for our current EMERALDS research project.

The workflow is heavily inspired by Bert Radke’s post “Loading the UK GTFS data feed” from 2021 and his import Cypher script, which I used as a template, adjusted to the requirements of the Riga dataset, and updated to recent Neo4j changes.

Here we go.

Since a GTFS export is basically a ZIP archive full of CSVs, we will be making good use of Neo4j’s CSV loading capabilities. The basic script for importing the stops file and creating point geometries from lat and lon values would be:

LOAD CSV with headers
FROM "file:///stops.txt"
AS row
CREATE (:Stop {
   stop_id: row["stop_id"],
   name: row["stop_name"],
   location: point({
      longitude: toFloat(row["stop_lon"]),
      latitude: toFloat(row["stop_lat"])
   })
})

This requires stops.txt to be located in the import directory of your Neo4j database. If we run the above script while the file is missing, Neo4j will tell us where it tried to look for it. In my case, the directory ended up being:

C:\Users\Anita\.Neo4jDesktop\relate-data\dbmss\dbms-72882d24-bf91-4031-84e9-abd24624b760\import

So, let’s put all GTFS CSVs into that directory and we should be good to go.

Let’s start with the agency file:

load csv with headers from
'file:///agency.txt' as row
create (a:Agency {
   id: row.agency_id, 
   name: row.agency_name, 
   url: row.agency_url, 
   timezone: row.agency_timezone, 
   lang: row.agency_lang
});

… Added 1 label, created 1 node, set 5 properties, completed after 31 ms.

The routes file does not include agency info but, luckily, there is only one agency, so we can hard-code it:

load csv with headers from
'file:///routes.txt' as row
match (a:Agency {id: "rigassatiksme"})
create (a)-[:OPERATES]->(r:Route {
   id: row.route_id, 
   shortName: row.route_short_name,
   longName: row.route_long_name, 
   type: toInteger(row.route_type)
});

… Added 81 labels, created 81 nodes, set 324 properties, created 81 relationships, completed after 28 ms.

From stops, I’m removing non-existent or empty columns:

load csv with headers from
'file:///stops.txt' as row
create (s:Stop {
   id: row.stop_id, 
   name: row.stop_name, 
   location: point({
      latitude: toFloat(row.stop_lat), 
      longitude: toFloat(row.stop_lon)
   }),
   code: row.stop_code
});

… Added 1671 labels, created 1671 nodes, set 5013 properties, completed after 71 ms.

From trips, I’m also removing non-existent or empty columns:

load csv with headers from
'file:///trips.txt' as row
match (r:Route {id: row.route_id})
create (r)<-[:USES]-(t:Trip {
   id: row.trip_id, 
   serviceId: row.service_id,
   headSign: row.trip_headsign, 
   direction_id: toInteger(row.direction_id),
   blockId: row.block_id,
   shapeId: row.shape_id
});

… Added 14427 labels, created 14427 nodes, set 86562 properties, created 14427 relationships, completed after 875 ms.

Slowly getting there. We now have around 16k nodes in our graph.
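To double-check the counts per node label, a quick query (my addition, not part of Bert’s script) does the trick:

MATCH (n) RETURN labels(n) AS label, count(*) AS nodes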

Finally, it’s stop times time. This is where the serious information is. This file is much larger than all previous ones, with over 300k lines (i.e. times when a PT vehicle stops).

This requires another tweak to Bert’s script since using periodic commit is not supported anymore: “The PERIODIC COMMIT query hint is no longer supported. Please use CALL { … } IN TRANSACTIONS instead.” So I ended up using the following, based on https://community.neo4j.com/t/best-practice-for-replacement-of-using-periodic-commit-to-call-in-transactions/48636/2:

:auto
load csv with headers from
'file:///stop_times.txt' as row
CALL { with row
match (t:Trip {id: row.trip_id}), (s:Stop {id: row.stop_id})
create (t)<-[:BELONGS_TO]-(st:StopTime {
   arrivalTime: row.arrival_time, 
   departureTime: row.departure_time,
   stopSequence: toInteger(row.stop_sequence)})-[:STOPS_AT]->(s)
} IN TRANSACTIONS OF 10 ROWS;

… Added 351388 labels, created 351388 nodes, set 1054164 properties, created 702776 relationships, completed after 1364220 ms.

As you can see, this took a while. But now we have all nodes in place.

The final statement adds additional relationships between consecutive stop times:

call apoc.periodic.iterate('match (t:Trip) return t',
'match (t)<-[:BELONGS_TO]-(st) with st order by st.stopSequence asc
with collect(st) as stops
unwind range(0, size(stops)-2) as i
with stops[i] as curr, stops[i+1] as next
merge (curr)-[:NEXT_STOP]->(next)', {batchMode: "BATCH", parallel: true, batchSize: 1});

This fails with: “There is no procedure with the name apoc.periodic.iterate registered for this database instance. Please ensure you've spelled the procedure name correctly and that the procedure is properly deployed.”

So, let’s install APOC. That’s a plugin which we can install into our database from within Neo4j Desktop.

After restarting the db, we can run the query.

No errors. Sounds good.

Let’s have a look at what we ended up with. Here are 25 random Trips. I expanded one of them to show its associated StopTimes. We can see the relations between consecutive StopTimes, and I’ve expanded the final five StopTimes to show their linked Stops.

I also wanted to visualize the stops on a map. There used to be a neat app called Neomap which could be installed easily.

However, Neomap does not seem to be compatible with the latest Neo4j.

So this final step will have to wait for another time.

MovingPandas v0.17 released!

Over the last couple of months, I have not been posting release announcements here, so there is quite a bit to catch up on.

The latest v0.17.2 release is now available from conda-forge.

New features (since 0.14):

  • Improved MovingFeatures MF-JSON support
    • Ability to parse a MovingFeatureCollection from a JSON file #330
    • GeoDataFrame to MF-JSON #325
    • Adding read_mf_dict function #357
  • New OutlierCleaner #334
  • Faster stop detection #316
  • New arrow markers to indicate trajectory direction in plots fb1174b 
  • Distance, speed, and acceleration unit handling #295
  • New aggregation parameter (agg) for to_traj_gdf() 5745068 
  • New get_segments_between() for TrajectoryCollection #287 

Behind the scenes:

  • We now have a dedicated Github organization: https://github.com/movingpandas that houses all related repositories
  • And we finally added https support to the website

As always, all tutorials are available from the movingpandas-examples repository and on MyBinder.

If you have questions about using MovingPandas or just want to discuss new ideas, you’re welcome to join our discussion forum.

Adding basemaps to PyQGIS maps in Jupyter notebooks

In the previous post, we investigated how to bring QGIS maps into Jupyter notebooks.

Today, we’ll take the next step and add basemaps to our maps. This is trickier than I would have expected. In particular, I was fighting with “invalid” OSM tile layers until I realized that my QGIS application instance somehow lacked the “WMS” provider.

In addition, getting basemaps to work also means that we have to take care of layer and project CRSes and on-the-fly reprojections. So let’s get to work:

from IPython.display import Image
from PyQt5.QtGui import QColor
from PyQt5.QtWidgets import QApplication
from qgis.core import QgsApplication, QgsVectorLayer, QgsProject, QgsRasterLayer, \
    QgsCoordinateReferenceSystem, QgsProviderRegistry, QgsSimpleMarkerSymbolLayerBase
from qgis.gui import QgsMapCanvas
app = QApplication([])
qgs = QgsApplication([], False)
qgs.setPrefixPath(r"C:\temp", True)  # setting a prefix path should enable the WMS provider
qgs.initQgis()
canvas = QgsMapCanvas()
project = QgsProject.instance()
map_crs = QgsCoordinateReferenceSystem('EPSG:3857')
canvas.setDestinationCrs(map_crs)
print("providers: ", QgsProviderRegistry.instance().providerList())

To add an OSM basemap, we use the xyz tiles option of the WMS provider:

urlWithParams = 'type=xyz&url=https://tile.openstreetmap.org/{z}/{x}/{y}.png&zmax=19&zmin=0&crs=EPSG3857'
rlayer = QgsRasterLayer(urlWithParams, 'OpenStreetMap', 'wms')  
print(rlayer.crs())
if rlayer.isValid():
    project.addMapLayer(rlayer)
else:
    print('invalid layer')
    print(rlayer.error().summary()) 

If there are issues with the WMS provider, rlayer.error().summary() should point them out.

With both the vector layer and the basemap ready, we can finally plot the map:

canvas.setExtent(rlayer.extent())
plot_layers([vlayer,rlayer])
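The plot_layers helper comes from the previous post’s notebook. If you don’t have it at hand, a minimal sketch could look like this (my reconstruction from this post’s building blocks, not the original helper):

def plot_layers(layers, filename='map.png'):
    # Render the given layers on the canvas and show the result inline
    canvas.setLayers(layers)
    canvas.show()
    app.exec_()  # close the canvas window to continue
    canvas.saveAsImage(filename)
    return Image(filename)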

Of course, we can get more creative and style our vector layers:

vlayer.renderer().symbol().setColor(QColor("yellow"))
vlayer.renderer().symbol().symbolLayer(0).setShape(QgsSimpleMarkerSymbolLayerBase.Star)
vlayer.renderer().symbol().symbolLayer(0).setSize(10)
plot_layers([vlayer,rlayer])

And to switch to other basemaps, we just need to update the URL accordingly, for example, to load Carto tiles instead:

urlWithParams = 'type=xyz&url=http://basemaps.cartocdn.com/dark_all/{z}/{x}/{y}.png&zmax=19&zmin=0&crs=EPSG3857'
rlayer2 = QgsRasterLayer(urlWithParams, 'Carto', 'wms')  
print(rlayer2.crs())
if rlayer2.isValid():
    project.addMapLayer(rlayer2)
else:
    print('invalid layer')
    print(rlayer2.error().summary()) 
    
plot_layers([vlayer,rlayer2])

You can find the whole notebook at: https://github.com/anitagraser/QGIS-resources/blob/master/qgis3/notebooks/basemaps.ipynb

Bringing QGIS maps into Jupyter notebooks

Earlier this year, we explored how to use PyQGIS in Jupyter notebooks to run QGIS Processing tools from a notebook and visualize the Processing results using GeoPandas plots.

Today, we’ll go a step further and replace the GeoPandas plots with maps rendered by QGIS.

The following script presents a minimum solution to this challenge: initializing a QGIS application, canvas, and project; then loading a GeoJSON and displaying it:

from IPython.display import Image

from PyQt5.QtGui import QColor
from PyQt5.QtWidgets import QApplication

from qgis.core import QgsApplication, QgsVectorLayer, QgsProject, QgsSymbol, \
    QgsRendererRange, QgsGraduatedSymbolRenderer, \
    QgsArrowSymbolLayer, QgsLineSymbol, QgsSingleSymbolRenderer, \
    QgsSymbolLayer, QgsProperty
from qgis.gui import QgsMapCanvas

app = QApplication([])
qgs = QgsApplication([], False)
canvas = QgsMapCanvas()
project = QgsProject.instance()

vlayer = QgsVectorLayer("./data/traj.geojson", "My trajectory")
if not vlayer.isValid():
    print("Layer failed to load!")

def saveImage(path, show=True): 
    canvas.saveAsImage(path)
    if show: return Image(path)

project.addMapLayer(vlayer)
canvas.setExtent(vlayer.extent())
canvas.setLayers([vlayer])
canvas.show()
app.exec_()

saveImage("my-traj.png")

When this code is executed, it opens a separate window that displays the map canvas. And in this window, we can even pan and zoom to adjust the map. The line color, however, is assigned randomly (like when we open a new layer in QGIS).

To set a specific color, we can use:

vlayer.renderer().symbol().setColor(QColor("red"))

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")

But regular lines are boring. We could easily create those with GeoPandas plots.

Things get way more interesting when we use QGIS’ custom symbols and renderers. For example, to draw arrows using a QgsArrowSymbolLayer, we can write:

vlayer.renderer().symbol().appendSymbolLayer(QgsArrowSymbolLayer())

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")

We can also create a QgsGraduatedSymbolRenderer:

geom_type = vlayer.geometryType()
myRangeList = []

symbol = QgsSymbol.defaultSymbol(geom_type)
symbol.setColor(QColor("#3333ff"))
myRange = QgsRendererRange(0, 1, symbol, 'Group 1')
myRangeList.append(myRange)

symbol = QgsSymbol.defaultSymbol(geom_type)
symbol.setColor(QColor("#33ff33"))
myRange = QgsRendererRange(1, 3, symbol, 'Group 2')
myRangeList.append(myRange)

myRenderer = QgsGraduatedSymbolRenderer('speed', myRangeList)
vlayer.setRenderer(myRenderer)

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")

And we can combine both QgsGraduatedSymbolRenderer and QgsArrowSymbolLayer:

geom_type = vlayer.geometryType()
myRangeList = []

symbol = QgsSymbol.defaultSymbol(geom_type)
symbol.appendSymbolLayer(QgsArrowSymbolLayer())
symbol.setColor(QColor("#3333ff"))
myRange = QgsRendererRange(0, 1, symbol, 'Group 1')
myRangeList.append(myRange)

symbol = QgsSymbol.defaultSymbol(geom_type)
symbol.appendSymbolLayer(QgsArrowSymbolLayer())
symbol.setColor(QColor("#33ff33"))
myRange = QgsRendererRange(1, 3, symbol, 'Group 2')
myRangeList.append(myRange)

myRenderer = QgsGraduatedSymbolRenderer('speed', myRangeList)
vlayer.setRenderer(myRenderer)

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")

Maybe the most powerful option is to use data-defined symbology. For example, to control line width and color:

renderer = QgsSingleSymbolRenderer(QgsSymbol.defaultSymbol(geom_type))

exp_width = 'scale_linear("speed", 0, 3, 0, 7)'
exp_color = "coalesce(ramp_color('Viridis',scale_linear(\"speed\", 0, 3, 0, 1)), '#000000')"

# https://qgis.org/pyqgis/3.0/core/Symbol/QgsSymbolLayer.html?highlight=property#qgis.core.QgsSymbolLayer.PropertySize
renderer.symbol().symbolLayer(0).setDataDefinedProperty(
    QgsSymbolLayer.PropertyStrokeWidth, QgsProperty.fromExpression(exp_width))
renderer.symbol().symbolLayer(0).setDataDefinedProperty(
    QgsSymbolLayer.PropertyStrokeColor, QgsProperty.fromExpression(exp_color))
renderer.symbol().symbolLayer(0).setDataDefinedProperty(
    QgsSymbolLayer.PropertyCapStyle, QgsProperty.fromExpression("'round'"))

vlayer.setRenderer(renderer)

vlayer.triggerRepaint()
canvas.show()
app.exec_()
saveImage("my-traj.png")

Find the full notebook at: https://github.com/anitagraser/QGIS-resources/blob/master/qgis3/notebooks/layer-styling.ipynb

Exploring a hierarchical graph-based model for mobility data representation and analysis

Today’s post is a first quick dive into Neo4j (really just getting my toes wet). It’s based on a publicly available Neo4j dump containing mobility data, ship trajectories to be specific. You can find this data and the setup instructions at:

Maryam Maslek ELayam, Cyril Ray, & Christophe Claramunt. (2022). A hierarchical graph-based model for mobility data representation and analysis [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6405212

I was made aware of this work since they cited MovingPandas in their paper in Data & Knowledge Engineering: “The implementation combines several open source tools such as Python, MovingPandas library, Uber H3 index, Neo4j graph database management system”.

Once set up, this gives us a database with three hierarchical levels.

Neo4j comes with a nice graphical browser that lets us explore the data. We can switch between levels and click on individual node labels to get a quick preview.

Level 2 is a generalization / aggregation of level 1. Expanding the graph of one of the level 2 nodes shows its connection to level 1. For example, the level 2 port node “Audierne” actually refers to two level 1 nodes.

Every “road” level 1 relationship between ports provides information about the ship, its arrival, departure, travel time, and speed. We can see that these two level 1 ports must be pretty close since travel times are only 5 minutes.

Further expanding one of the port level 1 nodes shows its connection to waypoints of level 1.

Switching to level 2, we gain access to nodes of type Traj(ectory). Additionally, the road level 2 relationships represent aggregations of the trajectories; for example, here is a relationship with only one associated trajectory.

There are also some odd relationships. For example, trajectory 43 has two “ends” and “begins” relationships, and there are also two “road” relationships referencing this trajectory (with identical information, only differing in their automatic <id>). I’m not yet sure if that is a feature or a bug.

On level 1, we also have access to ship nodes. They are connected to ports and waypoints. However, exploring them visually is challenging. Things look fine at first.

But after a while, once all relationships have loaded, we have it: the MIGHTY BALL OF YARN ™.

I guess this is the point where it becomes necessary to get accustomed to the query language. And no, it’s not SQL, it is Cypher. For example, selecting a specific trajectory with id 0 looks like this:

MATCH (t1 {traj_id: 0}) RETURN t1

But more on this another time.


This post is part of a series. Read more about movement data in GIS.

Data engineering for Mobility Data Science (with Python and DVC)

This summer, I had the honor to — once again — speak at the OpenGeoHub Summer School. This time, I wanted to challenge the students and myself by not just doing MovingPandas but by introducing both MovingPandas and DVC for Mobility Data Science.

I’ve previously written about DVC and how it may be used to track geoprocessing workflows with QGIS & DVC. In my summer school session, we go into the details of how to use DVC to keep track of a MovingPandas movement data analytics workflow.

Here is the recording of the session live stream and you can find the materials at https://github.com/movingpandas/movingpandas-examples/blob/opengeohub2023/README.md


This post is part of a series. Read more about movement data in GIS.

Comparing geographic data analysis in R and Python

Today, I want to point out a blog post over at

https://geocompx.org/post/2023/ogh23/

written together with my fellow “Geocomputation with Python” co-authors Robin Lovelace, Michael Dorman, and Jakub Nowosad.

In this blog post, we talk about our experience teaching R and Python for geocomputation. The context of this blog post is the OpenGeoHub Summer School 2023 which has courses on R, Python and Julia. The focus of the blog post is on geographic vector data, meaning points, lines, polygons (and their ‘multi’ variants) and the attributes associated with them. We plan to cover raster data in a future post.

I’ve archived my Tweets: Goodbye Twitter, Hello Mastodon

Today, Jeff Sikes (@[email protected]) alerted me to the fact that “Twitter has removed all media attachments from 2014 and prior” (source: https://firefish.social/notes/9imgvtckzqffboxt). So far, it seems unclear whether this was intentional or a system failure (source: https://mas.to/@carnage4life/110922114407553901).

Since I’ve been on Twitter since 2011, this means that some media files are now lost. While the loss of a few low-res images is probably not a major loss for humanity, I would prefer to have some control over when and how content I created vanishes. So, to avoid losing more content, I have followed Jeff’s recommendation to create a proper archival page:

https://anitagraser.github.io/twitter-archive/

It is based on an export I pulled in October 2022 when I started to use Mastodon as my primary social media account. Unfortunately, this export did not include media files.

To follow me in the future, find me on:

https://fosstodon.org/@underdarkGIS

Btw, a recent study covered by Nature News shows that Mastodon is the top-ranking Twitter replacement for scientists.

To find other interesting people on Mastodon, there are many useful tools and lists.


Update 2023-11-04: I’ve completely deleted my X / Twitter account. If you find any account pretending they are me on that platform, it’s an impostor.

Analyzing video-based bicycle trajectories

Did you know that MovingPandas also supports local image coordinates? Indeed, it does.

In today’s post, we will explore how we can use this feature to analyze bicycle tracks extracted from video footage published by Michael Szell @mszll.

The bicycle trajectory coordinates are stored in two separate list columns: xs_640x360 and ys_640x360.

This format is kind of similar to the Kaggle Taxi dataset we worked with in the previous post. However, to use the solution we implemented there, we need to combine the x and y coordinates into nice (x,y) tuples:

df['coordinates'] = df.apply(
    lambda row: list(zip(row['xs_640x360'], row['ys_640x360'])), axis=1)
df.drop(columns=['xs_640x360', 'ys_640x360'], inplace=True)

Afterwards, we can create the points and compute the proper timestamps from the frame numbers:

from datetime import datetime, timedelta
from shapely.geometry import Point

def compute_datetime(row):
    # some educated guessing going on here: the paper states that the video covers 2021-06-09 07:00-08:00
    d = datetime(2021,6,9,7,0,0) + (row['frame_in'] + row['running_number']) * timedelta(seconds=2)
    return d

def create_point(xy):
    try: 
        return Point(xy)
    except TypeError:  # when there are nan values in the input data
        return None

new_df = df.head().explode('coordinates')
new_df['geometry'] = new_df['coordinates'].apply(create_point)
new_df['running_number'] = new_df.groupby('id').cumcount()
new_df['datetime'] = new_df.apply(compute_datetime, axis=1)
new_df.drop(columns=['coordinates', 'frame_in', 'running_number'], inplace=True)
new_df

Once the points and timestamps are ready, we can create the MovingPandas TrajectoryCollection. Note how we explicitly state that there is no CRS for this dataset (crs=None):

trajs = mpd.TrajectoryCollection(
    gpd.GeoDataFrame(new_df), 
    traj_id_col='id',  t='datetime', crs=None)

Plotting trajectories with image coordinates

Similarly, to plot these trajectories, we should tell hvplot that it should not fetch any background map tiles (tiles=None) and that the coordinates are not geographic (geo=False):
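The call could look roughly like this (the keyword arguments are my assumption; they are standard hvplot options):

trajs.hvplot(tiles=None, geo=False, line_width=3, frame_width=640, frame_height=360)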

If you want to explore the full source code, you can find my Github fork with the Jupyter notebook at: https://github.com/anitagraser/desirelines/blob/main/mpd.ipynb

The repository also contains a camera image of the intersection, which we can use as a background for our trajectory plots:

import holoviews as hv

bg_img = hv.RGB.load_image('img/intersection2.png', bounds=(0,0,640,360))
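The background image and the trajectory plot can then be combined using the holoviews overlay operator, for example:

bg_img * trajs.hvplot(tiles=None, geo=False)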

One important caveat is that speed will be calculated in pixels per second. So when we plot the bicycle speed, the segments closer to the camera will appear faster than the segments in the background.

To fix this issue, we would have to correct for the distortions of the camera lens and perspective. I’m sure that there is specialized software for this task but, for the purpose of this post, I’m going to seize the opportunity to finally test out the VectorBender plugin.

Georeferencing the trajectories using QGIS VectorBender plugin

Let’s load the five test trajectories and the camera image to QGIS. To make sure that they align properly, both are set to the same CRS and I’ve created the following basic world file for the camera image:

1
0
0
-1
0
360

(The six numbers are the standard world file parameters: the pixel sizes 1 and -1, two zero rotation terms, and the coordinates 0, 360 of the top-left pixel, matching the 640×360 image.) Then we can use the VectorBender tools to georeference the trajectories by linking locations from the camera image to locations on aerial images. You can see the whole process in action here.

After around 15 minutes of linking control points, VectorBender comes up with the georeferenced trajectory result.

Not bad for a quick-and-dirty hack. Some points on the borders of the image could not be georeferenced since I wasn’t always able to identify suitable control points at the camera image borders. So it won’t be perfect but should improve speed estimates.


This post is part of a series. Read more about movement data in GIS.

How to use Kaggle’s Taxi Trajectory Data in MovingPandas

Kaggle’s “Taxi Trajectory Data from ECML/PKDD 15: Taxi Trip Time Prediction (II) Competition” is one of the most used mobility / vehicle trajectory datasets in computer science. However, in contrast to other similar datasets, Kaggle’s taxi trajectories are provided in a format that is not readily usable in MovingPandas since the spatiotemporal information is provided as:

  • TIMESTAMP: (integer) Unix Timestamp (in seconds). It identifies the trip’s start;
  • POLYLINE: (String): It contains a list of GPS coordinates (i.e. WGS84 format) mapped as a string. The beginning and the end of the string are identified with brackets (i.e. [ and ], respectively). Each pair of coordinates is also identified by the same brackets as [LONGITUDE, LATITUDE]. This list contains one pair of coordinates for each 15 seconds of trip. The last list item corresponds to the trip’s destination while the first one represents its start;

Therefore, we need to create a DataFrame with one point + timestamp per row before we can use MovingPandas to create Trajectories and analyze them.

But first things first. Let’s download the dataset:

import datetime
import pandas as pd
import geopandas as gpd
import movingpandas as mpd
import opendatasets as od
from os.path import exists
from shapely.geometry import Point

input_file_path = 'taxi-trajectory/train.csv'

def get_porto_taxi_from_kaggle():
    if not exists(input_file_path):
        od.download("https://www.kaggle.com/datasets/crailtap/taxi-trajectory")

get_porto_taxi_from_kaggle()
df = pd.read_csv(input_file_path, nrows=10, usecols=['TRIP_ID', 'TAXI_ID', 'TIMESTAMP', 'MISSING_DATA', 'POLYLINE'])
df.POLYLINE = df.POLYLINE.apply(eval)  # string to list
df

And now for the remodelling:

def unixtime_to_datetime(unix_time):
    return datetime.datetime.fromtimestamp(unix_time)

def compute_datetime(row):
    unix_time = row['TIMESTAMP']
    offset = row['running_number'] * datetime.timedelta(seconds=15)
    return unixtime_to_datetime(unix_time) + offset

def create_point(xy):
    try: 
        return Point(xy)
    except TypeError:  # when there are nan values in the input data
        return None
 
new_df = df.explode('POLYLINE')
new_df['geometry'] = new_df['POLYLINE'].apply(create_point)
new_df['running_number'] = new_df.groupby('TRIP_ID').cumcount()
new_df['datetime'] = new_df.apply(compute_datetime, axis=1)
new_df.drop(columns=['POLYLINE', 'TIMESTAMP', 'running_number'], inplace=True)
new_df

And that’s it. Now we can create the trajectories:

trajs = mpd.TrajectoryCollection(
    gpd.GeoDataFrame(new_df, crs=4326), 
    traj_id_col='TRIP_ID', obj_id_col='TAXI_ID', t='datetime')
trajs.hvplot(title='Kaggle Taxi Trajectory Data', tiles='CartoLight')

Now our MovingPandas TrajectoryCollection is ready for further analysis.

By the way, the plot above illustrates a new feature in the recent MovingPandas 0.16 release which, among other things, introduced plots with arrow markers that show the movement direction. Other new features include completely new support for custom distance, speed, and acceleration units. This means that, for example, instead of always getting speed in meters per second, you can now specify your desired output units, including km/h, mph, or nm/h (knots).
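For example, a sketch of the units support (the exact call form is my assumption based on the release notes):

traj = trajs.trajectories[0]
traj.add_speed(units=("km", "h"), overwrite=True)  # speed in km/h instead of the default m/s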


This post is part of a series. Read more about movement data in GIS.

Deep learning from trajectory data

I’ve previously written about Movement data in GIS and the AI hype and today’s post is a follow-up in which I want to share with you a new review of the state of the art in deep learning from trajectory data.

Our review covers 8 use cases:

  1. Location classification
  2. Arrival time prediction
  3. Traffic flow / activity prediction
  4. Trajectory prediction
  5. Trajectory classification
  6. Next location prediction
  7. Anomaly detection
  8. Synthetic data generation

We particularly looked into the trajectory data preprocessing steps and the specific movement data representations used as input to train the neural networks.

On a completely subjective note: the prize for the most surprising approach goes to natural language processing (NLP) Transformers for traffic volume prediction.

The paper was presented at BMDA2023 and you can watch the full talk recording online.

References

Graser, A., Jalali, A., Lampert, J., Weißenfeld, A., & Janowicz, K. (2023). Deep Learning From Trajectory Data: a Review of Neural Networks and the Trajectory Data Representations to Train Them. Workshop on Big Mobility Data Analysis BMDA2023 in conjunction with EDBT/ICDT 2023.


This post is part of a series. Read more about movement data in GIS.

Tracking geoprocessing workflows with QGIS & DVC

Today’s post is a geeky deep dive into how to leverage DVC (not just) data version control to track QGIS geoprocessing workflows.

“Why is this great?” you may ask.

DVC tracks data, parameters, and code. If anything changes, we simply rerun the process and DVC will figure out which stages need to be recomputed and which can be skipped by re-using cached results.

This can lead to huge time savings compared to re-running the whole model.

You can find the source code used in this post on my repo https://github.com/anitagraser/QGIS-resources/tree/dvc

I’m using DVC with the DVC plugin for VSCode, but DVC can be used completely from the command line, if you prefer that approach.

Basically, what follows is a proof of concept: converting a QGIS Processing model to a DVC workflow. In the following screenshot, you can see the main stages:

  1. The QGIS model in the upper left corner
  2. The Python script exported from the QGIS model builder in the lower left corner
  3. The DVC stages in my dvc.yaml file in the upper right corner (And please ignore the hello world stage. It’s a leftover from my first experiment)
  4. The DVC DAG visualizing the sequence of stages. Looks similar to the QGIS model, doesn’t it ;-)

Besides the stage definitions in dvc.yaml, there’s a parameters file:

random-points:
  n: 10
buffer-points:
  size: 0.5
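For reference, a minimal sketch of what the stage definitions in dvc.yaml could look like (my reconstruction based on the scripts below; see the linked repo for the actual file):

stages:
  random-points:
    cmd: python random-points.py
    deps:
      - random-points.py
      - qgis3/data/test.geojson
    params:
      - random-points.n
    outs:
      - qgis3/output/random-points.geojson
  buffer-points:
    cmd: python buffer-points.py
    deps:
      - buffer-points.py
      - qgis3/output/random-points.geojson
    params:
      - buffer-points.size
    outs:
      - qgis3/output/buffered-points.geojson
      - qgis3/output/buffered-points.png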

And, of course, there are the two stages, each as its own Python script.

First, random-points.py which reads the random-points.n parameter to create the desired number of points within the polygon defined in qgis3/data/test.geojson:

import dvc.api

from qgis.core import QgsVectorLayer
from processing.core.Processing import Processing
import processing

Processing.initialize()

params = dvc.api.params_show()
pts_n = params['random-points']['n']

input_vector = QgsVectorLayer("qgis3/data/test.geojson")
output_filename = "qgis3/output/random-points.geojson"

alg_params = {
    'INCLUDE_POLYGON_ATTRIBUTES': True,
    'INPUT': input_vector,
    'MAX_TRIES_PER_POINT': 10,
    'MIN_DISTANCE': 0,
    'MIN_DISTANCE_GLOBAL': 0,
    'POINTS_NUMBER': pts_n,
    'SEED': None,
    'OUTPUT': output_filename
}
processing.run('native:randompointsinpolygons', alg_params)

And second, buffer-points.py which reads the buffer-points.size parameter to buffer the previously generated points:

import dvc.api
import geopandas as gpd
import matplotlib.pyplot as plt

from qgis.core import QgsVectorLayer
from processing.core.Processing import Processing
import processing

Processing.initialize()

params = dvc.api.params_show()
buffer_size = params['buffer-points']['size']

input_vector = QgsVectorLayer("qgis3/output/random-points.geojson")
output_filename = "qgis3/output/buffered-points.geojson"

alg_params = {
    'DISSOLVE': False,
    'DISTANCE': buffer_size,
    'END_CAP_STYLE': 0,  # Round
    'INPUT': input_vector,
    'JOIN_STYLE': 0,  # Round
    'MITER_LIMIT': 2,
    'SEGMENTS': 5,
    'OUTPUT': output_filename
}
processing.run('native:buffer', alg_params)

gdf = gpd.read_file(output_filename)
gdf.plot()

plt.savefig('qgis3/output/buffered-points.png')

With these things in place, we can use dvc to run the workflow, either from within VSCode or from the command line, and watch how dvc skips stages and fetches results from cache where possible.
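From the command line, rerunning the workflow boils down to a single command:

dvc repro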

If you try it out yourself, let me know what you think.

PyQGIS Jupyter notebooks on Windows using Conda

The QGIS conda packages have been around for a while. One of their use cases, for example, is to allow Linux users to easily install multiple versions of QGIS.

Similarly, we’ve seen posts on using PyQGIS in Jupyter notebooks. However, I find the setup with *.bat files rather tricky.

This post presents a way to set up a conda environment with QGIS that is ready to be used in Jupyter notebooks.

The first steps are to create a new environment and install QGIS. I use mamba for the installation step because it is faster than conda but you can use conda as well:

(base) PS C:\Users\anita> conda create -n qgis python=3.9
(base) PS C:\Users\anita> conda activate qgis
(qgis) PS C:\Users\anita> mamba install -c conda-forge qgis=3.28.2 
(qgis) PS C:\Users\anita> qgis

If we now try to import the qgis module in Python, we get an error:

(qgis) PS C:\Users\anita> python
Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:41:22) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import qgis
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'qgis'

To fix this error, we need to get the paths from the Python console inside QGIS:

import sys
sys.path
['H:/miniconda3/envs/qgis/Library/./python', 'C:/Users/anita/AppData/Roaming/QGIS/QGIS3\\profiles\\default/python', ... ]

This list of paths can be configured as the defaults for our qgis environment using conda develop:

(qgis) PS C:\Users\anita> conda activate base
(base) PS C:\Users\anita> mamba install conda-build -c conda-forge
(base) PS C:\Users\anita> conda develop -n qgis [list of paths from qgis python console] 

With this setup, the import should now work without errors:

(base) PS C:\Users\anita> conda activate qgis
(qgis) PS C:\Users\anita> python
Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:41:22) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import qgis

The example Jupyter notebook covers running a QGIS Processing algorithm and visualizing the results in the notebook using GeoPandas:
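As a rough sketch, the notebook boils down to the following steps (paths and parameters here are just placeholders; see the linked notebook for the real thing):

from qgis.core import QgsApplication, QgsVectorLayer

qgs = QgsApplication([], False)
qgs.initQgis()

from processing.core.Processing import Processing
import processing
Processing.initialize()

import geopandas as gpd

# Run a Processing algorithm and plot the result with GeoPandas
processing.run('native:buffer', {
    'INPUT': QgsVectorLayer('data/test.geojson'),
    'DISTANCE': 0.5,
    'DISSOLVE': False,
    'END_CAP_STYLE': 0,  # Round
    'JOIN_STYLE': 0,  # Round
    'MITER_LIMIT': 2,
    'SEGMENTS': 5,
    'OUTPUT': 'output/buffered.geojson'
})
gpd.read_file('output/buffered.geojson').plot()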

Head over to Github to find the full instructions: https://github.com/anitagraser/QGIS-resources/blob/master/qgis3/notebooks/hello-world.ipynb

MovingPandas v0.13 & v0.14 released!

December has been busy with two new MovingPandas releases: v0.13 and v0.14.

The latest v0.14 release is now available from conda-forge.

These releases are a huge step forward towards making MovingPandas easier to install with fewer mandatory dependencies. All interactive plotting libraries are now optional. So if you are using MovingPandas for trajectory data processing in the background and don’t need the interactive visualization features, the number of necessary libraries is now much lower. This (and the fact that GeoPandas is now shipped with OSGeo4W) will also make it easier to use MovingPandas in QGIS plugins.

New features:

  • #268 New add_angular_difference method

Includes fixes and enhancements for:

  • #267 Improved documentation: direction values are [0, 360)

Behind the scenes:

As always, all tutorials are available from the movingpandas-examples repository and on MyBinder.

If you have questions about using MovingPandas or just want to discuss new ideas, you’re welcome to join our discussion forum.

Visualizing trajectories with QGIS & MobilityDB

In the previous post, we — creatively ;-) — used MobilityDB to visualize stationary IoT sensor measurements.

This post covers the more obvious use case of visualizing trajectories, bringing together the MobilityDB trajectories created in Detecting close encounters using MobilityDB 1.0 and visualization using the Temporal Controller.

Like in the previous post, the valueAtTimestamp function does the heavy lifting. This time, we also apply it to the geometry time series column called trip:

SELECT mmsi,
    valueAtTimestamp(trip, '2017-05-07 08:55:40') geom,
    valueAtTimestamp(SOG, '2017-05-07 08:55:40') SOG
FROM "public"."ships"

Using this SQL query, we again set up a — not yet Temporal Controller-controlled — QueryLayer.

To configure Temporal Controller to update the timestamp in our SQL query, we again need to run the Python script from the previous post.

With this done, we are all set up to animate and explore the movement patterns in our dataset.


This post is part of a series. Read more about movement data in GIS.

MovingPandas v0.12 released!

The latest v0.12 release is now available from conda-forge.

This release contains some really cool new features, including:

  • New function to add an acceleration column #253
  • We have further improved our repo setup by adding an action that automatically creates and publishes packages from releases, heavily inspired by the work of the GeoPandas team.
  • Last but not least, we’ve created a Twitter account for the project. (And might soon add a Mastodon account as well.)

As always, all tutorials are available from the movingpandas-examples repository and on MyBinder.

If you have questions about using MovingPandas or just want to discuss new ideas, you’re welcome to join our discussion forum.
