OSM turn restriction QA with QGIS

Wrong navigation instructions can be annoying and sometimes even dangerous, but they happen. No dataset is free of errors. That’s why it’s important to assess the quality of datasets. One specific use case I previously presented at FOSS4G 2013 is the quality assessment of turn restrictions in OSM, which influence vehicle routing results.

The main idea is to compare OSM to another data source. For this example, I used turn restriction data from the City of Toronto. Of the more than 70,000 features in this dataset, I extracted a sample of about 500 turn restrictions around Ryerson University, which I had the pleasure of visiting in 2014.

As you can see from the following screenshot, OSM and the city’s dataset agree on 420 of 504 restrictions (83%), while 36 cases (7%) are in clear disagreement. The remaining cases require further visual inspection.

[Figure: overview of turn restriction agreement between OSM and the City of Toronto dataset]

The following two examples show one case where the turn restriction is modelled in both datasets (on the left) and one case where OSM does not agree with the city data (on the right).
In the first case, the turn restriction (short green arrow) tells us that cars are not allowed to turn right at this location. An OSM-based router (here I used OpenRouteService.org) therefore finds a route (blue dashed arrow) which avoids the forbidden turn. In the second case, the router does not avoid the forbidden turn. We have to conclude that one of the two datasets is wrong.

[Figures: turn restriction modelled in both datasets (left); missing restriction in OSM? (right)]

If you want to learn more about the methodology, please check Graser, A., Straub, M., & Dragaschnig, M. (2014). Towards an open source analysis toolbox for street network comparison: indicators, tools and results of a comparison of OSM and the official Austrian reference graph. Transactions in GIS, 18(4), 510-526. doi:10.1111/tgis.12061.

Interestingly, the disagreement in the second example has been fixed by a recent edit (only 14 hours ago). We can see this in the OSM way history, which reveals that the line direction has been switched, but this change hasn’t made it into the routing databases yet:

[Figures: the OSM way history – now vs. before]

This leads to the funny situation that the oneway is correctly displayed on the map but seemingly ignored by the routers:

[Screenshot: routing result ignoring the oneway]

To evaluate the results of the automatic analysis, I wrote a QGIS script, which allows me to step through the results and visually compare turn restrictions and routing results. It provides a function called next() which updates a project variable called myvar. This project variable controls which features (i.e. turn restriction and associated route) are rendered. Finally, the script zooms to the route feature:

def next():
    # fetch the next result feature (QGIS 2 / Python 2 era API)
    f = features.next()
    turn_id = f['TURN_ID']
    print "Going to %s" % turn_id
    # the project variable controls which features are rendered
    QgsExpressionContextUtils.setProjectVariable('myvar', turn_id)
    # zoom to the route feature, but not closer than 1:500
    iface.mapCanvas().zoomToFeatureExtent(f.geometry().boundingBox())
    if iface.mapCanvas().scale() < 500:
        iface.mapCanvas().zoomScale(500)

layer = iface.activeLayer()
features = layer.getFeatures()
next()

You can see it in action here:

I’d love to see this as an interactive web map where users can have a look at all results, compare with other routing services – or ideally the real world – and finally fix OSM where necessary.

This work has been in the making for a while. I’d like to thank the team of OpenRouteService.org, whose routing service I used (and who recently added support for North America), as well as my colleagues at Ryerson University in Toronto, who pointed me towards Toronto’s open data.


Exploring CKAN data portals with QGIS

CKAN is for data portals what QGIS is for GIS. The project describes itself as follows:

CKAN is a powerful data management system that makes data accessible – by providing tools to streamline publishing, sharing, finding and using data. CKAN is aimed at data publishers wanting to make their data open and available.

Many open (government) data platforms rely on CKAN, and while the web interface is pretty good, there’s still the hassle of finding and downloading the data using a web browser.

This is where the QGIS CKAN-Browser plugin comes in useful. The plugin was developed by BergWerkGIS for the state of Carinthia, Austria, and added to the public plugin repository earlier this year. CKAN-Browser comes preconfigured with some Austrian and European CKAN URLs for testing, so you can get going really quickly. It is easy to search for specific datasets or explore a portal’s data categories, and it takes just one click to download the data and load it into your QGIS map:

[Screenshot: the CKAN-Browser plugin in QGIS]

Here’s a quick demo of loading a vector dataset as well as raster tiles:

For the full usage guide, visit the plugin’s GitHub page.

It’s great to see how well CKAN and QGIS can play together to enable seamless access to open data!


Experiments in the 3rd dimension

The upcoming 2.14 release of QGIS features a new renderer: for the first time in QGIS history, it will be possible to render 2.5D objects directly in the map window. This feature is the result of a successful crowdfunding campaign organized by Matthias Kuhn last year.

In this post, I’ll showcase this new renderer and compare the achievable results to output from the Qgis2threejs plugin.

For this post, I’m using building parts data from the city of Vienna, which is publicly available through their data viewer:

This dataset is a pretty detailed building model in which each building is made up of multiple features representing building parts of different heights. Of course, if we just load the dataset with its default style, we cannot really appreciate the data:

Loaded building parts layer

All this changes if we use the new 2.5D renderer. With just a few basic settings, we can create 2.5D representations of the building parts:

QGIS 2.5D renderer settings

Compare the results to aerial images in Google Maps …

QGIS 2.5D renderer and view in Google Maps

… not bad at all!

Except for a few glitches concerning the small towers at the corners of the building, and some situations where the wrong building part seems to be drawn in front, the 2.5D look is quite impressive.

Now, how does this compare to Qgis2threejs, one of the popular plugins that use web technologies to render 3D content?

One obvious disadvantage of Qgis2threejs is that we cannot define a dedicated roof color, so the whole block is drawn in the same color.

On the other hand, Qgis2threejs does not suffer from the rendering order issues that we observe in the QGIS 2.5D renderer, and the small towers at the building corners are displayed correctly as well:

QGIS 2.5D renderer and Qgis2threejs output

Overall, the 2.5D renderer is a really fun and exciting new feature. Besides the obvious building use case, I’m sure we will see a lot of thematic maps making use of it as well.

Give it a try!

In the next post, I’m planning a more in-depth look into how the 2.5D renderer works. Here’s a small teaser of what’s possible if you are not afraid to get your hands dirty:


A QGIS router for GIP.at

Monday, January 4th, 2016, was the open data release date of the official Austrian street network dataset called GIP.at. As far as I know, the dataset is not fully complete yet, but it should be in the upcoming months. I’ve blogged about GIP.at before in Open source IDF parser for QGIS and Open source IDF router for QGIS, where I implemented tools based on the data samples available at the time. Naturally, I was very curious whether my parser and particularly the router could handle the full country release …

Some code tweaking, patience for loading, and 9 GB of RAM later, QGIS happily routes through Austria – for example, from my workplace to Salzburg, maybe for some skiing:

[Screenshot: routing result across Austria in QGIS]

The routing request itself takes between one and two seconds. (I should still add a timer to it.)

So far, I’ve implemented shortest-distance routing for pedestrians, bikes, and cars. Since the data also contains travel speeds, it should be quite straightforward to add shortest-travel-time routing as well.
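
For example, a travel-time cost could be derived from the existing length and speed attributes like this (a hypothetical sketch, not code from the actual router):

def travel_time_min(length_m, speed_km_h):
    # convert km/h to meters per minute, then divide the link length by it
    return length_m / (speed_km_h * 1000.0 / 60.0)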

The code is available on GitHub for you to try. I’d appreciate any feedback!


Open source IDF parser for QGIS

IDF is the data format used by Austrian authorities to publish the official open government street graph. It’s basically a text file describing network nodes, links, and permissions for different modes of transport.

Since, to my knowledge, there hasn’t been any open source IDF parser available so far, I’ve started to write my own using PyQGIS. You can find the script, which is meant to be run in the QGIS Python console, in my QGIS-resources repo on GitHub.
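
To give an idea of the structure, here is a loose sketch of how such a file can be read. The tbl/rec line markers reflect my reading of the format description and may not cover all cases – have a look at the actual parser for the details:

def read_idf(path):
    """Collect the records of each table in an IDF-style text file."""
    tables = {}
    current = None
    for line in open(path):
        parts = line.strip().split(';')
        if parts[0] == 'tbl':  # a new table starts, e.g. tbl;Node
            current = parts[1]
            tables[current] = []
        elif parts[0] == 'rec' and current:  # a data record of the current table
            tables[current].append(parts[1:])
    return tables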

I haven’t implemented all the details yet, but the script successfully parses nodes and links from the two example IDF files published so far, as can be seen in the following screenshot showing the Klagenfurt example data:

[Screenshot: parsed nodes and links of the Klagenfurt example data]

If you are interested in advancing this project, just get in touch here or on GitHub.


Routing in polygon layers? Yes we can!

A few weeks ago, the city of Vienna released a great dataset: the so-called “Flächen-Mehrzweckkarte” (FMZK) is a polygon vector layer with an amazing level of detail, containing roads, buildings, sidewalks, parking lots, and much more:

preview of the Flächen-Mehrzweckkarte

Now, of course we can use this dataset to create gorgeous maps, but wouldn’t it be great to use it for analysis? One thing that has been bugging me for a while is how bad pedestrian routing still is in many situations. For example, if I were looking for a route from the northern to the southern side of the square in the previous screenshot, the suggestions would look something like this:

Pedestrian routing in Google Maps

… Great! Google wants me to walk around it …

Pedestrian routing on openstreetmap.org

… OpenStreetMap too – but on the other side :P

Wouldn’t it be nice if we could just cross the square? There’s no reason not to. The routing graphs of OSM and Google just don’t contain a connection. Polygon datasets like the FMZK could be a solution to the issue of routing pedestrians over squares. Here’s my first attempt using GRASS r.walk:

Routing with GRASS r.walk (Green areas are walk-friendly, yellow/orange areas are harder to cross, and red buildings are basically impassable.)

… The route crosses the square – like any sane pedestrian would.

The key steps are (see the sketch after this list):

  1. Assigning pedestrian costs to different polygon classes
  2. Rasterizing the polygons
  3. Computing a cost raster for moving using r.walk
  4. Computing the route using r.drain
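
Put together with the GRASS Python scripting API, the workflow could look roughly like this. The map names, the walk_cost column, and the coordinates are made-up placeholders for illustration, not the exact commands I ran:

import grass.script as gs

# steps 1 + 2: rasterize the polygons using a pedestrian cost attribute
gs.run_command('v.to.rast', input='fmzk', output='friction',
               use='attr', attribute_column='walk_cost')

# step 3: cumulative cost surface from the start point;
# r.walk additionally requires an elevation raster
gs.run_command('r.walk', elevation='dem', friction='friction',
               output='walkcost', outdir='walkdir',
               start_coordinates=(1822580, 6141600))

# step 4: trace the least-cost route back from the destination
gs.run_command('r.drain', input='walkcost', direction='walkdir',
               output='route', flags='d',
               start_coordinates=(1822900, 6141300))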

I’ve been using GRASS 7 for this example. GRASS 7 is not yet compatible with QGIS, but it would certainly be great to have access to this functionality from within QGIS. You can help make this happen by supporting the crowdfunding initiative for the GRASS plugin update.


Exploring open government population data collected by ODVIS-AT

At FOSS4G 2013, I had the pleasure of attending a presentation about the ODVIS.AT project by Marius Schebella from the FH Salzburg. The goal of the project – which ended in summer 2014 – was “to display open data (demographic, open government data) in a quick and easy way to end users” by combining it with OpenStreetMap. Even though their visualization does not work for me (“unable to get datasets” error), not all is lost, because they provide an SQL dump of their PostGIS database.

Checking the data, it quickly becomes apparent that each data publisher decided to publish a slightly different dataset: some published their population counts as time lines over multiple years, others classified population by migration background, age, or gender. Also, according to the metadata table, no data from Salzburg and Burgenland is included. Most datasets’ reference dates are between 2011 and 2013, but the data for Vorarlberg, the westernmost state, seems to be from 2001.

Based on this database, I created a dataset combining the municipalities with the Viennese districts and joined the population data from the individual state tables. The following map shows the population density based on this dataset: it is easy to recognize the densely populated regions around Vienna, Linz, Graz, and in the big Alpine valleys.

[Map: population density based on the combined ODVIS dataset]

Overall, it was incredibly time-consuming to create this seemingly simple dataset. It would be very helpful if the publishers agreed on a common scheme for releasing at least the most basic information.

Considering that OpenStreetMap already contains population data, it hardly seems worth all the trouble of merging these OGD datasets. Granted, the time lines of population development would be interesting, but they are not available for every state.

P.S. If anyone is interested in the edited database, I would be happy to share the SQL dump.


QA for Turn Restrictions in OSM

Correct turn restriction information is essential for the vehicle routing quality of any street network dataset – open or commercial. One of the challenges with this kind of information is that the restrictions are typically not directly visible on the map.

This post is inspired by a share on G+ which resurfaced in my notifications: in a post on the Mapbox blog, John Firebaugh presents the OSM iD editor, which makes editing turn restrictions straightforward. Clicking on the source link makes the associated turn information visible, and by clicking on the turn arrows, the user can easily toggle between allowed and forbidden.

iD, the web editor for OpenStreetMap, makes it even simpler to add turn restrictions to OpenStreetMap.

Editing turn restrictions in iD, the web editor for OpenStreetMap. Source: “Simple Editing for Turn Restrictions in OpenStreetMap” by John Firebaugh, June 6, 2014

But the issue of identifying wrong turn restrictions remains. One approach is to compare the restriction information in OSM with the information in a reference dataset.

This is possible by comparing routes computed on OSM and the reference data, using a method I presented at FOSS4G (video): a turn restriction is basically a forbidden combination of links. If we compute a route from the start link of the forbidden combination to the end link, we can check whether the resulting route geometry violates the restriction or uses an appropriate detour:

Illustrative slide from my LBS2014 presentation on OSM vehicle routing quality – read more about this method and results for Vienna in our TGIS paper or the open pre-print version
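
Conceptually, the check for a single restriction boils down to something like the following. This is purely illustrative pseudocode – the router object and its methods are hypothetical placeholders, not a real API:

def check_restriction(from_link, to_link, router):
    # ask the router for a route entering on from_link and leaving on to_link
    route = router.route(start=from_link, end=to_link)
    # if the route passes directly from from_link onto to_link,
    # the restriction is not respected by the routing data
    if route.uses_sequence(from_link, to_link):
        return 'restriction violated'
    return 'detour found'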

It would be great to have an automated system comparing OSM and open government street network data to detect these differences. The quality of both datasets could benefit enormously from bundling their QA efforts. Unfortunately, the open government street network datasets I’m aware of don’t contain turn information.


5 meter elevation model of Vienna published

A while ago, I wrote about the 5 meter elevation model of the city of Vienna. In the meantime, the 5 meter model has been replaced by a 10 meter version.

For future reference, I’ve therefore published the 5 meter version on opendataportal.at.

details of the Viennese elevation model

I’ve been using the dataset to compare it to EU-DEM and NASA SRTM for energy estimation:
A. Graser, J. Asamer, M. Dragaschnig: “How to Reduce Range Anxiety? The Impact of Digital Elevation Model Quality on Energy Estimates for Electric Vehicles” (2014).

I hope someone else will find it useful as well because assembling the whole elevation model was quite a challenge.

mosaicking the rasterized WFS responses


OSM quality assessment with QGIS: positional accuracy

Over the last few years, research on OpenStreetMap data quality has become increasingly popular. At this year’s FOSS4G, I had the honor of presenting some work we did at AIT to assess OSM quality in Vienna, Austria. In the meantime, our paper “Towards an Open Source Analysis Toolbox for Street Network Comparison” has been published for early access. Thanks to the conference organizers who made this possible! I’ve implemented comparison tools found in the related OSM literature, as well as new tools for oneway street and turn restriction comparison, using Sextante scripts and models for QGIS 1.8. All code is available on GitHub to enable collaboration. If you are interested in OSM data quality research, I’d like to invite you to give the tools a try.

Since most users probably don’t have access to QGIS 1.8 anymore, I’ll be updating the tools to QGIS 2.0 Processing. I’m starting today with the positional accuracy comparison tool. It is based on a method described by Goodchild & Hunter (1997). Here’s the corresponding slide from my FOSS4G presentation:

[Slide: positional accuracy comparison method, from my FOSS4G presentation]

The basic idea is to evaluate the positional accuracy of a street graph by comparing it with a reference graph. To do that, we check how much of the graph lies within a certain tolerance (buffer) of the reference graph.

The Processing model uses the following inputs: the two street graphs which should be compared, the size of the buffer (tolerance for positional accuracy), a polygon layer with analysis regions, and the field containing the region id. This is how the model looks in the Processing modeler:

[Screenshot: the model “graph covered by buffered reference graph” in the Processing modeler]

First, all layers are reprojected into a common CRS. (This will have to be adjusted if the tool is used in other geographic regions.) Then the reference graph is buffered. Since I found that dissolving buffers directly in the buffer tool can become very slow with big datasets, the faster Difference tool is used to dissolve the buffers before we calculate the graph length inside the buffer (inbufLEN) as well as the total graph length in the analysis region (totalLEN). Finally, the two results are joined based on the region id field, and the percentage of graph length within the buffered reference graph (inbufPERC) is calculated. A high percentage shows that the two graphs agree very well geometrically.
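
Outside the model, the core of the method fits in a few lines of PyQGIS. This is a minimal sketch using the current API, assuming both layers are loaded, share a projected CRS in meters, and a 10 m tolerance is used; the layer names are made up:

# dissolve all buffered reference links into one tolerance zone
ref_layer = QgsProject.instance().mapLayersByName('osm_graph')[0]
test_layer = QgsProject.instance().mapLayersByName('official_graph')[0]
tolerance_zone = QgsGeometry.unaryUnion(
    [f.geometry().buffer(10, 8) for f in ref_layer.getFeatures()])

# compare the total graph length with the length inside the buffer
total_len = 0.0
inbuf_len = 0.0
for f in test_layer.getFeatures():
    geom = f.geometry()
    total_len += geom.length()
    inbuf_len += geom.intersection(tolerance_zone).length()

print('{:.1f}% of the graph lies within the buffer'.format(
    100 * inbuf_len / total_len))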

The following image shows the tool applied to a sample of OpenStreetMap (red) and official data published by the city of Vienna (purple) at Wien Handelskai. OSM was used as a reference graph and the buffer size was set to 10 meters.

[Map: positional accuracy comparison of OSM (red) and the official graph (purple) at Wien Handelskai]

In general, both graphs agree quite well: the percentage of the official graph within 10 meters of the OSM graph is 93% in the 20th district. In the image above, we can see that some links available in OSM are not contained in the official graph (mostly pedestrian/bike links), and there seem to be some connectivity issues in the upper right corner as well.

In my opinion, Processing models are a great way to document geoprocessing workflows and share them with others. If you want to collaborate on building more models for OSM-related analysis, just leave a comment below.


Public transport isochrones with pgRouting

This post covers a simple approach to calculating isochrones in a public transport network using pgRouting and QGIS.

For this example, I’m using the public transport network of Vienna, which is loaded into a pgRouting-enabled database as network.publictransport. To create the routable network, run:

select pgr_createTopology('network.publictransport', 0.0005, 'geom', 'id');

Note that the tolerance parameter (0.0005, in degrees) controls how far apart link start and end points can be while still being considered the same topological network node.

To create a view with the network nodes, run:

create or replace view network.publictransport_nodes as
select id, st_centroid(st_collect(pt)) as geom
from (
	(select source as id, st_startpoint(geom) as pt
	from network.publictransport
	) 
union
	(select target as id, st_endpoint(geom) as pt
	from network.publictransport
	) 
) as foo
group by id;

To calculate isochrones, we need a cost attribute for our network links. To calculate the travel time for each link, I used average speeds: 15 km/h for buses and trams and 32 km/h for metro lines (similar to data published by the city of Vienna).

alter table network.publictransport add column length_m integer;
update network.publictransport set length_m = st_length(st_transform(geom,31287));

alter table network.publictransport add column traveltime_min double precision;
update network.publictransport set traveltime_min = length_m  / 15000.0 * 60; -- average is 15 km/h
update network.publictransport set traveltime_min = length_m  / 32000.0 * 60 where "LTYP" = '4'; -- average metro is 32 km/h

That’s all the preparation we need. Now we can calculate our isochrone data using pgr_drivingdistance, e.g. for network node #1:

create or replace view network.temp as
 SELECT seq, id1 AS node, id2 AS edge, cost, geom
  FROM pgr_drivingdistance(
    'SELECT id, source, target, traveltime_min as cost FROM network.publictransport',
    1, 100000, false, false
  ) as di
  JOIN network.publictransport_nodes pt
  ON di.id1 = pt.id;

The resulting view contains all network nodes which are reachable within 100,000 cost units (which are minutes in our case).

Let’s load the view into QGIS to visualize the isochrones:

[Screenshot: the isochrone view loaded in QGIS]

The trick is to use data-defined size to calculate the different walking circles around the public transport stops. For example, we can set up 10 minute isochrones which take into account how much time was spent travelling by public transport and show how far we can get by walking in the time that is left:

1. We want to scale the circle radius to reflect the remaining walking time. Therefore, enable Scale diameter in Advanced | Size scale field:

[Screenshot: the Scale diameter setting]

2. In the Simple marker properties, change the size units to Map units.
3. Go to the data-defined properties to set up the dynamic circle size.

[Screenshot: the data-defined size expression]

The expression makes sure that only nodes reachable within 10 minutes are displayed. It then calculates the remaining time (10 - "cost") and assumes that we can walk 100 meters for every minute that is left. It additionally multiplies by 2, since we are scaling the diameter instead of the radius.
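
The expression in the screenshot follows this pattern – reconstructed from the description above, so treat it as a sketch rather than a verbatim copy:

CASE WHEN "cost" <= 10
THEN (10 - "cost") * 100 * 2
ELSE 0
END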

To calculate isochrones for different start nodes, we simply update the definition of the view network.temp.

While this approach certainly has its limitations, it’s a good place to start learning how to create isochrones. A better solution would take into account that changing between lines takes time. While preparing the network, more care should be taken to ensure that possible exchange nodes are modeled correctly. Some network links might only be usable in one direction. Not to mention that there are timetables which could be accounted for ;)


Improving Population Density Maps Using Dasymetric Mapping

Yesterday, I described my process for generating a basic population density map from the city of Vienna’s open government data. At the end of that post, I mentioned some ideas for further improvement. Today, I want to follow up on those ideas using what is known as dasymetric mapping. The GIS Dictionary defines it well (much better than Wikipedia):

Dasymetric mapping is a technique in which attribute data that is organized by a large or arbitrary area unit is more accurately distributed within that unit by the overlay of geographic boundaries that exclude, restrict, or confine the attribute in question.
For example, a population attribute organized by census tract might be more accurately distributed by the overlay of water bodies, vacant land, and other land-use boundaries within which it is reasonable to infer that people do not live.

That’s exactly what I want to do: Based on subdistricts with population density values and auxiliary data – Corine Land Cover to be exact – I want to create an improved representation of population density within the city.

This is the population density map I start out with:

… and this is the Corine Land Cover dataset for the same area:

It shows built-up areas (red), parks and natural areas (green), as well as water-covered regions (blue). For the further analysis, I follow the assumption that people only live in areas with Corine codes 111 “Continuous urban fabric” and 112 “Discontinuous urban fabric”. Therefore, I use the Intersection tool to clip only these residential areas from the subdistrict polygons. The subdistrict population can now be distributed over these new, smaller areas (using the Field Calculator) to create a more realistic visualization of population density:

For easier comparison, I put the original density and the dasymetric map into a looping animation. Some subdistricts change their population density values quite drastically, especially in regions where big parts covered by water or rail infrastructure were removed:

Corine Land Cover is not too detailed, but I think it is still usable at this scale. One thing to note is that I used land cover data from 2006 with population data from 2012, so some areas in the outer districts will have turned residential in the meantime. But I hope this doesn’t distort the overall picture too much.


Mapping OGDWien Population Density

The city of Vienna provides both subdistrict geometries and population statistics. Mapping the city’s population density should be straightforward, right? Let’s see …

We should be able to join on ZBEZ and SUB_DISTRICT_CODE – check! But what about the actual population counts? Unfortunately, there is no file which simply lists the population per subdistrict. The file I found contains four lines for each subdistrict: females 2011, males 2011, females 2012, and males 2012. That’s not the perfect format for mapping general population density.

A quick way to prepare our input data is to apply pivot tables, e.g. in Open Office. The goal is to have one row per subdistrict, with columns for the population in 2011 and 2012:

Export as CSV, add a CSVT file, and load the result into QGIS. Finally, we can join geometries and the CSV table:
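
The CSVT companion file (same name as the CSV, .csvt extension) declares the column types on a single line, so QGIS doesn’t interpret the numbers as text. Assuming three columns – subdistrict code, pop2011, pop2012 – it could look like this:

"Integer","Integer","Integer"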

A quick look at the joined data confirms that each subdistrict now has a population value. But visualizing absolute values results in misleading maps: big subdistricts with only average density will overrule smaller but much denser subdistricts:

That’s why we need to calculate the population density. This is easy to do using the Field Calculator. The subdistrict file already contains area values, but even if they were missing, we could calculate them using the $area operator: "pop2012" / ($area / 10000). The resulting population density, in inhabitants per hectare, finally shows which subdistricts are the most densely populated:

One could argue that this is still not an accurate representation of population density: big parts of some subdistricts are actually covered by water – especially along the Danube – and are therefore uninhabited. There are also big parks which could be excluded from the subdistrict area. But that’s going to be the topic of another post.

If you want to use my results so far, you can download the GeoJSON file from GitHub.


WFS to PostGIS in 3 Steps

This is a quick note on how to download features from a WFS and import them into a PostGIS database. The first line downloads a zipped Shapefile from the WFS. The second one unzips it, and the last one loads the data into my existing “gis_experimental” database:

wget "http://data.wien.gv.at/daten/geoserver/ows?service=WFS&request=GetFeature&version=1.1.0&typeName=ogdwien:BEZIRKSGRENZEOGD&srsName=EPSG:4326&outputFormat=shape-zip" -O BEZIRKSGRENZEOGD.zip
unzip -d /tmp BEZIRKSGRENZEOGD.zip
shp2pgsql -s 4326 -I -S -c -W Latin1 "/tmp/BEZIRKSGRENZEOGD.shp" | psql gis_experimental

Now I’d just need a loop through the WFS capabilities document to automatically fetch all offered layers … Ideas, anyone?
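
One possible approach – a quick, untested Python 3 sketch; the namespace and the shape-zip output format may need adjusting for other servers:

import urllib.request
import xml.etree.ElementTree as ET

# fetch the capabilities document and collect all layer names
base = 'http://data.wien.gv.at/daten/geoserver/ows'
caps_url = base + '?service=WFS&request=GetCapabilities&version=1.1.0'
caps = ET.fromstring(urllib.request.urlopen(caps_url).read())
ns = {'wfs': 'http://www.opengis.net/wfs'}

# request each layer as a zipped Shapefile
for name in caps.iterfind('.//wfs:FeatureType/wfs:Name', ns):
    url = (base + '?service=WFS&request=GetFeature&version=1.1.0'
           '&typeName=' + name.text + '&outputFormat=shape-zip')
    urllib.request.urlretrieve(url, name.text.replace(':', '_') + '.zip')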

Thanks to Tim for his post “Batch importing shapefiles into PostGIS”, which was very useful here.

Update: Many readers have pointed out that ogr2ogr is a great tool for this kind of use case and can do the above in one line. That’s true – if it works. Unfortunately, it is picky about the supported encodings, e.g. it doesn’t want to parse ISO-8859-15. In such cases, the three code lines above can be a good alternative.


Green Vienna – Tree Cadastre

The Viennese tree cadastre (“Baumkataster”) contains all trees growing on city property. Among the attributes, you can find the tree species and the year of planting, as well as the diameter of the tree’s crown.

Using the crown diameter, we can create a foliage map of the city of Vienna. (Note that trees on private ground are missing.)

trees in Vienna scaled by crown diameter

Here’s a high-resolution version of the map: PDF (10 MB)

The tree cadastre is still under construction. If you miss a tree, let the OGD team know.


Finding Open Data – DataCatalogs.org

DataCatalogs.org aims to be the most comprehensive list of open data catalogs in the world. [...] The alpha version of DataCatalogs.org was launched at OKCon 2011 in Berlin.

As of today, 134 data catalogs have been registered. From Dalian, China, to Rhode Island, data from all over the world is ready to be discovered.

Open data and open GIS – 2011 is an exciting year!


Open Government Data Wien in QGIS

Vienna’s Open Government Data initiative is publishing an increasing amount of geodata, and the best thing is: they’re publishing it via open, standardized services! Both WMS and WFS are available and ready to be used in QGIS.

Let’s see how we can use the data available through their WFS, using the district information layer “Bezirksgrenzen” as an example. The page lists a GML and a GeoJSON version of the WFS. For now, we’ll use GeoJSON.

In QGIS, the layer can be loaded via “Add vector layer” – “Protocol” by inserting the GeoJSON URL there. The encoding should be changed to ISO8859-15 so that German umlauts display correctly.

The loaded GeoJSON layer "Bezirksgrenzen"
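
The same can be scripted in the QGIS Python console. Here’s a rough sketch using the current PyQGIS API; the request URL follows the pattern of the city’s WFS, but the exact outputFormat name is an assumption:

# build the GetFeature request that returns GeoJSON
uri = ('http://data.wien.gv.at/daten/geoserver/ows?service=WFS'
       '&request=GetFeature&version=1.1.0'
       '&typeName=ogdwien:BEZIRKSGRENZEOGD&srsName=EPSG:4326'
       '&outputFormat=json')
layer = QgsVectorLayer(uri, 'Bezirksgrenzen', 'ogr')
layer.setProviderEncoding('ISO8859-15')  # take care of the umlauts
QgsProject.instance().addMapLayer(layer)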

Now we have geodata – let’s add some attribute data too! Attribute data is available in CSV format. After downloading, e.g., some information on district population, check the file content and remove the extra header lines so that only one header line containing the attribute names is left. Then you can load the CSV file into QGIS too (“Add vector layer”).

Last step: joining geodata and attribute data! Go to the vector layer’s properties – Join tab – and add the following join relation:

Joining GeoJSON and attribute layer

Now the attribute table of the vector layer contains the additional CSV attributes – ready for further analysis. If you want to classify based on numerical CSV attributes, you’ll have to create a .csvt file first; otherwise, all fields are interpreted as text.

Works great. Thumbs up for this excellent initiative!

