1.5.0
Just in time for summer, v1.5.0 of the Data Hub CLI is here with new features and bug fixes that will help your geospatial data kick back and relax in a comfortable space.
🔤🔤🔤 CSV Group by 🔢🔢🔢
`--groupby columnName` consolidates multiple rows of a CSV that share a unique ID (designated with `-i`, usually representing an admin geography) into a single feature; the values in each row of the selected column are grouped as nested properties within an object named after that column in the consolidated feature's properties.
`--groupby` can be used with `upload` or the `join` command to extract the hierarchy from a CSV and upload it to a space without geometries.
- with `join`, the data is uploaded and the virtual space joining it with the geometry space is created in one step
- `upload --groupby` is useful for updating the "data space" in a virtual space that has already been created; it can also be used to upload the grouped data before a virtual space has been created with a space containing geometries matching geoIDs, using `here xyz vs -a`
This feature is best illustrated with election data, census data, and time series data. One example is COVID-19 data from the COVID Tracking Project API:
```shell
here xyz join xkRyxQl9 -f https://covidtracking.com/api/v1/states/daily.csv --noCoords -i state --groupby date
```
This will merge daily state testing data from March 2020 into a virtual space with `xkRyxQl9`, a shared space with US state geometries.
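Conceptually, the consolidation `--groupby` performs looks like the sketch below. This is not the CLI's code, just a minimal Python illustration using a made-up `group_by` helper and a simplified three-column sample of daily state testing data:

```python
import csv
import io

# Simplified sample of daily state testing data (CSV, like the
# COVID Tracking Project's states/daily.csv).
raw = """date,state,positive
20200325,CA,2355
20200324,CA,2102
20200325,NY,25665
"""

def group_by(rows, feature_id="state", groupby="date"):
    """Consolidate rows sharing an ID into one feature's properties;
    the remaining columns of each row are nested under an object named
    after the groupby column, keyed by that row's groupby value."""
    features = {}
    for row in rows:
        fid = row.pop(feature_id)
        key = row.pop(groupby)
        props = features.setdefault(fid, {feature_id: fid, groupby: {}})
        props[groupby][key] = row  # remaining columns become nested values
    return features

features = group_by(list(csv.DictReader(io.StringIO(raw))))
print(features["CA"]["date"]["20200325"]["positive"])  # -> 2355
```

The two CA rows collapse into a single feature whose `date` object holds one entry per day, which is what lets a virtual space join them to a single state geometry.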
Date tags and properties
We've added `--dateprops` to the `--date` option, meaning you can now save your time slices as new properties in the feature as well as tags; these are prefixed with `xyz_propertyname_`.
Also, when converting the date string, we no longer add a time zone offset to the ISO 8601 timestamp, as this caused problems when the data wasn't collected in your time zone.
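The naive-timestamp behavior can be illustrated in a few lines of Python. This is an assumption-laden sketch (the `to_iso8601` helper and the `%Y%m%d` input format are hypothetical, not the CLI's code); the point is that `isoformat()` on a naive datetime carries no offset:

```python
from datetime import datetime

# Parse a date string and emit a naive ISO 8601 timestamp: no UTC or
# local offset is attached, so the value is not shifted into the
# uploader's time zone.
def to_iso8601(date_string, fmt="%Y%m%d"):
    return datetime.strptime(date_string, fmt).isoformat()

print(to_iso8601("20200325"))  # -> 2020-03-25T00:00:00
```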
📜💾⭐️ CLI history ⭐️💾📜
You're working with that complex dataset, and you just created the perfect upload command. It is so good, the data in your space is excellent, and your map is happy. Then you go back to update that space six months later… and you have completely forgotten the options. Was it -i? What chunk size did I use? How did I make that tag?
Well, never fear, `upload --history` is here. It automatically records the last three `upload` commands in the space definition, and it even lets you save one as a favorite.
If you've done at least one upload to a space, you can recall it using an interactive menu:
```shell
here xyz upload spaceid --history
```
You can pick an upload command to save with `--history save`, and then re-run that particular command using `--history fav`. You can also `--history clear` your history.
🚚🚚🚚 Batch upload 🚢🚢🚢
If you have a folder full of geospatial files, you can upload them all to a space in one command with `--batch`. Just specify the directory with `-f` and the filetype after `--batch` (`geojson`, `geojsonl`, `csv`, `shp`, `json`, or `gpx`) and watch your files fly into Data Hub.
When uploading shapefiles, `--batch` will inspect subdirectories of the specified directory and look for `.shp` files along with all the other files you get when uncompressing a zipped shapefile. (This is handy when, say, you've downloaded 50 state shapefiles from the US Census website. We've all been there.)
🔍🔍🔍 Data Hub Console 🔎🔎🔎
Both `config` and `token` have a new `--console` option that opens up the new Data Hub console in a web browser.
🐛🐞🦟 Bug fixes and other enhancements 🛠🔧🔩
- inline help was improved, along with online documentation
- `here xyz list --filter` does a better job of handling null title and description fields
- GeoJSON features that are not in a `featureCollection` can once again be uploaded
- `config -r` now outputs properly formatted `json`
- confirmation prompt added to `config --shared true`
- `join -k` was changed to `join -i` to be more consistent with `upload -i`
- we fixed a bug in activity log creation -- we were sending state as a string, but the API expected a number
- we fixed some issues while streaming `voronoi` and `delaunay` polygons
- `show -r` no longer wraps `geojsonl` output in a `featureCollection`
- we set `skipCache=true` for `/statistics` and `/space` `GET` calls so you get the latest and greatest metadata