Whenever you run flask build-explorer --upload-map-files, a new set of CSVs (named like _5A3A14.csv) is uploaded to S3, and URLs for these files are written into sources.json, which is updated and committed in the explorer directory. After that, and after the Netlify deployment of the explorer, the public web map will use these new files.
However, that command has no cleanup step to remove the CSV files that were previously in use. (There are probably more than 100 unused CSV files in S3 at this point.)
There is a separate operation for cleanup, flask clean-explorer-bucket. However, I haven't used that command in a while, so it would be worth re-reading the code before running it.
I could see this command being placed in a GitHub Action that only has a workflow_dispatch trigger, so a repo admin could go in and run it periodically. These files are small enough that even after a very long time without cleaning up the bucket, it won't incur significant costs.
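A workflow for that idea could look something like the sketch below. The file path, secrets names, and the use of a requirements.txt are all assumptions about this repo, not things the notes above specify:

```yaml
# .github/workflows/clean-explorer-bucket.yml  (hypothetical path and names)
name: Clean explorer bucket

on:
  workflow_dispatch:  # manual trigger only -- a repo admin runs it on demand

jobs:
  clean:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # Assumes dependencies are pinned in requirements.txt
      - run: pip install -r requirements.txt
      - run: flask clean-explorer-bucket
        env:
          # Assumed secret names; whatever credentials the CLI expects
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```

Since there is no `schedule` trigger, nothing runs automatically; the job only exists for on-demand cleanup.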
Note that there are a couple of files in the bucket that should not be deleted, states.csv and counties.csv (I think those are their names), which is why the generated CSVs have _ prefixes. The script will read all valid file names from the current sources.json file, and then remove every _* file that doesn't match one of the valid file names.
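The filtering logic described above can be sketched as a pure function, separate from any S3 calls. This is an illustration, not the actual clean-explorer-bucket implementation (which should be re-read before running), and it assumes sources.json maps layer names to URLs whose trailing segment is the file name:

```python
import json

def keys_to_delete(bucket_keys, sources_json_text):
    """Return the generated (_-prefixed) keys no longer referenced by sources.json.

    Permanent files such as states.csv and counties.csv lack the _ prefix,
    so they are never candidates for deletion -- that is the point of the
    naming convention. The sources.json structure here (name -> URL) is an
    assumption for illustration.
    """
    sources = json.loads(sources_json_text)
    # Compare only the trailing file name of each URL.
    valid = {url.rsplit("/", 1)[-1] for url in sources.values()}
    return [
        key for key in bucket_keys
        if key.startswith("_") and key not in valid
    ]
```

For example, with a bucket holding `_5A3A14.csv`, `_0FF1CE.csv`, `states.csv`, and `counties.csv`, and a sources.json referencing only `_5A3A14.csv`, the function returns just `["_0FF1CE.csv"]`: the stale generated file, with the two permanent CSVs left untouched.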