Consume calculates consumption and emission results based on a number of input parameters.
The core Consume code was designed for interactive use in a REPL (read, evaluate, print, loop) environment.
FERA has wrapped the core code to provide an application interface.
Run the wrapper (consume_batch.py) with no arguments to see usage instructions.
$ python consume_batch.py
usage: consume_batch.py [-h] [-f loadings file] [-x output columns]
[-l message level] [--metric] [--nosera] [-o output filename]
[burn type (activity | natural)]
[input file csv format]
Consume predicts fuel consumption, pollutant emissions, and heat release
based on input fuel loadings and environmental variables. This command
line interface requires a specified burn type (either activity or natural),
an environmental variables input file (csv format), and a fuel loadings file
(generated by FCCS 3.0, csv format). A sample fuel loadings file
(fuel_loadings.csv) and environmental inputs file (input.csv) have been
provided. Note: the units column in the input csv file is ignored. The units
are tons_ac (consumption columns) and lbs_ac (emissions columns). Use
the --metric flag to convert to tonnes_ha (consumption columns) and
kg_ha (emissions columns).
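The conversion factors behind the --metric flag follow from standard unit definitions. A quick sketch in plain Python (an illustration of the arithmetic, not Consume's internal code):

```python
# Conversion factors behind the --metric flag (standard unit definitions;
# illustrative only, not Consume's internal implementation).
TON_TO_TONNE = 0.90718474      # 1 US short ton = 907.18474 kg
LB_TO_KG = 0.45359237          # 1 pound = 0.45359237 kg
ACRE_TO_HA = 0.40468564224     # 1 acre = 0.40468564224 hectares

TONS_AC_TO_TONNES_HA = TON_TO_TONNE / ACRE_TO_HA   # ~2.2417
LBS_AC_TO_KG_HA = LB_TO_KG / ACRE_TO_HA            # ~1.1209

def to_metric(consumption_tons_ac, emissions_lbs_ac):
    """Convert a consumption value (tons/ac) and an emissions value
    (lbs/ac) to metric units (tonnes/ha, kg/ha)."""
    return (consumption_tons_ac * TONS_AC_TO_TONNES_HA,
            emissions_lbs_ac * LBS_AC_TO_KG_HA)

print(to_metric(1.0, 1.0))
```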
positional arguments:
burn type (activity | natural)
input file (csv format)
optional arguments:
-h, --help show this help message and exit
-f loadings file Specify the fuel loadings file for consume to use. Run
the FCCS batch processor over the fuelbeds for which
you want to generate consumption/emission results to
create a fuel loadings file.
-x output columns Specify the output column configuration file for
consume to use
-l message level Specify the detail level of messages (1 | 2 | 3). 1 =
fewest messages, 3 = most messages
--metric Convert consumption columns from tons_ac to tonnes_ha and emissions columns from lbs_ac to kg_ha.
--nosera Normally, emissions factors are looked up in tables based
on the Smoke Emissions Reference Application (SERA) database:
(https://depts.washington.edu/nwfire/sera/index.php)
Use this option to look up values in tables not based
on the SERA database.
-o output filename Specify the name of the Consume output results file.
Examples:
// display help (this text)
python consume_batch.py
// Simple case, natural fuel types, required input file (uses built-in loadings file)
python consume_batch.py natural input_natural.csv
// Specify an alternative loadings file
python consume_batch.py natural input_natural.csv -f my_loadings.csv
// Specify a column configuration file. Please see the documentation for details.
python consume_batch.py activity input_activity.csv -x output_all.csv
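If you drive consume_batch.py from another script, it helps to assemble the argument list programmatically. A small helper sketch (flag names taken from the usage text above; running the command itself requires the repo, so this only builds the list):

```python
# Assemble a consume_batch.py command line from the options documented above.
# Illustrative helper only; flag names come from the usage text.
def build_command(burn_type, input_file, loadings=None, columns=None,
                  metric=False, output=None):
    """Return the argv list for a consume_batch.py run."""
    if burn_type not in ("activity", "natural"):
        raise ValueError("burn type must be 'activity' or 'natural'")
    cmd = ["python", "consume_batch.py"]
    if loadings:
        cmd += ["-f", loadings]      # alternative fuel loadings file
    if columns:
        cmd += ["-x", columns]       # output column configuration file
    if metric:
        cmd.append("--metric")       # convert output to metric units
    if output:
        cmd += ["-o", output]        # output results filename
    cmd += [burn_type, input_file]   # positional arguments come last
    return cmd

print(build_command("natural", "input_natural.csv", loadings="my_loadings.csv"))
```

The list form can be passed directly to subprocess.run without shell quoting concerns.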
Consume is written in Python, so there is no build step. Consume runs under Python 3.10.
Consume has regression tests and unit tests (the unit tests depend on the green library):
pip install green
Run them like so:
$ ./run_regression_tests.sh
python3 consume_batch.py natural ./test/regression_input_southern.csv
Success!!! Results are in "/home/kjells/fera/consume/consume_results.csv"
diff ./consume_results.csv ./test/expected/regression_expected_southern.csv
python3 consume_batch.py natural ./test/regression_input_western.csv
Success!!! Results are in "/home/kjells/fera/consume/consume_results.csv"
diff ./consume_results.csv ./test/expected/regression_expected_western.csv
Success !!!
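The diff steps above can be approximated in Python when a tolerance-aware comparison is useful (a sketch only; the repo's test_driver.py is the authoritative comparison logic):

```python
import csv
import io

def csv_rows_match(actual_text, expected_text, tol=1e-6):
    """Compare two CSV bodies cell by cell, allowing a small numeric
    tolerance; non-numeric cells must match exactly."""
    a_rows = list(csv.reader(io.StringIO(actual_text)))
    e_rows = list(csv.reader(io.StringIO(expected_text)))
    if len(a_rows) != len(e_rows):
        return False
    for a_row, e_row in zip(a_rows, e_rows):
        if len(a_row) != len(e_row):
            return False
        for a, e in zip(a_row, e_row):
            try:
                if abs(float(a) - float(e)) > tol:
                    return False
            except ValueError:
                if a != e:
                    return False
    return True

print(csv_rows_match("id,val\n5,1.0\n", "id,val\n5,1.0000001\n"))
```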
$ cd consume
$ ../run_unit_tests.sh
......................................
Ran 941 tests in 0.111s
OK (passes=38)
Run the Docker image:
docker run -it --rm consume-h3:latest /bin/bash
Run with volume mounting, which allows editing of test_driver.py:
docker run -it --rm -v $(pwd):/app consume-h3:latest /bin/bash
python test/test_driver.py (edit test_driver.py to run tests or update expected results)
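A consume-h3 image could be built from a Dockerfile along these lines. This is a hypothetical sketch (base image, workdir, and file layout are assumptions); the Dockerfile in the repo is authoritative:

```dockerfile
# Hypothetical sketch of a consume-h3 image; see the repo's actual
# Dockerfile for the real build.
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["/bin/bash"]
```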
See https://depts.washington.edu/fft/fft_download/; the zip file contains instructions and sample input files.
Consume runs under Python 3.10, and we package a version of portable Python within FFT.
At some point, changes to Python will break the portable Python; when that happens, it may be advisable to pull Python from FFT and require Windows users to install Python locally.
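A defensive interpreter-version check at startup would make such breakage obvious immediately. An illustrative sketch (Consume may not do this today):

```python
import sys

REQUIRED = (3, 10)  # the version the bundled portable Python is pinned to

def check_python_version(required=REQUIRED):
    """Return True if the running interpreter meets the required version,
    otherwise print a clear message and return False."""
    if sys.version_info[:2] < required:
        print(f"Consume requires Python {required[0]}.{required[1]}+; "
              f"found {sys.version_info[0]}.{sys.version_info[1]}",
              file=sys.stderr)
        return False
    return True

check_python_version()
```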
Known dependencies on Consume:
- Bluesky uses the Consume_loadings.csv file (FCCS output)

The Consume module has been implemented in:
- BlueSky
- FFT
- WFEIS (not maintained - Michigan Tech)
- WA DNR Smoke Management System
- ODF ACost System
Steps to update eflookup:
- cd /Users/briandrye/repos/uw/apps-consumeGIT/consume/eflookup321
- edit input-data/orig-fccs2covertype.csv
- run ./dev/scripts/import-fccs2ct2ef --log-level=DEBUG
- the updated file may not be where you expect; mine was in ~/anaconda3/lib/python3.6/site-packages/eflookup/fccs2ef/data/fccs2covertype.py
- copy fccs2covertype.py to /Users/briandrye/repos/uw/apps-consumeGIT/consume/eflookup321/eflookup/fccs2ef/data
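The import script above converts a CSV into a Python data module. Conceptually, that conversion looks like the following sketch (the variable name and output format are illustrative assumptions; the real import-fccs2ct2ef output differs):

```python
import csv
import io

def csv_to_python_module(csv_text, varname="FCCS_TO_COVERTYPE"):
    """Render a two-column CSV as a Python dict literal, roughly what a
    csv-to-data-module import script produces. Conceptual sketch only."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    _header, body = rows[0], rows[1:]
    lines = [f"{varname} = {{"]
    for key, value in body:
        lines.append(f"    {key!r}: {value!r},")
    lines.append("}")
    return "\n".join(lines)

print(csv_to_python_module("fccs_id,cover_type\n1,13\n2,118\n"))
```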
Steps to update fccs_loadings.csv:
- install FFT, which has fuelbeds (or run FCCS standalone)
- put LF disturbance fuelbeds in the "FCCS/fuelbeds" folder
- put R6 fuelbeds in the "FCCS/fuelbeds" folder
- run "java -jar fuelbed.jar fuelbeds/*.xml"
- duplicate the first column and rename the copy to FCCSID
- replace underscore with '0' in the first column (fuelbed_number)
- sort the rows by the first column
- replace 0.000000 with 0 (to reduce file size)
- add 452 - 984, remappings
- add the -1111 row
- make a copy of fccs_loadings.csv called LF_ConsumeLoadings.csv (send to Landfire)
- run the regression and unit tests for Consume; update expected results if necessary using test_driver.py
- make a new Jenkins/Artifactory build
- make a new FFT build
- make a new Cmd Line Instruction zip file
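The column edits in the steps above can be scripted rather than done by hand. A sketch with the standard csv module (column names and ordering of the steps are assumptions based on the checklist; verify against the real file):

```python
import csv
import io

def clean_loadings(csv_text):
    """Apply the manual edits from the checklist above to a loadings CSV:
    duplicate the first column as FCCSID, replace '_' with '0' in the
    fuelbed number, sort rows by the first column, and shorten 0.000000
    cells to 0."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    header = ["FCCSID"] + header          # duplicated column gets the new name
    new_body = []
    for row in body:
        fuelbed = row[0].replace("_", "0")
        new_row = [fuelbed, fuelbed] + row[1:]
        new_row = ["0" if cell == "0.000000" else cell for cell in new_row]
        new_body.append(new_row)
    new_body.sort(key=lambda r: r[0])     # sort by the first column
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows([header] + new_body)
    return out.getvalue()

print(clean_loadings("fuelbed_number,loading\n9_1,0.000000\n2,1.5\n"))
```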
consume_loadings.csv is created by running FCCS (within FFT) against all fuelbeds using the default environment variables.
consume_loadings.csv is renamed fccs_loadings.csv and put in
/consume/input_data/fccs_loadings.csv
This version of fccs_loadings.csv should match what is available on the Landfire FCCS page: https://landfire.gov/fuel/fccs.
Bluesky Playground uses a fccs_loadings.json file that only has the standard fuelbeds.
The process for creating the json file is described in the README of the bluesky-playground-v3 repository.
From Artifactory, download consume to the Downloads folder:
~/Downloads/consume
install Python 3.10 if not already installed
cd to ~/Downloads/consume
make virtual environment:
briandrye@Brians-MacBook-Pro consume % python3.10 --version
Python 3.10.12
briandrye@Brians-MacBook-Pro consume % python3.10 -m venv my310env
briandrye@Brians-MacBook-Pro consume % source my310env/bin/activate
pip install -r requirements.txt
pip install numpy pandas (not needed; these should be installed by requirements.txt)
Create a consume loadings file from online FCCS (or FFT) that has required columns: consume-loadings.csv.
Create an input file that has a row for each fuelbed in the consume-loadings.csv file.
sample_natural_input5and8.csv
fuelbeds,area,fm_duff,fm_1000hr,can_con_pct,shrub_black_pct,pile_black_pct,units,ecoregion,fm_litter,season,duff_pct_available,sound_cwd_pct_available,rotten_cwd_pct_available
5,100,30,60,20,80,90,tons,western,30,fall,100,100,100
8,100,30,60,20,80,90,tons,western,30,fall,100,100,100
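Generating one input row per fuelbed can be scripted. A sketch that reuses the default environment values from the sample above (the defaults are copied from that sample, not mandated by Consume):

```python
# Columns and default environment values copied from the sample input
# file above (sample_natural_input5and8.csv); adjust for your scenario.
COLUMNS = ("fuelbeds,area,fm_duff,fm_1000hr,can_con_pct,shrub_black_pct,"
           "pile_black_pct,units,ecoregion,fm_litter,season,duff_pct_available,"
           "sound_cwd_pct_available,rotten_cwd_pct_available")
DEFAULTS = "100,30,60,20,80,90,tons,western,30,fall,100,100,100"

def make_input_csv(fuelbed_ids):
    """Build an input CSV with one row (default environment) per fuelbed."""
    lines = [COLUMNS]
    for fb in fuelbed_ids:
        lines.append(f"{fb},{DEFAULTS}")
    return "\n".join(lines) + "\n"

print(make_input_csv([5, 8]))
```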
(my310env2) briandrye@Brians-MacBook-Pro consume % python consume_batch.py -f consume-loadings.csv -o bdtest.csv natural sample_natural_input5and8.csv
Success!
Summary of commands:
python3.10 --version
python3.10 -m venv my310env
source my310env/bin/activate
python --version
pip install -r requirements.txt
python consume_batch.py -f consume-loadings2.csv -o bdtest.csv natural sample_natural_input5and8.csv