---
description: >-
  A library of environment variables in the flepiMoP codebase. These variables
  may be updated or deprecated as the project evolves.
---

# Environment Variables

Below is a list of environment variables (envvars) defined throughout the flepiMoP codebase. These variables are often set in response to command-line argument input, though some are set by flepiMoP without direct user input (these are denoted by a "Not a CLI option" note in the Argument column).
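Because most of these envvars mirror CLI flags, either mechanism can be used to configure a run. A minimal sketch of the pairing (all paths and values below are placeholders for illustration, not recommended defaults):

```shell
# Illustration only: exporting envvars directly instead of passing the
# corresponding CLI flags. Values here are placeholders.
export FLEPI_PATH="$HOME/flepiMoP"      # otherwise set via -p / --flepi_path
export CONFIG_PATH="config_sample.yml"  # otherwise set via -c / --config
export FLEPI_NUM_SLOTS=10               # otherwise set via -n / --slots
echo "Running $FLEPI_NUM_SLOTS slots with $CONFIG_PATH"
```

When both are given, the launcher scripts generally derive the envvar from the flag, so flags are the more common entry point for interactive use.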
| Variable | Argument | Description | Default | Acceptable values | Used in |
| --- | --- | --- | --- | --- | --- |
| `BATCH_SYSTEM` | Not a CLI option. | System you are running on (e.g., AWS, SLURM, local). | N/A | e.g., `aws`, `slurm` | `inference_job_launcher.py` |
| `CENSUS_API_KEY` | Not a CLI option. | A unique key for the census data API. | N/A | -- | `slurm_init.sh`, `build_US_setup.R` |
| `CONFIG_PATH` | `-c`, `--config` | Path to a configuration file. | -- | `your/path/to/config_file` | `build_covid_data.R`, `build_US_setup.R`, `build_initial_seeding.R`, `build_flu_data.R`, `config.R`, `preprocessing/` files |
| `DELPHI_API_KEY` | `-d`, `--delphi_api_key` | Your personalized key for the Delphi Epidata API. Alternatively, this key can go in the config `inference` section as `gt_api_key`. | -- | -- | `build_covid_data.R` |
| `DIAGNOSTICS` | `-n`, `--run-diagnostics` | Whether diagnostic tests should be run during execution. | TRUE | `--run-diagnostics FALSE` for FALSE; `--run-diagnostics` or no mention for TRUE | `run_sim_processing_SLURM.R` |
| `DISEASE` | `-i`, `--disease` | The disease being simulated in the present run. | `flu` | e.g., `rsv`, `covid` | `run_sim_processing_SLURM.R` |
| `DVC_OUTPUTS` | Not a CLI option, but defined using `--output`. | The names of the directories with outputs to save in S3 (separated by a space). | `model_output model_parameters importation hospitalization` | e.g., `model_output model_parameters importation hospitalization` | `scenario_job.py`, `AWS_scenario_runner.sh` |
| `FILENAME` | Not a CLI option. | Filenames for output files, determined dynamically during inference. | N/A | `file.parquet`, `plot.pdf` | `AWS_postprocess_runner.sh`, `SLURM_inference_job.run`, `AWS_inference_runner.sh` |
| `FIRST_SIM_INDEX` | `-i`, `--first_sim_index` | The index of the first simulation. | 1 | int | `shared_cli.py` |
| `FLEPI_BLOCK_INDEX` | `-b`, `--this_block` | Index of the current block. | 1 | int | `flepimop-inference-main.R`, `utils.py`, `AWS_postprocess_runner.sh`, `AWS_inference_runner.sh`, `SLURM_inference_job.run`, `inference_job_launcher.py` |
| `FLEPI_CONTINUATION` | `--continuation`/`--no-continuation` | Whether to use the resumed run's SEIR files (or a provided initial-files bucket) as initial conditions for the next run. | FALSE | `--continuation TRUE` for TRUE; `--continuation` or no mention for FALSE | `SLURM_inference_job.run`, `inference_job_launcher.py` |
| `FLEPI_CONTINUATION_FTYPE` | Not a CLI option. | If running a continuation, the file type of the initial condition files. | `config['initial_conditions']['initial_file_type']` | e.g., `.csv` | `SLURM_inference_job.run`, `inference_job_launcher.py` |
| `FLEPI_CONTINUATION_LOCATION` | `--continuation-location` | The location (a folder or an S3 bucket) from which to pull the `/init/` files (if not set, uses the resume location's SEIR files). | -- | `path/to/your/location` | `SLURM_inference_job.run`, `inference_job_launcher.py` |
| `FLEPI_CONTINUATION_RUN_ID` | `--continuation-run-id` | The ID of the run to continue from, if doing a continuation. | -- | int | `SLURM_inference_job.run`, `inference_job_launcher.py` |
| `FLEPI_INFO_PATH` | Not a CLI option. | pending | pending | pending | `info.py` |
| `FLEPI_ITERATIONS_PER_SLOT` | `-k`, `--iterations_per_slot` | Number of iterations to run per slot. | -- | int | `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `SLURM_inference_job.run`, `inference_job_launcher.py` |
| `FLEPI_MAX_STACK_SIZE` | `--stacked-max` | Maximum number of interventions to allow in a stacked intervention. | 5000 | int >= 350 | `StackedModifier.py`, `inference_job_launcher.py` |
| `FLEPI_MEM_PROFILE` | `-M`, `--memory_profiling` | Whether a memory profile should be run during iterations. | FALSE | `--memory_profiling TRUE` for TRUE; `--memory_profiling` or no mention for FALSE | `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `inference_job_launcher.py` |
| `FLEPI_MEM_PROF_ITERS` | `-P`, `--memory_profiling_iters` | If doing memory profiling, run the profiling after every X iterations. | 100 | int | `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `inference_job_launcher.py` |
| `FLEPI_NJOBS` | `-j`, `--jobs` | Number of parallel processors used to run the simulation. If there are more slots than jobs, slots will be divided among processors and run in series on each. | Number of cores detected as available on the computing cluster. | int | `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `calibrate.py` |
| `FLEPI_NUM_SLOTS` | `-n`, `--slots` | Number of independent simulations of the model to be run. | -- | int >= 1 | `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `calibrate.py`, `inference_job_launcher.py` |
| `FLEPI_OUTCOME_SCENARIOS` | `-d`, `--outcome_modifiers_scenarios` | Name of the outcome scenario to run. | `'all'` | pending | `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `SLURM_inference_job.run`, `inference_job_launcher.py` |
| `FLEPI_PATH` | `-p`, `--flepi_path` | Path to the flepiMoP directory. | `'flepiMoP'` | `path/to/flepiMoP` | several `postprocessing/` files, several `batch/` files, several `preprocessing/` files, `info.py`, `utils.py`, `_cli.py` |
| `FLEPI_PREFIX` | `--in-prefix` | Unique name for the run. | -- | e.g., `project_scenario1_outcomeA` | `SLURM_inference_job.run`, `inference_job_launcher.py`, `AWS_postprocess_runner.sh`, `calibrate.py`, several `preprocessing/` files, several `postprocessing/` files, several `batch/` files |
| `FLEPI_RESET_CHIMERICS` | `-L`, `--reset_chimeric_on_accept` | Whether chimeric parameters should be reset to global parameters when a global acceptance occurs. | TRUE | `--reset_chimeric_on_accept FALSE` for FALSE; `--reset_chimeric_on_accept` or no mention for TRUE | `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `slurm_init.sh`, `hpc_init`, `inference_job_launcher.py` |
| `FLEPI_RESUME` | `--resume`/`--no-resume` | Whether to resume the current calibration. | FALSE | `--resume TRUE` for TRUE; `--resume` or no mention for FALSE | `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `slurm_init.sh`, `hpc_init`, `inference_job_launcher.py` |
| `FLEPI_RUN_INDEX` | `-u`, `--run_id` | Unique ID given to the model run. If the same config is run multiple times, unique run IDs keep the output from being overwritten. | Auto-assigned run ID | int | `copy_for_continuation.py`, `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `shared_cli.py`, `base.py`, `calibrate.py`, several `batch/` files, several `postprocessing/` files |
| `FLEPI_SEIR_SCENARIOS` | `-s`, `--seir_modifier_scenarios` | Names of the intervention scenarios to run. | `'all'` | pending | `flepimop-inference-slot.R`, `flepimop-inference-main.R`, `inference_job_launcher.py` |
| `FLEPI_SLOT_INDEX` | `-i`, `--this_slot` | Index of the current slot. | 1 | int | `flepimop-inference-slot.R`, several `batch/` files |
| `FS_RESULTS_PATH` | `-R`, `--results-path` | Path to the model results. | -- | `your/path/to/model_results` | `prune_by_llik.py`, `prune_by_llik_and_proj.py`, several `postprocessing/` files, several `batch/` files, `model_output_notebook.Rmd` |
| `FULL_FIT` | `-F`, `--full-fit` | Whether to process the full fit. | FALSE | `--full-fit TRUE` for TRUE; `--full-fit` or no mention for FALSE | `run_sim_processing_SLURM.R` |
| `GT_DATA_SOURCE` | `-s`, `--gt_data_source` | Sources of ground truth data. | `'csse_case, fluview_death, hhs_hosp'` | See default | `build_covid_data.R` |
| `GT_END_DATE` | `--ground_truth_end` | Last date to include ground truth for. | -- | `YYYY-MM-DD` format | `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `GT_START_DATE` | `--ground_truth_start` | First date to include ground truth for. | -- | `YYYY-MM-DD` format | `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `IMM_ESC_PROP` | `--imm_esc_prop` | Annual percent of immune escape. | 0.35 | float between 0.00 and 1.00 | several `preprocessing/` files |
| `INCL_AGGR_LIKELIHOOD` | `-a`, `--incl_aggr_likelihood` | Whether the likelihood should be calculated with aggregate estimates. | FALSE | `--incl_aggr_likelihood TRUE` for TRUE; `--incl_aggr_likelihood` or no mention for FALSE | `flepimop-inference-slot.R` |
| `IN_FILENAME` | Not a CLI option. | Names of input files. | N/A | `file_1.csv`, `file_2.csv`, etc. | several `batch/` files |
| `INIT_FILENAME` | `--init_file_name` | Global intermediate name for the initial file. | -- | `file.csv` | `seir_init_immuneladder.R`, `inference_job.run`, several `preprocessing/` files |
| `INTERACTIVE_RUN` | `-I`, `--is-interactive` | Whether the current run is interactive. | FALSE | `--is-interactive TRUE` for TRUE; `--is-interactive` or no mention for FALSE | `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `JOB_NAME` | `--job-name` | Unique job name (intended for use when submitting to SLURM). | -- | Convention: `{config['name']}-{timestamp}` (str) | several `batch/` files |
| `LAST_JOB_OUTPUT` | Not a CLI option. | Path to the output of the last job. | N/A | `path/to/last_job/output` | `utils.py`, several `batch/` files |
| `OLD_FLEPI_RUN_INDEX` | Not a CLI option. | Run ID of the old flepiMoP run. | N/A | int | several `batch/` files |
| `OUT_FILENAME` | Not a CLI option. | Names of output files. | N/A | `file_1.csv`, `file_2.csv`, etc. | several `batch/` files |
| `OUT_FILENAME_DIR` | Not a CLI option. | Directory for output files. | N/A | `path/to/output/files` | `SLURM_inference_job.run` |
| `OUTPUTS` | `-o`, `--select-outputs` | A list of outputs to plot. | `'hosp, hnpi, snpi, llik'` | `hosp, hnpi, snpi, llik` | `postprocess_snapshot.R` |
| `PARQUET_TYPES` | Not a CLI option. | Parquet file types. | `'seed spar snpi seir hpar hnpi hosp llik init'` | `seed spar snpi seir hpar hnpi hosp llik init` | `AWS_postprocess_runner.sh`, `SLURM_inference_job.run`, `AWS_inference_runner.sh` |
| `PATH` | Not a CLI option. | Path relating to the AWS installation; used during SLURM runs. | N/A | Set with `export PATH=~/aws-cli/bin:$PATH` in `SLURM_inference_job.run` | `schema.yml`, `utils.py`, `info.py`, `AWS_postprocess_runner.sh`, `SLURM_inference_job.run` |
| `PROCESS` | `-r`, `--run-processing` | Whether to process the run. | FALSE | `--run-processing TRUE` for TRUE; `--run-processing` or no mention for FALSE | `run_sim_processing_SLURM.R` |
| `PROJECT_PATH` | `-d`, `--data_path` | Path to the folder with configs and model output. | -- | `path/to/configs_and_model-output` | `base.py`, `_cli.py`, `calibrate.py`, several `postprocessing/` files, several `batch/` files |
| `PULL_GT` | `-g`, `--pull-gt` | Whether to pull ground truth data. | FALSE | `--pull-gt TRUE` for TRUE; `--pull-gt` or no mention for FALSE | `run_sim_processing_SLURM.R` |
| `PYTHON_PATH` | `-y`, `--python` | Path to the Python executable. | `'python3'` | `path/to/your_python` | `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `RESUMED_CONFIG_PATH` | `--res_config` | Path to the previous config file, if using resumes. | NA | `path/to/past_config` | `seir_init_immuneladder.R`, several `preprocessing/` files |
| `RESUME_DISCARD_SEEDING` | `--resume-discard-seeding`, `--resume-carry-seeding` | Whether to keep seeding in resume runs. | FALSE | `--resume-carry-seeding TRUE` for TRUE; `--resume-carry-seeding` or no mention for FALSE | several `batch/` files |
| `RESUME_LOCATION` | `-r`, `--restart-from-location` | The location (a folder or an S3 bucket) where the previous run is stored. | -- | `path/to/last_job/output` | `build_initial_seeding.R`, `calibrate.py`, `slurm_init.sh`, `hpc_init`, `inference_job_launcher.py` |
| `RESUME_RUN` | `-R`, `--is-resume` | Whether this run is a resume. | FALSE | `--is-resume TRUE` for TRUE; `--is-resume` or no mention for FALSE | `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `RESUME_RUN_INDEX` | Not a CLI option. | Index of the resumed run. | Set by `OLD_FLEPI_RUN_INDEX` | int | `SLURM_inference_job.run` |
| `RSCRIPT_PATH` | `-r`, `--rpath` | Path to the R executable. | `'Rscript'` | `path/to/your_R` | `build_initial_seeding.R`, `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `RUN_INTERACTIVE` | `-I`, `--is-interactive` | Whether the current run is interactive. | FALSE | `--is-interactive TRUE` for TRUE; `--is-interactive` or no mention for FALSE | `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `SAVE_HOSP` | `-H`, `--save_hosp` | Whether the HOSP output files should be saved for each iteration. | TRUE | `--save_hosp FALSE` for FALSE; `--save_hosp` or no mention for TRUE | `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `SAVE_SEIR` | `-S`, `--save_seir` | Whether the SEIR output files should be saved for each iteration. | FALSE | `--save_seir TRUE` for TRUE; `--save_seir` or no mention for FALSE | `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `SEED_VARIANTS` | `-s`, `--seed_variants` | Whether to add variants/subtypes to outcomes in seeding. | -- | TRUE, FALSE | `create_seeding.R` |
| `SIMS_PER_JOB` | Not a CLI option. | Simulations per job. | N/A | int >= 1 | `AWS_postprocess_runner.sh`, `inference_job_launcher.py`, `AWS_inference_runner.sh` |
| `SLACK_CHANNEL` | `-s`, `--slack-channel` | Slack channel: either `csp-production` or `debug`; or `noslack` to disable Slack. | -- | `csp-production`, `debug`, or `noslack` | `postprocess_auto.py`, `postprocessing-scripts.sh`, `inference_job_launcher.py` |
| `SLACK_TOKEN` | `-s`, `--slack-token` | Slack token. | -- | -- | `postprocess_auto.py`, `SLURM_postprocess_runner.run` |
| `SUBPOP_LENGTH` | `-g`, `--subpop_len` | Number of digits in subpop identifiers. | 5 | int | `flepimop-inference-slot.R`, `flepimop-inference-main.R` |
| `S3_MODEL_PROJECT_PATH` | Not a CLI option. | Location in the S3 bucket with the code, data, and DVC pipeline. | N/A | `path/to/code_data_dvc` | several `batch/` files |
| `S3_RESULTS_PATH` | Not a CLI option. | Location in S3 to store results. | N/A | `path/to/s3/results` | several `batch/` files |
| `S3_UPLOAD` | Not a CLI option. | Whether runs are also saved to S3 during SLURM runs. | TRUE | TRUE, FALSE | `SLURM_postprocess_runner.run`, `SLURM_inference_job.run`, `inference_job_launcher.py` |
| `VALIDATION_DATE` | `--validation-end-date` | First date of the projection/forecast (first date without ground truth data). | `date.today()` | `YYYY-MM-DD` format | `data_setup_source.R`, `DataUtils.R`, `groundtruth_source.R`, `slurm_init.sh`, `hpc_init`, `inference_job_launcher.py` |
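On the consuming side, scripts typically read these variables with a fallback to the defaults listed above. A minimal Python sketch of that pattern (the `env_or_default` helper is hypothetical, not a flepiMoP function; the defaults shown follow the table):

```python
import os

def env_or_default(name: str, default: str) -> str:
    """Hypothetical helper: read an envvar, falling back to a default."""
    return os.environ.get(name, default)

# Defaults taken from the table above.
first_sim_index = int(env_or_default("FIRST_SIM_INDEX", "1"))
block_index = int(env_or_default("FLEPI_BLOCK_INDEX", "1"))
resume = env_or_default("FLEPI_RESUME", "FALSE").upper() == "TRUE"

print(first_sim_index, block_index, resume)
```

Note that boolean envvars here hold the strings `TRUE`/`FALSE`, so consumers must compare strings rather than rely on Python truthiness (any non-empty string, including `"FALSE"`, is truthy).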