Running on AWS 🌳

Using a Docker container

🖥 Start and access AWS submission box

Spin up an Ubuntu submission box if one is not already running. To do this, log onto the AWS Console and start the EC2 instance.

Update the IP address in your .ssh/config file. Open a terminal and type the command below. This opens your config file, where you can change the IP to the IPv4 address assigned to the AWS EC2 instance (see the AWS Console for this):

```bash
notepad .ssh/config
```

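For reference, a typical entry in .ssh/config looks like the sketch below; the host alias matches the "staging" name used in the next step, but the IP address and key path are illustrative assumptions, so substitute your own values:

```
# Hypothetical entry; replace HostName with the instance's current IPv4 address
Host staging
    HostName 3.101.24.55            # public IPv4 from the AWS Console (example value)
    User ec2-user
    IdentityFile ~/.ssh/id_ed25519  # your key file may be named differently
```
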
SSH into the box. Typically we name these instances "staging", so usually the command is:

```bash
ssh staging
```

🧱 Setup

You should now be logged onto the AWS submission box. If you haven't yet, set up your directory structure.

🗂 Create the directory structure (ONCE PER USER)

Type the following commands:

```bash
git clone https://github.com/HopkinsIDD/flepiMoP.git
git clone https://github.com/HopkinsIDD/Flu_USA.git
git clone https://github.com/HopkinsIDD/COVID19_USA.git
cd COVID19_USA
git clone https://github.com/HopkinsIDD/flepiMoP.git
cd ..
# or any other data directories
git config --global credential.helper store
git config --global user.name "{NAME SURNAME}"
git config --global user.email YOUREMAIL@EMAIL.COM
git config --global pull.rebase false # so you use merge as the default reconciliation method
```

Note that the repository is cloned nested, i.e., the flepiMoP repository is INSIDE the data repository.

Have your GitHub SSH key passphrase handy so you can paste it when prompted (possibly multiple times) by the git pull command. Alternatively, you can add your GitHub key to your batch box so you don't have to enter your token 6 times per day.

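One standard way to avoid the repeated passphrase prompts is to load your key into ssh-agent for the session; this is a generic OpenSSH workflow rather than a flepiMoP-specific step, and the key filename below is an assumption:

```bash
# Start an agent for this shell session and add your GitHub key
# (replace id_ed25519 with your actual key file if it differs)
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
```
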
🚀 Run inference using AWS (do every time)

Start by pulling the latest changes in the data repository and the nested flepiMoP repository:

```bash
cd COVID19_USA
git config --global credential.helper cache
git pull
git checkout main
git pull

cd flepiMoP
git pull
git checkout main
git pull
cd ..
```

🛳 Initiate the Docker

Start up and log into the Docker container, and run setup scripts to set up the environment. This setup code links the Docker directories to the existing directories on your box. Because of this, you should not run multiple job submissions simultaneously with this setup, as one job submission might modify the data of another.

```bash
sudo docker pull hopkinsidd/flepimop:latest
sudo docker run -it \
  -v /home/ec2-user/COVID19_USA:/home/app/drp/COVID19_USA \
  -v /home/ec2-user/flepiMoP:/home/app/drp/flepiMoP \
  -v /home/ec2-user/.ssh:/home/app/.ssh \
hopkinsidd/flepimop:latest
```

Setup environment

To set up the environment for your run, run the following commands. These are specific to your run, i.e., change VALIDATION_DATE, FLEPI_RUN_INDEX, and RESUME_LOCATION as required. If submitting multiple jobs, it is recommended to split jobs between two queues: Compartment-JQ-1588569569 and Compartment-JQ-1588569574.

```bash
cd ~/drp
export CENSUS_API_KEY={A CENSUS API KEY}
export FLEPI_RESET_CHIMERICS=TRUE
export COMPUTE_QUEUE="Compartment-JQ-1588569574"

export VALIDATION_DATE="2023-01-29"
export RESUME_LOCATION=s3://idd-inference-runs/USA-20230122T145824
export FLEPI_RUN_INDEX=FCH_R16_lowBoo_modVar_ContRes_blk4_Jan29_tsvacc

export CONFIG_PATH=config_FCH_R16_lowBoo_modVar_ContRes_blk4_Jan29_tsvacc.yml
```

NOTE: If you are not running a resume run, DO NOT export the environment variable RESUME_LOCATION.

Additionally, if you want to profile how the model is using your memory resources during the run, run the following commands:

```bash
export FLEPI_MEM_PROFILE=TRUE
export FLEPI_MEM_PROF_ITERS=50
```

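As noted above, multiple jobs should be split between the two queues. For example, before submitting a second job you could point the shell at the other queue (queue names as listed above):

```bash
# Hypothetical second submission: switch to the other compute queue first
export COMPUTE_QUEUE="Compartment-JQ-1588569569"
```
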
Then prepare the pipeline directory (you can skip the pulls if you have already done this and the pipeline hasn't been updated, i.e., git pull says it's up to date). You need to set $PROJECT_PATH to your data folder. For a COVID-19 run, do:

```bash
cd ~/drp
export PROJECT_PATH=$(pwd)/COVID19_USA
export GT_DATA_SOURCE="csse_case, fluview_death, hhs_hosp"
```

For a Flu run, do:

```bash
cd ~/drp
export PROJECT_PATH=$(pwd)/Flu_USA
```

Now, for any type of run:

```bash
cd $PROJECT_PATH
export FLEPI_PATH=$(pwd)/flepiMoP
cd $FLEPI_PATH
git checkout main
git pull
git config --global credential.helper 'cache --timeout 300000'

# Install gempyor and the R modules. There should be no error; please report if there is.
# Sometimes you might need to run the next line twice, because inference depends
# on report.generation, which is installed later, in alphabetical order
# (or fix that yourself if you know R well enough 😊).
Rscript build/local_install.R # warnings are ok; there should be no error
python -m pip install --upgrade pip
pip install -e flepimop/gempyor_pkg/
pip install boto3
cd ..
```

For now, just in case, update the arrow package from 8.0.0 (as shipped in the Docker image) to 11.0.3.

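The exact command for this update isn't shown in the text; a minimal sketch, assuming the package is pinned from CRAN via the remotes helper (the version string is taken from the text above and may need adjusting):

```bash
# Sketch: pin arrow to the version named above (verify the exact version string)
Rscript -e 'install.packages("remotes"); remotes::install_version("arrow", version = "11.0.3")'
```
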
Now flepiMoP is ready 🎉.

Do some clean-up before your run. The fast way is to restore the $PROJECT_PATH git repository to its blank state (⚠️ this removes everything that does not come from git):

```bash
cd $PROJECT_PATH
git pull
git checkout main
git reset --hard && git clean -f -d  # this deletes everything that is not on github in this repo !!!
```

I want more control over what is deleted

If you prefer to have more control, delete just the files you like, e.g.:

```bash
rm -rf model_output data/us_data.csv data-truth &&
   rm -rf data/mobility_territories.csv data/geodata_territories.csv &&
   rm -rf data/seeding_territories.csv &&
   rm -rf data/seeding_territories_Level5.csv data/seeding_territories_Level67.csv
```

If you still want to use git to clean the repo but want finer control, or want to understand how dangerous the command is, read this.

Then run the preparatory data-building scripts and you are good:

```bash
export CONFIG_PATH=config_FCH_R16_lowBoo_modVar_ContRes_blk4_Jan29_tsvacc.yml # if you haven't already done this
Rscript $FLEPI_PATH/datasetup/build_US_setup.R

# For COVID-19 do
Rscript $FLEPI_PATH/datasetup/build_covid_data.R

# For Flu do
Rscript $FLEPI_PATH/datasetup/build_flu_data.R
```

Now you may want to test that it works:

```bash
flepimop-inference-main -c $CONFIG_PATH -j 1 -n 1 -k 1
```

If this fails, you may want to investigate the error. If it succeeds, proceed by first deleting the model_output:

```bash
# don't delete model_output if you have another run in parallel!
rm -rf $PROJECT_PATH/model_output
```

Launch your inference batch job on AWS

Assuming the initial test simulation finishes successfully, you will now enter credentials and submit your job onto AWS Batch. Enter the following command into the terminal:

```bash
aws configure
```

You will be prompted to enter the following items. These can be found in a file you received from Shaun called new_user_credentials.csv.

  • Access key ID when prompted

  • Secret access key when prompted (the Access Key ID and Secret Access Key are given to you once, in this file)

  • Default region name: us-west-2

  • Default output: leave blank when prompted and press enter

Now you're fully set to go 🎉

To launch the whole inference batch job, type the following command:

```bash
python $FLEPI_PATH/batch/inference_job_launcher.py --aws -c $CONFIG_PATH -q $COMPUTE_QUEUE
```

This command infers everything from your environment variables: whether there is a resume or not, what the run_id is, etc. The default is to carry seeding if it is a resume (see below for alternative options).

If you'd like to have more control, you can specify the arguments manually:

```bash
# FIX THIS TO REFLECT AWS OPTIONS
python $FLEPI_PATH/batch/inference_job_launcher.py --aws \
                    -c $CONFIG_PATH \
                    -p $FLEPI_PATH \
                    --data-path $PROJECT_PATH \
                    --upload-to-s3 True \
                    --id $FLEPI_RUN_INDEX \
                    --restart-from-location $RESUME_LOCATION
```

We allow for a number of different jobs with different setups; e.g., you may not want to carry seeding. Some examples of appropriate setups are given below. No modification of these code chunks should be required.

NOTE: Resume and Continuation Resume runs are currently submitted the same way, resuming from an S3 location that was generated manually. Typically we will also submit any Continuation Resume run specifying --resume-carry-seeding, as starting seeding conditions will be manually constructed and put in the S3 location.

Carrying seeding (do this to use seeding fits from the resumed run):

```bash
cd $PROJECT_PATH

$FLEPI_PATH/batch/inference_job_launcher.py --aws -c $CONFIG_PATH -q $COMPUTE_QUEUE --resume-carry-seeding --restart-from-location $RESUME_LOCATION
```

Discarding seeding (do this to refit seeding again):

```bash
cd $PROJECT_PATH

$FLEPI_PATH/batch/inference_job_launcher.py --aws -c $CONFIG_PATH -q $COMPUTE_QUEUE --resume-discard-seeding --restart-from-location $RESUME_LOCATION
```

Single Iteration + Carry seeding (do this to produce additional scenarios where no fitting is required):

```bash
cd $PROJECT_PATH

$FLEPI_PATH/batch/inference_job_launcher.py --aws -c $CONFIG_PATH -q $COMPUTE_QUEUE -j 1 -k 1
```

Document the submission

After the job is successfully submitted, you will be in a new branch of the data repo. Commit the ground truth data files to the branch on GitHub and then return to the main branch:

```bash
git add data/
git config --global user.email "[email]"
git config --global user.name "[github username]"
git commit -m "scenario run initial"
branch=$(git branch | sed -n -e 's/^\* \(.*\)/\1/p')
git push --set-upstream origin $branch

git checkout main
git pull
```

Send the submission information to Slack so we can identify the job later. Example output:

```
Launching USA-20230426T135628_inference_med on aws...
 >> Job array: 300 slot(s) X 5 block(s) of 55 simulation(s) each.
 >> Final output will be: s3://idd-inference-runs/USA-20230426T135628/model_output/
 >> Run id is SMH_R17_noBoo_lowIE_phase1_blk1
 >> config is config_SMH_R17_noBoo_lowIE_phase1_blk1.yml
 >> FLEPIMOP branch is main with hash 3773ed8a20186e82accd6914bfaf907fd9c52002
 >> DATA branch is R17 with hash 6f060fefa9784d3f98d88a313af6ce433b1ac913
```

Running On An HPC With Slurm

Tutorial on how to install and run flepiMoP on a supported HPC with Slurm.

These details cover how to install and initialize flepiMoP in an HPC environment and submit a job with Slurm.

Currently, only JHU's Rockfish and UNC's Longleaf HPC clusters are supported. If you need support for a new HPC cluster, please file an issue in the flepiMoP GitHub repository.

For getting access to one of the supported HPC environments, please refer to the following documentation before continuing:

  • UNC's Longleaf Cluster for UNC users, or

  • JHU's Rockfish Cluster for JHU users.

External users will need to consult with their PI contact at the respective institution.

Installing flepiMoP

This task needs to be run once to do the initial install of flepiMoP.

On JHU's Rockfish you'll need to run these steps in a Slurm interactive job. This can be launched with /data/apps/helpers/interact -n 4 -m 12GB -t 4:00:00, but please consult the Rockfish user guide for up-to-date information.

Download and run the appropriate installation script with the following command:

```bash
$ curl -LsSf -o flepimop-install-<cluster-name> https://raw.githubusercontent.com/HopkinsIDD/flepiMoP/refs/heads/main/bin/flepimop-install-<cluster-name>
$ chmod +x flepimop-install-<cluster-name>
$ ./flepimop-install-<cluster-name>
```

Substitute <cluster-name> with either rockfish or longleaf. This script will install flepiMoP to the correct locations on the cluster. Once the installation is done, the conda environment can be activated and the script removed with:

```bash
$ conda activate flepimop-env
$ rm flepimop-install-<cluster-name> flepimop-install
```

Updating flepiMoP

Updating flepiMoP is designed to work just the same as installing it. First change directory to your flepiMoP installation, make sure that your clone of the flepiMoP repository is set to the branch you are working with (if doing development or operations work), and then run the flepimop-install-<cluster-name> script, substituting <cluster-name> with either rockfish or longleaf:

```bash
$ ./bin/flepimop-install-<cluster-name>
```

Initialize The Created flepiMoP Environment

These steps to initialize the environment need to be run on a per-run or as-needed basis.

Change directory to where a full clone of the flepiMoP repository was placed (the install script states the location in its output), and then run the hpc_init script, substituting <cluster-name> with either rockfish or longleaf:

```bash
$ ./batch/hpc_init <cluster-name>
```

This script assumes the same defaults as the install script for where the flepiMoP clone is and for the name of the conda environment. It will also ask about the path to your flepiMoP installation and project directory, and whether you would like to set a default configuration file; if you plan to use the flepimop batch-calibrate command below, we recommend pressing enter to skip setting this environment variable. If this is your first time initializing flepiMoP, it might be helpful to use configs out of the flepiMoP/examples/tutorials directory as a test.

Upon completion, the script outputs a sample set of commands to run to quickly test that the installation/initialization has gone okay.

Submitting A Batch Inference Job To Slurm

The main entry point for submitting batch inference jobs is the flepimop batch-calibrate action. This CLI tool lets you submit a job to Slurm once logged into a cluster. For details on the available options, refer to flepimop batch-calibrate --help. As a quick example, let's submit an R inference job and then an EMCEE inference job. For the R inference run, execute the following once logged into either longleaf or rockfish:

```bash
$ export PROJECT_PATH="$FLEPI_PATH/examples/tutorials/"
$ cd $PROJECT_PATH
$ flepimop batch-calibrate \
    --blocks 1 \
    --chains 4 \
    --samples 20 \
    --simulations 100 \
    --time-limit 30min \
    --slurm \
    --nodes 4 \
    --cpus 1 \
    --memory 1G \
    --extra 'partition=<your partition, if relevant>' \
    --extra 'email=<your email, if relevant>' \
    --skip-checkout \
    -vvv \
    config_sample_2pop_inference.yml
```

This command produces a large amount of output due to -vvv. If you want to try the command without actually submitting the job, you can pass the --dry-run option. The command submits a job to calibrate the sample 2 population configuration, which uses R inference. R inference supports array jobs, so each chain will be run on an individual node with 1 CPU and 1GB of memory apiece. Additionally, the --extra option lets you provide additional info to the batch system; here it sets the partition to submit the jobs to, but email is also supported with Slurm for notifications. After running this command you should notice the following outputs:

  • config_sample_2pop-YYYYMMDDTHHMMSS.yml: This file contains the compiled config that is actually submitted for inference,

  • manifest.json: This file contains a description of the submitted job with the command used, the job name, and the flepiMoP and project git commit hashes,

  • slurm-*_*.out: These files contain output from Slurm for each of the array jobs submitted, and

  • tmp*.sbatch: Contains the generated file submitted to Slurm with sbatch.

For operational runs these files should be committed to the checked-out branch for archival/reproducibility reasons; a sketch of that is shown just below. Since this is just a test, you can safely remove these files after inspecting them.

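The exact archival convention isn't prescribed here; a minimal sketch, assuming you simply commit the generated files (named per the patterns above) to the branch you have checked out:

```bash
# Hypothetical archival commit of the generated run records
git add config_sample_2pop-*.yml manifest.json
git commit -m "archive batch-calibrate records for this run"
git push
```
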
Now, let's submit an EMCEE inference job with the same tool. Importantly, the options we'll use won't change much, because flepimop batch-calibrate is designed to provide a unified, implementation-independent interface. One notable difference: unlike R inference, EMCEE inference only supports running on one node, so the resources for this command are adjusted accordingly by:

  • Swapping 4 nodes with 1 CPU each for 1 node with 4 CPUs, and

  • Doubling the memory from 4 nodes with 1GB each (4GB total) to 1 node with 8GB (8GB total).

The extra increase in memory is to run a configuration that is slightly more resource-intensive than the previous example:

```bash
$ export PROJECT_PATH="$FLEPI_PATH/examples/simple_usa_statelevel/"
$ cd $PROJECT_PATH
$ flepimop batch-calibrate \
    --blocks 1 \
    --chains 4 \
    --samples 20 \
    --simulations 100 \
    --time-limit 30min \
    --slurm \
    --nodes 1 \
    --cpus 4 \
    --memory 8G \
    --extra 'partition=<your partition, if relevant>' \
    --extra 'email=<your email, if relevant>' \
    --skip-checkout \
    -vvv \
    simple_usa_statelevel.yml
```

This command will also produce a similar set of record-keeping files as before, which you can safely remove after inspecting.

Estimating Required Resources For A Batch Inference Job

When inspecting the output of flepimop batch-calibrate --help, you may have noticed several options named --estimate-*. While not required for the smaller jobs above, this tool can estimate the resources required to run a larger batch estimation job. It does this by running smaller jobs and then projecting the required resources for a large job from those smaller jobs. To use this feature, provide the --estimate flag, a job size for the targeted job, resources for the test jobs, and the following estimation settings:

  • --estimate-runs: The number of smaller jobs to run to estimate the required resources from,

  • --estimate-interval: The size of the prediction interval to use for estimating the resource/time limit upper bounds,

  • --estimate-vary: The job size elements to vary when generating smaller jobs,

  • --estimate-factors: The factors to use in projecting the larger scale estimation job,

  • --estimate-measurements: The resources to estimate,

  • --estimate-scale-upper: The scale factor to use to determine the largest sample job to generate, and

  • --estimate-scale-lower: The scale factor to use to determine the smallest sample job to generate.

Effectively using these options requires some knowledge of the underlying inference method. Sticking with the simple usa state level example above, try submitting the following command (after cleaning up the output from the previous example):

```bash
$ flepimop batch-calibrate \
    --blocks 1 \
    --chains 4 \
    --samples 20 \
    --simulations 500 \
    --time-limit 2hr \
    --slurm \
    --nodes 1 \
    --cpus 4 \
    --memory 24GB \
    --extra 'partition=<your partition, if relevant>' \
    --extra 'email=<your email, if relevant>' \
    --skip-checkout \
    --estimate \
    --estimate-runs 6 \
    --estimate-interval 0.8 \
    --estimate-vary simulations \
    --estimate-factors simulations \
    --estimate-measurements time \
    --estimate-measurements memory \
    --estimate-scale-upper 5 \
    --estimate-scale-lower 10 \
    -vvv \
    simple_usa_statelevel.yml > simple_usa_statelevel_estimation.log 2>&1 & disown
```

In short, this command will submit 6 test jobs that vary the number of simulations and measure time and memory; the number of simulations is then used to project the required resources. The test jobs will range from 1/5 down to 1/10 of the target job size. This command will take a while to run, because it needs to wait on the test jobs to finish before it can do the analysis, so you can check on the progress by inspecting the simple_usa_statelevel_estimation.log file.

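For example, you can watch the log from the same directory with a standard tail:

```bash
tail -f simple_usa_statelevel_estimation.log
```
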
Once this command finishes running, you should notice a file called USA_influpaint_resources.json. This JSON file contains the estimated resources required to run the target job. You can submit the target job with the estimated resources by using the same command as before, dropping the --estimate-* options and using the --from-estimate option to pull the information from the outputted file:

```bash
$ flepimop batch-calibrate \
    --blocks 1 \
    --chains 4 \
    --samples 20 \
    --simulations 500 \
    --time-limit 2hr \
    --slurm \
    --nodes 1 \
    --cpus 4 \
    --memory 24GB \
    --from-estimate USA_influpaint_resources.json \
    --extra 'partition=<your partition, if relevant>' \
    --extra 'email=<your email, if relevant>' \
    --skip-checkout \
    -vvv \
    simple_usa_statelevel.yml
```

Saving Model Outputs On Batch Inference Job Finish

For production runs it is particularly helpful to save the calibration results to long-term storage after a successful run, for safe keeping. To accomplish this, flepimop batch-calibrate can chain a call to flepimop sync after a successful run via the --sync-protocol option. For more details on the flepimop sync command in general, please refer to the Synchronizing files: Syntax and Applications guide.

For a quick demonstration of how to use this option, start with the config_sample_2pop_inference.yml configuration file and add the following section:

```yaml
sync:
  rsync-model-output:
    type: rsync
    source: model_output
    target: /path/to/an/example-folder
  s3-model-output:
    type: s3sync
    source: model_output
    target: s3://my-bucket/and-sub-bucket
```

Here /path/to/an/example-folder and s3://my-bucket/and-sub-bucket are placeholders for your desired locations. Importantly, note that there is no trailing slash on the model_output directory name; this makes flepimop sync sync the model_output directory itself and not just its contents. You can also apply additional filters to the sync protocols here, say to limit the backed-up model outputs to certain folders or to exclude llik outputs, but the --sync-protocol option will add filters that limit the synced directories to those corresponding to the submitted run. Note that users do not need to specify run/job IDs or configuration file names in the sync protocol; the flepimop batch-calibrate CLI takes advantage of flepimop sync's options to set paths appropriately for run/job IDs.

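As a sketch of what such a filter might look like, the filters key and exclude pattern below are assumptions, not confirmed syntax; consult the synchronization guide linked above for the exact schema:

```yaml
sync:
  rsync-model-output:
    type: rsync
    source: model_output
    target: /path/to/an/example-folder
    # Hypothetical filter: skip likelihood outputs when backing up
    filters:
      - "- llik/**"
```
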
Modifying the first flepimop batch-calibrate command from before:

```bash
$ export PROJECT_PATH="$FLEPI_PATH/examples/tutorials/"
$ cd $PROJECT_PATH
$ flepimop batch-calibrate \
    --blocks 1 \
    --chains 4 \
    --samples 20 \
    --simulations 100 \
    --time-limit 30min \
    --slurm \
    --nodes 4 \
    --cpus 1 \
    --memory 1G \
    --extra 'partition=<your partition, if relevant>' \
    --extra 'email=<your email, if relevant>' \
    --skip-checkout \
    --sync-protocol <your sync protocol, either rsync-model-output or s3-model-output in this case> \
    -vvv \
    config_sample_2pop_inference.yml
```

This command will submit an array job just like before, but will also add a dependent job with the same name prefixed with 'sync_'. That should look like:

```
[twillard@longleaf-login6 tutorials]$ squeue -p jlessler
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
           2374868  jlessler sync_sam twillard PD       0:00      1 (Dependency)
         2374867_1  jlessler sample_2 twillard  R       2:26      1 g1803jles01
         2374867_2  jlessler sample_2 twillard  R       2:26      1 g1803jles01
         2374867_3  jlessler sample_2 twillard  R       2:26      1 g1803jles01
         2374867_4  jlessler sample_2 twillard  R       2:26      1 g1803jles01
```

After those jobs finish, the results can be found in a subdirectory named after the job, whose contents will look like:

```
[twillard@longleaf-login6 sample_2pop-20250521T190823_Ro_all_test_limits]$ tree -L 4
.
├── manifest.json
└── model_output
    └── sample_2pop_Ro_all_test_limits
        └── sample_2pop-20250521T190823_Ro_all_test_limits
            ├── hnpi
            ├── hosp
            ├── hpar
            ├── init
            ├── llik
            ├── seir
            ├── snpi
            └── spar

11 directories, 1 file
```

Note that this contains the model_output directory, limited to the batch run named 'sample_2pop-20250521T190823_Ro_all_test_limits', as well as a file called manifest.json, which can be used to reproduce the run from scratch if needed.

Saving Model Outputs To S3 For Hopkins Users

For Hopkins-affiliated users there is a configuration file patch included with flepiMoP that can be used to add S3 syncing of model outputs to s3://idd-inference-runs. Taking the previous example of running the config_sample_2pop_inference.yml configuration, we can slightly modify the command to:

```bash
$ flepimop batch-calibrate \
    --blocks 1 \
    --chains 4 \
    --samples 20 \
    --simulations 100 \
    --time-limit 30min \
    --slurm \
    --nodes 4 \
    --cpus 1 \
    --memory 1G \
    --extra 'partition=<your partition, if relevant>' \
    --extra 'email=<your email, if relevant>' \
    --skip-checkout \
    --sync-protocol s3-idd-inference-runs \
    -vvv \
    config_sample_2pop_inference.yml $FLEPI_PATH/common/s3-idd-inference-runs.yml
```

This takes advantage of the patching abilities of flepimop batch-calibrate to add a sync protocol named s3-idd-inference-runs that saves the results to the s3://idd-inference-runs bucket.

Running with Docker locally 🛳

Short tutorial on running flepiMoP on your personal computer using a Docker container.

Access model files

See the Before any run section to ensure you have access to the correct files needed to run. On your local machine, determine the file paths to:

  • the directory containing the flepimop code (likely the folder you cloned from GitHub), which we'll call <FLEPI_PATH>

  • the directory containing your project code, including the input configuration file and population structure, which we'll call <PROJECT_PATH>

For example, if you clone your GitHub repositories into a local folder called Github and are using flepimop_sample as a project repository, your directory names could be:

On Mac:

<FLEPI_PATH> = /Users/YourName/Github/flepiMoP

<PROJECT_PATH> = /Users/YourName/Github/flepiMoP/examples/tutorials

On Windows:

<FLEPI_PATH> = C:\Users\YourName\Github\flepiMoP

<PROJECT_PATH> = C:\Users\YourName\Github\flepiMoP\examples\tutorials

Note that Docker file and directory names are case sensitive.

🧱 Set up Docker

Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run, including libraries, system tools, code, and runtime. This means you can run and install software without installing its dependencies in the local operating system.

A Docker container is an environment that is isolated from the rest of the operating system: you can create files and programs, delete things, and so on, and none of it will affect your OS. It is a local virtual OS within your OS.

For flepiMoP, we have a Docker container that will help you get running quickly.

Make sure you have the Docker software installed, then open your command prompt or terminal application.

Helpful tools

To understand the basics of Docker, refer to Docker Basics. The following Docker Tutorial may also be helpful.

To install Docker for Mac, refer to Installing Docker for Mac. Pay special attention to the specific chip your Mac has (Apple Silicon vs Intel), as installation files and directions differ.

To install Docker for Windows, refer to Installing Docker for Windows.

To find the Windows Command Prompt, type "Command Prompt" in the search bar and open it. This Command Prompt Video Tutorial may be helpful for new users.

To find the Apple Terminal, type "Terminal" in the search bar or go to Applications -> Utilities -> Terminal.

⚠️ Getting errors on a Mac?

If you have a newer Mac computer that runs on an Apple Silicon chip, you may encounter errors. Here are a few tips to avoid them:

• Make sure you have macOS 11 or above

• Install any minor updates to the operating system

• Install Rosetta 2 for Mac:

  • In the terminal, type softwareupdate --install-rosetta

• Make sure you've installed the Docker version that matches the chip your Mac has (Intel vs Apple Silicon)

• Update Docker to the latest version

  • On Mac, updating Docker may require you to uninstall Docker before installing a newer version. To do this, open the Docker Desktop application and click the Troubleshoot icon (the small icon that looks like an insect at the top right corner of the window). Click the Uninstall button. Once this process is completed, open Applications in Finder and move Docker to the Trash. If you get an error message that says Docker cannot be deleted because it is open, open Activity Monitor and stop all Docker processes, then put Docker in the Trash. Once Docker is deleted, install the new Docker version appropriate for your Mac chip. After reinstallation is complete, restart your computer.

Run the Docker image

First, make sure you have the latest version of the flepimop Docker image (hopkinsidd/flepimop) downloaded on your machine by opening your terminal application and entering:

```bash
docker pull hopkinsidd/flepimop:latest-dev
```

Next, run the Docker image by entering the following, replacing <FLEPI_PATH> and <PROJECT_PATH> with the path names for your machine (no quotes or brackets, just the path text):

```bash
docker run -it \
  -v <FLEPI_PATH>:/home/app/flepimop \
  -v <PROJECT_PATH>:/home/app/drp \
hopkinsidd/flepimop:latest-dev
```

On Windows: If you get an error, you may need to delete the "\" line breaks and submit the command as a single continuous line of code.

In this command, we run the Docker container, creating a volume and mounting (-v) your code and project directories into the container. Creating a volume and mounting it to a container basically allocates space in Docker for it to mirror - and have read and write access to - files on your local machine.

The folder with the flepiMoP code <FLEPI_PATH> will be at the path flepimop within the Docker environment, while the project folder will be at the path drp.

You now have a local Docker container installed, which includes the R and Python versions required to run flepiMoP, with all the required packages already installed.

You don't need to re-run the above steps every time you want to run the model. When you're done using Docker for the day, you can simply "detach" from the container and pause it, without deleting it from your machine. Then you can re-attach to it when you next want to run the model.

Define environment variables

Create environment variables for the paths to the flepimop code folder and the project folder:

```bash
export FLEPI_PATH=/home/app/flepimop/
export PROJECT_PATH=/home/app/drp/
```

Go into the code directory and install the R and Python code packages:

```bash
cd $FLEPI_PATH # move to the flepimop directory
Rscript build/local_install.R # install R packages
pip install --no-deps -e flepimop/gempyor_pkg/ # install Python package gempyor
```

Each installation step may take a few minutes to run.

Note: These installations take place in the Docker container, not the local operating system. They must be made once while starting the container, but need not be repeated every time you run a model, provided they have been installed once. You will need an active internet connection for pulling the Docker image and installing the R packages (since some are hosted online), but not for the other steps of running the model.

Run the code

Everything is now ready 🎉 The next step depends on what sort of simulation you want to run: one that includes inference (fitting the model to data) or only a forward simulation (non-inference). Inference is run from R, while forward-only simulations are run directly from the Python package gempyor.

In either case, navigate to the project folder and make sure to delete any old model output files that are there:

```bash
cd $PROJECT_PATH       # goes to your project repository
rm -r model_output/    # delete the outputs of past runs, if any
```

Inference run

An inference run requires a configuration file that has an inference section. Stay in the $PROJECT_PATH folder and run the inference script, providing the name of the configuration file you want to run (e.g., config.yml):

```bash
flepimop-inference-main -c config.yml
```

This will run the model and create a lot of output files in $PROJECT_PATH/model_output/.

The last few lines visible on the command prompt should be:

```
[[1]]

[[1]][[1]]

[[1]][[1]][[1]]

NULL
```

If you want to quickly do runs with options different from those encoded in the configuration file, you can do that from the command line, for example:

```bash
flepimop-inference-main -j 1 -n 1 -k 1 -c config.yml
```

where:

• n is the number of parallel inference slots,

• j is the number of CPU cores to use on your machine (if j > n, only n cores will actually be used; if j < n, some cores will run multiple slots in sequence), and

• k is the number of iterations per slot.

You can put all of this together into a single script that can be run all at once:

```bash
docker pull hopkinsidd/flepimop:latest-dev
docker run -it \
  -v <FLEPI_PATH>:/home/app/flepimop \
  -v <PROJECT_PATH>:/home/app/drp \
hopkinsidd/flepimop:latest-dev
export FLEPI_PATH=/home/app/flepimop/
export PROJECT_PATH=/home/app/drp/
cd $FLEPI_PATH
Rscript build/local_install.R
pip install --no-deps -e flepimop/gempyor_pkg/
cd $PROJECT_PATH
rm -rf model_output
flepimop-inference-main -j 1 -n 1 -k 1 -c config.yml
```

Non-inference run

Stay in the $PROJECT_PATH folder and run a simulation directly from the forward-simulation Python package gempyor: call flepimop simulate, providing the name of the configuration file you want to run (e.g., config.yml):

```bash
flepimop simulate config.yml
```

It is currently required that all configuration files have an interventions section. There is currently no way to simulate a model with no interventions, though this functionality is expected soon. For now, simply create an intervention that has value zero.

You can put all of this together into a single script that can be run all at once:

```bash
docker pull hopkinsidd/flepimop:latest-dev
docker run -it \
  -v <FLEPI_PATH>:/home/app/flepimop \
  -v <PROJECT_PATH>:/home/app/drp \
hopkinsidd/flepimop:latest-dev
export FLEPI_PATH=/home/app/flepimop/
export PROJECT_PATH=/home/app/drp/
cd $FLEPI_PATH
Rscript build/local_install.R
pip install --no-deps -e flepimop/gempyor_pkg/
cd $PROJECT_PATH
rm -rf model_output
flepimop simulate config.yml
```

Finishing up

You can avoid repeating all the above steps every time you want to run the code. When the docker run command creates a container, it is stored locally on your computer with all the packages/variables/etc. you installed. You can leave this container and come back to it whenever you want, without having to redo all this setup.

When you're in the Docker container, figure out the name Docker has given to the container you created by typing:

```bash
docker ps
```

The output will be something silly like:

```
> festive_feistel
```

Write this down for later reference. You can also see the container name in the Docker Desktop app's Containers tab.

To "detach" from the Docker container and stop it, type CTRL + c. The command prompt for your terminal application is now just running locally, not in the Docker container.

Next time you want to re-start and "attach" the container, type:

```bash
docker start container_name
```

at the command line, or hit the play button ▶️ beside the container's name in the Docker app. Replace container_name with the name of your old container.

Then "attach" to the container by typing:

```bash
docker attach container_name
```

The reason that stopping/starting a container is separate from detaching/attaching is that technically you can leave a container (and any processes within it) running in the background and exit it. If you want to do that, detach and leave it running by typing CTRL + p then quickly CTRL + q; then, when you want to attach to it again, you don't need to start the container first.

If the core model code within the flepimop repository (flepimop/flepimop/gempyor_pkg/ or flepimop/flepimop/R_packages) has been edited since you created the container, or if the R or Python package requirements have changed, then you'll have to re-run the steps to install the packages. Otherwise, you can just start running model code!

Advanced run guides

For running the model locally, especially for testing, non-inference runs, and short chains, we provide a guide for setting up and running in a conda environment, and we provide a Docker container for use. A Docker container is an environment that is isolated from the rest of the operating system: you can create files and programs and delete things without affecting your OS. It is a local virtual OS within your OS. We recommend Docker for users who are not familiar with setting up environments and want a containerized environment to quickly launch jobs.

For longer inference runs across multiple slots, we provide instructions and scripts for two methods: launching on a Slurm HPC, and launching on AWS using Docker. These methods are best for launching large jobs (long inference chains, multi-core and computationally expensive model runs), but they are not the best methods for debugging model setups.

Running locally

  • Running with Docker locally 🛳

  • Quick start guide (conda): https://github.com/HopkinsIDD/flepiMoP/blob/documentation-gitbook/documentation/gitbook/how-to-run/advanced-run-guides/quick-start-guide-conda.md

Running longer inference runs across multiple slots

  • Running on AWS 🌳

  • Running On An HPC With Slurm