AWS Submission Instructions: Influenza

This page, along with the other AWS run guides, is not deprecated, in case we need to run flepiMoP on AWS again in the future, but it is not actively maintained; other platforms (such as Longleaf and Rockfish) are preferred for running production jobs.

Step 1. Create the configuration file.

See Building a configuration file.

Step 2. Start and access AWS submission box

Spin up an Ubuntu submission box if not already running. To do this, log into the AWS Console and start the EC2 instance.
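
If you have the AWS CLI configured locally, a sketch of the equivalent command-line steps (the instance ID below is a placeholder; find yours in the AWS Console):

# start the instance, then look up its current public IPv4 address
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text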

Update the IP address in your .ssh/config file. To do this, open a terminal and type the command below. This will open your config file, where you can change the IP to the IPv4 address assigned to the AWS EC2 instance (shown in the AWS Console):

notepad .ssh/config
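
For reference, a minimal sketch of the relevant .ssh/config entry (the IP, user, and key path are placeholders; adjust them to your instance):

Host staging
    # the instance's current public IPv4 address, from the AWS Console
    HostName 54.0.0.0
    # typically ec2-user or ubuntu, depending on the AMI
    User ec2-user
    # the key pair used to launch the instance
    IdentityFile ~/.ssh/id_rsa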

SSH into the box. Typically we name these instances "staging", so usually the command is:

ssh staging

Step 3. Setup the environment

Now you should be logged onto the AWS submission box.

Update the GitHub repositories. In the example below we assume you are running the main branch in Flu_USA and the main branch in COVIDScenarioPipeline. This assumes you have already cloned the appropriate repositories on your EC2 instance. Have your GitHub ssh key passphrase handy so you can paste it when prompted (possibly multiple times) by the git pull command. Alternatively, you can add your GitHub key to your batch box so you do not have to log in repeatedly (see X).

cd Flu_USA
git config --global credential.helper cache
git pull 

cd COVIDScenarioPipeline
git pull
git checkout main
git pull
cd ..
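
To avoid typing the ssh key passphrase repeatedly, one option is to load the key into ssh-agent for the session; a minimal sketch, assuming your key lives at ~/.ssh/id_ed25519 (adjust the path to your key):

eval "$(ssh-agent -s)"       # start the agent in this shell
ssh-add ~/.ssh/id_ed25519    # asks for the passphrase once, then caches it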

Initiate the docker container. Start up and log into the docker container, pull the repos from GitHub, and run the setup scripts to set up the environment. This setup code links the docker directories to the existing directories on your box. Because of this, you should not submit multiple jobs simultaneously from this setup, as one job submission might modify the data used by another.

sudo docker pull hopkinsidd/covidscenariopipeline:latest-dev
sudo docker run -it \
  -v /home/ec2-user/Flu_USA:/home/app/drp \
  -v /home/ec2-user/Flu_USA/COVIDScenarioPipeline:/home/app/drp/COVIDScenarioPipeline \
  -v /home/ec2-user/.ssh:/home/app/.ssh \
  hopkinsidd/covidscenariopipeline:latest-dev
    
cd ~/drp 
git config credential.helper store 
git pull 
git checkout main
git config --global credential.helper 'cache --timeout 300000'

cd ~/drp/COVIDScenarioPipeline 
git pull 
git checkout main
git pull 

Rscript local_install.R && 
   python -m pip install --upgrade pip &&
   pip install -e gempyor_pkg/ && 
   pip install boto3 && 
   cd ~/drp

Step 4. Model Setup

To run the model via AWS, we first do a setup run locally (in the docker container on the submission EC2 box):

Set up environment variables. Modify the code chunk below and submit it in the terminal. We also clear certain files and model output that get generated during the submission process; if these files already exist in the repo, they may not get cleared and could cause issues. You need to modify the variable values in the first four lines below: SCENARIO, VALIDATION_DATE, COVID_MAX_STACK_SIZE, and COMPUTE_QUEUE. If submitting multiple jobs, it is recommended to split jobs between the two queues, Compartment-JQ-1588569569 and Compartment-JQ-1588569574.
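
If you want to gauge how busy each queue is before choosing, one option (assuming AWS credentials have already been configured with aws configure, as in Step 5) is to list the jobs currently running in each:

aws batch list-jobs --job-queue Compartment-JQ-1588569569 --job-status RUNNING
aws batch list-jobs --job-queue Compartment-JQ-1588569574 --job-status RUNNING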

If not resuming from a previous run:

export SCENARIO=FCH_R1_highVac_pesImm_2022_Oct30 && 
   export VALIDATION_DATE="2022-10-16" && 
   export COVID_MAX_STACK_SIZE=1000 && 
   export COMPUTE_QUEUE="Compartment-JQ-1588569574" &&
   export CENSUS_API_KEY=<your Census API key> && 
   export COVID_RESET_CHIMERICS=TRUE &&
   rm -rf model_output data/us_data.csv data-truth &&
   rm -rf data/mobility_territories.csv data/geodata_territories.csv &&
   rm -rf data/seeding_territories.csv

If resuming from a previous run, there are a couple of additional variables to set. These are the same for a regular resume and a continuation resume. Specifically:

  • RESUME_ID - the COVID_RUN_INDEX of the run you are resuming from.

  • RESUME_S3 - the S3 bucket where the previous run is stored.

export SCENARIO=FCH_R1_highVac_pesImm_2022_Nov27 && 
   export VALIDATION_DATE="2022-11-27" && 
   export COVID_MAX_STACK_SIZE=1000 && 
   export COMPUTE_QUEUE="Compartment-JQ-1588569574" &&
   export CENSUS_API_KEY=<your Census API key> && 
   export COVID_RESET_CHIMERICS=TRUE &&
   rm -rf model_output data/us_data.csv data-truth &&
   rm -rf data/mobility_territories.csv data/geodata_territories.csv &&
   rm -rf data/seeding_territories.csv
   
export RESUME_ID=FCH_R1_highVac_pesImm_2022_Nov20 &&
  export RESUME_S3=USA-20221120T194228

Preliminary model run. We do a setup run with 1 to 2 iterations to make sure the model runs and to set up the input data. This takes several minutes to complete, depending on how complex the simulation is. To do this, run the following code chunk; no modification of the code is required:

export COVID_RUN_INDEX=$SCENARIO && 
   export CONFIG_NAME=config_$SCENARIO.yml && 
   export CONFIG_PATH=/home/app/drp/$CONFIG_NAME && 
   export COVID_PATH=/home/app/drp/COVIDScenarioPipeline && 
   export PROJECT_PATH=/home/app/drp && 
   export INTERVENTION_NAME="med" && 
   export COVID_STOCHASTIC=FALSE && 
   rm -rf $PROJECT_PATH/model_output $PROJECT_PATH/us_data.csv &&
   rm -rf $PROJECT_PATH/seeding_territories.csv && 
   cd $PROJECT_PATH && Rscript $COVID_PATH/R/scripts/build_US_setup.R -c $CONFIG_NAME && 
   Rscript $COVID_PATH/R/scripts/build_flu_data.R -c $CONFIG_NAME && 
   Rscript $COVID_PATH/R/scripts/full_filter.R -c $CONFIG_NAME -j 1 -n 1 -k 1 && 
   printenv CONFIG_NAME
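
If the chunk completes, a quick sanity check that the setup run wrote output (using the paths defined above) is:

ls $PROJECT_PATH/model_output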

Step 5. Launch job on AWS Batch

Configure AWS. Assuming the simulations finish successfully, you will now enter credentials and submit your job to AWS Batch. Enter the following command into the terminal:

aws configure

You will be prompted to enter the following items, which can be found in a file called new_user_credentials.csv:

  • Access key ID when prompted

  • Secret access key when prompted (the access key ID and secret access key are given to you once, in a file)

  • Default region name: us-west-2

  • Default output format: leave blank and press enter
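
For reference, the prompt sequence looks like the following (the key values shown are placeholders):

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-west-2
Default output format [None]: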

Launch the job. To launch the job, use the appropriate code chunk for the type of job you are running. No modification of these code chunks should be required.

For a standard inference run:

export CONFIG_PATH=$CONFIG_NAME &&
cd $PROJECT_PATH &&
$COVID_PATH/batch/inference_job.py -c $CONFIG_PATH -q $COMPUTE_QUEUE &&
printenv CONFIG_NAME

For a single-slot, single-iteration run (note the -j 1 -k 1 flags; see the scenario variants below):

export CONFIG_PATH=$CONFIG_NAME &&
cd $PROJECT_PATH &&
$COVID_PATH/batch/inference_job.py -c $CONFIG_PATH -q $COMPUTE_QUEUE -j 1 -k 1 &&
printenv CONFIG_NAME

NOTE: Resume and continuation resume runs are currently submitted the same way, resuming from an S3 bucket that was generated manually. Typically we also submit any continuation resume run with --resume-carry-seeding specified, since the starting seeding conditions will be manually constructed and placed in the S3 bucket.

Carrying seeding (do this to use the seeding fits from the resumed run):

export CONFIG_PATH=$CONFIG_NAME &&
cd $PROJECT_PATH &&
$COVID_PATH/batch/inference_job.py -c $CONFIG_PATH -q $COMPUTE_QUEUE --resume-carry-seeding --restart-from-location=s3://idd-inference-runs/$RESUME_S3 --restart-from-run-id=$RESUME_ID &&
printenv CONFIG_NAME

Discarding seeding (do this to refit the seeding):

export CONFIG_PATH=$CONFIG_NAME &&  
cd $PROJECT_PATH &&
$COVID_PATH/batch/inference_job.py -c $CONFIG_PATH -q $COMPUTE_QUEUE --resume-discard-seeding --restart-from-location=s3://idd-inference-runs/$RESUME_S3 --restart-from-run-id=$RESUME_ID &&
printenv CONFIG_NAME

Single Iteration + Carry seeding (do this to produce additional scenarios where no fitting is required):

export CONFIG_PATH=$CONFIG_NAME &&
cd $PROJECT_PATH &&
$COVID_PATH/batch/inference_job.py -c $CONFIG_PATH -q $COMPUTE_QUEUE --resume-carry-seeding --restart-from-location=s3://idd-inference-runs/$RESUME_S3 --restart-from-run-id=$RESUME_ID -j 1 -k 1 &&
printenv CONFIG_NAME


Step 6. Document the Submission

Commit files to GitHub. After the job is successfully submitted, you will be in a new branch of the population repo. Commit the ground truth data files to this branch on GitHub, then return to the main branch:

git add data/
git config --global user.email "[email]"
git config --global user.name "[github username]"
git commit -m "scenario run initial"
branch=$(git branch | sed -n -e 's/^\* \(.*\)/\1/p')   # capture the current branch name
git push --set-upstream origin $branch

git checkout main
git pull

Save submission info to Slack. We use a Slack channel to save the submission information that is printed when the job launches. Copy this to Slack so you can identify the job later. Example output:

Setting number of output slots to 300 [via config file]
Launching USA-20220923T160106_inference_med...
Resuming from run id is SMH_R1_lowVac_optImm_2018 located in s3://idd-inference-runs/USA-20220913T000opt
Discarding seeding results
Final output will be: s3://idd-inference-runs/USA-20220923T160106/model_output/
Run id is SMH_R1_highVac_optImm_2022
Switched to a new branch 'run_USA-20220923T160106'
config_SMH_R1_highVac_optImm_2022.yml
