AWS Submission Instructions: Influenza

This page, along with the other AWS run guides, is not deprecated, in case we need to run flepiMoP on AWS again in the future. However, it is no longer actively maintained, as other platforms (such as Longleaf and Rockfish) are preferred for running production jobs.

Step 1. Create the configuration file.

see Building a configuration file

Step 2. Start and access AWS submission box

Spin up an Ubuntu submission box if not already running. To do this, log onto AWS Console and start the EC2 instance.

Update the IP address in your .ssh/config file. To do this, open a terminal and type the command below. This will open your config file, where you can change the IP to the IPv4 address assigned to the AWS EC2 instance (see the AWS Console for this):

notepad .ssh/config
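A typical entry in .ssh/config looks like the following (the user name and key path are assumptions; adjust them to match how your instance was provisioned):

```
Host staging
    HostName <EC2 public IPv4 address>
    User ec2-user
    IdentityFile ~/.ssh/your-key.pem
```

With this entry in place, `ssh staging` resolves the host name, user, and key automatically.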

SSH into the box. In the terminal, SSH into your box. Typically we name these instances "staging", so usually the command is:

ssh staging

Step 3. Set up the environment

Now you should be logged onto the AWS submission box.

Update the GitHub repositories. In the example below we assume you are running the main branch in Flu_USA and the main branch in COVIDScenarioPipeline. This assumes you have already cloned the appropriate repositories on your EC2 instance. Have your GitHub SSH key passphrase handy so you can paste it when prompted (possibly multiple times) by the git pull command. Alternatively, you can add your GitHub key to your batch box so you do not have to log in repeatedly (see X).

cd Flu_USA
git config --global credential.helper cache
git pull 

cd COVIDScenarioPipeline
git pull	
git checkout main
git pull
cd ..

Initiate the Docker container. Start up and log into the Docker container, pull the repos from GitHub, and run the setup scripts to set up the environment. This setup code links the Docker directories to the existing directories on your box; because of this, you should not run multiple job submissions simultaneously with this setup, as one job submission might modify the data of another.

sudo docker pull hopkinsidd/covidscenariopipeline:latest-dev
sudo docker run -it \
  -v /home/ec2-user/Flu_USA:/home/app/drp \
  -v /home/ec2-user/Flu_USA/COVIDScenarioPipeline:/home/app/drp/COVIDScenarioPipeline \
  -v /home/ec2-user/.ssh:/home/app/.ssh \
  hopkinsidd/covidscenariopipeline:latest-dev

cd ~/drp 
git config credential.helper store 
git pull 
git checkout main
git config --global credential.helper 'cache --timeout 300000'

cd ~/drp/COVIDScenarioPipeline 
git pull 
git checkout main
git pull 

Rscript local_install.R && 
   python -m pip install --upgrade pip &&
   pip install -e gempyor_pkg/ && 
   pip install boto3 && 
   cd ~/drp

Step 4. Model Setup

To run the model via AWS, we first do a setup run locally (in Docker on the submission EC2 box).

Set up environment variables. Modify the code chunk below and submit it in the terminal. We also clear certain files and model output that are generated during the submission process; if these files already exist in the repo, they may not get cleared and could cause issues. You need to modify the variable values in the first four lines below: SCENARIO, VALIDATION_DATE, COVID_MAX_STACK_SIZE, and COMPUTE_QUEUE. If submitting multiple jobs, it is recommended to split jobs between two queues: Compartment-JQ-1588569569 and Compartment-JQ-1588569574.

If not resuming off previous run:

export SCENARIO=FCH_R1_highVac_pesImm_2022_Oct30 && 
   export VALIDATION_DATE="2022-10-16" && 
   export COVID_MAX_STACK_SIZE=1000 && 
   export COMPUTE_QUEUE="Compartment-JQ-1588569574" &&
   export CENSUS_API_KEY=<your Census API key> && 
   export COVID_RESET_CHIMERICS=TRUE &&
   rm -rf model_output data/us_data.csv data-truth &&
   rm -rf data/mobility_territories.csv data/geodata_territories.csv &&
   rm -rf data/seeding_territories.csv
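If you are submitting several scenarios, splitting them between the two queues can be scripted; a minimal round-robin sketch (the scenario names here are hypothetical, and the echo stands in for the per-scenario setup and submission steps):

```shell
#!/usr/bin/env bash
# Alternate submissions between the two batch queues, round-robin.
QUEUES=("Compartment-JQ-1588569569" "Compartment-JQ-1588569574")
SCENARIOS=("scnA" "scnB" "scnC")  # hypothetical scenario names

for i in "${!SCENARIOS[@]}"; do
  export SCENARIO="${SCENARIOS[$i]}"
  export COMPUTE_QUEUE="${QUEUES[$((i % 2))]}"
  # Run the setup and submission steps for this scenario here.
  echo "$SCENARIO -> $COMPUTE_QUEUE"
done
```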

Preliminary model run. We do a setup run with 1 to 2 iterations to make sure the model runs and to set up the input data. This takes several minutes to complete, depending on how complex the simulation is. To do this, run the following code chunk; no modification of the code is required:

export COVID_RUN_INDEX=$SCENARIO && 
   export CONFIG_NAME=config_$SCENARIO.yml && 
   export CONFIG_PATH=/home/app/drp/$CONFIG_NAME && 
   export COVID_PATH=/home/app/drp/COVIDScenarioPipeline && 
   export DATA_PATH=/home/app/drp && 
   export INTERVENTION_NAME="med" && 
   export COVID_STOCHASTIC=FALSE && 
   rm -rf $DATA_PATH/model_output $DATA_PATH/us_data.csv &&
   rm -rf $DATA_PATH/seeding_territories.csv && 
   cd $DATA_PATH && Rscript $COVID_PATH/R/scripts/build_US_setup.R -c $CONFIG_NAME && 
   Rscript $COVID_PATH/R/scripts/build_flu_data.R -c $CONFIG_NAME && 
   Rscript $COVID_PATH/R/scripts/full_filter.R -c $CONFIG_NAME -j 1 -n 1 -k 1 && 
   printenv CONFIG_NAME

Step 5. Launch job on AWS Batch

Configure AWS. Assuming the simulations finish successfully, you will now enter credentials and submit your job to AWS Batch. Enter the following command into the terminal:

aws configure

You will be prompted to enter the following items. These can be found in a file called new_user_credentials.csv.

  • Access key ID when prompted

  • Secret access key when prompted

  • Default region name: us-west-2

  • Default output: leave blank when prompted and press enter. (The Access Key ID and Secret Access Key are provided to you only once, in the new_user_credentials.csv file.)
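After aws configure completes, the AWS CLI stores these values in plain files under ~/.aws; they should look roughly like this (the key values shown are placeholders):

```
# ~/.aws/credentials
[default]
aws_access_key_id = <Access key ID from new_user_credentials.csv>
aws_secret_access_key = <Secret access key from new_user_credentials.csv>

# ~/.aws/config
[default]
region = us-west-2
```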

Launch the job. To launch the job, use the appropriate setup based on the type of job you are doing. No modification of these code chunks should be required.

export CONFIG_PATH=$CONFIG_NAME &&
cd $DATA_PATH &&
$COVID_PATH/batch/inference_job.py -c $CONFIG_PATH -q $COMPUTE_QUEUE --non-stochastic &&
printenv CONFIG_NAME

NOTE: A Resume and a Continuation Resume are currently submitted the same way, but with --resume-carry-seeding specified and resuming from an S3 bucket that was generated manually.

Step 6. Document the Submission

Commit files to GitHub. After the job is successfully submitted, you will be on a new branch of the population repo. Commit the ground truth data files to this branch on GitHub and then return to the main branch:

git add data/ 
git config --global user.email "[email]" 
git config --global user.name "[github username]" 
git commit -m "scenario run initial" 
branch=$(git branch | sed -n -e 's/^\* \(.*\)/\1/p')
git push --set-upstream origin $branch
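On Git 2.22 or newer, the sed pipeline used above to capture the branch name can be replaced with a built-in query, git branch --show-current. A self-contained demonstration in a throwaway repo (the branch name is hypothetical):

```shell
#!/usr/bin/env bash
# Demonstrate --show-current in a temporary repo.
cd "$(mktemp -d)"
git init -q
git checkout -q -b run_USA-demo   # hypothetical run branch name

# Prints the checked-out branch directly, no sed needed
branch=$(git branch --show-current)
echo "$branch"
```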

git checkout main
git pull

Save submission info to Slack. We use a Slack channel to save the submission information that is output. Copy this to Slack so you can identify the job later. Example output:

Setting number of output slots to 300 [via config file]
Launching USA-20220923T160106_inference_med...
Resuming from run id is SMH_R1_lowVac_optImm_2018 located in s3://idd-inference-runs/USA-20220913T000opt
Discarding seeding results
Final output will be: s3://idd-inference-runs/USA-20220923T160106/model_output/
Run id is SMH_R1_highVac_optImm_2022
Switched to a new branch 'run_USA-20220923T160106'
config_SMH_R1_highVac_optImm_2022.yml
