Spin up an Ubuntu submission box if one is not already running. To do this, log onto the AWS Console and start the EC2 instance.
Update the IP address in your .ssh/config file. To do this, open a terminal and type the command below. This will open your config file, where you can change the IP to the IPv4 address assigned to the AWS EC2 instance (see the AWS Console for this):
notepad .ssh/config
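For reference, a minimal sketch of what the "staging" entry in that file might look like (the IP, user, and key path here are placeholder assumptions; use your own values):
Host staging
    HostName 3.90.12.34                  # the instance's current IPv4, from the AWS Console
    User ubuntu                          # assumption: the default user on an Ubuntu AMI
    IdentityFile ~/.ssh/id_ed25519_aws   # assumption: the path to your private key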
SSH into the box. Typically we name these instances "staging", so usually the command is:
ssh staging
🧱 Setup
Now you should be logged onto the AWS submission box. If you haven't yet, set up your directory structure.
🗂 Create the directory structure (ONCE PER USER)
Type the following commands:
git clone https://github.com/HopkinsIDD/flepiMoP.git
git clone https://github.com/HopkinsIDD/Flu_USA.git
git clone https://github.com/HopkinsIDD/COVID19_USA.git
cd COVID19_USA
git clone https://github.com/HopkinsIDD/flepiMoP.git
cd ..
# or any other data directories
Note that the repository is cloned nested, i.e., the flepiMoP repository is INSIDE the data repository.
Have your GitHub SSH key passphrase handy so you can paste it when prompted (possibly multiple times) by the git pull command. Alternatively, you can add your GitHub key to your batch box so you don't have to enter your token 6 times per day:
git config --global credential.helper store
git config --global user.name "{NAME SURNAME}"
git config --global user.email YOUREMAIL@EMAIL.COM
git config --global pull.rebase false # so you use merge as the default reconciliation method
cd COVID19_USA
git config --global credential.helper cache
git pull
git checkout main
git pull
cd flepiMoP
git pull
git checkout main
git pull
cd ..
🚀 Run inference using AWS (do every time)
🛳 Initiate the Docker
Start up and log into the Docker container, and run the setup scripts to set up the environment. This setup code links the Docker directories to the existing directories on your box. Because of this, you should not run multiple job submissions simultaneously with this setup, as one submission might modify the data of another.
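As a sketch, the pull/run step might look like the following; the image tag and mount paths are assumptions, so adjust them to your box:
docker pull hopkinsidd/flepimop:latest-dev
# mount the data repo into the container (flepiMoP is nested inside it)
docker run -it -v /home/ubuntu/COVID19_USA:/home/app/COVID19_USA hopkinsidd/flepimop:latest-dev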
To set up the environment for your run, run the following commands. These are specific to your run, i.e., change VALIDATION_DATE, FLEPI_RUN_INDEX and RESUME_LOCATION as required. If submitting multiple jobs, it is recommended to split jobs between 2 queues: Compartment-JQ-1588569569 and Compartment-JQ-1588569574.
NOTE: If you are not running a resume run, DO NOT export the environment variable RESUME_LOCATION.
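For example (a sketch with placeholder values; the run index and S3 path are illustrative, not real run values):
export VALIDATION_DATE="2023-01-29"
export FLEPI_RUN_INDEX=FCH_R16_lowBoo_modVar_ContRes_blk4_Jan29_tsvacc
export COMPUTE_QUEUE="Compartment-JQ-1588569569"
# resume runs ONLY -- otherwise leave RESUME_LOCATION unset
export RESUME_LOCATION=s3://idd-inference-runs/USA-20230XXXXXXXXXX  # hypothetical S3 path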
Then prepare the pipeline directory (you can skip this if you have already done it and the pipeline hasn't been updated since, i.e., git pull says it's up to date). You need to set $DATA_PATH to your data folder. For a COVID-19 run, do:
cd $DATA_PATH
export FLEPI_PATH=$(pwd)/flepiMoP
cd $FLEPI_PATH
git checkout main
git pull
git config --global credential.helper 'cache --timeout 300000'
# install gempyor and the R modules. There should be no error, please report if not.
# Sometimes you might need to run the next line two times because inference depends
# on report.generation, which is installed later, in alphabetical order.
# (or if you know R well enough to fix that 😊)
Rscript build/local_install.R # warnings are ok; there should be no error.
python -m pip install --upgrade pip && pip install -e flepimop/gempyor_pkg/ && pip install boto3 && cd ..
For now, just in case: update the arrow package from the 8.0.0 that ships in the Docker image to 11.0.3.
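One way to do this, assuming the newer arrow is available from CRAN inside the container (this is a sketch, not a verified step):
Rscript -e 'install.packages("arrow", repos = "https://cloud.r-project.org")'
Rscript -e 'packageVersion("arrow")'  # should now report 11.0.3 or later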
Now flepiMoP is ready 🎉.
cd $DATA_PATH
git pull
git checkout main
Do some clean-up before your run. The fast way is to restore the $DATA_PATH git repository to its blank state (⚠️ this removes everything that does not come from git):
git reset --hard && git clean -f -d # this deletes everything that is not on github in this repo !!!
I want more control over what is deleted
If you still want to use git to clean the repo but want finer control, or want to understand how dangerous the command above is, read this.
If you prefer more control, delete only the files you want gone, e.g.:
rm -rf model_output data/us_data.csv data-truth &&
rm -rf data/mobility_territories.csv data/geodata_territories.csv &&
rm -rf data/seeding_territories.csv &&
rm -rf data/seeding_territories_Level5.csv data/seeding_territories_Level67.csv
# don't delete model_output if you have another run in parallel
rm -rf $DATA_PATH/model_output
Then run the preparatory data-building scripts and you are good to go:
export CONFIG_PATH=config_FCH_R16_lowBoo_modVar_ContRes_blk4_Jan29_tsvacc.yml # if you haven't already done this
Rscript $FLEPI_PATH/datasetup/build_US_setup.R

# For covid do
Rscript $FLEPI_PATH/datasetup/build_covid_data.R

# For Flu do
Rscript $FLEPI_PATH/datasetup/build_flu_data.R
If this fails, you may want to investigate the error. If it succeeds, you can proceed by first deleting the model_output:
rm -r model_output
Launch your inference batch job on AWS
Assuming that the initial test simulation finishes successfully, you will now enter credentials and submit your job onto AWS batch. Enter the following command into the terminal:
aws configure
You will be prompted to enter the following items. These can be found in a file you received from Shaun called new_user_credentials.csv.
Access key ID when prompted
Secret access key when prompted
Default region name: us-west-2
Default output: Leave blank when prompted and press enter.
Now you're fully set to go 🎉
To launch the whole inference batch job, type the following command:
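A minimal sketch of that command, assuming the launcher infers the rest from your environment (the flags are the same ones used in the manual example below):
cd $DATA_PATH
$FLEPI_PATH/batch/inference_job_launcher.py --aws -c $CONFIG_PATH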
This command infers everything from your environment variables: whether or not there is a resume, what the run_id is, etc., and the default is to carry seeding if it is a resume (see below for alternative options).
If you'd like to have more control, you can specify the arguments manually:
We allow for a number of different jobs, with different setups, e.g., you may not want to carry seeding. Some examples of appropriate setups are given below. No modification of these code chunks should be required.
cd $DATA_PATH
$FLEPI_PATH/batch/inference_job_launcher.py --aws -c $CONFIG_PATH -q $COMPUTE_QUEUE --non-stochastic
NOTE: Resume and Continuation Resume runs are currently submitted the same way, resuming from an S3 location that was generated manually. Typically we will also submit any Continuation Resume run specifying --resume-carry-seeding, as the starting seeding conditions will be manually constructed and put in the S3.
Carrying seeding (do this to use seeding fits from resumed run):
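For example, a sketch reusing the flags from the manual example plus the --resume-carry-seeding option mentioned in the note above:
$FLEPI_PATH/batch/inference_job_launcher.py --aws -c $CONFIG_PATH -q $COMPUTE_QUEUE --non-stochastic --resume-carry-seeding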
After the job is successfully submitted, you will be on a new branch of the data repo. Commit the ground-truth data files to the branch on GitHub and then return to the main branch:
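A sketch of those git steps (the paths and commit message are illustrative; commit whatever ground-truth files your run regenerated):
git add data/
git commit -m "ground truth data for this run"
git push --set-upstream origin $(git branch --show-current)  # the new branch created at submission
git checkout main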
Send the submission information to Slack so we can identify the job later. Example output:
Launching USA-20230426T135628_inference_med on aws...
>> Job array: 300 slot(s) X 5 block(s) of 55 simulation(s) each.
>> Final output will be: s3://idd-inference-runs/USA-20230426T135628/model_output/
>> Run id is SMH_R17_noBoo_lowIE_phase1_blk1
>> config is config_SMH_R17_noBoo_lowIE_phase1_blk1.yml
>> FLEPIMOP branch is main with hash 3773ed8a20186e82accd6914bfaf907fd9c52002
>> DATA branch is R17 with hash 6f060fefa9784d3f98d88a313af6ce433b1ac913