# Running on Rockfish/MARCC - JHU 🪨🐠

*or any HPC using the Slurm workload manager*

## 🗂️ Files and folder organization

Rockfish administrators provide several partitions with different properties. For our needs (storage-intensive and a shared environment), we work in the /scratch4/struelo1/ partition, where we have 20 TB of space. Our folders are organized as:

- **Code folder:** /scratch4/struelo1/flepimop-code/, where each user has their own subfolder, from which the repositories are cloned and the runs are launched. E.g., for user chadi, we'll find:

  - /scratch4/struelo1/flepimop-code/chadi/covidsp/Flu_USA

  - /scratch4/struelo1/flepimop-code/chadi/COVID19_USA

  - /scratch4/struelo1/flepimop-code/chadi/flepiMoP

  (We keep separate repositories per user so that different versions of the pipeline are not mixed when we run several runs in parallel. Don't hesitate to create other subfolders in the code folder, e.g. /scratch4/struelo1/flepimop-code/chadi-flusight, if you need them.)

> ⚠️ Note that the repository is cloned flat, i.e. the flepiMoP repository is at the same level as the data repository, not inside it!
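The resulting layout looks like this (a sketch following the clone instructions below; your set of data repositories may differ):

```
/scratch4/struelo1/flepimop-code/chadi/
├── flepiMoP       # the pipeline repository
├── COVID19_USA    # a data repository, next to flepiMoP, not inside it
└── Flu_USA
```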

- **Output folder:** /scratch4/struelo1/flepimop-runs stores the run outputs. After an inference run finishes, its output and log files are copied from $PROJECT_PATH/model_output to /scratch4/struelo1/flepimop-runs/THISRUNJOBNAME, where the job name is usually of the form USA-DATE (e.g. USA-20230130T163847).

> ⚠️ When logging in you'll see two folders, data_struelo1 and scr4_struelo1, which are shortcuts to /data/struelo1 and /scratch4/struelo1. We don't use /data/struelo1.

## Login on Rockfish

Using ssh from your terminal, type in:

```
ssh {YOUR ROCKFISH USERNAME}@login.rockfish.jhu.edu
```

and enter your password when prompted. You'll land on Rockfish's login node, a remote computer whose only purpose is to prepare and launch computations on the so-called compute nodes.

## 🧱 Setup (to be done only once per user)

Load the right modules for the setup:

```
module purge
module load gcc/9.3.0
module load anaconda3/2022.05  # very important to pin this version, as others are buggy
module load git                # needed for git
module load git-lfs            # git-lfs (do we still need it?)
```

Now, type the following lines so git remembers your credentials and you don't have to enter your token six times per day:

```
git config --global credential.helper store
git config --global user.name "{NAME SURNAME}"
git config --global user.email YOUREMAIL@EMAIL.COM
git config --global pull.rebase false # so you use merge as the default reconciliation method
```

Now you need to create the conda environment. You will create the environment in two shorter commands, installing the Python and R packages separately: doing it in one command can take extremely long, so splitting it in two helps. It is still quite long, so you'll have time to brew some nice coffee ☕️:

```
# install all python stuff first
conda create -c conda-forge -n flepimop-env numba pandas numpy seaborn tqdm matplotlib click confuse pyarrow sympy dask pytest scipy graphviz emcee xarray boto3 slack_sdk

# activate the environment and install the R stuff
conda activate flepimop-env
conda install -c conda-forge r-readr r-sf r-lubridate r-tigris r-tidyverse r-gridextra r-reticulate r-truncnorm r-xts r-ggfortify r-flextable r-doparallel r-foreach r-arrow r-optparse r-devtools r-tidycensus r-cdltools r-cowplot
```

## Clone the FlepiMoP and other model repositories

Use the following commands to have git clone the FlepiMoP repository and any other model repositories you'd like to work on through HTTPS. In the code below, $USER is a variable that contains your username.

```
cd /scratch4/struelo1/flepimop-code/
mkdir $USER
cd $USER
git clone https://github.com/HopkinsIDD/flepiMoP.git
git clone https://github.com/HopkinsIDD/Flu_USA.git
# or any other model repositories
```

You will be prompted to provide your GitHub username and password. Note that since 2021, GitHub has replaced password authentication with personal access tokens, so the prompted "password" is not the password you use to log in. Instead, we recommend using the safer ssh protocol to clone GitHub repositories. To do so, first generate an ssh private-public keypair on the Rockfish cluster.
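A minimal sketch with ssh-keygen (the ed25519 key type and the comment are illustrative choices, not requirements):

```
# run on the Rockfish login node; choose a key file name when prompted
ssh-keygen -t ed25519 -C "YOUREMAIL@EMAIL.COM"
```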

Then copy the generated public key from the Rockfish cluster to your local computer by opening a terminal and typing:

```
scp -r <username>@rfdtn1.rockfish.jhu.edu:/home/<username>/.ssh/<key_name.pub> .
```

Then add the public key to your GitHub account. Next, create a file ~/.ssh/config by using the command vi ~/.ssh/config. Press 'i' to go into insert mode and paste the following chunk of code:

```
Host github.com
    User git
    IdentityFile ~/.ssh/<key_name>
```

Press 'esc' to exit insert mode, followed by ':x' to save and exit the file. By adding this configuration file, you make sure Rockfish doesn't forget your ssh key when you log out. Now clone the GitHub repositories as follows:

```
cd /scratch4/struelo1/flepimop-code/
mkdir $USER
cd $USER
git clone git@github.com:HopkinsIDD/flepiMoP.git
git clone git@github.com:HopkinsIDD/Flu_USA.git
# or any other model repositories
```

and you will not be prompted for credentials.

## Setup your Amazon Web Services (AWS) credentials

This can be done in a second step -- but it is necessary in order to push and pull data to Amazon Simple Storage Service (S3). Set up AWS by running:

```
cd ~ # go to your home directory
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
./aws/install -i ~/aws-cli -b ~/aws-cli/bin
./aws-cli/bin/aws --version
```

Then run ./aws-cli/bin/aws configure to set up your credentials:

```
# AWS Access Key ID [None]: Access key
# AWS Secret Access Key [None]: Secret Access key
# Default region name [None]: us-west-2
# Default output format [None]: json
```

To get the (secret) access key, ask the AWS administrator (Shaun Truelove) to generate one for you.

## 🚀 Run inference using slurm (do every time)

Log in to Rockfish via ssh, then type:

```
source /scratch4/struelo1/flepimop-code/$USER/flepiMoP/batch/slurm_init.sh
```

which will prepare the environment and set up variables for the validation date (choose the day after end_date_groundtruth), the resume location, and the run index for this run. If you don't want to set a variable, just hit enter.

> ✅ Note that the run-id of the run we resume from is now automatically inferred by the batch script :)

**What does this do || it returns an error**

This script runs the following commands to set up the environment, which you can run individually as well:

```
module purge
module load gcc/9.3.0
module load git
module load git-lfs
module load slurm
module load anaconda3/2022.05
conda activate flepimop-env
export CENSUS_API_KEY={A CENSUS API KEY}
export FLEPI_RESET_CHIMERICS=TRUE
export FLEPI_PATH=/scratch4/struelo1/flepimop-code/$USER/flepiMoP
```

Then it prompts you to set the following three environment variables. You can skip this part and do it later manually:

```
export VALIDATION_DATE="2023-01-29"
export RESUME_LOCATION=s3://idd-inference-runs/USA-20230122T145824
export FLEPI_RUN_INDEX=FCH_R16_lowBoo_modVar_ContRes_blk4_Jan29_tsvacc
```

> ⚠️ Check that the conda environment is activated: you should see (flepimop-env) on the left of your command-line prompt.

Then prepare the pipeline directory (if you have already done this and the pipeline hasn't been updated (git pull says it's up to date), you can skip these steps):

```
cd /scratch4/struelo1/flepimop-code/$USER
export FLEPI_PATH=$(pwd)/flepiMoP
cd $FLEPI_PATH
git checkout main
git pull

# install dependencies ggraph and tidygraph
R
> install.packages(c("ggraph","tidygraph"))
> quit()

# install the R module
Rscript build/local_install.R # warnings are ok; there should be no error.

# install gempyor
pip install --no-deps -e flepimop/gempyor_pkg/
```

Now flepiMoP is ready 🎉. If the R command doesn't work, try r, and if that doesn't work, run module load r/4.0.2.

The next step is to set up the data. First set $PROJECT_PATH to your data folder, and set any data options. If you are using the Delphi Epidata API, first register for a key here: https://cmu-delphi.github.io/delphi-epidata/. Once you have a key, add it below where you see [YOUR API KEY]. Alternatively, you can put that key in your config file in the inference section as gt_api_key: "YOUR API KEY".
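For example, the relevant part of the config could look like this (a sketch; the surrounding keys of your config will differ):

```yaml
inference:
  gt_api_key: "YOUR API KEY"
```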

For a COVID-19 run, do:

```
cd /scratch4/struelo1/flepimop-code/$USER
export PROJECT_PATH=$(pwd)/COVID19_USA
export GT_DATA_SOURCE="csse_case, fluview_death, hhs_hosp"
export DELPHI_API_KEY="[YOUR API KEY]"
```

for a Flu run, do:

```
cd /scratch4/struelo1/flepimop-code/$USER
export PROJECT_PATH=$(pwd)/Flu_USA
```

Now, for any type of run:

```
cd $PROJECT_PATH
git pull
git checkout main
```

Do some clean-up before your run. The fast way is to restore the $PROJECT_PATH git repository to its blank state (⚠️ this removes everything that does not come from git):

```
git reset --hard && git clean -f -d  # this deletes everything that is not on github in this repo !!!
```

**I want more control over what is deleted**

If you prefer to have more control, delete only the files you like, e.g.:

```
rm -rf model_output data/us_data.csv data-truth &&
   rm -rf data/mobility_territories.csv data/geodata_territories.csv &&
   rm -rf data/seeding_territories.csv &&
   rm -rf data/seeding_territories_Level5.csv data/seeding_territories_Level67.csv

# don't delete model_output if you have another run in parallel
rm -rf $PROJECT_PATH/model_output
# delete log files from previous runs
rm *.out
```

If you still want to use git to clean the repo but want finer control, or to understand how dangerous the command is, read this.

Run the preparatory scripts for the data and you are good:

```
export CONFIG_PATH=config_FCH_R16_lowBoo_modVar_ContRes_blk4_Jan29_tsvacc.yml
Rscript $FLEPI_PATH/datasetup/build_US_setup.R

# For covid do
Rscript $FLEPI_PATH/datasetup/build_covid_data.R

# For Flu (do not do this for the scenariohub!)
R
> install.packages(c("RSocrata"))
> quit()
Rscript $FLEPI_PATH/datasetup/build_flu_data.R

# build seeding
Rscript $FLEPI_PATH/datasetup/build_initial_seeding.R
```

If you want to profile how the model is using your memory resources during the run:

```
export FLEPI_MEM_PROFILE=TRUE
export FLEPI_MEM_PROF_ITERS=50
```

Now you may want to test that it works:

```
flepimop-inference-main -c $CONFIG_PATH -j 1 -n 1 -k 1
```

If this fails, you may want to investigate the error. If it succeeds, proceed by first deleting the model_output:

```
rm -r model_output
```

## Launch your inference batch job

When an inference batch job is launched, a few postprocessing scripts (postprocessing-scripts.sh) are set to run automatically. You can manually change what you want to run by editing this script.

Now you're fully set to go 🎉

To launch the whole inference batch job, type the following command:

```
python $FLEPI_PATH/batch/inference_job_launcher.py --slurm 2>&1 | tee ${FLEPI_RUN_INDEX}_submission.log
```

This command infers everything from your environment variables: whether there is a resume or not, what the run_id is, etc. The part starting with "2>&1" makes sure the command's output is also written to a log file, but it has no impact on your submission.

If you'd like to have more control, you can specify the arguments manually:

```
python $FLEPI_PATH/batch/inference_job_launcher.py --slurm \
                    -c $CONFIG_PATH \
                    -p $FLEPI_PATH \
                    --data-path $PROJECT_PATH \
                    --upload-to-s3 True \
                    --id $FLEPI_RUN_INDEX \
                    --fs-folder /scratch4/struelo1/flepimop-runs \
                    --restart-from-location $RESUME_LOCATION
```

If you want to send any post-processing outputs to #flepibot-test instead of csp-production, add the flag --slack-channel debug

Commit files to GitHub. After the job is successfully submitted, you will be on a new branch of the data repo. Commit the ground truth data files to that branch on GitHub:

```
git add data/
git commit -m "scenario run initial"
branch=$(git branch | sed -n -e 's/^\* \(.*\)/\1/p')
git push --set-upstream origin $branch
```

but DO NOT finish up by checking out main as in the AWS instructions, as the run will use the data in the current folder.

## Monitor your run

TODO JPSEH WRITE UP TO HERE

There are two types of logfiles: the Slurm logs, slurm-JOBID_SLOTID.out in `$PROJECT_PATH`, and the filter_MC logs:

```
tail -f /scratch4/struelo1/flepimop-runs/USA-20230130T163847/log_FCH_R16_lowBoo_modVar_ContRes_blk4_Jan29_tsvacc_100.txt
```
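Similarly, to follow a Slurm logfile (the job and slot IDs here are illustrative):

```
tail -f $PROJECT_PATH/slurm-12345678_1.out
```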

## Helpful commands

When approaching the file number quota, type:

```
find . -maxdepth 1 -type d | while read -r dir; do
    printf "%s:\t" "$dir"; find "$dir" -type f | wc -l
done
```

to find out which subfolders contain how many files.

## Common errors

- Check that python comes from conda with which python if some weird package-missing errors appear. Sometimes conda magically disappears.

- Don't use ipython, as it breaks click's flags.

Cleanup:

```
rm -r /scratch4/struelo1/flepimop-runs/
rm -r model_output
cd $COVID_PATH; git pull; cd $PROJECT_PATH
rm *.out
```

## Get a notification on your phone/mail when a run is done

We use ntfy.sh for notifications. Install ntfy on your iPhone or Android device, then subscribe to the channel ntfy.sh/flepimop_alerts, where you'll receive notifications when runs are done.

- End-of-job notifications go out with urgent priority.
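To test the channel (or send your own alerts from a script), you can publish a message with curl; this is a sketch of standard ntfy usage, not part of the pipeline, and the priority header simply mirrors the urgent end-of-job alerts:

```
# publish a test message to the shared ntfy channel
curl -H "Priority: urgent" -d "test: run finished" ntfy.sh/flepimop_alerts
```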

## How to use slurm

Check your running jobs:

```
squeue -u $USER
```

where job_id shows your full array job_id, with each slot after the underscore. You can see their status (R: running, PD: pending), how long they have been running, and so on.

To cancel a job:

```
scancel JOB_ID
```

## Running an interactive session

To check your code prior to submitting a large batch job, it's often helpful to run an interactive session to debug your code and check that everything works as you want. On 🪨🐠 this can be done using interact, as in the line below, which requests an interactive session with 4 cores and 24 GB of memory for 12 hours:

```
interact -p defq -n 4 -m 24G -t 12:00:00
```

The options here are [-n tasks or cores], [-t walltime], [-p partition] and [-m memory], though other options can also be included or modified to your requirements. More details can be found in the ARCH User Guide.

## Moving files to your local computer

Often you'll need to move files back and forth between Rockfish and your local computer. To do this, you can use Open OnDemand, or any other command line tool:

```
scp -r <user>@rfdtn1.rockfish.jhu.edu:"<file path of what you want>" <where you want to put it in your local>
```
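For example, to pull a finished run's output folder into the current directory on your local machine (the run name is illustrative):

```
# run from your local terminal, not from Rockfish
scp -r <user>@rfdtn1.rockfish.jhu.edu:"/scratch4/struelo1/flepimop-runs/USA-20230130T163847" .
```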

## Installation notes

These steps are already done and affect all users, but they might be interesting in case you'd like to run on another cluster.

### Install slack integration

So our 🤖-friend can send us some notifications once a run is done.

- ...

```
cd /scratch4/struelo1/flepimop-code/
nano slack_credentials.sh
# and fill the file:
export SLACK_WEBHOOK="{THE SLACK WEBHOOK FOR CSP_PRODUCTION}"
export SLACK_TOKEN="{THE SLACK TOKEN}"
```
