Useful commands


Last updated 2 months ago

Git setup

Type the following lines so git remembers your credentials and you don't have to enter your token six times per day:

git config --global credential.helper store
git config --global user.name "{NAME SURNAME}"
git config --global user.email YOUREMAIL@EMAIL.COM
git config --global pull.rebase false # so you use merge as the default reconciliation method

Get a notification on your phone/mail when a run is done

We use ntfy.sh for notifications. Install ntfy on your iPhone or Android device, then subscribe to the channel ntfy.sh/flepimop_alerts, where you'll receive notifications when runs are done.

  • End-of-job notifications go out with urgent priority.
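
To check the channel end to end, you can publish a test message yourself; ntfy's publish API is a plain HTTP POST to the topic URL, and the Title and Priority headers shown here are optional:

```shell
# Publish a test notification to the flepimop_alerts topic on ntfy.sh
NTFY_URL="https://ntfy.sh/flepimop_alerts"
MESSAGE="Test: inference run finished on $(hostname)"
curl -s -H "Title: flepiMoP" -H "Priority: urgent" \
     -d "$MESSAGE" "$NTFY_URL" > /dev/null || echo "publish failed"
```

Anyone subscribed to the topic (phone app or browser) should see the message within a few seconds.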

Install slack integration

Among the included example postprocessing scripts is a helper script that sends a Slack message with snapshots of the model output, so our 🤖-friend can notify us once a run is done.

cd /scratch4/struelo1/flepimop-code/
nano slack_credentials.sh
# and fill the file:
export SLACK_WEBHOOK="{THE SLACK WEBHOOK FOR CSP_PRODUCTION}"
export SLACK_TOKEN="{THE SLACK TOKEN}"
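
Once slack_credentials.sh is filled in, you can sanity-check the webhook by posting a message by hand. This is a sketch: it assumes the SLACK_WEBHOOK variable from the file above, and relies on the fact that Slack incoming webhooks accept a JSON body with a text field:

```shell
# Source the credentials if present, then post a minimal JSON payload
CRED=/scratch4/struelo1/flepimop-code/slack_credentials.sh
[ -f "$CRED" ] && source "$CRED"
PAYLOAD='{"text":"flepiMoP: test message from the run host"}'
curl -s -X POST -H 'Content-type: application/json' \
     --data "$PAYLOAD" "${SLACK_WEBHOOK:-}" > /dev/null || echo "post failed"
```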

Delphi Epidata API

export DELPHI_API_KEY="[YOUR API KEY]"
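
As a quick smoke test of the key, you can query the public covidcast endpoint directly with the api_key parameter; the data source, signal, and geography below are only illustrative, so swap in whatever you actually fit against:

```shell
# Hypothetical smoke test of the Delphi Epidata key (covidcast endpoint)
BASE="https://api.delphi.cmu.edu/epidata/covidcast/"
QUERY="data_source=jhu-csse&signal=confirmed_incidence_num"
QUERY="${QUERY}&geo_type=state&geo_value=md&time_type=day&time_values=20211214"
curl -s "${BASE}?${QUERY}&api_key=${DELPHI_API_KEY:-}" || echo "request failed"
```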

🚀 Run inference using slurm (do every time)

TODO: add how to run test, and everything

Don't paste these commands if you don't know what they do.

Filepaths structure

In configs with a setup name of USA, output files follow this structure:

model_output/{FileType}/{Prefix}{Index}.{run_id}.{FileType}.{Extension}

where {Prefix} = {setup name (USA)}/{scenario (inference/med)}/{run_id}/{inference part}
and the inference part = global/{final or intermediate}/{slot#}.

where, for example:

  • the index is 1

  • the run_id is 2021.12.14.23:56:12.CET

  • the prefix is USA/inference/med/2021.12.14.23:56:12.CET/global/intermediate/000000001.
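
Putting those pieces together, the example above can be assembled from a few shell variables. This is a sketch: hosp and parquet are illustrative FileType/Extension values, and the exact split between prefix and slot index may differ in your setup:

```shell
# Assemble one slot's output path from the components above
SETUP_NAME="USA"
SCENARIO="inference/med"
RUN_ID="2021.12.14.23:56:12.CET"
INDEX="000000001"
FILE_TYPE="hosp"   # illustrative FileType
EXT="parquet"      # illustrative Extension

PREFIX="${SETUP_NAME}/${SCENARIO}/${RUN_ID}/global/intermediate/"
FILE_PATH="model_output/${FILE_TYPE}/${PREFIX}${INDEX}.${RUN_ID}.${FILE_TYPE}.${EXT}"
echo "$FILE_PATH"
```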

Steps to first local run

export COVID_PATH=$(pwd)/COVIDScenarioPipeline
export PROJECT_PATH=$(pwd)/COVID19_USA
conda activate covidSP
cd $COVID_PATH
Rscript local_install.R # install the local R packages
pip install --no-deps -e gempyor_pkg # before: python setup.py develop --no-deps
git lfs install
git lfs pull
export CENSUS_API_KEY=YOUR_KEY
cd $PROJECT_PATH
git restore data/
export CONFIG_PATH=config_smh_r11_optsev_highie_base_deathscases_blk1.yml
Rscript $COVID_PATH/R/scripts/build_US_setup.R
Rscript $COVID_PATH/R/scripts/create_seeding.R
Rscript $COVID_PATH/R/scripts/full_filter.R -j 1 -n 1 -k 1 # 1 core, 1 slot, 1 iteration

where the full_filter.R flags (-j, -n, -k) are explained below.

Launch the docker locally

docker pull hopkinsidd/covidscenariopipeline:latest-dev
docker run -it -v "$(pwd)":/home/app/covidsp hopkinsidd/covidscenariopipeline:latest-dev

Pipeline git-fu (dealing with the commute_data)

git restore --staged sample_data/united-states-commutes/commute_data.csv # unstage the file
git stash push sample_data/united-states-commutes/commute_data.csv # stash just this file
git reset sample_data/united-states-commutes/commute_data.csv # unstage (older git versions)

If you are using the Delphi Epidata API, first register for a key. Once you have a key, substitute it for [YOUR API KEY] in the export above. Alternatively, you can put that key in your config file in the inference section as gt_api_key: "YOUR API KEY".

For full_filter.R:

  • -n (nnn) is the number of slots

  • -j (jjj) is the number of cores

  • -k (kkk) is the number of iterations per slot

These commands are needed because a big file (commute_data.csv) gets changed and added automatically. Since Git 2.13 (Q2 2017), you can stash individual files with git stash push. One of the commands above should work.
