
Advanced run guides


For running the model locally, especially for testing, non-inference runs, and short chains, we provide a guide for setting up and running in a conda environment, as well as a Docker container. A Docker container is an environment isolated from the rest of your operating system: inside it you can create and delete files and run programs without affecting your OS, much like a local virtual OS within your OS. We recommend Docker for users who are not familiar with setting up environments and want a containerized setup from which to quickly launch jobs.
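As a rough sketch of what the containerized workflow looks like (the image name and mount path below are placeholders, not the canonical flepiMoP instructions; see Running with Docker locally 🛳 for the actual steps):

```bash
# Pull a containerized environment (image name is illustrative --
# the Docker guide gives the actual flepiMoP image to use).
docker pull some-org/flepimop:latest

# Start an interactive shell inside the container, mounting the
# current directory so runs can read configs and write output
# without touching the rest of your OS.
docker run -it \
  -v "$(pwd)":/home/app/flepimop_project \
  some-org/flepimop:latest \
  /bin/bash
```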

For longer inference runs across multiple slots, we provide instructions and scripts for two launch methods: on an HPC cluster with Slurm, and on AWS using Docker. These methods are best suited to large jobs (long inference chains, multi-core and computationally expensive model runs), but they are not the best methods for debugging model setups.
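For a sense of how a multi-slot run maps onto Slurm, here is a minimal hypothetical batch script: it uses a Slurm job array so that each array task runs one inference slot. The wrapper script name, config file, and resource numbers are assumptions for illustration, not the scripts we ship; see Running On A HPC With Slurm for the supported workflow.

```bash
#!/bin/bash
#SBATCH --job-name=flepimop-inference
#SBATCH --array=1-100          # one array task per inference slot (count is illustrative)
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=24:00:00

# Each array task runs a single slot; SLURM_ARRAY_TASK_ID distinguishes them.
# run_one_slot.sh and config.yml are placeholders for your own wrapper
# script and configuration file.
./run_one_slot.sh --config config.yml --slot "${SLURM_ARRAY_TASK_ID}"
```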

Running locally

Running with Docker locally 🛳

Running longer inference runs across multiple slots

Running On A HPC With Slurm
Running on AWS 🌳