Running Simulations on a Cluster
The `gwmock batch` command allows you to build and submit gwmock simulations as
batch jobs on a Slurm-based cluster.
Overview
The `gwmock batch` command has two mutually exclusive modes:

1. Create a batch-ready configuration file from one of the provided examples. This mode is triggered by the `--get` option.
2. Generate a Slurm submit script (and optionally submit the job) from an existing configuration file that already contains a `batch` section.
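In outline, the two modes correspond to the following invocations (both are described in detail below):

```shell
# Mode 1: copy an example configuration and add a batch section
gwmock batch --get <example_label> [options]

# Mode 2: generate (and optionally submit) a Slurm submit script
gwmock batch <config.yaml> [--submit]
```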
1. Create a Batch-ready Configuration File
Use this mode when starting from an example configuration file in the
`examples/` directory and you want to prepare a configuration file that
includes all the information needed for batch submission.
gwmock batch --get <example_label> [options]
This command requires the label of the example configuration file to copy, which
can be obtained with the `gwmock config --list` command. It copies
`examples/<example_label>/config.yaml` and adds a complete `batch` section (see
the Examples page).
The following default resources are always added:
nodes: 1
ntasks-per-node: 1
cpus-per-task: 1
mem: 16GB
Note
gwmock currently does not support multi-threaded execution. To modify the memory request, edit the configuration file manually.
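For instance, to request a larger allocation (the 32GB value here is purely illustrative), edit the `mem` entry under `resources` in the generated configuration file:

```yaml
batch:
  resources:
    mem: 32GB  # edited from the 16GB default
```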
Commonly used options (only allowed with `--get`)
- `--job-name <name>`: Job name that will appear in SLURM (stored as `batch.job-name`). Default: `gwmock_job`.
- `--scheduler <scheduler>`: Name of the scheduler (only `slurm` is currently supported). Default: `slurm`.
- `--account <account>`: SLURM account/project to charge.
- `--cluster <partition>`: SLURM cluster or partition to run on.
- `--time <time>`: Wall-time limit in `hh:mm:ss` format.
- `--extra-line '<command>'`: Add a custom shell line to the submit script before the simulation command (e.g. environment setup, module loads, `conda activate`). Can be repeated multiple times.
- `--output <path>`: Destination for the new configuration file. Default: `config.yaml` in the current directory.
- `--overwrite`: Overwrite the output configuration file if it already exists.
Example
The following command:
gwmock batch --get default_config \
--job-name gwmock_test \
--account my_account \
--cluster cluster_name \
--time 02:00:00 \
--extra-line 'export PATH="/my_account/miniconda3/bin:$PATH"' \
--extra-line 'eval "$(conda shell.bash hook)"' \
--extra-line 'conda activate /my_account/miniconda3/envs/my_env'
adds the following `batch` section to the configuration file:
batch:
  scheduler: slurm  # Default
  job-name: gwmock_test
  resources:
    nodes: 1  # Default
    ntasks-per-node: 1  # Default
    cpus-per-task: 1  # Default
    mem: 16GB  # Default
  submit:
    account: my_account
    cluster: cluster_name
    time: 02:00:00
  extra_lines:
    - export PATH="/my_account/miniconda3/bin:$PATH"
    - eval "$(conda shell.bash hook)"
    - conda activate /my_account/miniconda3/envs/my_env
2. Generate and Submit a Slurm Job
Use this mode when you already have a configuration file that contains a valid
batch section.
gwmock batch <config.yaml> [--submit]
This command requires the path to a configuration file that contains a `batch`
section with at least `scheduler` and `job-name` (the default resources are
assumed). When executed, the following actions are performed:
1. Directories are created under `<working-directory>/slurm/`:
   - `output/`: stdout files
   - `error/`: stderr files
   - `submit/`: the generated `.submit` script
2. A SLURM submit script is written containing:
   - All `#SBATCH` directives from `batch.resources`
   - Any additional `#SBATCH` directives from `batch.submit` (account, cluster, time, etc.)
   - All custom lines from `batch.extra_lines` (if present)
   - The command `gwmock simulate <absolute_path_to_config.yaml>`
3. If `--submit` is used, `sbatch` is called.
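The mapping from the `batch` section to the submit script can be sketched as follows. This is only an illustration of the layout described above, not gwmock's actual implementation; the resource values are the defaults shown earlier, and the paths are placeholders.

```python
# Illustrative sketch (not gwmock's actual code): render a batch section
# into the #SBATCH directives of a Slurm submit script.
batch = {
    "job-name": "gwmock_test",
    "resources": {"nodes": 1, "ntasks-per-node": 1,
                  "cpus-per-task": 1, "mem": "16GB"},
    "submit": {"account": "my_account", "time": "02:00:00"},
    "extra_lines": ["conda activate my_env"],  # placeholder custom line
}

lines = ["#!/bin/bash", f"#SBATCH --job-name={batch['job-name']}"]
# Both batch.resources and batch.submit become #SBATCH directives.
for key, value in {**batch["resources"], **batch["submit"]}.items():
    lines.append(f"#SBATCH --{key}={value}")
lines += batch["extra_lines"]  # custom shell lines, verbatim
lines.append("gwmock simulate /abs/path/config.yaml")  # simulation command
script = "\n".join(lines)
print(script)
```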
Optional
- `--submit`: Immediately submit the generated job using `sbatch`. Without this flag, only the submit script is created.
- `--overwrite`: Overwrite the submit script if it already exists.
Example
# Just generate the submit script and save in `<working-directory>/slurm/submit`
gwmock batch config.yaml
# Generate and submit immediately
gwmock batch config.yaml --submit