public:usage:goethe-hlr [2020/12/09 16:55] – [Node Types And Constraints] keiling
| ''general1'' | Intel Skylake CPU | yes |
| ''general2'' | Intel Ivy Bridge CPU \\ Intel Broadwell CPU | yes \\ yes |
| ''gpu'' | AMD EPYC 7452 | yes |
| ''test'' | Intel Skylake CPU | yes |

Nodes are used **exclusively**, i.e. only whole nodes are allocated to a job and no other job can use the same nodes concurrently.
The following instructions provide the basic information you need to get started with SLURM on our systems. The official SLURM documentation covers more use cases in greater detail, so it is highly recommended to read the SLURM man pages (e.g. ''man sbatch'' or ''man salloc'') and/or visit http://www.schedmd.com/slurmdocs.

Helpful SLURM links: [[https://slurm.schedmd.com/faq.html|SLURM FAQ]]\\
SLURM documentation: [[https://slurm.schedmd.com|SLURM]]
==== The test Partition: Your First Job Script ====

|general1|''#SBATCH %%--%%partition=general1''|Intel Skylake CPU|yes|
|general2|''#SBATCH %%--%%partition=general2'' \\ ''#SBATCH %%--%%constraint=broadwell''|Intel Broadwell CPU|yes|
|gpu|''#SBATCH %%--%%partition=gpu''|AMD EPYC 7452|yes|
|test|''#SBATCH %%--%%partition=test''|Intel Skylake CPU|yes|

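Combining the directives above, a first job script for the ''test'' partition could look like the following minimal sketch (the job name, resource requests, and the program to run are placeholders; adapt them to your application):

<code bash>
#!/bin/bash
#SBATCH --job-name=first_job        # placeholder job name
#SBATCH --partition=test            # the test partition (Intel Skylake CPU)
#SBATCH --nodes=1                   # nodes are allocated exclusively
#SBATCH --ntasks=1
#SBATCH --time=00:10:00             # well below the 2 hour MaxTime of test

srun hostname                       # placeholder: replace with your program
</code>

Submit the script with ''sbatch first_job.sh'' and monitor it with ''squeue''.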
| ''MaxArraySize'' | 1001 | the maximum job array size |

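Since ''MaxArraySize'' is 1001, job array indices can range from 0 to 1000 (in SLURM, the highest usable index is one less than ''MaxArraySize''). A sketch of an array submission within this limit (''array_job.sh'' is a placeholder for your own job script):

<code bash>
# valid: indices 0-1000 stay within MaxArraySize=1001
sbatch --array=0-1000 array_job.sh

# inside the job script, each task can read its own index
# from the environment variable SLURM_ARRAY_TASK_ID
</code>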
For the partition ''test'', the following limits apply:

^Limit^Value^Description^
| ''MaxTime'' | 2 hours | the maximum run time for jobs |
| ''MaxJobsPU'' | 3 | max. number of jobs a user is able to run simultaneously |
| ''MaxSubmitPU'' | 4 | max. number of jobs in running or pending state |
| ''MaxNodesPU'' | 3 | max. number of nodes a user is able to use at the same time |
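How these limits are configured can be inspected with SLURM's own tools; for example (the available output fields depend on the cluster's accounting setup):

<code bash>
# show the configuration of the test partition, e.g. MaxTime
scontrol show partition test

# show per-user QOS limits such as MaxJobsPU and MaxSubmitPU,
# if a QOS is used to enforce them
sacctmgr show qos format=Name,MaxJobsPU,MaxSubmitPU
</code>
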
==== GPU Jobs ====
