====== Goethe-HLR Usage ======

The [[..:service:Goethe-HLR]] is a general-purpose computer cluster based on Intel CPU architectures running Scientific Linux 7.6 and [[#running_jobs_with_slurm|SLURM]] as the workload manager.
===== Login =====

An SSH client is required to connect to the cluster. On a Linux system the command is usually:
<code>ssh <your_username>@<goethe-hlr-login-address></code>

<note important>
You may receive a warning from your SSH client that something is wrong with the security of the connection. We moved the IP address of the old LOEWE cluster to the new GOETHE cluster, so if you used the LOEWE cluster in the past, the stored LOEWE host key in your client no longer matches the new GOETHE-HLR host key. Just erase your old LOEWE key and everything is set (see the example below).\\ \\
On Linux, the stored host keys are kept in ''~/.ssh/known_hosts''.
</note>
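
A minimal sketch of removing the stale host key on Linux (the login address below is a placeholder, use the address you normally connect to):

<code bash>
# Delete the old LOEWE host key for the cluster's address from ~/.ssh/known_hosts;
# on the next login your client will ask you to accept the new GOETHE-HLR key.
ssh-keygen -R <goethe-hlr-login-address>
</code>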

On Windows systems please use/install a Windows SSH client (e.g. PuTTY, or the Cygwin ssh package).

After your [[first login]] you will get the message that your password has expired and you have to change it. Please enter the password provided by CSC at the prompt, then choose a new one and retype it. You will be logged out automatically. Now you can log in with your new password and work on the cluster.
- | |||
- | <note warning> | ||
- | |||
- | '' | ||
- | |||
- | on the command line. On a login node, any process that exceeds the CPU-time limit (e.g. a long running test program or a long running rsync) will be killed automatically.</ | ||
- | |||
===== Environment Modules =====

There are several versions of software packages installed on our systems. The same name for an executable (e.g. mpirun) and/or library file may be used by more than one package. The environment module system, with its ''module'' command, helps you to select the exact software versions you want to use. For example, to list all available modules, run:

<code>module avail</code>

If you want to know more about module commands, the ''module'' man page and ''module help'' are good starting points.
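
A short sketch of commonly used ''module'' subcommands (the module name below is a placeholder):

<code bash>
module avail                 # list all installed modules
module list                  # show the modules currently loaded in your session
module load <some_module>    # load a specific module (placeholder name)
module unload <some_module>  # unload it again
module purge                 # unload all loaded modules
</code>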
- | |||
- | |||
- | ===== Compiling Software ===== | ||
- | |||
- | You can compile your software on the login nodes (or on any other node, inside a job allocation). On Goethe-HLR several compiler suites are available. While GCC version 4.8.5 is the built-in OS default, you can list additional compilers and libraries by running '' | ||
- | |||
- | * GNU compilers | ||
- | * Intel compilers | ||
- | * MPI libraries | ||
- | |||
- | For the right compilation commands please consider: | ||
- | |||
- | <note important> | ||
- | [[https:// | ||
- | </ | ||
- | |||
- | |||
- | To compile and manage software which is not available under "'' | ||
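
A minimal compilation sketch (the module names are placeholders, check ''module avail'' for what is actually installed):

<code bash>
# Load a compiler and a matching MPI library (placeholder module names).
module load <compiler_module> <mpi_module>

# Serial program with the GNU compiler:
gcc -O2 -o my_prog my_prog.c

# MPI program via the compiler wrapper provided by the loaded MPI module:
mpicc -O2 -o my_mpi_prog my_mpi_prog.c
</code>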
- | |||
- | |||
- | ===== Debugging ===== | ||
- | |||
- | The [[http:// | ||
- | |||
- | - Compile your code with your favored MPI using the debug option '' | ||
- | mpicc -g -o mpi_prog mpi_prog.c</ | ||
- | - Load the TotalView module by running< | ||
- | module load debug/ | ||
- | - Allocate the resources you need using salloc, e.g.< | ||
- | salloc -n 4 --partition=test --time=00: | ||
- | - Start a TotalView debugging session, e.g.< | ||
- | totalview </ | ||
- | - Choose Debug a parallel session | ||
- | - Choose your executable (mpi_prog), Parallel System (e.g. Intel MPI CSC or openmpi-m), number of tasks and load the session | ||
- | |||
- | ===== Storage ===== | ||
- | |||
- | There are various storage systems available on the cluster. In this section we describe the most relevant: | ||
- | |||
- | * your home directory ''/ | ||
- | * your scratch directory ''/ | ||
- | * the non-shared local storage (i.e. only accessible from the compute node it's connected to, max. 1.4 TB, slow) under ''/ | ||
- | * and the two (slow) archive file systems ''/ | ||
- | |||
- | Please use your home directory for small permanent files, e.g. source files, libraries and executables. | ||
- | |||
- | <note important> | ||
- | |||
- | |||
- | By default, the space in your home directory is limited to 10 GB and in your scratch directory to 5 TB and/or 800000 inodes (which corresponds to approximately 200000+ files). You can check your homedir and scratch usage by running the '' | ||
- | |||
- | < | ||
- | |||
- | If you need local storage on the compute nodes, you have to add the '' | ||
- | <code bash> | ||
- | ... | ||
- | |||
- | mkdir / | ||
- | scontrol show hostnames $SLURM_JOB_NODELIST | xargs -i ssh {} \ | ||
- | rsync -a / | ||
- | / | ||
- | </ | ||
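
At the end of the job you typically want to move the results back from the local disks to the shared scratch file system. A minimal sketch (again with placeholder paths), assuming the results were written to ''/local/$SLURM_JOB_ID'':

<code bash>
# Copy results from each node's local disk back to a per-node subdirectory
# on scratch ({} is replaced by the node name), then clean up the local disk.
scontrol show hostnames $SLURM_JOB_NODELIST | xargs -i ssh {} \
    "rsync -a /local/$SLURM_JOB_ID/ /scratch/<group>/<user>/<results>/{}/ && rm -rf /local/$SLURM_JOB_ID"
</code>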
- | |||
- | In addition to the " | ||
- | |||
- | rsync arc01:/ | ||
- | ... | ||
- | cd / | ||
- | rsync [--progress] -a < | ||
- | or, for '' | ||
- | rsync arc02:/ | ||
- | ... | ||
- | cd / | ||
- | rsync [--progress] -a < | ||
- | |||
- | The space is limited by //N// on each of the both systems. Limits are set for an entire group (there' | ||
- | |||
- | df -h / | ||
- | |||
- | on the command line. The corresponding hardware resides in separate server rooms. There is no automatic backup. However, for a user, a possible backup scenario is to backup his or her data manually to both storage systems, '' | ||
- | |||
- | < | ||
- | |||
- | Although our storage systems are protected by RAID mechanisms, we can't guarantee the safety of your data. It is within the responsibility of the user to backup important files. | ||
- | </ | ||
- | |||
- | ===== Running Jobs With SLURM ===== | ||
- | |||
- | On our systems, compute jobs and resources are managed by SLURM (Simple Linux Utility for Resource Management). The compute nodes are organized in the partition (or queue) named '' | ||
- | |||
- | ^Partition^Node type^Implemented^ | ||
- | | '' | ||
- | | '' | ||
- | | '' | ||
- | | '' | ||
- | |||
- | Nodes are used **exclusively**, | ||
- | |||
- | In this document we discuss several job types and use cases. In most cases, a compute job falls under one (or more than one) of the following categories: | ||
- | |||
- | * [[# | ||
- | * [[# | ||
- | * [[# | ||
- | * [[# | ||
- | * [[# | ||
- | |||
- | For every compute job you have to submit a job script (unless working interactively using '' | ||
- | |||
- | sbatch jobscript.sh | ||
- | |||
- | on a login node. A SLURM job script is a shell script which may contain SLURM directives (options), i.e. pseudo-comment lines starting with | ||
- | |||
- | #SBATCH ... | ||
- | | ||
- | The SLURM options define the resources to be allocated for the job (and some other properties). Otherwise the script contains the "job logic", | ||
- | |||
- | |||
- | ==== Read More ==== | ||
- | |||
- | The following instructions shall provide you with the basic information you need to get started with SLURM on our systems. However, the official SLURM documentation covers some more use cases (also in more detail). Please read the SLURM man pages (e.g. '' | ||
- | |||
- | Helpful SLURM links: [[https:// | ||
- | SLURM documentation: | ||
==== The test Partition: Your First Job Script ====

You can use the (very small) ''test'' partition to check whether your job scripts work as intended before submitting larger jobs to the production partitions. A first example:

<code bash>
#!/bin/bash
#SBATCH --job-name=foo
#SBATCH --partition=test
#SBATCH --nodes=2
#SBATCH --ntasks=160
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=512
#SBATCH --time=00:10:00
#SBATCH --no-requeue
#SBATCH --mail-type=FAIL

srun hostname
sleep 3
</code>

1) For SLURM, a CPU core (a CPU thread, to be more precise) is a CPU (see ''--cpus-per-task'' and ''--mem-per-cpu'').\\
2) ''--no-requeue'': prevent the job from being requeued after a failure.\\
3) ''--mail-type=FAIL'': send an e-mail if sth. goes wrong.\\

The ''srun hostname'' line starts 160 tasks; each task prints the name of the node it is running on.

Although nodes are allocated exclusively, you should request the resources (tasks, memory, time) your job actually needs.

As already mentioned, after saving the above job script as e.g. ''jobscript.sh'', you can submit it by running

  sbatch jobscript.sh

on the command line. The job's output streams (''stdout'' and ''stderr'') are written to a file named ''slurm-<jobid>.out'' in the submission directory, unless you redirect them with the ''--output''/''--error'' options.
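
A short example of what submitting and inspecting such a job might look like (the job ID below is made up):

<code bash>
sbatch jobscript.sh      # -> Submitted batch job 1234567
cat slurm-1234567.out    # stdout/stderr of the running or finished job
</code>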
- | |||
- | ==== Job Monitoring ==== | ||
- | |||
- | For job monitoring (to check the current state of your jobs) you can use the '' | ||
- | |||
- | If you need to cancel a job, you can use the '' | ||
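
Typical invocations (the job ID is an example):

<code bash>
squeue -u $USER      # list all of your pending and running jobs
squeue -j 1234567    # show the state of one specific job
scancel 1234567      # cancel that job
scancel -u $USER     # cancel all of your jobs
</code>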
- | |||
- | ==== Node Types And Constraints ==== | ||
- | |||
- | On Goethe-HLR **four different types** of compute nodes are available. There are | ||
- | ^Number^Type^Vendor^CPU^Cores per CPU^Cores per Node^Hyper-Threads per Node^RAM [GB]^ | ||
- | |412|dual-socket |Intel|Xeon Skylake Gold 6148 | ||
- | |72 |dual-socket |Intel|Xeon Skylake Gold 6148 | ||
- | |139|dual-socket |Intel|Xeon Broadwell E5-2640 v4|10|20|40|128| | ||
- | |47 |dual-socket \\ GPU|Intel \\ AMD|Xeon Ivy Bridge E5-2650 v2 \\ FirePro S10000|6|12|24|128| | ||
- | |||
- | In order to separate the node types, we employ the concept of partitions. We provide three partitions | ||
- | ^Partition^Partition/ | ||
- | |general1|''# | ||
- | |general2|''# | ||
- | |gpu|''# | ||
- | |test|''# | ||
- | |||
- | ==== Per-User Resource Limits ==== | ||
- | |||
- | On Goethe-HLR, you have the following limits for the partitions '' | ||
- | |||
- | ^Limit^Value^Description^ | ||
- | | '' | ||
- | | '' | ||
- | | '' | ||
- | | '' | ||
- | | '' | ||
- | |||
- | ==== GPU Jobs ==== | ||
- | |||
- | Currently there are no GPU nodes available. In future: if you want to use GPUs in your calculations, | ||
- | < | ||
==== Hyper-Threading ====

On compute nodes you can use Hyper-Threading. That means, in addition to each physical CPU core a virtual core is available. SLURM identifies all physical and virtual cores of a node, so that you have 80 logical CPU cores on an Intel Skylake node, 40 logical CPU cores on an Intel Broadwell or Ivy Bridge node, and 24 logical CPU cores on a GPU node. If you don't want to use HT, you can disable it by adding

^Node type^sbatch command^
|Skylake|''#SBATCH --extra-node-info=2:20:1''|
|Broadwell / Ivy Bridge|''#SBATCH --extra-node-info=2:10:1'' (Broadwell) or ''2:6:1'' (Ivy Bridge)|

to your job script. Then you'll get half the threads per node (which corresponds to the number of physical cores). This can be beneficial in some cases (some jobs may run faster and/or more stable).
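
A minimal sketch of a job header with Hyper-Threading disabled on a Skylake node (values as in the table above, the executable is a placeholder):

<code bash>
#!/bin/bash
#SBATCH --partition=general1
#SBATCH --nodes=1
#SBATCH --ntasks=40                 # one task per physical core
#SBATCH --extra-node-info=2:20:1    # 2 sockets x 20 cores x 1 thread per core (HT off)
#SBATCH --time=01:00:00

srun ./my_program                   # start one copy of the (placeholder) program per task
</code>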
- | |||
- | ==== Bundling Single-Threaded Tasks ==== | ||
- | |||
- | **Note:** Please also see the Job Arrays section below. Because only full nodes are given to you, you have to ensure, that the available resources | ||
- | |||
- | <code bash># | ||
- | #SBATCH --partition=general1 | ||
- | #SBATCH --nodes=1 | ||
- | #SBATCH --ntasks=40 | ||
- | #SBATCH --cpus-per-task=1 | ||
- | #SBATCH --mem-per-cpu=2000 | ||
- | #SBATCH --time=01: | ||
- | #SBATCH --mail-type=FAIL | ||
- | |||
- | export OMP_NUM_THREADS=1 | ||
- | |||
- | # | ||
- | # Replace by a for loop. | ||
- | |||
- | ./program input01 >& 01.out & | ||
- | ./program input02 >& 02.out & | ||
- | |||
- | ... | ||
- | |||
- | ./program input40 >& 40.out & | ||
- | # Wait for all child processes to terminate. | ||
- | wait | ||
- | </ | ||
- | |||
- | In this (SIMD) example we assume, that there is a program (called '' | ||
- | |||
- | If the running times of your processes vary a lot, consider using the //thread pool pattern//. Have a look at the '' | ||
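
A minimal thread-pool sketch based on ''xargs'' (the program name and the number of inputs are placeholders): at most 40 tasks run at any time, and a new task is started as soon as a slot becomes free.

<code bash>
#!/bin/bash
# Process input01 ... input99 with at most 40 concurrent single-threaded tasks.
seq -w 1 99 | xargs -P 40 -I{} bash -c './program "input$1" > "$1.out" 2>&1' _ {}
</code>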
- | |||
- | ==== Job Arrays ==== | ||
- | |||
- | If you have a lot of single-core computations to run, job arrays are worth a look. Telling SLURM to run a job script as a job array will result in running that script multiple times (after the corresponding resources have been allocated). Each instance will have a distinct '' | ||
- | |||
- | Due to our full-node policy, you still have to ensure, that your jobs don't waste any resources. Let's say, you have 320 single-core tasks. In the following example 320 tasks are run inside a job array while ensuring that only 40-core nodes are used and that each node runs exactly 40 tasks in parallel. | ||
- | |||
- | <code bash># | ||
- | #SBATCH --partition=general1 | ||
- | #SBATCH --nodes=1 | ||
- | #SBATCH --ntasks=40 | ||
- | #SBATCH --cpus-per-task=1 | ||
- | #SBATCH --mem-per-cpu=2000 | ||
- | #SBATCH --time=00: | ||
- | #SBATCH --array=0-319: | ||
- | #SBATCH --mail-type=FAIL | ||
- | |||
- | my_task() { | ||
- | # Print the given " | ||
- | # followed by the hostname of the executing node. | ||
- | | ||
- | echo "$K: $HOSTNAME" | ||
- | |||
- | # Do nothing, just sleep for 3 seconds. | ||
- | sleep 3 | ||
- | } | ||
- | |||
- | # | ||
- | # Every 40-task block will run on a separate node. | ||
- | |||
- | for I in $(seq 40); do | ||
- | # This is the " | ||
- | # 320 tasks, J will range from 1 to 320. | ||
- | | ||
- | |||
- | # Put each task into background, so that tasks are executed | ||
- | # concurrently. | ||
- | | ||
- | |||
- | # Wait a little before starting the next one. | ||
- | sleep 1 | ||
- | done | ||
- | |||
- | # Wait for all child processes to terminate. | ||
- | wait | ||
- | </ | ||
- | |||
- | If the task running times vary a lot, consider using the //thread pool pattern//. Have a look at the '' | ||
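
Individual array tasks show up in ''squeue'' as ''<jobid>_<index>'' and can be cancelled separately (the IDs below are examples):

<code bash>
squeue -u $USER      # array tasks are listed as e.g. 1234567_40
scancel 1234567_40   # cancel a single array task
scancel 1234567      # cancel the whole array
</code>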
- | |||
- | ==== OpenMP Jobs ==== | ||
- | |||
- | For OpenMP jobs, set the '' | ||
- | |||
- | <code bash># | ||
- | #SBATCH --partition=general1 | ||
- | #SBATCH --ntasks=1 | ||
- | #SBATCH --cpus-per-task=40 | ||
- | #SBATCH --mem-per-cpu=200 | ||
- | #SBATCH --mail-type=ALL | ||
- | #SBATCH --time=48: | ||
- | |||
- | export OMP_NUM_THREADS=40 | ||
- | ./ | ||
- | </ | ||
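
Instead of hard-coding the thread count twice, you can derive it from the environment variable SLURM sets for the job (a small sketch):

<code bash>
# SLURM_CPUS_PER_TASK mirrors the --cpus-per-task value of the job.
export OMP_NUM_THREADS=${SLURM_CPUS_PER_TASK:-1}
./omp_program      # placeholder for your OpenMP executable
</code>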
- | |||
- | |||
- | ==== MPI Jobs ==== | ||
- | |||
- | **Remember: | ||
- | |||
- | See also: http:// | ||
- | |||
- | As an example, we want to run a program that spawns 80 Open MPI ranks and where 1200 MB of RAM are allocated for each rank. | ||
- | |||
- | <code bash># | ||
- | #SBATCH --partition=general1 | ||
- | #SBATCH --ntasks=80 | ||
- | #SBATCH --cpus-per-task=1 | ||
- | #SBATCH --mem-per-cpu=1200 | ||
- | #SBATCH --mail-type=ALL | ||
- | #SBATCH --extra-node-info=2: | ||
- | #SBATCH --time=48: | ||
- | |||
- | module load mpi/ | ||
- | export OMP_NUM_THREADS=1 | ||
- | mpirun -n 80 ./ | ||
- | </ | ||
- | |||
- | ==== Combining Small MPI Jobs ==== | ||
- | |||
- | As mentioned earlier, running small jobs while full nodes are allocated leads to a waste of resources. In cases where you have, let's say, a lot of 20-rank MPI jobs (with similar runtimes and low memory consumption), | ||
- | |||
- | <code bash># | ||
- | #SBATCH --partition=general1 | ||
- | #SBATCH --nodes=1 | ||
- | #SBATCH --ntasks=40 | ||
- | #SBATCH --cpus-per-task=1 | ||
- | #SBATCH --mem-per-cpu=2000 | ||
- | #SBATCH --time=48: | ||
- | #SBATCH --mail-type=FAIL | ||
- | |||
- | export OMP_NUM_THREADS=1 | ||
- | mpirun -np 20 ./program input01 >& 01.out & | ||
- | # Wait a little before starting the next one. | ||
- | sleep 3 | ||
- | mpirun -np 20 ./program input02 >& 02.out & | ||
- | # Wait for all child processes to terminate. | ||
- | wait | ||
- | </ | ||
- | |||
- | You might also need to disable core binding (please see the '' | ||
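
For Open MPI, for example, binding can be switched off per ''mpirun'' call (a sketch, other MPI libraries use different options):

<code bash>
mpirun --bind-to none -np 20 ./program input01 >& 01.out &
mpirun --bind-to none -np 20 ./program input02 >& 02.out &
wait
</code>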
- | |||
- | ==== Hybrid Jobs: MPI/OpenMP ==== | ||
- | |||
- | MVAPICH2 example script (40 ranks, 6 threads each and 200 MB per thread, i.e. 1.2 GB per rank; so, for 40*6 threads, you'll get six 40-core nodes): | ||
- | |||
- | <code bash># | ||
- | #SBATCH --partition=general1 | ||
- | #SBATCH --ntasks=40 | ||
- | #SBATCH --cpus-per-task=6 | ||
- | #SBATCH --mem-per-cpu=200 | ||
- | #SBATCH --mail-type=ALL | ||
- | #SBATCH --extra-node-info=2: | ||
- | #SBATCH --time=48: | ||
- | |||
- | export OMP_NUM_THREADS=6 | ||
- | export MV2_ENABLE_AFFINITY=0 | ||
- | mpirun -np 40 ./ | ||
- | </ | ||
- | |||
- | Please note, that this is just an example. You may or may not run it as-it-is with your software, which is likely to have a different scalability. | ||
- | |||
- | You have to disable the core affinity when running hybrid jobs with MVAPICH2. Otherwise all threads of an MPI rank will be pinned to the same core. Our example now includes the command | ||
- | |||
- | <code bash> | ||
- | export MV2_ENABLE_AFFINITY=0 | ||
- | </ | ||
- | |||
- | which disables this feature. The OS scheduler is now responsible for the placement of the threads during the runtime of the program. But the OS scheduler can dynamically change the thread placement during the runtime of the program. This leads to cache invalidation, | ||
- | |||
- | ==== Local Storage ==== | ||
- | |||
- | On each node there is up to 1.4 TB of local disk space (see also [[# | ||
- | |||
- | ==== The salloc Command ==== | ||
- | |||
- | For interactive workflows you can use SLURM' | ||
- | |||
- | < | ||
- | salloc: Granted job allocation 197553 | ||
- | salloc: Waiting for resource configuration | ||
- | salloc: Nodes node45-[002-005] are ready for job | ||
- | [user@loginnode ~]$ | ||
- | </ | ||
- | |||
- | Now you can '' | ||
- | |||
- | < | ||
- | [user@loginnode ~]$ ssh node45-002 | ||
- | [user@node45-002 ~]$ hostname | ||
- | node45-002.cm.cluster | ||
- | [user@node45-002 ~]$ logout | ||
- | Connection to node45-002 closed. | ||
- | ... | ||
- | [user@loginnode ~]$ ssh node45-003 | ||
- | [user@node45-003 ~]$ hostname | ||
- | node45-003.cm.cluster | ||
- | [user@node45-003 ~]$ logout | ||
- | Connection to node45-003 closed. | ||
- | ... | ||
- | [user@loginnode ~]$ ssh node45-005 | ||
- | [user@node45-005 ~]$ hostname | ||
- | node45-005.cm.cluster | ||
- | [user@node45-005 ~]$ logout | ||
- | Connection to node45-005 closed. | ||
- | </ | ||
- | |||
- | Or you can use '' | ||
- | |||
- | < | ||
- | node45-002.cm.cluster | ||
- | node45-003.cm.cluster | ||
- | node45-005.cm.cluster | ||
- | node45-004.cm.cluster | ||
- | [user@loginnode ~]$ | ||
- | </ | ||
- | |||
- | Finally you can terminate your interactive job session by running '' | ||
- | |||
- | < | ||
- | salloc: Relinquishing job allocation 197553 | ||
- | [user@loginnode ~]$ | ||
- | </ | ||
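
If you prefer an interactive shell directly on one of the compute nodes instead of ''ssh'', you can also start it as a job step inside the salloc session (a sketch):

<code bash>
# Open an interactive shell on the first allocated node.
srun --nodes=1 --ntasks=1 --pty bash -i
</code>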
- | |||
- | ==== Planning Work ===== | ||
- | |||
- | By using the '' | ||
- | |||
- | - Submit a sleep job (allocate twenty intel20 nodes for 3 days), you can logout after running this command (but check the output of the squeue command first, if there is no corresponding pending job, then sth. went wrong): < | ||
- | $ sbatch --begin=202X-07-23T08: | ||
- | --partition=general2 --mem=120g \ | ||
- | --wrap=" | ||
- | - Wait until the time has come (07/23/202X 8:00am or later, there is no guarantee, that the allocation will be made on time, but the earlier you submit the job, the more likely you'll get the resources by that time). | ||
- | - Find out whether the sleep job is running (i.e. is in R state) and run a new job step within that allocation (see also http:// | ||
- | $ squeue | ||
- | JOBID PARTITION | ||
- | 2717365 | ||
- | |||
- | $ srun --jobid 2717365 hostname | ||
- | ...</ | ||
- | - Finally, don't forget to release the allocation, if there' | ||
- | $ scancel 2717365</ | ||