Atlas

About the Machine

The atlas.hpc.ut.ee machine is primarily meant for problems requiring large amounts of memory.

Usage of this machine is analogous to that of the rocket cluster – everything is done through the scheduling system. Keep in mind, though, that the scheduling system running on atlas.hpc.ut.ee is completely separate from that of the rocket cluster, and no jobs submitted on atlas will end up on rocket nodes or vice versa.

If there are enough idle resources on the ebc cluster (30 nodes with 256GB of RAM each, running CentOS 7), the atlas scheduling system will also schedule jobs there. If your job must run on the atlas node specifically (e.g. it uses software dependencies only available on atlas), you can specify --nodelist=atlas using SLURM.
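For example, an interactive session pinned to the atlas node could be started like this (the memory request is only an illustration, adjust it to your needs):

$ srun --nodelist=atlas --mem=4000 --pty bash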

Atlas is presently running Scientific Linux 6.8.

Please do not try to run your computations without the scheduling system!

To manage your jobs, you can use the same commands as on the rocket.hpc.ut.ee machine (see Using SLURM).
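For reference, the most common SLURM commands for submitting and managing jobs are listed below; the script name and job ID are placeholders:

$ sbatch myjob.sh     # submit a batch script
$ squeue -u $USER     # list your pending and running jobs
$ scancel 123456      # cancel a job by its job ID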

By default, a single processor core and 2GB of RAM are reserved for each individual job. It is possible to change the amount of requested memory with the "--mem" parameter of the srun command:

$ srun --mem=8000 --ntasks-per-node=8 --pty bash

This asks the resource manager for 8 cores and 8GB of RAM and starts an interactive bash session (--pty bash).
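The same resources can also be requested non-interactively through a batch script. A minimal sketch, with a hypothetical job name, time limit and program name:

#!/bin/bash
#SBATCH --ntasks-per-node=8    # 8 cores on one node
#SBATCH --mem=8000             # 8GB of RAM for the job
#SBATCH --time=01:00:00        # example time limit
#SBATCH --job-name=example     # hypothetical job name

srun ./my_program              # replace with your actual program

The script is submitted with sbatch, and the same options (including --nodelist=atlas) can be added as additional #SBATCH lines.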

Hardware (HP ProLiant DL980 G7):

  • 8 x Intel(R) Xeon(R) CPU E7-2850 @ 2.00GHz (80 cores in total)
  • 2TB RAM
  • ~4TB of local SAS storage
  • 4x QDR InfiniBand

Storage:

  • /tmp (4TB) – local storage space, the fastest storage available on atlas.hpc.ut.ee. Use it only for short-term input data and intermediate results; do not use it for long-term data storage, as it may be lost on reboot (see the example script after this list).
  • /gpfs/hpc/home – connected to Atlas over a 1 Gbit/s Ethernet network.
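As noted above, /tmp is best used as a scratch area: copy input data there at the start of a job, compute, and copy the results back to home storage at the end. A minimal sketch of such a batch script (file names, the program name and the resource request are placeholders):

#!/bin/bash
#SBATCH --mem=8000

# Create a private scratch directory on fast local storage
WORKDIR=/tmp/$USER/$SLURM_JOB_ID
mkdir -p "$WORKDIR"

# Stage input data from home storage (placeholder file name)
cp ~/input.dat "$WORKDIR"/
cd "$WORKDIR" || exit 1

# Run the computation (placeholder program)
./my_program input.dat > output.dat

# Copy results back to home storage and clean up /tmp
cp output.dat ~/
rm -rf "$WORKDIR"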