The main part of the Rocket cluster consists of:
- 40 high-density AMD CPU nodes (called ares 1-20 and artemis 1-20)
- 8 compute nodes with GPUs (called falcon 1 to 6, pegasus 1 and 2)
- 4 high-memory Intel machines (called bfr 1 to 4)
- 12 Intel CPU nodes (called sfr 1 to 12)
- 2 head nodes (login1.hpc.ut.ee and login2.hpc.ut.ee)
- 8 testing nodes (called stage1 to stage8)
In addition to these nodes, there are several GPFS filesystem servers that provide fast storage for the entire cluster.
All of the machines mentioned above are connected to a fast InfiniBand fabric built on Mellanox switches.
In addition to InfiniBand, these machines are also connected to a regular Ethernet network (1/10/25/40 Gbit/s, depending on each node's needs) for easier access, providing fast connectivity from the cluster to the University's central network and beyond.
All nodes in the Rocket cluster run the latest release of Red Hat Enterprise Linux (RHEL) 9.
You can submit your computations to the cluster as jobs using the SLURM workload manager, as illustrated in the sketch below.
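
As a rough illustration only (not an official template for this cluster), a minimal SLURM batch script could look like the following. The partition name `main`, the module line, and the executable `./my_program` are placeholders and assumptions; check the cluster's own documentation for the real partition and module names.

```bash
#!/bin/bash
#SBATCH --job-name=example          # name shown in the queue
#SBATCH --partition=main            # placeholder: replace with a real Rocket partition name
#SBATCH --nodes=1                   # run on a single node
#SBATCH --ntasks=1                  # one task (process)
#SBATCH --cpus-per-task=4           # four CPU cores for that task
#SBATCH --mem=8G                    # total memory for the job
#SBATCH --time=01:00:00             # wall-clock limit (HH:MM:SS)
#SBATCH --output=example_%j.out     # output file, %j expands to the job ID

# Load any software environment your program needs (module names are site-specific).
# module load gcc

# Run the actual computation; ./my_program is a placeholder for your own executable.
srun ./my_program
```

Save the script as, for example, `job.sh` and submit it with `sbatch job.sh`. You can check the state of your queued and running jobs with `squeue -u $USER` and cancel a job with `scancel <jobid>`.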