Systems

The Hamilton HPC service has evolved through a number of hardware generations. The latest generation is called Hamilton8 and provides a total of 15,616 CPU cores, 36TB RAM and 1.9PB disk space. It runs a Linux operating system (Rocky Linux 8).

Hamilton8 is formed of many powerful servers, which are connected to each other via high-performance interconnects and share access to a fast filestore. Access to the bulk of its compute resources is provided by logging into one of the login servers (login nodes) and using the SLURM resource management system. Hamilton8 has both CPU and GPU nodes.
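Work is submitted to the compute nodes through SLURM batch scripts. The sketch below shows the general shape of such a script; the partition name, module name and executable are placeholders, not Hamilton8-specific values — check the service's own documentation for the real ones.

```shell
#!/bin/bash
# Minimal example Slurm batch script.
# Partition, module and program names below are ASSUMED placeholders.
#SBATCH --job-name=example
#SBATCH --partition=shared        # assumed partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128     # one task per core on a Hamilton8 CPU node
#SBATCH --time=01:00:00

module load gcc                   # assumed module name
srun ./my_program                 # hypothetical executable
```

Submitted with `sbatch script.sh` from a login node; `squeue` then shows the job's place in the queue.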

CPU provision:

  • 120 standard compute nodes, each with 128 CPU cores (2x AMD EPYC 7702), 256GB RAM (246GB available to users) and 400GB local SSD storage.
  • 2 high-memory compute nodes, each with 128 CPU cores (2x AMD EPYC 7702), 2TB RAM and 400GB local SSD storage.
  • InfiniBand HDR 200Gbit/s high-performance interconnect, with a 2.6:1 fat-tree topology formed of non-blocking islands of up to 26 nodes.
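The headline core count in the introduction can be sanity-checked against the node counts above; a quick sketch in Python:

```python
# Core count across the standard and high-memory CPU nodes listed above.
standard_nodes, high_mem_nodes = 120, 2
cores_per_node = 128  # 2x AMD EPYC 7702, 64 cores each

total_cores = (standard_nodes + high_mem_nodes) * cores_per_node
print(total_cores)  # 15616, matching the headline figure
```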

Each compute node has the following internal structure:

  • 2 CPU sockets.
  • Each socket is divided into 4 NUMA domains, each with its own memory channels.
  • Each NUMA domain is split into 4 groups (or "chiplets"), each with its own 16MB L3 cache.
  • Each group contains 4 CPU cores, each with its own 512KB L2 cache and 32KB L1 cache.
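The per-node totals implied by this hierarchy follow from a little arithmetic (cache sizes taken from the list above):

```python
# Topology of one Hamilton8 compute node, as described above.
sockets = 2
numa_per_socket = 4
chiplets_per_numa = 4
cores_per_chiplet = 4
l3_per_chiplet_mb = 16

cores_per_node = sockets * numa_per_socket * chiplets_per_numa * cores_per_chiplet
l3_per_node_mb = sockets * numa_per_socket * chiplets_per_numa * l3_per_chiplet_mb

print(cores_per_node)  # 128, matching the node specs above
print(l3_per_node_mb)  # 512 MB of L3 cache per node
```

The practical consequence of this layout is that threads sharing data perform best when pinned within one chiplet's L3 cache, and memory-bound jobs benefit from spreading across NUMA domains.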

GPU provision:

1 GPU node with:

  • 128 CPU cores (2x AMD EPYC 9555), 2.2TB RAM, 3TB local NVMe storage
  • 8x NVIDIA H200 NVL GPUs, each with 144GB of GPU memory
  • 2x NVLink bridges, each connecting four GPUs
  • 2x 100Gbit/s Ethernet