
Facilities and Equipment

We have now moved into a purpose-built £40M building at Upper Mountjoy in Durham, a move indicative of the major investment the University is making in Computer Science. The building also houses colleagues from Mathematical Sciences, which means we can further develop joint teaching and research strategies.

The building has academic offices, offices for research staff and students, open-plan space for undergraduate students to work, breakout spaces to collaborate, labs, computer rooms and, of course, a café.

An innovative feature of the new building is the Hazan Venture Lab which is run by Careers and Enterprise and will be the first space in the University dedicated to developing student enterprise and entrepreneurship.

We are fortunate to have the following hardware available: several local and regional supercomputers for High-Performance Computing (HPC), a GPGPU-driven supercomputer used primarily for data analysis and machine learning, and several visualisation and data post-processing labs. On top of this, we host additional local equipment that gives students and researchers a safe environment to prototype solutions, to explore novel technologies before they reach the market, or to design new solutions of their own.


High Performance Computing Cluster

Durham hosts its own local supercomputers (the Hamilton family) and the DiRAC COSMA machine. In Computer Science, we furthermore have our own Dine installation, which we run in collaboration with DiRAC. Dine is a small 16-node general-purpose supercomputer. Its outstanding feature is its SmartNICs: network devices that are themselves equipped with processors and can intercept and reroute messages in the network, alter message content, or run computations on messages while they travel through the supercomputer's network. This allows our researchers to investigate novel data-transportation concepts, to simulate different supercomputer topologies, or to work on novel security concepts.
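As an illustration of how such a cluster is typically used, a batch job on a Slurm-managed system might look as follows. This is a hedged sketch: the partition name, module name, and executable are hypothetical, and the actual scheduler configuration will differ per machine.

```shell
#!/bin/bash
# Hypothetical Slurm batch script for a small multi-node MPI job.
# Partition, module, and binary names are illustrative only.
#SBATCH --job-name=proto-run
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --time=00:30:00
#SBATCH --partition=compute

module load openmpi            # load an MPI implementation (name varies per site)
mpirun ./my_solver input.cfg   # run the solver across all allocated ranks
```

Submitting the script with `sbatch` queues it; the scheduler then allocates the requested nodes and launches the job.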

Further details about the High Performance Computing Cluster are available.

Bede, the N8's new high-performance computing platform, is now open for users.

Researchers who wish to use the system should visit the N8 CIR website for details of the application process.

Bede is the N8's POWER- and GPU-based high-performance computing (HPC) platform. Unlike traditional x86 platforms, Bede is GPU-centric and uses NVIDIA's NVLink high-bandwidth interconnect to move data between GPU and system memory. This architecture is ideally suited to artificial intelligence (AI) and machine learning (ML) applications, and Bede can also work with much higher-resolution imagery than has previously been possible.

The system has been tailored so that the hardware and software environments make it as easy as possible for researchers to maximise the benefits of its unique GPU-based architecture.

Experienced users of high-performance platforms will be familiar with accessing a system such as Bede. However, researchers new to HPC shouldn't be put off from applying for access: each of the N8 universities has a research software engineer (RSE) dedicated to supporting Bede and its users.

The system is not restricted to N8 CIR's current research priorities of Digital Health and Digital Humanities; researchers from all academic domains should consider applying for access. Each of the N8 universities has equal access to the machine. In addition, there is a separate allocation for non-N8 EPSRC-funded researchers.

Full details of the system can be found on the N8 CIR website.

The page includes overviews of the system’s hardware, its software environments, details of the comprehensive RSE support available to users and an explanation of how to register projects and user accounts.


Our NVIDIA CUDA Centre (NCC) GPGPU system is a multi-node, multi-GPU server that we use primarily to prototype machine learning (ML) algorithms and codes before they move to the University's "big" system, Bede. The local system features an up-to-date ML software stack centred on PyTorch and TensorFlow, and currently consists of 48 GPUs (across different GPU generations) and 218 CPU cores. The cluster is regularly updated to meet modern ML requirements.

This cluster is multifunctional in that it supports all aspects of research and teaching. Undergraduate and postgraduate (both master's and PhD) students can access it to run novel ML studies. In terms of teaching, the cluster supports project modules as well as classroom-based demonstrations and workshops; for example, the Computer Science Jupyter Notebooks run on this cluster.
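As a sketch of the prototyping workflow such a cluster supports, a PyTorch script would typically select whichever device is available, so the same code runs unchanged on a laptop CPU and on the cluster's GPUs. The model and batch sizes below are purely illustrative, not a Durham-specific configuration.

```python
import torch

# Pick a GPU if one is visible to the process, otherwise fall back to CPU,
# so the same prototype runs locally and on the cluster unchanged.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny illustrative model and a single training step.
model = torch.nn.Linear(8, 1).to(device)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 8, device=device)   # dummy batch of 32 samples
y = torch.randn(32, 1, device=device)   # dummy targets

loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()                          # compute gradients
optimiser.step()                         # apply one optimisation step
```

Because the device is chosen at runtime, moving a prototype from a local machine to a GPU node requires no code changes, only a different job submission.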

Documentation on using this cluster is available here.

Visualisation Lab

"The purpose of computing is insight, not numbers." In the spirit of Richard Hamming's famous quote, Durham Computer Science hosts its own visualisation lab, equipped with fifteen freely movable large 3D screens, a rack of nine 4K screens, an autostereoscopic 3D monitor that does not require glasses, and plenty of smaller kit. The lab allows us to visualise the large data sets we obtain from various machines and to navigate interactively through big result sets. It also allows us to experiment with novel visualisation concepts, both from a software and a hardware perspective, and to prototype software that later runs in big visualisation centres.
