James and Charles CDT cluster

The James and Charles cluster is for Pervasive Parallelism CDT and Data Science CDT students.

Types of hardware

This cluster has several types of node.

The head node
The head node is called cdtcluster. To use the "charles" GPU nodes, first ssh cdtcluster, then use Slurm.
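
For example, a first session on the head node might look like this (a sketch; check sinfo once logged in for the current partition and node names):

ssh cdtcluster      # log in to the head node
sinfo               # list the partitions and nodes Slurm knows about
squeue -u $USER     # show any jobs you already have queued or running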

Please also read the guidance on how to use a cluster without breaking it.

The "charles" GPU nodes
To use the "charles" GPU nodes, first login to a head node (see above).
The "charles" nodes are a mix of Dell PowerEdge R730 and Dell PowerEdge T630. Each has two 16 core Xeon CPUs.
The GPUs are a mix of NVIDIA cards: Tesla K40m, GeForce GTX Titan X and GeForce Titan X.
charles01 to charles04 each have one Tesla K40m and one GeForce GTX Titan X.
charles05 to charles10 each have two Tesla K40m.
charles11 to charles14 have a mix of GeForce GTX Titan X and GeForce Titan X (Pascal).
charles15 to charles19 have 4 GeForce Titan X (Pascal).
The nodes have local filespace. Most nodes have between 0.6TB and 1.6TB, except for charles11-14 which have about 8TB each. The local filespace is in /disk/scratch and on some nodes also in /disk/scratch_big or /disk/scratch1. All space is on HDDs, there are no SSDs.
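
A minimal sketch of using a GPU node interactively and staging data onto its local scratch disk (the srun options are standard Slurm; "dataset.tar" and the per-user scratch directory are placeholders):

srun --gres=gpu:1 --pty bash            # interactive shell on a GPU node with one GPU allocated
nvidia-smi                              # confirm which GPU you were given
mkdir -p /disk/scratch/$USER            # keep your scratch files under your own directory
cp ~/dataset.tar /disk/scratch/$USER/   # stage data onto the node's local disk before running
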
The "apollo" GPU nodes
To use the "apollo" GPU nodes, first login to a head node (see above).
The two "apollo" nodes (apollo1 and apollo2) are HP ProLiant XL270d Gen 9 servers with 256GB of RAM. Each node has eight Tesla P100 GPUs.
The nodes have 6.5TB of local filespace in /disk/scratch (on SSD).
The "james" multiprocessor nodes
Each of these 21 nodes, numbered james01 to james21, is a Dell PowerEdge R815 with four 16-core Opteron CPUs, 256GB of memory and 4TB of disk space (all on HDD).

To use a "james" node just login with ssh, for example ssh james17.


Please note: for the duration of the COVID-19 situation, james[12-21] have been appropriated to act as extra RDP servers, and james[01-04] are being used to test Slurm. If you wish to test your code on Slurm, please submit an RT ticket.


The big memory nodes
anne and mary each have 1TB of memory but are otherwise like the "james" nodes.

Schedule your jobs with Slurm

Use of this cluster's GPU nodes is controlled by Slurm.
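
For example, a minimal batch script might look like the sketch below (the resource figures and "my_experiment.py" are placeholders to adjust; see the Slurm documentation for the full set of options):

#!/bin/bash
#SBATCH --job-name=example           # name shown in squeue
#SBATCH --gres=gpu:1                 # request one GPU
#SBATCH --mem=14000                  # memory in MB (placeholder value)
#SBATCH --time=08:00:00              # wall-clock limit (placeholder value)
#SBATCH --output=example_%j.log      # %j is replaced by the job ID
python my_experiment.py              # placeholder for your own program

Save it as, say, example.sh and submit it from the head node with sbatch example.sh; squeue -u $USER shows its state and scancel <jobid> removes it.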

Software

The cluster runs the standard SL7 version of DICE. If you would like software to be added, please ask Computing Support.

Files and backups

  • Jobs on the compute nodes cannot access your files in AFS. Instead there are home directories on a local distributed filesystem. You can copy files from AFS by logging in to a head node and then using the cp command (a sketch follows after this list). AFS home directories can be found under /afs/inf.ed.ac.uk/user/.
  • There is no disk quota but space is limited so please delete files you have finished with.
  • Your files are NOT backed up. DO NOT USE THIS FILESYSTEM AS THE ONLY STORAGE FOR IMPORTANT DATA. Disk space will be reclaimed once your access has finished. It is your responsibility to make copies of anything you wish to keep. There is some redundancy for disaster recovery.
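
For example, to copy a directory from your AFS home to your cluster home directory (a sketch; the "..." in the path is a placeholder, so locate your own home directory under /afs/inf.ed.ac.uk/user/ first):

ssh cdtcluster                                             # AFS is visible from the head node
cp -r /afs/inf.ed.ac.uk/user/.../myproject ~/myproject     # replace "..." with the path to your AFS home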

Admin mailing list

Users of this cluster are subscribed to the low-traffic cdtcluster mailing list.

Last reviewed: 16/04/2019
