The Charles cluster is for Data Science CDT students.
Types of hardware
This cluster has several types of compute node.
- The head node
- The head node is called cdtcluster. To use the "charles" GPU nodes, first ssh cdtcluster, then use Slurm (a brief login sketch is given after this list). There is also separate guidance on how to use a cluster without breaking it.
- The "charles" GPU nodes
- To use the "charles" GPU nodes, first log in to a head node (see above).
The "charles" nodes are a mix of Dell PowerEdge R730 and Dell PowerEdge T630. Each has two 16 core Xeon CPUs.
The GPUs are a mix of NVIDIA cards: GeForce 3090, GeForce GTX Titan X and GeForce Titan X. The best way to identify the available cards is to connect to the cluster and query Slurm.
The nodes have local filespace. Most nodes have between 0.6TB and 1.6TB, except for charles11-14, which have about 8TB each. The local filespace is in /disk/scratch and on some nodes also in /disk/scratch_big or /disk/scratch1. All of this space is on HDDs; there are no SSDs.
- The "apollo" GPU nodes
- To use the "apollo" GPU nodes, first log in to a head node (see above).
The two "apollo" nodes (apollo1
andapollo2
) are HP ProLiant XL270d Gen 9 servers with 256GB of RAM. Each node has eight Tesla P100 GPUs.
The nodes have 6.5TB of local filespace in/disk/scratch
(on SSD).
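To make the login step above concrete, here is a minimal sketch. It assumes you have an account on the cluster and that ssh to cdtcluster works from your machine; the Slurm commands shown are standard ones and their output depends on the cluster's current configuration.

    # Log in to the head node (see "The head node" above).
    ssh cdtcluster

    # From the head node, interact with the cluster through Slurm, for example:
    sinfo     # summary of partitions and node states
    squeue    # jobs currently running or queued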
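To identify which GPU cards are available, one approach is to ask Slurm for each node's generic resources (GRES). This is a sketch using standard Slurm commands; the exact GRES names depend on how the cluster is configured.

    # List each node together with its configured GPUs (GRES).
    sinfo -N -o "%N %G"

    # Inspect a single node in more detail, e.g. one of the "charles" nodes:
    scontrol show node charles11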
Files and backups
- Jobs on the compute nodes cannot access your files in AFS. Instead there are home directories on a local distributed filesystem. You can copy files from AFS by logging in to a head node and then using the cp command (a sketch follows this list). AFS home directories can be found under /afs/inf.ed.ac.uk/user/.
- There is no disk quota, but space is limited, so please delete files you have finished with.
- Your files are NOT backed up. DO NOT USE THIS FILESYSTEM AS THE ONLY STORAGE FOR IMPORTANT DATA. Disk space will be reclaimed once your access has finished. It is your responsibility to make copies of anything you wish to keep. There is some redundancy for disaster recovery.
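As a rough sketch of moving data in and out (the directory and file names here are placeholders; the exact path of your AFS home directory under /afs/inf.ed.ac.uk/user/ depends on your username):

    # On a head node: copy a dataset from your AFS home directory
    # into your cluster home directory.
    cp -r /afs/inf.ed.ac.uk/user/<path-to-your-AFS-home>/mydata ~/

    # Copy results you want to keep back to AFS, since the cluster
    # filesystem is not backed up.
    cp -r ~/results /afs/inf.ed.ac.uk/user/<path-to-your-AFS-home>/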
Schedule your jobs with Slurm
Use of this cluster's GPU nodes is controlled by Slurm.
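As a minimal, illustrative example of a GPU batch job (the resource requests and file names below are assumptions, not a recommended configuration; check the cluster's partitions and limits with sinfo before submitting):

    #!/bin/bash
    # example.slurm - minimal single-GPU batch job (illustrative only)
    #SBATCH --job-name=example
    #SBATCH --gres=gpu:1            # request one GPU
    #SBATCH --time=01:00:00         # one hour wall-clock limit
    #SBATCH --output=example-%j.log # %j is the job ID

    # Run your program; nvidia-smi just shows which GPU was allocated.
    nvidia-smi

    # Submit from a head node with:  sbatch example.slurm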
Software
The cluster is being upgraded to run the Ubuntu Focal version of DICE; some nodes still run the previous SL7 version while their upgrades are completed. If you would like software to be added, please ask Computing Support.
Admin mailing list
Users of this cluster are subscribed to the low-traffic cdtcluster mailing list.
Last reviewed: 03/05/2023