The ILCC cluster is for members of ILCC and for students of the CDT in NLP.

As of 30th June 2024, the cluster is being merged with the School's Research Cluster.
If you have access to the ILCC cluster, you will be able to see extra Slurm partitions when logged into any head node of the merged cluster.
If you used the ILCC cluster prior to the merge, your previous ilcc-cluster home directory is accessible at /disk/nfs/ostrom/$USER from compute and head nodes. Going forward, you also have a home directory at /home/$USER, which is on the cluster-wide Lustre filesystem shared between all the nodes.
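For example, a minimal sketch of copying data from the old home directory to the new one (the myproject directory name is hypothetical; substitute your own paths):

# run on a head node; "myproject" is a hypothetical directory name
cp -r /disk/nfs/ostrom/$USER/myproject /home/$USER/myproject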

Types of hardware

This cluster has several types of compute node. Access has transitioned from direct ssh access to Slurm-controlled access.

You can query Slurm for the partitions, nodes and GPU resources ("gres"):

sinfo -OPartition,NodeList:60,Gres:60

This will only show resources that you can access.
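For example, to get a node-oriented view of a single partition (ILCC_GPU is the partition name used in the example further down this page; adjust as needed):

# one line per node: node name, gres, CPUs and memory
sinfo -p ILCC_GPU -N -o "%N %G %c %m"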

The head node
ilcc-cluster will direct you to an appropriate head node. Alternatively, the cluster can be accessed via the research/teaching head nodes (mlp, mlp1, mlp2). To use the Slurm-controlled nodes, first ssh to ilcc-cluster, then use Slurm.
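A minimal sketch of logging in and requesting an interactive GPU session (the partition name and GPU request are assumptions; see the Slurm section below):

ssh ilcc-cluster
# once on the head node, request an interactive session with one GPU
srun -p ILCC_GPU --gres=gpu:1 --pty bash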

Here's how to use a cluster without breaking it:

You can also refer to the research cluster documentation:

Schedule your jobs with Slurm

Use of this cluster's GPU nodes is controlled by Slurm.
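As a sketch (the partition name, resource requests and script name are assumptions; adjust them to your job), a minimal batch script submitted with sbatch myjob.sh might look like:

#!/bin/bash
#SBATCH --partition=ILCC_GPU   # partition name as used in the example below
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --time=01:00:00        # wall-clock limit
#SBATCH --mem=8G               # memory for the job

# your commands go here; nvidia-smi simply reports the allocated GPU
nvidia-smi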

Software

All head nodes and compute nodes in the research cluster run Ubuntu Focal DICE. If you would like software to be added, please ask Computing Support.

Files and backups

  • Jobs on the compute nodes cannot access your files in AFS. Instead, there are home directories on a local distributed filesystem. You can copy files from AFS by logging in to a head node and using the cp command (see the sketch after this list). AFS home directories can be found under /afs/inf.ed.ac.uk/user/. Alternatively, you can download files directly to the local disk space on the node.
  • There is no disk quota but space is limited so please delete files you have finished with.
  • Your files are NOT backed up. DO NOT USE THIS FILESYSTEM AS THE ONLY STORAGE FOR IMPORTANT DATA. Disk space may be reclaimed once your access has finished. It is your responsibility to make copies of anything you wish to keep. There is some redundancy for disaster recovery.
  • Please tidy up files on the local disk when you're finished using them. Note that files in /disk/scratch will be automatically deleted if they have not been accessed for over 90 days.
  • After the merge with the Research and Teaching cluster, your home directory will be on the Lustre filesystem (although a small number of users are still using the previous GlusterFS filesystem). NFS is used to provide access to /disk/nfs/ostrom/.
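A minimal sketch of copying data from AFS, run on a head node (the exact path under /afs/inf.ed.ac.uk/user/ depends on your username, and mydata is a hypothetical directory name):

# run on a head node: AFS is not visible from the compute nodes
cp -r /afs/inf.ed.ac.uk/user/<path-to-your-AFS-home>/mydata ~/mydata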

Pre-existing files on the nodes

If you have data in scratch space on the nodes, you can access it by running an srun job from the head node, specifying the partition (-p) and the node (--nodelist=) like this:

srun -p ILCC_GPU --nodelist=ostrom --pty bash
[ostrom]myuser: cd /disk/scratch
[ostrom]myuser: ls
aaaa	  bbbb	    uuuuuu  s1111111	s2222222   [...]
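From such an interactive session you can, for example, copy anything you still need from the node's scratch disk to your home directory before it is reclaimed (a sketch; /disk/scratch/$USER/mydata is a hypothetical path):

[ostrom]myuser: cp -r /disk/scratch/$USER/mydata ~/mydata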

Admin mailing list

Users of this cluster will be subscribed to a low-traffic mailing list in the near future.

Last reviewed: 03/07/2024