The ILCC cluster was for members of ILCC and for students on the CDT in NLP.

This cluster has now been merged into the Informatics Compute Facility (ICF). The ILCC nodes are available through the Slurm partitions ILCC-Standard and ILCC-CDT.

If you used the ILCC cluster prior to the merge, your previous ilcc-cluster home directory is accessible at /disk/nfs/ostrom/$USER from the compute and head nodes. Going forward you have access to /home/$USER, which is on the cluster-wide Lustre filesystem shared between all the nodes.

Types of hardware

This cluster has several types of compute node. Access has transitioned from direct ssh access to Slurm-controlled access.

You can query Slurm for the partitions, nodes and GPU resources ("gres"):

sinfo -OPartition,NodeList:60,Gres:60

This will only show resources that you can access.

The head nodes
The cluster is accessed via the head nodes (icf, icf2). To use the Slurm-controlled nodes, first ssh to icf, then use Slurm.
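As a minimal sketch of a typical session (the fully qualified hostname icf.inf.ed.ac.uk is an assumption based on the head node name icf; the partition name comes from the list above):

```shell
# Log in to a head node.
ssh icf.inf.ed.ac.uk

# List the partitions, nodes and GPU resources visible to you.
sinfo -OPartition,NodeList:60,Gres:60

# Start an interactive shell on a compute node in the ILCC-Standard partition.
srun -p ILCC-Standard --pty bash
```

For anything longer than a quick interactive test, submit a batch job with sbatch instead of holding an interactive session open.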

See also the general guidance on how to use a cluster without breaking it, and the research cluster documentation.

The status and resources of the ICF can be viewed with Slurm-Web at https://icfwebview.inf.ed.ac.uk
To see the resources in a partition, click Resources in the left menu bar, then filter by partition. The GPUs and cores in use on the compute nodes can be viewed by expanding the diagram of the nodes.

Schedule your jobs with Slurm

Use of this cluster's GPU nodes is controlled by Slurm.
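As an illustrative sketch of a batch GPU job (the job name, GPU count, time limit and the nvidia-smi payload are placeholder values; the partition name is taken from the list above):

```shell
#!/bin/bash
#SBATCH --partition=ILCC-Standard   # an ILCC partition from the list above
#SBATCH --job-name=example-job      # placeholder job name
#SBATCH --gres=gpu:1                # request one GPU
#SBATCH --time=01:00:00             # one-hour limit; adjust to your workload
#SBATCH --output=%x-%j.out          # log file named after job name and job id

# Commands here run on the allocated compute node.
nvidia-smi
```

Save this as job.sh on a head node, submit it with "sbatch job.sh", and monitor it with "squeue -u $USER".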

Software

The research cluster (all head nodes and compute nodes) runs Ubuntu Focal DICE. If you would like software to be added, please ask Computing Support.

Files and backups

  • Jobs on the compute nodes cannot access your files in AFS. Instead there are home directories on a local distributed filesystem. You can copy files from AFS by logging in to a head node and using the cp command. AFS home directories can be found under /afs/inf.ed.ac.uk/user/. Alternatively, you can download files directly to the local disk space on the node.
  • There is no disk quota but space is limited so please delete files you have finished with.
  • Your files are NOT backed up. DO NOT USE THIS FILESYSTEM AS THE ONLY STORAGE FOR IMPORTANT DATA. Disk space may be reclaimed once your access has finished. It is your responsibility to make copies of anything you wish to keep. There is some redundancy for disaster recovery.
  • Please tidy up files on the local disk when you have finished using them. Note that files in /disk/scratch are automatically deleted if they have not been accessed for over 90 days.
  • After the merge with the Research and Teaching cluster, your home directory will be on the Lustre (wikipedia) filesystem. (Although a small number of users are still using the previous GlusterFS (wikipedia) filesystem.) NFS (wikipedia) is used to provide access to /disk/nfs/ostrom/.
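As a hedged example of the AFS-to-cluster copy described above ("mydata" is a placeholder, and the intermediate path component under user/ depends on your username, so check it first):

```shell
# On a head node: find your directory under the AFS user tree.
ls /afs/inf.ed.ac.uk/user/

# Copy a directory from AFS into your cluster home directory.
# Replace <subdir> with the path component found above.
cp -r /afs/inf.ed.ac.uk/user/<subdir>/$USER/mydata /home/$USER/
```

Because /home/$USER is on the shared Lustre filesystem, the copied data is then visible from every compute node.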

Pre-existing files on the nodes

If you have data in scratch space on the nodes you can access it by running an srun job from the head node, specifying the partition (-p) and node (--nodelist) like this:

srun -p ILCC_GPU --nodelist=ostrom --pty bash
[ostrom]myuser: cd /disk/scratch
[ostrom]myuser: ls
aaaa	  bbbb	    uuuuuu  s1111111	s2222222   [...]
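Building on the session above (the node name ostrom comes from the example; "mydata" is a placeholder), you could copy pre-existing scratch data back to your shared home directory, which is visible from every node:

```shell
# From within the srun session on ostrom:
cp -r /disk/scratch/mydata /home/$USER/
```

Remember that /disk/scratch is local to each node and subject to the 90-day cleanup described above, so anything you want to keep should be moved off it.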
Last reviewed: 06/05/2026