GPGPU Computing

There has been increasing interest in using GPGPU-based systems in the School, and we have some expertise in purchasing and managing systems equipped with these processors. NVIDIA/CUDA seems to have become the default environment for this, and it is where most of our experience lies.

What kind of problems are suitable for GPGPU?

It's difficult to give a general answer to this. If your code involves lots of large (especially nested) loops then you should look at GPGPU; in fact, you should be looking to parallelise such code anyway. If the code that executes within the loops can be reduced to a fairly simple algorithm, it's a good candidate. If you also have a dataset that fits, or can be made to fit, into the memory structures of the hardware, it's a very good candidate. And if your code involves manipulating large matrices, GPGPU should be jumping up and down at you shouting "me..ME...pick me". Note, however, that you will have to do some fairly low-level C programming to get the best results.
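To give a flavour of the loop-to-kernel transformation described above, here is a minimal sketch (not our supported code) of a per-element loop rewritten as a CUDA kernel: each GPU thread computes one iteration of what would be the loop body on a CPU. It uses unified memory (`cudaMallocManaged`) to keep the host/device copying out of the way; a real port would need error checking and, usually, explicit memory management.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one element: the loop body becomes the kernel body.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;
    float *x, *y;                     /* unified memory keeps the sketch short */
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   /* round up to cover all n */
    saxpy<<<blocks, threads>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);      /* each y[i] is now 2*1 + 2 = 4 */
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Compile with `nvcc saxpy.cu -o saxpy` on a machine with the CUDA SDK installed. The block/thread arithmetic is the standard pattern for mapping a 1-D loop onto the GPU's grid of thread blocks.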

What do I need to get started?

If you are interested in trying GPU computing, IS runs courses a couple of times a year as part of their support for EDDIE. The course gives a very good background on GPUs, and the tutorials provide a taster of compiling and running CUDA code on EDDIE nodes and of some of the techniques required.
If you want to go on from there, a fairly inexpensive graphics card is usually enough for initial development work, and we can provide information on purchasing GPUs.

General Access GPU machine

We have one general access GPGPU machine which is available to all staff and PhD students on request. Priority will be given to users who have no other access to GPU hardware. Depending on usage levels, the hardware may be available for undergraduate project work. If you wish to propose a project based on this resource, please contact support BEFORE publishing the proposal, otherwise the hardware may not be available.


For all hardware you will require a device driver and a development framework. In general (and for NVIDIA in particular) this will probably be a proprietary driver. The driver will have to be compatible with both your hardware and your choice of development framework. There are two frameworks in general use:


CUDA (Compute Unified Device Architecture) is NVIDIA's proprietary framework, which is mature and runs C/C++ and Fortran code natively. Third-party wrappers are available for an extensive number of languages, including Python, Perl, Fortran, Java, Ruby, Lua, Haskell, R and MATLAB.


OpenCL (Open Computing Language) is an open standard for heterogeneous computing. Code runs natively in C99, with APIs available in a number of programming languages (Python, Julia, Java, C++).


We (will shortly) provide a general access GPU compute service running on DICE. We also manage a number of servers for various groups, with a common software set based largely on standard DICE desktops. These are exclusively NVIDIA-based GPUs, and we provide access to a mix of software, largely driven by user request (if you'd like CUDA added to your desktop, just request it using the usual support mechanisms).

To be more flexible we install each CUDA SDK in /opt/cuda-X.X, as some users have requirements for specific versions. /opt/cuda should always point to the current "supported" version, and users can select which SDK to use by setting their environment appropriately ($PATH and $LD_LIBRARY_PATH type modifications). We are looking at upgrading the OS from SL6 to SL7.
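The environment changes described above amount to a few lines in your shell startup file. A sketch, assuming bash and the /opt/cuda-X.X layout (substitute whichever version you need):

```shell
# Select a specific CUDA SDK; /opt/cuda tracks the current supported default.
export CUDA_HOME=/opt/cuda-7.0
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

echo "using CUDA from $CUDA_HOME"
```

Putting the chosen SDK's bin directory first on $PATH means `nvcc` resolves to that version; $LD_LIBRARY_PATH does the same for the runtime libraries at execution time.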

Package: CUDA
Installed versions: 3.0, 3.1, 3.2.16, 4.0.17, 4.1.28, 5.0.35, 6.5.19, 7.0.28
Current supported version: 7.0.28
