UTS researchers can access High Performance Computing (HPC) via the eResearch HPCC (High Performance Computing Cluster).
The goals are:
- provide a shared resource across the UTS research community.
- provide a test-bed for larger HPCC projects destined for Intersect/NCI.
We have a dedicated documentation site for the HPCC at https://hpc.research.uts.edu.au. The HPC status page on the right, which shows node and queue status, will be moved to the documentation site soon.
Access to the Cluster
The HPCC consists of:
- Seventeen compute nodes, one login node, and a head node.
- The number of cores per node ranges from 28 to 64, for a total of a little over 600 cores (a rough sanity check of these totals appears after this list).
- Most nodes have 256 GB of RAM, while some have 512 GB for applications that require more memory. Total distributed memory is about 4 TB.
- Some nodes contain GPUs: either dual Nvidia Tesla K80 cards or a single Tesla P100.
- Most nodes have at least 3 TB of locally attached disk; some have 6 TB.
- 700 TB of Isilon storage shared with other eResearch infrastructure.
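As a rough sanity check of the core and memory totals above, here is a minimal sketch that aggregates a placeholder node inventory. The per-node split (15 standard nodes and 2 high-memory nodes, with the core counts shown) is an assumption for illustration only; this page states ranges and approximate totals, not the exact breakdown.

```python
# Placeholder inventory: chosen only to fall within the stated ranges
# (17 compute nodes, 28-64 cores each, 256 GB or 512 GB of RAM).
# The real per-node breakdown is not published on this page.
placeholder_nodes = (
    [{"cores": 32, "ram_gb": 256}] * 15    # assumed standard nodes
    + [{"cores": 64, "ram_gb": 512}] * 2   # assumed high-memory nodes
)

total_cores = sum(n["cores"] for n in placeholder_nodes)
total_ram_tb = sum(n["ram_gb"] for n in placeholder_nodes) / 1024

print(f"compute nodes: {len(placeholder_nodes)}")  # 17
print(f"total cores:   {total_cores}")             # 608 -> "a little over 600"
print(f"total RAM:     {total_ram_tb:.2f} TB")     # ~4.75 TB, the order of "about 4 TB"
```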
Acknowledging use of the HPCC
We would appreciate the following text, or similar, being used as an acknowledgement:
"Computational facilities were provided by the UTS eResearch High Performance Computer Cluster."