High Performance Computing
High Performance Computing (HPC) can be accessed by UTS researchers or higher
degree students. These systems are not available to undergraduates or for teaching.
Note that these resources are limited. If your team or research project requires more resources than UTS can supply, private nodes may be an option, although their number and size are limited by power and space constraints.
Contact eResearch for further information.
There are four avenues for UTS researchers to access High Performance Computing (HPC):
- The eResearch iHPC, an interactive HPC system.
- The eResearch HPC cluster, a traditional command-line batch submission system.
- NCI's and Intersect's HPC clusters.
- Amazon Web Services.
So that we can best advise on which system would suit your needs, please provide the following information when you email us:
- What faculty are you in?
- A brief description of your research project.
- What code do you wish to run?
- If you are a research student, who is your supervisor?
The eResearch iHPC (Interactive HPC)
The iHPC (Interactive High Performance Computing) is a unique HPC facility consisting of a number of clusters. Unlike many other HPC facilities, which provide command-line access only, the iHPC was designed to give users, many of whom have little or no Unix experience, a fully interactive graphical environment similar to the Windows or Mac desktop environments they are used to, while still letting them benefit from the significant performance improvements offered by high-end computer hardware.
For access go to the iHPC Portal page and click the "Account Request" link. You can also find further information about the iHPC on that site.
The eResearch HPC Cluster
This system is a High Performance Computing cluster running Linux. It provides a high core count (56 CPU cores) and high memory (up to 1.5 Terabytes per node). The large core count and high memory are well suited to processing large bioinformatics datasets, running complex physics simulations, or working with large multidimensional datasets such as those used in remote sensing. If your program is parallelised in a way that can span multiple nodes (for example, using MPI), you can use even more cores and memory. Jobs can run for extended periods of time, from days to weeks.
Your compute jobs are run by a scheduling system, which guarantees that the resources required for your code will be available when it runs. Your code will not share CPU cores or memory with other compute jobs. The HPC Getting Started page explains how to run compute jobs on this system, and we can provide training on its use to researchers. This system is also a "stepping stone" to the use of the NCI national facility (see below).
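As an illustration of the kind of program that can span multiple nodes, the minimal sketch below uses MPI via the mpi4py Python package to split a toy workload across processes. This is only a sketch under assumptions: it assumes an MPI library and mpi4py are available on the cluster, and the actual software modules and launch commands differ between systems (see the HPC Getting Started page for the specifics of the eResearch cluster).

    # Minimal MPI sketch (assumes mpi4py and an MPI library are installed on the cluster).
    # Each MPI process (rank) sums a slice of the data; rank 0 collects the partial sums.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # this process's ID
    size = comm.Get_size()      # total number of processes, possibly across several nodes

    # Toy workload: sum the integers 0..999999, split evenly across ranks.
    n = 1_000_000
    chunk = n // size
    start = rank * chunk
    stop = n if rank == size - 1 else start + chunk

    partial = sum(range(start, stop))

    # Combine the partial results on rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"Total computed across {size} processes: {total}")

A script like this would typically be launched through the scheduler with an MPI launcher (for example, mpirun); the exact submission procedure for the eResearch cluster is described on the HPC Getting Started page.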
For access email us at eResearch-IT@uts.edu.au with a brief description of your project and the code you wish to run.
National Computing Infrastructure (NCI) and Intersect's HPC Clusters
NCI is a national facility providing high performance computing. Like the eResearch HPC cluster, this is a traditional command-line based batch submission system running Linux. It has 155,000 cores, 567 Terabytes of memory and 640 GPUs. Information on the national HPC systems can be found on the NCI HPC Systems page on the NCI site. NCI runs regular training courses for researchers on the use of this system.
For access and further information email us at eResearch-IT@uts.edu.au with a brief description of your project and the code you wish to run.
Access to all UTS Systems
Access to the iHPC and the HPC cluster requires the use of the UTS VPN client. Instructions on installing this can be found on the ServiceConnect page (this site requires a UTS logon).
Acknowledging use of the HPC Facilities
We would appreciate the following text, or similar, being used as an acknowledgement in your publications:
"Computational facilities were provided by the UTS eResearch High Performance Compute Facilities."
NCI also requires acknowledgement of the use of its facilities in your publications.