High-end Systems

December 2019 update

Since external funding for our systems ceased at the end of 2016, we have been working hard to ensure these valuable resources remained accessible to all stakeholders and that, as they reached the end of their life, replacements and upgrades were available. At the strategic level, this has meant full engagement in the University’s planning for the next stage in the maturing of research computing, now supported through the Petascale Campus Initiative (PCI). At the operational level, the move to incorporate our systems and people into the University’s Research Platform Services (ResPlat) was an obvious one.

Throughout 2018 our experienced staff worked with the ResPlat team to make this transition. We are very pleased that our systems’ support staff officially transferred to ResPlat at the end of that year, providing continuity of service for our users. Further, with our Snowy cluster being added to Spartan, we arranged for migrating projects to have a private partition made up of the first 11 Snowy nodes. All the Snowy nodes remained dedicated to Melbourne Bioinformatics users until the hardware upgrades delivered through the PCI started to come online in 2019.

Meanwhile, as we made this transition, users were encouraged to undertake training on launching and managing jobs on Spartan.
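Spartan schedules work through the Slurm workload manager, so launching a job amounts to writing a short batch script and submitting it with sbatch. The following is a minimal sketch only: the partition name, resource requests and script name are illustrative assumptions, and the values appropriate to your project should be taken from the Spartan documentation.

    #!/usr/bin/env python3
    #SBATCH --job-name=demo_job        # illustrative job name
    #SBATCH --partition=snowy          # assumed partition; use your project's allocation
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=01:00:00

    # Slurm reads the #SBATCH comment lines above before any code runs;
    # the remainder executes as ordinary Python on the allocated node.
    import os
    import socket

    print("Running on node:", socket.gethostname())
    print("CPUs allocated: ", os.environ.get("SLURM_CPUS_PER_TASK", "unknown"))

Submit the script with sbatch demo_job.py, monitor it with squeue -u $USER, and cancel it with scancel followed by the job ID.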

From now on, for all help with your research computing needs, please contact the support team via hpc-support@unimelb.edu.au. If your query relates specifically to Melbourne Bioinformatics expertise, it will be forwarded to our experts.

Our best wishes for success with your future research. We look forward to continuing to share our experience with the University’s data-intensive research community through a range of exciting projects and activities.

Systems

Lenovo x86 system – Snowy

  • Peak performance – compute nodes currently performing at 30 teraFLOPS (a back-of-the-envelope check follows this list).
  • 992 Intel Haswell compute cores running at 2.3GHz.
  • 29 nodes with 128GB RAM and 32 cores per node.
  • 2 nodes with 512GB RAM and 32 cores per node.
  • Connected to a high-speed, low-latency Mellanox FDR InfiniBand switch for inter-process communication.
  • The system runs the RHEL 6 operating system, a Linux distribution.
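To relate the quoted performance figure to the hardware above, a rough calculation of the theoretical peak is sketched below. It assumes each Haswell core can retire up to 16 double-precision floating-point operations per cycle (two 256-bit AVX2 FMA units); sustained performance is always somewhat lower, so the measured 30 teraFLOPS sits plausibly at around 80% of that peak.

    # Back-of-the-envelope check of the ~30 teraFLOPS figure quoted above.
    # Assumption: 16 double-precision FLOPs per cycle per Haswell core
    # (two 256-bit AVX2 fused multiply-add units).

    cores = 992
    clock_hz = 2.3e9
    flops_per_cycle = 16

    theoretical_peak = cores * clock_hz * flops_per_cycle
    print(f"Theoretical peak: {theoretical_peak / 1e12:.1f} teraFLOPS")    # ~36.5
    print(f"Quoted 30 TFLOPS is ~{30e12 / theoretical_peak:.0%} of peak")  # ~82%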

Storage infrastructure 

  • 700TB GPFS Parallel Data Store
  • 1PB HSM tape system, made available through GPFS