
Monash University’s Clayton facility has upgraded its supercomputer to the Nvidia-powered M3, part of an additional AU$4.1 million cash injection into MASSIVE. By Mark Johnston

Monash University has received an M3 high performance computing upgrade, built on Dell’s supercomputing platform and powered by Graphics Processing Unit (GPU) giant Nvidia. According to Steve Oberlin, CTO of Tesla, Nvidia’s accelerated computing business unit, the M3 is a high performance computer that will accelerate the university’s scientific research.

Monash University has invested AU$4.1 million (S$4.24 million) in this new AU$5.7 million project to fund M3, which is located at the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) in Clayton, Victoria.

Driving Advancement Through Collaboration

In collaboration with the CSIRO and the Australian Synchrotron, MASSIVE is a high performance computing facility designed specifically to process complex data.
According to Monash, over the past five years, MASSIVE has played a key role in driving discoveries across many disciplines including biomedical sciences, materials research, engineering, and geosciences.

Professor Christina Mitchell, Dean of the faculty of medicine, nursing and health sciences, said there is much we can learn from the data MASSIVE is receiving, with M3 holding the capability to link valuable data from the likes of MRIs to cancer research.

“M3 will be particularly important to the faculty of medicine by providing computing capacity that is malleable, connected, and can be shaped to support the needs of Monash’s strategic research domains,” she said. “The data is worthless without analysis. That is why we are so excited about MASSIVE.”

Aside from the M3 being ‘crazy fast’, Mr Oberlin said the upgrade is built on the Nvidia Tesla K80 for data processing and high-end visualisation, the latest generation of the platform, with approximately half a terabyte per second of memory bandwidth.

In addition to 50 Nvidia Tesla K80 GPU co-processor cards, each carrying two GPU chips, the M3 comprises eight Nvidia Grid K1 GPUs for medium-end visualisation, supporting up to 32 concurrent users; 1,700 Intel Haswell CPU cores; a 1.15 petabyte Lustre parallel file system capable of reading data at a peak of 24 gigabytes per second; and a 100 gigabit per second Ethernet network built on Mellanox Spectrum switches.
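The headline figures above can be combined into rough aggregates. The sketch below is a back-of-envelope calculation only; the per-card memory bandwidth figure is an assumption drawn from Nvidia’s published Tesla K80 specifications (roughly half a terabyte per second per card, as Mr Oberlin notes), not a number stated in the article.

```python
# Back-of-envelope aggregation of the quoted M3 specs.
K80_CARDS = 50          # quoted in the article
GPUS_PER_CARD = 2       # each K80 card carries two GPU chips (quoted)
K80_MEM_BW_GBPS = 480   # assumption: ~0.5 TB/s per card, per Nvidia's K80 datasheet

total_gpu_chips = K80_CARDS * GPUS_PER_CARD
aggregate_bw_tbps = K80_CARDS * K80_MEM_BW_GBPS / 1000  # convert GB/s to TB/s

print(total_gpu_chips)    # 100 GPU chips in total
print(aggregate_bw_tbps)  # 24.0 TB/s aggregate GPU memory bandwidth
```

These are theoretical peaks across the whole machine; real workloads see a fraction of them, but they illustrate why the upgrade is described as roughly four times faster than M2.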

Upgrading In Stages

M3 is yet another extension of the M-series, with the university moving through previous M1 and M2 stages. According to Mr Oberlin, the M3 is an addition of capability, with Monash confirming the M3 is approximately four times faster than the M2.

“High performance computing has been a really interesting thing to watch,” Mr Oberlin said. “How computational science has become both part of the discipline that helps people with theory, helps people with experimentation, with discovery, and also simulation and modelling has become a valid science domain in and of itself.

“If you look at all of the space discoveries, the biggest scientific instruments in the world, almost all of them rely heavily on extremely high performance computing to do modelling, to set up their experiments, to understand what is happening, and to sort through reams and reams of results, or to convert the output into a form that can be understood by mere humans.”

Designed For Research

The Monash M3 system is designed specifically for research and Nvidia expects other research customers will be looking to deploy K80 in the near future for the same benefits that Monash will be getting, including massive parallel compute power and the ability to process large amounts of data.

Nvidia said that processing big data in real time or near real time is the holy grail, adding that if you cannot get at it in real time, you will never catch up.

Mr Oberlin said that for years, the Australian Synchrotron has been using Nvidia accelerators for near real-time reconstruction of images that help it understand the atomic structure of the materials it is shining a very bright light through.

“It makes that instrument something that is actually useful and usable to help advance the likes of medicine and material sciences,” he said.

Also located in Melbourne, the Synchrotron is a cyclic particle accelerator that produces a powerful source of light, revealing the innermost structure of materials in very high detail. It has been operated by the Australian Nuclear Science and Technology Organisation since 2013 and, according to the Minister for Industry, Innovation and Science Christopher Pyne, it benefits over 4,000 researchers from Australia and New Zealand annually. The Synchrotron is also the largest piece of scientific infrastructure in the southern hemisphere.

Earlier this month, the New Zealand government and research sector announced a total investment of AU$4.5 million over three years in the Synchrotron. As part of the federal government’s AU$1.1 billion National Innovation and Science Agenda unveiled in December, the Australian Synchrotron was also allocated AU$520 million, with funding slated to commence later this year.

In collaboration with the CSIRO and the Victorian government, Monash, as well as the Australian Synchrotron, purchased two IBM supercomputers back in 2011.

The three-year project cost approximately AU$8 million, with the state government contributing AU$1.45 million, the National Computing Initiative fronting AU$1.2 million, and the remaining costs split among Monash University, the Synchrotron, CSIRO, and the Victorian Partnership for Advanced Computing.

Earlier this month, the Faculty of Science at the University of Western Australia (UWA) welcomed its own high-performance computing (HPC) cluster to its Perth campus to assist with computational chemistry, biology, and physics.

According to Dr Amir Karton, head of the computational chemistry lab at UWA’s School of Chemistry and Biochemistry, the Pople HPC places the faculty in a unique position for supporting their advanced research, saying the machine will be used for conducting multi-scale simulations of biochemical processes, studying gravitational waves, and simulating combustion processes which generate compounds important for seed germination.

The University of Queensland took a dive into big data technology after signing a multimillion-dollar deal with Australian high performance computing player Xenon Systems for a bespoke supercomputer in April last year, designed to crunch the numbers for large research projects.

The Australian Bureau of Meteorology expects to have its AU$77 million Cray XC-40 supercomputer up and running by mid-2016, and the Department of Defence’s Defence Science and Technology Group should have its own supercomputer later this year after going to tender in September, seeking a high performance Linux-based machine to support aerodynamic simulation and execute its Computational Fluid Dynamics simulations.

Extra Computing Firepower Will Help Biomedical Research

The system features a 1.15 petabyte Lustre parallel file system capable of reading data at 24 gigabytes per second, around four times faster than the system used for its older MASSIVE-2 supercomputer. It is connected to the rest of Monash’s research infrastructure through a 100Gb/s Mellanox Spectrum network.

The new supercomputer sits in the data centre at Monash University’s main Clayton campus, alongside the university’s pre-existing MASSIVE-2 cluster. The MASSIVE-1 supercomputer is located at the nearby Australian Synchrotron. MASSIVE-3 was officially switched on at a launch event hosted by Australia’s Chief Scientist and former Monash University chancellor Alan Finkel. “There’s a nexus between science and technology. Some think they are separate, but really they enable each other,” Mr Finkel said at the launch.

“We need world-class high-performance computing to make the massive breakthroughs.”

Monash University’s dean of the medicine faculty, Christina Mitchell, said the new supercomputer would be used for research into protein expression, genomic data and cellular-level microscopy.

“I heard a very compelling talk from a cancer researcher who argued we should stop doing experiments and start analysing the data we already have, because many of the answers are in the data researchers already have – if only they can make the links,” Mitchell said. “When you think there are 7 billion humans and 30,000 genes, we need to have some way of looking at that data and analysing it.

“Examining the links between gene sequences and human outcomes can help us to identify how to prevent disease… But that data is worthless without analysis, and that’s why we are so excited by the MASSIVE project.”

CSIRO director of manufacturing Keith McLean said uses of the new supercomputer will include data processing, real-time 3D imaging of synchrotron and medical beamlines, as well as 4D imaging of materials, organisms and organs.

“Partnering makes it easier to achieve critical mass and economies of scale, while also tapping into expertise from across the university system. By partnering we can also achieve economies of utilisation,” McLean said, adding that biosciences now account for over 50 percent of MASSIVE’s usage.

Monash University also said today it had become the first partner of Nvidia’s Technology Centre in Singapore, which will see the chip vendor invest in and assist with research involving the supercomputer.

At their launch in March 2012, MASSIVE 1 and 2 each contained 42 Intel servers and 84 Nvidia GPUs, delivering a theoretical peak of 49 TFlops. Since then, MASSIVE-2 has been upgraded and now includes 1720 CPU-cores and 244 Nvidia GPUs across 118 nodes.

Accompanying the three supercomputers is a data visualisation facility called CAVE-2, which the university rolled out two years ago. The launch of the new Monash supercomputer comes just weeks after Adelaide University unveiled its new supercomputer, dubbed Phoenix, which benchmarked at 300 TFlops at launch.
