Some world records are the stuff of legend. The official land-speed record is 763 mph. The tallest man living measures 251 cm. The fastest ball bowled by any bowler is 100.23 mph, and the heaviest vehicle pulled over a level, 100 ft course weighs 68,090 kg. Compared to these feats, records for supercomputing can seem a little flat. However, they are no less impressive and, indeed, stand to have a far greater impact on our day-to-day lives.
As you may already be aware, ANSYS invests a great deal in partnerships to develop high performance computing (HPC) scalability and performance. From HPC partners such as Cray and SGI to supercomputer centers such as the National Center for Supercomputing Applications (NCSA) and HLRS in Stuttgart, ANSYS is committed to developing software that can scale to massive computing power and workloads.
One result of these partnerships is the incredible supercomputing achievement announced today in our press release with Cray and HLRS. But are these records just great headlines, or is there more substance to them?
The simple answer is yes. Supercomputing records point to an incredible set of capabilities currently under development. When applied to industry, these capabilities drive strategic, long-term investments that are focused on improving competitiveness.
To put it more plainly, chances are that supercomputing will be vital to how an engineering or manufacturing enterprise finds new ways to improve, be it new products or new processes. Those improvements tend to require greater computational scalability. For example, in our field of simulation, these records demonstrate the ability to run larger, higher-fidelity simulation models that provide greater insight into product performance.
These world records matter because supercomputers are only as good as the simulation algorithms that run on them. To fully exploit a supercomputer like Hazel Hen at HLRS, we must develop codes capable of harnessing a large share of the many thousands of processing cores available.
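To see why the codes themselves matter so much, consider Amdahl's law, a standard back-of-the-envelope result (not part of the original announcement, and the core count and serial fractions below are purely illustrative): even a tiny fraction of work that cannot be parallelized caps the speedup you can get, no matter how many cores the machine has.

```python
def amdahl_speedup(cores, serial_fraction):
    """Maximum theoretical speedup on `cores` processors when
    `serial_fraction` of the work cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Taking 100,000 cores as an illustrative figure: if just 1% of a code
# is serial, the speedup saturates below 100x -- 99.9% of the machine's
# potential is wasted unless the algorithm itself is made more scalable.
for f in (0.01, 0.001):
    print(f"serial fraction {f:.1%}: "
          f"speedup on 100,000 cores = {amdahl_speedup(100_000, f):,.0f}x")
```

This is why record-scale runs are more than headlines: they demonstrate that the serial bottlenecks in a simulation code have genuinely been driven down.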
How does this then go beyond the supercomputing laboratory? First, it enables the simulation of complete, complex systems that previously could not be modeled.
As you can learn from this webinar, turbomachinery design teams at GE Global Research rely on modeling and simulation software running on world-class high-performance computers. Aerospace manufacturers now have ‘extreme scalability’ demands for running huge simulation models, whether complete aircraft engines or real-time, CFD-based in-flight simulations. There is also a growing need to accurately model the impact and efficacy of complete wind and tidal turbine farms, combining structural finite element analysis with dynamic CFD. This demand for massive multiphysics capability will only increase as products across industries become smarter and more sophisticated.
Elsewhere, computational demands will increase as companies develop digital twins of complete physical products or processes. ANSYS has already demonstrated the fusion of digital twin simulation and the Internet of Things to proactively troubleshoot and control complete pumping systems. At an industrial scale, this would need huge computing power.
As you can read in one of my previous blogs, initiatives such as the Private Sector Program (PSP) at NCSA, which brings together industrial users such as Boeing, Caterpillar, GE and Rolls Royce, hardware and software vendors, and HPC experts, are already becoming hotbeds for the application of supercomputing. For the PSP, the relentless breaking of records matches its own drive to continually develop ground-breaking new capabilities.
There is a very clear commercial driver behind all of this — the need for engineering and manufacturing leaders to innovate to remain competitive. In this blog, I already mentioned that two-thirds of US companies that use HPC say that “increasing performance of computational models is a matter of competitive survival”. Fifty-six percent of the same respondents said that new applications in the development queue will require significant increases in scalability.
Again, to translate that into everyday language: the likes of Boeing, Airbus, Ferrari, Caterpillar and other engineering leaders know that they need to use the most powerful supercomputers to stay ahead. And again, at the risk of repetition, this means that the supercomputing records matter.
As a closing example, consider the current attempt by Guy Martin to break the world speed record for the fastest motorcycle. One look at his bike tells you it has been designed with aerodynamics in mind, but at speeds of almost 275 mph, those aerodynamics, alongside the acoustics, the structural strength of the bike, and even the thermal and electromagnetic impact of the sensors on board, become one big question mark.
The only way to answer that question is with HPC-enabled simulation, which cloud computing is now putting within the reach of more and more engineering teams.
Simply put, to continue breaking records and redefining the possible in the real world, we must continually push the envelope of computing power.