Cloud Computing Best Practices for Engineering Simulation Pt.2

In the first part of this two-part post, I addressed the first four of eight cloud computing best practices, which are fundamentally related to simulation data and end-user access. Now I’ll address best practices associated with licensing, HPC workloads, and business support for cloud deployments.

5 – Re-Use On-Premise Licenses (or Not)

Right behind data security, the next most common question and issue I see relates to software licensing on the cloud. I believe that the ability to use licenses that already exist on-premise is key. This gives customers the ability to migrate their infrastructure to the cloud without having to decide how to apportion their license assets between on-premise and cloud. With the right network setup (either a site-to-site VPN or a point-to-point firewall configuration), the licenses that exist on-premise can be used interchangeably between the cloud and on-premise.

That said, and as the heading of this section implies, there is an alternative – which is simply to move licenses to a cloud-based license server, and/or to buy more licenses and house them on the cloud license server. In many ways this is simpler, and probably a good way to get started, but it doesn’t give you the flexibility you get when licenses are shared between the on-premise site and the cloud.
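To make the on-premise re-use concrete, here is a minimal sketch of how a cloud compute node might be pointed back at an on-premise FlexNet-style license server over the VPN. The hostname and port are illustrative placeholders, and treating the `ANSYSLMD_LICENSE_FILE` environment variable plus a simple TCP check as the whole configuration is a simplifying assumption, not a definitive setup:

```python
import os
import socket

# Hypothetical on-premise license server, reachable over the site-to-site VPN.
ONPREM_LICENSE_SERVER = "lic01.corp.example.com"
FLEXNET_PORT = 1055  # commonly-used FlexNet license-manager port (placeholder)

def license_path(host: str, port: int) -> str:
    """Build a port@host license path as used by FlexNet-style license managers."""
    return f"{port}@{host}"

def configure_license_env(host: str = ONPREM_LICENSE_SERVER,
                          port: int = FLEXNET_PORT) -> str:
    """Point this cloud node back at the on-premise license server.

    ANSYSLMD_LICENSE_FILE is the environment variable ANSYS solvers read;
    the host/port values here are placeholders for illustration.
    """
    path = license_path(host, port)
    os.environ["ANSYSLMD_LICENSE_FILE"] = path
    return path

def license_server_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Quick TCP probe to confirm the VPN/firewall allows license traffic."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("License path set to:", configure_license_env())
```

A reachability check like this is worth running early: a misconfigured VPN or firewall rule tends to show up as solver license timeouts rather than an obvious error message.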

6 – Consider a Mix of Business Models

This best practice is a bit of a cautionary tale: be careful what you ask for. I find that most customers moving to the cloud immediately ask for a pay-as-you-go model. In practice, pay-as-you-go models (for either software or hardware) may sound great, but they carry the risk of exceeding the simulation budget fairly quickly. After all, it’s difficult to know for certain how many computing resources (again, in terms of software and hardware) end-users will need for their simulation projects. So budget predictability is key, and I think it should be part of a mix of business models – another best practice on the cloud.

On the software side, it’s pretty evident that the traditional lease or paid-up model is the most cost-attractive way to address compute capacity needs for average workloads. When customers face burst capacity needs for short-term projects (scheduled well in advance), short-term leases are likely best suited. And customers with fluctuating or peak workloads will clearly be most interested in some kind of usage-based licensing model.
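As a rough illustration of why the mix matters, the sketch below compares a flat annual lease against a usage-based hourly rate and computes the break-even utilization. The prices are made-up placeholders, not actual ANSYS pricing:

```python
# Hypothetical prices -- real figures vary by product and contract.
ANNUAL_LEASE_COST = 30_000.0   # flat annual lease for one solver license
PAYGO_RATE_PER_HOUR = 10.0     # usage-based (pay-as-you-go) hourly rate

def yearly_cost_paygo(hours: float, rate: float = PAYGO_RATE_PER_HOUR) -> float:
    """Total yearly cost under pure pay-as-you-go licensing."""
    return hours * rate

def breakeven_hours(lease: float = ANNUAL_LEASE_COST,
                    rate: float = PAYGO_RATE_PER_HOUR) -> float:
    """Usage level above which the flat lease becomes cheaper than pay-as-you-go."""
    return lease / rate

if __name__ == "__main__":
    hours = breakeven_hours()
    # Below the break-even point, usage-based licensing wins;
    # above it, the predictable flat lease is the better deal.
    print(f"Pay-as-you-go is cheaper below {hours:.0f} solver-hours/year")
```

With these placeholder numbers, steady year-round workloads favor the lease, while occasional bursts favor usage-based licensing – which is exactly why a mix of models makes sense.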

On the hardware side, customers have similar needs: I hear more and more that customers want to mix their on-premise hardware with cloud computing resources. Among other drivers, I often say that the cloud allows customers to use the compute capacity they need, when they need it, and pay for what they use.

With the recent addition of our new Elastic Licensing, I believe we are in sync with what customers want on both the software and hardware sides: support for a mix of business/licensing models that addresses their variety of needs, from average to burst to fluctuating workloads.

7 – Match Your Cloud to Your HPC Workload

Cloud for simulation is fundamentally about getting access to computing power. But not all workloads require the same cloud backend. If you’re after extreme HPC scaling, you may need to go to a national lab where 30,000 cores can be allocated to a single job. If you’re trying to get throughput for 100 or 1,000 simultaneous jobs – to explore a design space – you’ll need a cloud backend that supports that kind of job submission (for example, a job submission tool that supports ANSYS Workbench and our RSM). It might be that GPU acceleration will help your application – or not. It might be that your job performance is limited by I/O and a parallel file system is key – or not. So this is an obvious point, but customers should consider benchmarking their workloads on the cloud backend to be sure they are getting the right match of technology to application.
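To illustrate the throughput scenario, here is a hedged sketch of a design-space sweep that generates and submits many independent jobs. The solver executable, its flags, and the swept parameter are hypothetical placeholders – a real setup would submit through a batch scheduler or a tool like RSM rather than the `runner` callback used here:

```python
from typing import Callable, List

# Hypothetical solver invocation; the executable name and flags are
# placeholders, not a documented ANSYS command line.
SOLVER_CMD = "solver"

def design_points(n: int) -> List[dict]:
    """Generate n design points for a simple one-parameter sweep."""
    return [{"id": i, "thickness_mm": 1.0 + 0.1 * i} for i in range(n)]

def build_job_command(point: dict) -> List[str]:
    """Build one batch-job command line for a single design point."""
    return [SOLVER_CMD,
            "--input", f"dp_{point['id']:04d}.inp",
            "--param", f"thickness={point['thickness_mm']:.2f}"]

def submit_sweep(n: int, runner: Callable[[List[str]], None]) -> int:
    """Submit n independent jobs.

    `runner` stands in for the real scheduler-submission call in a
    production setup; each job is independent, which is what makes
    this workload a throughput problem rather than a scaling problem.
    """
    count = 0
    for point in design_points(n):
        runner(build_job_command(point))
        count += 1
    return count

if __name__ == "__main__":
    # Dry run: just print the commands instead of submitting them.
    submitted = submit_sweep(5, runner=print)
    print(f"{submitted} jobs prepared")
```

The point of the sketch is the shape of the workload: hundreds of small independent jobs stress the backend’s job-submission and queueing machinery, not its interconnect – the opposite profile from a single 30,000-core job.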


I always encourage our cloud partners to run and publish our standard benchmarks, and we’re currently working with them to test and certify ANSYS workloads. For our ANSYS Enterprise Cloud solution, we have tested and certified a complete simulation environment (“our reference architecture”) including ANSYS applications, batch and interactive use, HPC management, license managers, etc.

8 – Start Small, Grow Organically… but Think Big

Although I hear companies (particularly their CIOs) expressing ambitions to move 100% to the cloud, it is not uncommon for them to start by looking for a significant project that is not well served by in-house infrastructure and tackling that project in the cloud. I think this is particularly true when companies have already invested significantly in on-premise computing resources. Because of the transformational nature of cloud adoption, I also believe it is best driven by a senior executive. To get that executive buy-in, the journey to the cloud will have to deliver the benefits they’re seeking, such as improved operational agility, greater IT flexibility, and accelerated innovation. Finally, without IT team involvement and a clear plan to get there, companies run the risk of a failed cloud adoption.

My experience has been that the more successful cloud projects result from a step-by-step approach to adoption. I believe that most companies are likely to reach a hybrid cloud model in the long run, and that the mix across private, managed, and public cloud sourcing will evolve over time. That is why I recommend starting with a holistic view covering all the different options. Through our cloud core team, we help customers gain clarity on cloud solutions, identify the various cloud options that can work for their engineering simulation activities, and subsequently define actionable steps.

While the focus of my post has so far been on customers migrating existing workloads to the cloud, it’s worth noting that technology startups have a unique opportunity to be “born in the cloud” for simulation, eliminating migration challenges and allowing for lighter computing devices on premise.

Cloud – Not “One Size Fits All”

From the above, I hope you will agree with me that cloud is not a “one size fits all” offering; multiple solutions are needed to address different customer needs. On the one hand, I see customers working on, or interested in, different HPC and cloud infrastructures – from desktop- or workstation-based computing, to so-called private or enterprise HPC, to hosted and public clouds. On the other hand, customers clearly have different requirements and practices for their engineering simulation – from HPC scale-up and software utilization optimization, to job scheduling, to the need for more collaboration across geographically distributed teams and greater mobility.

Therefore, I am a strong believer in the ANSYS Open Cloud Strategy™, that is:


This entry was posted in High Performance Computing, Tips & Tricks by Wim Slagter.

About Wim Slagter

Wim is Director for High Performance Computing (HPC) and Cloud Alliances at ANSYS, Inc. In his role, he is also responsible for the overall design and execution of the global HPC & Cloud partner program within Corporate Marketing at ANSYS. Prior to that, Wim was the Lead Product Manager of HPC. Wim was also Corporate Product Manager for the CFD product line and the Business Manager for Explicit Dynamics and Offshore. Before joining ANSYS in 2003, Wim worked at MSC Software from 1994 in various management positions related to software development, consulting, and sales. Wim holds a Ph.D. degree in Aerospace Engineering from Delft University of Technology in the Netherlands.