A quick look back at AWS re:Invent 2015
Credit for the title belongs to Pam Murphy, COO of Infor, who delivered this gem in the keynote session of the Global Partner Summit at the Amazon Web Services (AWS) re:Invent 2015 conference, held in Las Vegas from Oct. 6-9. If you had any doubts that cloud computing is gaining steam (apologies for mixing water vapor metaphors), attending this event would have ended them. More than 18,000 attendees were at the main conference, and about 4,000 attended the Partner Summit (ANSYS is an AWS Advanced Technology Partner). Continue reading
Advanced simulation tools are essential for contemporary and competitive product design. But it is the assembly of these tools into an effective, automated design system that gives leading companies an additional advantage. One such company is Denmark-based Grundfos, one of the world’s leading pump manufacturers.
Grundfos estimates that pumps currently account for 10 percent of the world’s total electricity consumption. This fact provides ample incentive to improve pump efficiency, given the current drive for energy efficiency and reduction in carbon emissions. Grundfos produces pumps for a wide range of applications: circulator pumps for the heating, ventilating and air conditioning industry, as well as pumps for water supply, sewage, boiler and other industrial applications, and for inclusion in other OEMs’ equipment. With such a broad product line, it is clear that there is plenty of potential for putting an automated design loop system to work. Continue reading
While reading “Out of Our Minds” by Sir Ken Robinson, published in 2003, I was struck by one prediction that blew my mind: the possibility of backing up the information in our brains. It did not seem convincing, even allowing some forty-odd years into the future. A Google search revealed that the book was actually quoting a prediction by the renowned futurologist Dr. Ian Pearson.
“By about 2040, there will be a backup of our brains in a computer somewhere, so that when you die it won’t be a major career problem.” – Ian Pearson
Based on a recent announcement that ANSYS and Cray have smashed supercomputing records, the editor of a well-known magazine followed up and asked me whether this achievement might help compensate for the slowdown of Moore’s Law. I was only able to respond briefly; it was the end of the day, and while driving home the question stayed in my head and became the origin of this blog. Continue reading
I was speaking with an ANSYS HFSS developer about a year ago when he mentioned they were starting to see customers who wanted to run 3-D full-wave electromagnetic field simulations that would need more than a terabyte of computer system memory, a scale this developer hadn’t been able to support before. Continue reading
Today, we announced our new ANSYS Enterprise Cloud solution, a combined service and software solution designed to help our global accounts move simulation into the public cloud. Based on my own discussions with customers, the solution is well-matched to current trends and business challenges. Let me explain. Continue reading
In a previous blog, I wrote about our privilege of having a strong HPC technology partnership with NVIDIA. Earlier this week, we announced a supercomputing milestone: fluid dynamics simulations scaled to 36,000 cores, achieved thanks to a strategic partnership with the National Center for Supercomputing Applications (NCSA). Now, you may wonder what relevance this achievement has for you if you don’t have access to a supercomputer. Continue reading
In the first part of this two-part post about high-performance computing, I addressed three commonly held myths associated with HPC. Now I’ll address three myths related to particular concerns about HPC adoption.
Myth #4: “Without internal IT support, HPC cluster adoption is undoable” Continue reading
You’ve heard all the talk about simulation-based design. You’ve listened to colleagues, maybe even some of your competitors, wax on about how doing robust simulation studies early in the design cycle leads to more and better product ideas while also optimizing the use of materials. In fact, you’re sold on the need to embrace advanced analysis, but you just don’t see how it’s feasible given the perceived complexity and cost of the simulation software, not to mention the high-powered workstation gear. Continue reading
Looking back at my notes from conversations with many engineers during our recent ANSYS Convergence Conferences, I must admit that I still came across some myths and misconceptions about high-performance computing (HPC) for engineering simulation. Let me share six really striking ones with you:
- HPC is available on supercomputers only
- HPC is only useful for CFD simulations
- I don’t need HPC – my job is running fast enough
- Without internal IT support, HPC cluster adoption is undoable
- Parallel scalability is all about the same, right?
- HPC software and hardware are relatively expensive
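On the scalability myth above: parallel scalability is emphatically not "all about the same." Amdahl's law caps a job's speedup by its serial fraction, so two simulations on the identical cluster can scale very differently. Here is a minimal sketch of that effect; the serial fractions used are illustrative assumptions, not measured ANSYS benchmark data:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup on `cores` cores for a job whose serial part
    accounts for `serial_fraction` of its single-core runtime."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Compare two hypothetical workloads on a 1,024-core cluster:
# one with 10% serial work, one with only 1%.
for f in (0.10, 0.01):
    print(f"serial fraction {f:.0%}: "
          f"speedup on 1024 cores = {amdahl_speedup(f, 1024):.1f}x")
```

Even with unlimited cores, the first workload can never exceed a 10x speedup (1/0.10), while the second is capped at 100x; this is why per-solver scalability matters when evaluating HPC.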