Do you routinely tackle mechanical simulations in the field of linear dynamics? Are you unhappy with your simulation times? If you answered yes to both questions, keep reading. ANSYS Mechanical 17.0 has many exciting new features, far too many to cover here. This post focuses on just one of them: the improved performance for linear dynamics simulations in Distributed ANSYS. Continue reading
On the high performance computing (HPC) front, numerous improvements were introduced in ANSYS Mechanical APDL 16.0. However, I would like to focus this post on a feature that demonstrates the technological leadership of our company. ANSYS Mechanical APDL 15.0 was the first commercial FEA software product to support the Intel® Xeon Phi™ coprocessor. In ANSYS Mechanical APDL 16.0, we extend that support to virtually all users: Xeon Phi coprocessors can now be used on either Linux or Windows, with both shared-memory parallel (SMP) and distributed-memory parallel (DMP) processing. Continue reading
Some time ago, I wrote a couple of posts describing the performance of ANSYS Mechanical APDL on several different tablet computers. Previously, I had studied two separate tablets: one from Fujitsu, which was more of a shrunken laptop with an Intel® Core i5 processor, and a second from Dell, which had an Intel® Atom™ processor and was more in line with the look and feel of an iPad. The Fujitsu tablet was clearly faster, but bulkier and pricier. The Dell tablet was lighter, smaller, cheaper, and also less powerful. Continue reading
ANSYS 15.0 contains a number of amazing achievements in the area of high performance computing (HPC) for the Mechanical APDL product. Thanks to improved domain decomposition algorithms, performance is up to 5 times faster than in previous releases, especially at higher core counts.
In addition, new parallel functionality was added in this release. One of the most important new features is the subspace eigensolver for vibration analyses, which supports distributed-memory parallel processing and can be several times faster than the widely used Block Lanczos eigensolver. Continue reading
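For readers unfamiliar with what such an eigensolver computes: a modal (free-vibration) analysis leads to the generalized eigenproblem K x = λ M x, where K is the stiffness matrix, M is the mass matrix, and the eigenvalues λ are the squared natural frequencies. Here is a minimal sketch of that kind of problem using SciPy on toy 1-D bar matrices; this is purely illustrative and is not ANSYS data or either of the solvers mentioned above.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Toy stand-ins for the FEA stiffness (K) and mass (M) matrices of a
# 1-D bar with n interior nodes (illustrative only, not ANSYS data).
n = 50
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
M = diags([1.0 / 6, 4.0 / 6, 1.0 / 6], [-1, 0, 1], shape=(n, n)).tocsc()

# Solve K x = lambda M x for the 4 eigenvalues nearest zero
# (the lowest squared natural frequencies), as a modal analysis does.
vals, vecs = eigsh(K, k=4, M=M, sigma=0.0)
print(vals)  # four small, positive eigenvalues
```

In a real vibration analysis only the lowest few dozen modes are usually of interest, which is exactly the regime where subspace-type iterative eigensolvers can outperform a full Block Lanczos run.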
In the first part of this two-part post about tablet computing and running ANSYS Mechanical, I laid out the specs of each of the tablets I tested. Now I’ll take a deeper dive and compare their performance related to computation and battery life.
Tablet Computing Performance Study
When comparing the performance of these two tablets, I used a high-end workstation as a baseline against which to compare the results, since the majority of engineers still perform FEA simulations on workstations or servers. This workstation contains two Intel Xeon E5-2670 processors (2.93 GHz), 128 GB of RAM, and a 500 GB SSD; it runs the 64-bit version of Windows 7. I ran ANSYS simulations with two different equation solvers: sparse direct and PCG iterative. The sparse direct solver is computationally demanding and requires high compute rates for good performance. The PCG iterative solver works differently and requires high memory bandwidth to achieve strong performance. Some interesting data came from these runs. Continue reading
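The distinction between the two solver classes can be sketched in a few lines of SciPy. A direct solver pays for one expensive, flop-heavy factorization; an iterative solver such as conjugate gradients (the family PCG belongs to) instead performs many sparse matrix-vector products, whose speed is limited mainly by memory bandwidth. The toy SPD system below stands in for an FEA stiffness matrix; it is an assumption-free illustration of the general technique, not the ANSYS solvers themselves.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu, cg

# Toy SPD system standing in for an FEA stiffness matrix.
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Sparse direct: one expensive LU factorization (compute-rate bound),
# after which solving is a cheap pair of triangular substitutions.
x_direct = splu(A).solve(b)

# Conjugate gradients: no factorization, just repeated sparse
# matrix-vector products (memory-bandwidth bound).
x_cg, info = cg(A, b)

print(info)                              # 0 means CG converged
print(np.linalg.norm(A @ x_direct - b))  # direct residual: near machine precision
print(np.linalg.norm(A @ x_cg - b))      # CG residual: small, set by its tolerance
```

This is why the two solvers favor different hardware: the factorization rewards fast cores, while the matrix-vector products reward fast memory, which is exactly the trade-off the tablet comparison probes.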
You may recall my blog post titled “From Supercomputers to Handhelds,” which discussed the idea of running engineering simulations capably on tablet computers. As I mentioned there, the tablet space is evolving quickly, and my explorations on this subject continue today.
Looking back over the years, advances in technology have steadily increased the performance of computers relative to their size. When ANSYS was founded in 1970, finite element analysis (FEA) simulations were typically performed on large mainframes that filled entire rooms; these were the supercomputers of that era. Such large systems were necessary to run compute-intensive programs such as ANSYS software.
By the early 1990s, ANSYS simulations could be performed on personal computers (PCs). In those years, simulations on PCs were not nearly as large and complex as those being solved on larger servers, but PCs continued to evolve over time.
More recently, the distributed solver in the ANSYS Mechanical product family was developed to allow engineers to run FEA simulations on large clusters, which are the hardware of choice for today’s supercomputers. In fact, in 2008 several mechanical simulations were performed on one of the top 100 supercomputers in the world, using the Distributed ANSYS capability, with calculations reaching over 1 Teraflop (over 1 trillion floating-point operations per second).
Enough history. The purpose of this blog post is to demonstrate that while ANSYS Mechanical software delivers the speed and complexity required for the most numerically challenging and hardware-resource-intensive simulations, the power of a supercomputer is also available in a device that fits in the palm of your hand. Continue reading
A couple of weeks ago, I attended the Society for Industrial and Applied Mathematics conference on Computational Science and Engineering (CSE13). There I listened to a number of presentations given by mathematicians and engineers, who talked about running software programs on some of the biggest supercomputers in the world. When ANSYS was founded in 1970, finite element analysis (FEA) simulations were typically performed on large “mainframes” that filled entire rooms; these were the supercomputers of that era.
More recently, the distributed solver in the ANSYS Mechanical product family was developed to allow engineers to run FEA simulations on large clusters, which are the hardware of choice for today’s supercomputers. In fact, in 2008 several mechanical simulations were performed on one of the top 100 supercomputers in the world using the Distributed ANSYS capability, with calculations reaching over 1 Teraflop (over 1 trillion floating-point operations per second). However, the point I want to make today is that while ANSYS Mechanical software supports the speed and complexity required for the most numerically challenging and hardware-resource-intensive simulations, the power of a supercomputer is now available in the palm of your hand. Continue reading