HPC going mainstream in the Dutch private sector
On 7 October 2014, VORtech organized the HPC seminar ‘Taking the lead with Computational Speed’. The event provided various perspectives on High Performance Computing (HPC), both from the business-oriented side and from the technical side. Around 70 representatives from large and small businesses and institutes gathered in the KIVI Meeting Center in The Hague: a merry mix of ICT managers, software engineers and experts in engineering, finance and research.
Looking back at the seminar, a number of highlights come to mind. The headlines:
- The adoption of HPC in Dutch enterprises is growing. Acceptance is much more an organizational issue than a technical issue.
- Convenience is the key enabler for the commercial sector to start reaping the benefits of HPC.
- HPC technology is moving towards massive parallelism. To benefit from this, the software has to be made to fit the massively parallel paradigm.
- The various flavors of HPC, such as cluster, cloud or GPU, do not serve all software applications equally well. Whether an application benefits from a particular form of HPC depends on the characteristics of that application.
Read on for full coverage of the seminar and the messages the speakers brought to the audience!
Convenience is key
Mark Roest, director of VORtech, kicked off the HPC seminar with his view on the Dutch HPC market from the perspective of an HPC expertise provider. Over the last two decades, Mark has observed the move from early adopters towards mainstream adoption. The prime factor enabling this change is convenience. Nowadays, HPC is widely available; it is in your desktop and on demand for everyone in the cloud. The large software vendors are supporting it without the end user having to worry about it.
How to get your organization on board
The application of HPC in both larger and smaller companies was discussed by three speakers:
- The organizational challenge of HPC adoption. When it comes to consolidating and organizing the use of HPC in a larger company, Jeroen Willems, infrastructure architect at ASML, has learned a few lessons over the past years. He observed that the challenges of such a transformation are only partially technical. The organizational challenges are not to be overlooked: how do you convince and align the management, the R&D engineers and the IT staff of the company behind an integrated HPC infrastructure and organization?
- The perception of security risks of HPC-on-demand. Marco van Goethem, modeling and testing expert at Technip Benelux, has been dealing with the use of HPC-on-demand in a private company. He noted that the key blocker for using cloud resources is higher management’s perception of the security risk to sensitive company data. In spite of technical measures, even with dedicated point-to-point connections, business executives remain wary of the idea that their data sits somewhere outside the company’s physical walls. Data being “on the web” or “in the cloud” is easily interpreted as “out on the street”.
- HPC in large and small enterprises. When it comes to implementing HPC in a company, there is no difference in behavior between a small enterprise and a large business. This is the experience of Maurice Bouwhuis, relations and innovation manager at SURFsara. In the end, you are always talking to a small group of engineers and computer guys. This is equally true for a department of a large industrial complex as it is for a small or medium-sized enterprise (SME).
New technology requires new programming
The second important theme in the presentations was the technology-driven push in the HPC market. This was covered by four contributions on the impact of technological developments:
- Workstations and clusters. Marcin Zielinski, HPC specialist at ClusterVision, discussed the HPC developments in mainstream technologies. Marcin observed that performance improvements come from gradual gains in throughput and from an ever-growing number of parallel compute units (CPUs, accelerators), moving towards massively parallel systems. So if you want your software application to benefit from the speed improvements of the platform, it has to be (massively) parallel as well; the code sketch after this list illustrates the idea.
- GPUs. Rob van Nieuwpoort of the Netherlands eScience Center gave an overview of the state of and trends in GPU platforms. Today’s GPUs are the precursors of tomorrow’s mainstream platforms. Their massively parallel architecture requires software that is fit for such massive parallelism. Even though GPU platforms and their programming paradigm are constantly evolving, the investment in porting applications to the GPU is not wasted, because mainstream technology (the CPU architecture) is moving in the same direction. Yet the performance gains from GPUs depend very much on the application and the implementation, which makes it hard to predict the potential gain as well as the required cost.
- HPC in the Cloud. Koos Huijssen, scientific software engineer at VORtech, gave a brief overview of the state of and trends in “HPC in the cloud”. He discussed the two flavors: “HPC on demand” and “cloud computing”. Which of the two fits your needs depends on your application.
- Numerical algorithms. Finally, Kees Vuik, head of the Numerical Analysis research group at TU Delft, emphasized that smart algorithms that are robust, adaptive and fit for their purpose are key to constructing high-performance computational software. Over the past decades, algorithmic developments have resulted in dramatic improvements in the order of complexity; a classic illustration is given below. Each combination of simulation problem and computational platform requires a well-fitting algorithm.
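As a rough feel for what a dramatic improvement in the order of complexity can mean, consider the textbook example (not a result presented at the seminar) of solving the 2D Poisson model problem with N unknowns. The serial work of successive generations of solvers scales roughly as:

    dense LU factorization      O(N^3)
    banded LU factorization     O(N^2)
    conjugate gradients         O(N^(3/2))
    multigrid                   O(N)

For a grid with a million unknowns, the gap between O(N^3) and O(N) dwarfs anything that faster hardware alone can deliver, which is exactly why the algorithm has to fit both the problem and the platform.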
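To make the point about massive parallelism a little more concrete, the sketch below contrasts a serial CPU loop with a CUDA kernel performing the same vector addition, one thread per element. This is our own minimal illustration, not code shown at the seminar; the names (add_cpu, add_gpu) and sizes are made up for the example.

    #include <cstdio>
    #include <vector>
    #include <cuda_runtime.h>

    // Serial CPU version: a single core walks through all elements.
    void add_cpu(const float* a, const float* b, float* c, int n) {
        for (int i = 0; i < n; ++i) c[i] = a[i] + b[i];
    }

    // Massively parallel GPU version: one thread per element.
    __global__ void add_gpu(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                                // one million elements
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        add_cpu(a.data(), b.data(), c.data(), n);             // baseline on the CPU

        // The same work on the GPU: copy data over, launch thousands of threads.
        float *da, *db, *dc;
        cudaMalloc((void**)&da, n * sizeof(float));
        cudaMalloc((void**)&db, n * sizeof(float));
        cudaMalloc((void**)&dc, n * sizeof(float));
        cudaMemcpy(da, a.data(), n * sizeof(float), cudaMemcpyHostToDevice);
        cudaMemcpy(db, b.data(), n * sizeof(float), cudaMemcpyHostToDevice);

        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        add_gpu<<<blocks, threads>>>(da, db, dc, n);
        cudaMemcpy(c.data(), dc, n * sizeof(float), cudaMemcpyDeviceToHost);

        printf("c[0] = %f\n", c[0]);                          // expect 3.0
        cudaFree(da); cudaFree(db); cudaFree(dc);
        return 0;
    }

The arithmetic here is trivial; the structure is the point. Only when an application can be decomposed into many independent pieces of work like this will a cluster, accelerator or GPU deliver the speed-up the hardware promises, and that restructuring is where most of the porting effort goes.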
HPC is on the move!
All in all, the seminar gave the participants an overview of what’s happening in HPC. HPC in the Dutch private sector is very much alive and kicking its way into mainstream business. For VORtech these are truly exciting times!