A new era for HPC-driven innovation

Michaël Renotte | 11:06 am, 25th August

More and more data is created and collected every day, and post-digital businesses want to leverage the insights that come from it, driving demand for greater computing capabilities.


IDC found that in 2020, 64.2 ZB of data was created, captured, or replicated, and that number is expected to grow to 180 ZB by 2025. But of all the data created in 2020, only 10.6% was useful for analysis or for AI/ML models, and only about 44% of that was actually used, which means that businesses are currently underutilizing their data and losing value.

Increasingly, the answer to this massive data concern is found in high performance computing (HPC), also known as supercomputing. HPC isn't fundamentally new: the phones we carry around in our pockets would have been considered supercomputers 30 years ago. But a combination of GPUs and other purpose-built chips is starting to push HPC capabilities to new thresholds and benchmarks previously thought to be decades away – an acceleration that is rapidly making these capabilities mission-critical for businesses everywhere.


Shaping the future of High Performance Computing 


High-performance computing has evolved rapidly since its genesis in 1964 with the introduction of the CDC 6600, the world’s first supercomputer. Since then, the amount of data the world generates has exploded, and accordingly, the need for HPC to be able to process data more rapidly and efficiently has become pivotal. 


This requirement to process data more efficiently has forced HPC designers to think outside the box in terms of not just how the data is processed but where it’s processed and what ends up getting processed. 


With cloud computing now firmly established, the floodgates have opened to a whole new world of supercomputing innovation and experimentation. Here are the top five drivers likely to shape the effectiveness of HPC systems, and what each means for the modern enterprise's ability to fully capitalize on its new wealth of data:


Artificial Intelligence 


It would be very hard to talk about HPC without mentioning Artificial Intelligence. In recent years, with the advent of the Internet of Things, 5G, and other data-driven technologies, the amount of data available for meaningful AI has grown to the point where Artificial Intelligence now shapes high-performance computing, and vice versa.


High-performance computers are needed to power AI workloads, but it turns out that AI itself can now be used to improve HPC data centers. For example, AI can monitor overall system health, including the state of storage, servers, and networking gear, ensuring correct configuration and predicting equipment failure. Companies can also use AI to reduce electricity consumption and improve efficiency by optimizing heating and cooling systems. 
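To make the idea concrete, here is a minimal sketch of this kind of health monitoring, using scikit-learn's IsolationForest on synthetic node telemetry. The metric names and figures are illustrative assumptions rather than a real monitoring stack, and the same unsupervised anomaly-detection approach underlies the security use case described below.

```python
# Minimal sketch: flagging unhealthy HPC nodes from telemetry with an
# unsupervised anomaly detector (scikit-learn's IsolationForest).
# The metrics and values below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic telemetry: one row per node, columns = [CPU temp (°C),
# fan speed (RPM), power draw (W)]. A real deployment would pull these
# from the cluster's monitoring system instead.
healthy = rng.normal(loc=[65.0, 4000.0, 350.0],
                     scale=[3.0, 200.0, 20.0], size=(500, 3))
failing = rng.normal(loc=[88.0, 6500.0, 420.0],
                     scale=[3.0, 200.0, 20.0], size=(5, 3))
telemetry = np.vstack([healthy, failing])

# IsolationForest isolates outliers without needing labeled failures.
detector = IsolationForest(contamination=0.02, random_state=0)
labels = detector.fit_predict(telemetry)   # -1 = anomaly, 1 = normal

flagged = np.where(labels == -1)[0]
print(f"Nodes flagged for inspection: {flagged}")
```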


AI is also important for security in HPC systems, as it can be used to screen incoming and outgoing data for malware. It can also protect data through behavioral analytics and anomaly detection. 


Edge computing 


Companies can deploy their high-performance computing data center on premises, in the cloud, at the "edge", or with some combination of these. However, more and more organizations are choosing distributed, edge-based deployments for the faster response times and bandwidth savings they bring.


Centralized data centers are simply too slow for many modern applications, which require computation and storage to take place as close to the application or device as possible in order to meet increasingly stringent, 5G-enabled latency SLAs.


Speed is of course a key component of high-performance computing: the faster an HPC system can process data, the more data it can handle, and the more complex the problems it can solve. As edge computing becomes increasingly popular, high-performance computers will become even more powerful and valuable.


HPC as a Service 


The emergence of the cloud led to an as-a-service revolution, and high performance computing is now joining the movement. Many vendors have switched from selling HPC equipment to providing HPC as a service (HPCaaS). This allows companies that lack the in-house knowledge, resources, or infrastructure to build their own HPC platform to take advantage of HPC via the cloud.


Now, many major cloud providers, such as Amazon Web Services, Google, and Alibaba, offer HPCaaS. The benefits of HPCaaS include ease of deployment, scalability, and predictability of costs. 
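As a hedged illustration of what "HPC via the cloud" can look like in practice, the sketch below submits a compute job to AWS Batch using boto3. The queue and job definition names are hypothetical placeholders that would be created when the HPCaaS environment is provisioned; other providers expose similar job-submission APIs.

```python
# Illustrative sketch: consuming HPC capacity as a service by
# submitting a job to AWS Batch via boto3. The queue and job
# definition names are hypothetical placeholders.
import boto3

batch = boto3.client("batch", region_name="eu-west-1")

response = batch.submit_job(
    jobName="weather-model-run-001",
    jobQueue="hpc-on-demand-queue",    # assumed pre-configured queue
    jobDefinition="cfd-solver:1",      # assumed registered job definition
    containerOverrides={
        # Request more compute per job without buying hardware.
        "resourceRequirements": [
            {"type": "VCPU", "value": "64"},
            {"type": "MEMORY", "value": "262144"},   # MiB
        ]
    },
)
print("Submitted job:", response["jobId"])
```

The appeal here is exactly what the article names: deployment is an API call, capacity scales with the request, and costs track submitted work rather than owned infrastructure.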


GPU computing 


Originally designed for gaming, graphics processing units (GPUs) have evolved into one of the most important types of computing technology. A GPU is a specialized processing unit capable of processing many pieces of data simultaneously, making GPUs useful for machine learning, video editing, and gaming applications. 


Applications that use GPUs for HPC include weather forecasting, data mining, and other workloads that demand this combination of speed and massive data throughput. NVIDIA is the largest maker of GPUs.
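To illustrate that data-parallel model, here is a minimal sketch using CuPy, a NumPy-compatible GPU array library; it assumes an NVIDIA GPU with CuPy installed, and stands in for the many ways applications offload work to GPUs.

```python
# Minimal sketch of GPU data parallelism with CuPy (a NumPy-compatible
# GPU array library). Each elementwise operation below runs across
# thousands of GPU threads at once - the "many pieces of data
# simultaneously" property that makes GPUs attractive for HPC.
# Requires an NVIDIA GPU and an installed CuPy.
import numpy as np
import cupy as cp

n = 10_000_000

# CPU (NumPy): the same expression evaluated on the host.
x_cpu = np.random.rand(n).astype(np.float32)
y_cpu = np.sqrt(x_cpu) * 2.0 + 1.0

# GPU (CuPy): every element is processed in parallel by GPU kernels;
# the data stays in device memory between operations.
x_gpu = cp.asarray(x_cpu)          # host -> device transfer
y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0

# Bring the result back to the host and check both paths agree.
assert np.allclose(y_cpu, cp.asnumpy(y_gpu), atol=1e-5)
print("CPU and GPU results match for", n, "elements")
```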


Modern data storage 


The three key components of a high-performance computing system are computing, networking, and storage. Storage is one of the most important of these, so a powerful, modern data storage solution is essential if you're using or plan to use HPC.


To accommodate the vast amount of data involved in high-performance computing, the HPC system's data storage should make data from any node available at any time, handle requests of any size, and support performance-oriented protocols. It should also scale rapidly to keep up with increasingly demanding latency SLAs, keeping your HPC system genuinely future-proof.


Fueling Europe’s prosperity 


HPC is key to Europe's future prosperity, digital transformation and resilience. With €7 billion in funding from Horizon Europe, the Digital Europe Programme and the Connecting Europe Facility, the European Commission is determined to strengthen investment in supercomputing. It aims to build up supercomputing and data processing capacities by acquiring world-class exascale supercomputers and post-exascale facilities, and by supporting an ambitious HPC research and innovation agenda.


In Luxembourg, with LuxProvide's new high-performance computer MeluXina, HPC is becoming more accessible than ever before. Start-ups, SMEs and larger companies, as well as research organizations, can already run HPC workloads and take advantage of MeluXina. The Luxembourg national competence centre in HPC has been set up in the context of the EuroCC project, which is co-funded by the EU via the EuroHPC Joint Undertaking and by the Ministry of the Economy. This European collaborative project aims to establish national HPC competence centres in 33 countries across Europe.

