What is supercomputing?

Supercomputing is the process of using massive compute resources and high-speed networking for complex data processing at scale. Certain use cases, like geoscientific simulations or DNA analysis, require the simultaneous processing of billions or even trillions of data points within a short timeframe. Supercomputing is a form of high-performance computing that uses hundreds or even thousands of nodes working in parallel to solve complex problems together. The nodes are highly optimized, often with hardware-based accelerators, and they perform calculations and exchange and integrate data at speeds ordinary machines cannot reach.

What are the use cases of supercomputing?

Supercomputing has a wide range of applications. While not an exhaustive list, here is a selection of examples demonstrating how businesses use supercomputing.

Computational fluid dynamics

Computational fluid dynamics (CFD) is the process of using complex mathematical modeling to trace heat transfer, fluid movement, momentum, and other related processes. CFD relies on comprehensive simulations that supercomputing excels at producing rapidly. For example, Formula 1 teams use CFD to test the aerodynamic properties of their cars. Using supercomputing, they can simulate different design details for their vehicles, reducing time to market and increasing efficiency.
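
To make this concrete, here is a minimal sketch, not production CFD code, of the kind of numerical update a heat-transfer or fluid solver repeats billions of times. A supercomputer spreads updates like this across millions of grid cells and thousands of nodes; the grid size, time step, and diffusivity below are illustrative assumptions.

```python
# Minimal sketch: explicit finite-difference update of the 1D heat equation,
# the kind of calculation a CFD solver repeats billions of times per run.
# All parameter values are illustrative, not taken from any real workload.
import numpy as np

alpha = 0.01            # thermal diffusivity (illustrative)
dx, dt = 0.01, 0.001    # grid spacing and time step (stable: alpha*dt/dx**2 <= 0.5)
n_points, n_steps = 101, 5000

temp = np.zeros(n_points)
temp[0] = 100.0         # hot boundary on the left, cold everywhere else

for _ in range(n_steps):
    # Update interior points: dT/dt = alpha * d2T/dx2
    temp[1:-1] += alpha * dt / dx**2 * (temp[2:] - 2 * temp[1:-1] + temp[:-2])

print(np.round(temp[:10], 2))   # temperature profile near the hot boundary
```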

Genomic research

The advanced processing power of supercomputers is useful when studying the highly complex structure, function, and mapping of genomes. The quadrillions of floating point operations per second that supercomputers offer allow researchers to conduct genomic research at scale. As a real-world example, the National Library of Medicine uses powerful supercomputers to produce the Sequence Read Archive (SRA). The SRA holds sequencing results from over nine million experiments and allows bioinformaticians to analyze its contents comprehensively.
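
As a simplified illustration, the sketch below splits a batch of reads across worker processes to compute GC content in parallel, the same divide-and-conquer pattern a supercomputer applies to millions of sequences at once. The reads and the helper function are made up for the example; a real pipeline would pull records from an archive such as the SRA.

```python
# Minimal sketch of data-parallel sequence analysis: GC content for a batch
# of reads computed across multiple worker processes. The reads are made up.
from multiprocessing import Pool

def gc_content(read: str) -> float:
    """Fraction of G and C bases in a single read."""
    return (read.count("G") + read.count("C")) / len(read)

if __name__ == "__main__":
    reads = ["ATGCGC", "GGGCCC", "ATATAT", "CGTACG"]   # illustrative reads
    with Pool(processes=4) as pool:                    # workers stand in for nodes
        results = pool.map(gc_content, reads)
    for read, gc in zip(reads, results):
        print(f"{read}: GC fraction {gc:.2f}")
```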

Gaming

Game engineers must ensure that gamers can access their games without dropped packets, congestion, or loss of frames. Supercomputing allows developers to enhance graphics, simulate physics, and render realistic settings. The many processors available in the cloud help process this data and deliver high performance to gamers and developers. For example, NICE DCV delivers remote desktops and application streaming from the cloud to any device, eliminating the need for expensive dedicated workstations. By using cloud-based supercomputing resources, developers can reach high performance while optimizing costs.

Medical research

Medical research covers the research, development, and production of new pharmaceuticals and chemicals. Supercomputing resources give researchers the processing power needed to investigate trillions of data points at once. Modern supercomputers help with everything from molecular modeling to the development of new materials for human health.

Good Chemistry is an example of supercomputing in action. This innovative company aims to create a more sustainable world by solving complex problems related to materials science. It uses supercomputing to simulate new ways of breaking the chemical bonds of per- and polyfluoroalkyl substances (PFAS), which are detrimental to human health.

What are the benefits of supercomputing?

Supercomputers can pool resources to provide quadrillions of floating point operations per second. Organizations use this high-performance supercomputing technology to access the following benefits.

Accelerate time to market

Supercomputing supports the digital prototyping of complex new products, improving efficiency and accelerating time to market in fields such as pharmaceuticals, geothermal research, and other simulation-heavy domains. It uses parallel processing to significantly reduce the time taken to complete complex calculations and physical simulations. Calculations that would normally take weeks are completed in a fraction of that time. You can increase the speed of research and development by accelerating simulations.
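
Amdahl's law gives a rough sense of how much parallel processing can shorten a job. The sketch below applies it to a hypothetical simulation that is 95 percent parallelizable and takes four weeks on a single node; both figures are illustrative assumptions, not benchmarks.

```python
# Back-of-the-envelope sketch using Amdahl's law:
# speedup = 1 / ((1 - p) + p / n), where p is the parallelizable fraction
# of a job and n is the number of processors. The workload numbers are hypothetical.
def speedup(parallel_fraction: float, processors: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / processors)

weeks_serial = 4.0   # hypothetical single-node runtime
for n in (1, 100, 1000):
    weeks = weeks_serial / speedup(0.95, n)
    print(f"{n:>5} processors -> about {weeks:.2f} weeks")
```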

Adopt AI/ML

Artificial intelligence (AI) and machine learning (ML) technologies require massive computational power to process large quantities of data. You can use supercomputing to handle high-volume data processing, such as trillions of data points per second, and gain deep insight into vast datasets. Supercomputing powers AI and ML, allowing those technologies to perform at scale.
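
One common pattern is data-parallel training: each worker computes a gradient on its own shard of the dataset, and the gradients are averaged before the model is updated. The sketch below imitates that pattern with plain NumPy on a toy linear model; the data, model, and learning rate are illustrative stand-ins for what a real framework would coordinate across many accelerators.

```python
# Minimal sketch of data-parallel training on a toy linear model.
# Each "worker" computes a gradient on its shard; the gradients are averaged.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8000, 10))            # illustrative features
y = X @ np.arange(10, dtype=float)         # illustrative targets (true weights 0..9)
w = np.zeros(10)                           # model parameters to learn

def shard_gradient(X_shard, y_shard, w):
    """Mean-squared-error gradient computed on one worker's shard."""
    residual = X_shard @ w - y_shard
    return 2.0 * X_shard.T @ residual / len(y_shard)

n_workers = 4
X_shards = np.array_split(X, n_workers)
y_shards = np.array_split(y, n_workers)

for step in range(200):
    # On a cluster these gradients would be computed in parallel, one per worker.
    grads = [shard_gradient(Xs, ys, w) for Xs, ys in zip(X_shards, y_shards)]
    w -= 0.05 * np.mean(grads, axis=0)     # average the gradients, take a step

print(np.round(w, 2))                      # approaches the true weights [0, 1, ..., 9]
```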

Big data analysis

Supercomputing can analyze trillions of data points in a fraction of the time it would typically take. You can analyze massive datasets, as well as perform pattern recognition, data extraction, analysis, and other data analytics operations. Supercomputing helps to streamline big data analysis in industries where the parallel processing of millions of data points at once is vital, such as finance, scientific research, and medicine.

Faster performance

The typical supercomputer is much faster than a conventional computer because it pools resources from potentially thousands of parallel processors. Supercomputers are an indispensable tool, and their higher performance increases the speed of almost any compute-intensive process.

How does supercomputing work?

Supercomputing uses clusters of compute nodes spread across a connected network. Each node performs a subset of the same task, so that together they compute a final complex result. A high-performance computing cluster is made up of hundreds or even thousands of computing nodes, each containing anywhere from 8 to 128 processor cores. Grid middleware then connects these computing resources with high-level applications that request processing power as needed.
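
The sketch below shows this pattern with mpi4py, one possible MPI binding, used here as an assumption: the root node splits a dataset, every node sums its own slice, and a reduction combines the partial results into the final answer. The dataset and the script name in the run command are illustrative.

```python
# Minimal sketch of cluster-style work splitting with MPI (via mpi4py).
# Run with, for example:  mpiexec -n 4 python parallel_sum.py   (illustrative name)
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    data = np.arange(1_000_000, dtype="d")        # full dataset on the root node
    chunks = np.array_split(data, size)           # one chunk per node
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)              # each node receives its subset
partial = chunk.sum()                             # local work on each node
total = comm.reduce(partial, op=MPI.SUM, root=0)  # combine partial results

if rank == 0:
    print(f"Sum computed across {size} nodes: {total:.0f}")
```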

Nodes

Three types of nodes facilitate most supercomputer functioning:

User node

The user node requests resources from the computing grid. Once an end user makes a request, it passes through the middleware, which notifies the nodes on the grid computing system.

Provider node

A provider node gives resources to the computational grid. When it receives a new request, it begins to perform the task. Many provider nodes support symmetric multiprocessing and deliver high floating point operations per second. The middleware collects and returns the results.

Control node

The control node acts as an administrator, managing the allocation of all provider node resources. The middleware communicator runs on the control node, distributing tasks to specific providers. 
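
As a simplified, single-machine analogy of these roles, the sketch below uses Python's multiprocessing: the main process plays the control node, the pool workers play provider nodes, and the hypothetical submit_request function stands in for the user node's request. It is a sketch of the request, dispatch, and collect pattern, not a real grid middleware API.

```python
# Single-machine analogy of the user / provider / control node roles.
# submit_request and provider_task are hypothetical names for this sketch.
from multiprocessing import Pool

def provider_task(work_item: int) -> int:
    """Provider-node role: perform one unit of work (squaring stands in for real computation)."""
    return work_item * work_item

def submit_request(work_items):
    """Control-node role: dispatch work to providers and collect the results."""
    with Pool(processes=4) as providers:
        return providers.map(provider_task, work_items)

if __name__ == "__main__":
    # User-node role: submit a request and wait for the combined result.
    print(submit_request(range(10)))
```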

What are the components of a supercomputing system?

A supercomputing system consists of several core components that work together to deliver high performance. Below is an explanation of the main components.

Network interface

Supercomputing uses custom-made network interfaces that allow applications to communicate between nodes at high speed. These interfaces enhance the performance of inter-instance communication and help to scale workloads. They support Message Passing Interface (MPI) and ML applications while delivering on-demand elasticity.
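
The sketch below, again using mpi4py as an assumed MPI binding, times a simple ping-pong exchange between two ranks. This is the kind of small, latency-sensitive message that a purpose-built network interface is designed to accelerate; the payload size, round count, and script name are arbitrary.

```python
# Minimal sketch of inter-node communication: an MPI ping-pong between two ranks.
# Run with, for example:  mpiexec -n 2 python pingpong.py   (illustrative name)
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
message = bytearray(1024)        # 1 KiB payload (illustrative size)
rounds = 1000

comm.Barrier()
start = time.perf_counter()
for _ in range(rounds):
    if rank == 0:
        comm.send(message, dest=1, tag=0)
        comm.recv(source=1, tag=0)
    elif rank == 1:
        comm.recv(source=0, tag=0)
        comm.send(message, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"Average round trip: {elapsed / rounds * 1e6:.1f} microseconds")
```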

Remote display protocol

Supercomputers use a remote display protocol, so customers can access applications from data centers or the cloud on their devices. This protocol allows you to run intensive applications remotely, streaming the user interface to simpler devices. This component eliminates the need for expensive dedicated workstations and offers flexible deployments.

Cluster management tool

Cluster management tools enable you to manage and deploy high-performance computing clusters. These tools provide a simple GUI that gives you access to the resources that power grid networks. You can use a cluster management tool to submit jobs, perform parallel processing, and effectively manage your resource clusters.

Unified interface

A unified interface allows you to use cloud-native services alongside parallel clusters. You can use a single interface to submit jobs to your network of supercomputers or to your on-premises infrastructure. A unified interface allows you to track all of your computing resources and maximize application performance.

Cooling and power management

Supercomputing systems rely on powerful cooling and power management to stay energy efficient. Because these systems draw enormous amounts of power, they need substantial cooling to avoid overheating. Energy-efficient designs can deliver high performance while consuming fewer resources.

How can AWS help support your supercomputing requirements?

AWS High-Performance Computing offers fast networking and virtually unlimited infrastructure, allowing you to run complex simulations and deep learning workloads in the cloud. AWS offers an entire suite of HPC products and services to provide you with faster insights, more computational power, and virtually unlimited scalability. For example:

  • Amazon EC2 UltraClusters helps you scale to thousands of GPUs or ML accelerators, yielding on-demand usage of supercomputers.
  • NICE DCV offers a high-performance remote display protocol with comprehensive security, optimized costs, and flexible deployments to remote desktops.
  • AWS ParallelCluster acts as an open-source cluster management tool, empowering you with automatic resource scaling, seamless migration to the cloud, and easy cluster infrastructure management.

 Get started with supercomputing on AWS by creating a free account today.
