Amazon EC2 P4 Instances

High performance for ML training and HPC applications in the cloud

Why Amazon EC2 P4 Instances?

Amazon Elastic Compute Cloud (Amazon EC2) P4d instances deliver high performance for machine learning (ML) training and high performance computing (HPC) applications in the cloud. P4d instances are powered by NVIDIA A100 Tensor Core GPUs and deliver industry-leading high-throughput, low-latency networking, with support for 400 Gbps instance networking. P4d instances provide up to 60% lower cost to train ML models and, on average, 2.5x better deep learning performance compared to previous-generation P3 and P3dn instances.

P4d instances are deployed in hyperscale clusters called Amazon EC2 UltraClusters that comprise high performance compute, networking, and storage in the cloud. Each EC2 UltraCluster is one of the most powerful supercomputers in the world, helping you run your most complex multinode ML training and distributed HPC workloads. You can easily scale from a few to thousands of NVIDIA A100 GPUs in the EC2 UltraClusters based on your ML or HPC project needs.

Researchers, data scientists, and developers can use P4d instances to train ML models for use cases such as natural language processing, object detection and classification, and recommendation engines. They can also use them to run HPC applications such as pharmaceutical discovery, seismic analysis, and financial modeling. Unlike on-premises systems, you can access virtually unlimited compute and storage capacity, scale your infrastructure based on business needs, and spin up a multinode ML training job or a tightly coupled distributed HPC application in minutes, without any setup or maintenance costs.

Benefits

With the latest-generation NVIDIA A100 Tensor Core GPUs, each P4d instance delivers on average 2.5x better DL performance compared to previous-generation P3 instances. EC2 UltraClusters of P4d instances help everyday developers, data scientists, and researchers run their most complex ML and HPC workloads by giving access to supercomputing-class performance without any upfront costs or long-term commitments. The reduced training time with P4d instances boosts productivity, helping developers focus on their core mission of building ML intelligence into business applications.

Developers can seamlessly scale to up to thousands of GPUs with EC2 UltraClusters of P4d instances. High-throughput, low-latency networking with support for 400 Gbps instance networking, Elastic Fabric Adapter (EFA), and GPUDirect RDMA technology help rapidly train ML models using scale-out/distributed techniques. EFA uses the NVIDIA Collective Communications Library (NCCL) to scale to thousands of GPUs, and GPUDirect RDMA technology enables low-latency GPU-to-GPU communication between P4d instances.
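As a concrete sketch, a handful of transport settings are commonly exported before launching an NCCL-based training job over EFA. The variable names below come from the EFA and aws-ofi-nccl documentation; the values are typical defaults for P4d rather than prescriptive tuning:

```python
import os

# Common environment settings for NCCL-over-EFA runs on P4d
# (typical values, not prescriptive; consult the EFA and
# aws-ofi-nccl documentation for your software versions).
efa_env = {
    "FI_PROVIDER": "efa",            # use the EFA libfabric provider
    "FI_EFA_USE_DEVICE_RDMA": "1",   # enable GPUDirect RDMA on P4d
    "NCCL_DEBUG": "INFO",            # log which transport NCCL selects
}
os.environ.update(efa_env)
print(os.environ["FI_PROVIDER"])  # → efa
```

These variables are read by libfabric and NCCL at startup, so they must be set in the environment of every rank before the training processes launch.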

P4d instances deliver up to 60% lower cost to train ML models compared to P3 instances. Additionally, P4d instances are available for purchase as Spot Instances. Spot Instances take advantage of unused EC2 instance capacity and can lower your EC2 costs significantly with up to a 90% discount from On-Demand prices. With the lower cost of ML training with P4d instances, budgets can be reallocated to build more ML intelligence into business applications.
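To put those percentages in concrete terms, here is a back-of-the-envelope calculation using the p4d.24xlarge On-Demand and 3-yr Reserved rates from the Product details table. The Spot figure is a best-case floor, since the stated 90% is a maximum discount and actual Spot prices fluctuate:

```python
# Back-of-the-envelope P4d cost comparison (rates from the Product
# details table; the Spot discount is the stated maximum, and actual
# Spot prices vary with capacity).
on_demand_hr = 32.77      # p4d.24xlarge On-Demand $/hr, US East (N. Virginia)
reserved_3yr_hr = 11.57   # 3-yr Reserved Instance effective hourly rate
max_spot_discount = 0.90  # Spot can be up to 90% below On-Demand

spot_floor_hr = on_demand_hr * (1 - max_spot_discount)
ri_savings = 1 - reserved_3yr_hr / on_demand_hr

print(f"Best-case Spot rate: ${spot_floor_hr:.2f}/hr")    # → $3.28/hr
print(f"3-yr RI savings vs On-Demand: {ri_savings:.0%}")  # → 65%
```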

AWS Deep Learning AMIs (DLAMIs) and AWS Deep Learning Containers make it easier to deploy P4d DL environments in minutes because they contain the required DL framework libraries and tools. You can also more easily add your own libraries and tools to these images. P4d instances support popular ML frameworks such as TensorFlow, PyTorch, and MXNet. Additionally, P4d instances are supported by major AWS services for ML, management, and orchestration, such as Amazon SageMaker, Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Container Service (Amazon ECS), AWS Batch, and AWS ParallelCluster.
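As an illustration, launching a P4d instance from a DLAMI comes down to a standard EC2 RunInstances call with an EFA network interface requested. The sketch below only builds the parameters you might pass to boto3's `ec2.run_instances`; every resource ID in it is a placeholder, and you would substitute a DLAMI ID for your Region:

```python
# Parameters for EC2 RunInstances, e.g. boto3: ec2.run_instances(**launch_params).
# All resource IDs below are placeholders.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",       # placeholder DLAMI ID
    "InstanceType": "p4d.24xlarge",
    "MinCount": 1,
    "MaxCount": 1,
    "NetworkInterfaces": [{
        "DeviceIndex": 0,
        "InterfaceType": "efa",               # request an EFA interface
        "SubnetId": "subnet-0123456789abcdef0",   # placeholder
        "Groups": ["sg-0123456789abcdef0"],       # placeholder
    }],
}
print(launch_params["InstanceType"])  # → p4d.24xlarge
```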

Features

NVIDIA A100 Tensor Core GPUs deliver unprecedented acceleration at scale for ML and HPC. NVIDIA A100’s third-generation Tensor Cores accelerate every precision workload, speeding time to insight and time to market. Each A100 GPU offers over 2.5x the compute performance of the previous-generation V100 GPU and comes with 40 GB HBM2 (in P4d instances) or 80 GB HBM2e (in P4de instances) of high-performance GPU memory. Higher GPU memory particularly benefits workloads that train on large datasets of high-resolution data. NVIDIA A100 GPUs are connected through the NVSwitch GPU interconnect, so each GPU can communicate with every other GPU in the same instance at the same 600 GB/s bidirectional throughput and with single-hop latency.

P4d instances provide 400 Gbps networking to help customers scale out distributed workloads, such as multinode training, more efficiently, with high-throughput networking between P4d instances as well as between a P4d instance and storage services such as Amazon Simple Storage Service (Amazon S3) and Amazon FSx for Lustre. EFA is a custom network interface designed by AWS to help scale ML and HPC applications to thousands of GPUs. To further reduce latency, EFA is coupled with NVIDIA GPUDirect RDMA to enable low-latency GPU-to-GPU communication between servers with OS bypass.

Access petabyte-scale high-throughput, low-latency storage with FSx for Lustre or virtually unlimited cost-effective storage with Amazon S3 at 400 Gbps speeds. For workloads that need fast access to large datasets, each P4d instance also includes 8 TB NVMe-based SSD storage with 16 GB/sec read throughput.

The P4d instances are built on the AWS Nitro System, which is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software to deliver high performance, high availability, and high security while also reducing virtualization overhead.

Customer testimonials

Here are some examples of how customers and partners have achieved their business goals with Amazon EC2 P4 instances.

  • Toyota Research Institute (TRI)

    Toyota Research Institute (TRI), founded in 2015, is working to develop automated driving, robotics, and other human amplification technology for Toyota.

    At TRI, we're working to build a future where everyone has the freedom to move. The previous-generation P3 instances helped us reduce our time to train ML models from days to hours, and we are looking forward to utilizing P4d instances, as the additional GPU memory and more efficient float formats will allow our machine learning team to train with more complex models at an even faster speed.

    Mike Garrison, Technical Lead, Infrastructure Engineering, TRI
  • TRI-AD

At TRI-AD, we're working to build a future where everyone has the freedom to move and explore, with a focus on reducing vehicle injuries and fatalities through adaptive driving and smart city technology. Through the use of Amazon EC2 P4d instances, we were able to reduce our training time for object recognition by 40% compared to previous-generation GPU instances without any modification to our existing code.

    Junya Inada, Director of Automated Driving (Recognition), TRI-AD
  • TRI-AD

    Through the use of Amazon EC2 P4d instances, we were able to instantly reduce our cost to train compared to previous-generation GPU instances, enabling us to increase the number of teams working on model training. The networking improvements in P4d allowed us to efficiently scale to dozens of instances, which gave us significant agility to rapidly optimize, retrain, and deploy models in test cars or simulation environments for further testing.

    Jack Yan, Senior Director of Infrastructure Engineering, TRI-AD
  • GE Healthcare

    GE Healthcare is a leading global medical technology and digital solutions innovator. GE Healthcare enables clinicians to make faster, more informed decisions through intelligent devices, data analytics, applications and services, supported by its Edison intelligence platform.

    At GE Healthcare, we provide clinicians with tools that help them aggregate data, apply AI and analytics to that data and uncover insights that improve patient outcomes, drive efficiency, and eliminate errors. Our medical imaging devices generate massive amounts of data that need to be processed by our data scientists. With previous GPU clusters, it would take days to train complex AI models, such as Progressive GANs, for simulations and view the results. Using the new P4d instances reduced processing time from days to hours. We saw two to three times greater speed on training models with various image sizes, while achieving better performance with increased batch size and higher productivity with a faster model development cycle.

Karley Yoder, VP & GM, Artificial Intelligence, GE Healthcare
  • HEAVY.AI

    HEAVY.AI is a pioneer in accelerated analytics. The HEAVY.AI platform is used in business and government to find insights in data beyond the limits of mainstream analytics tools.

At HEAVY.AI, we’re working to build a future where data science and analytics converge to break down and fuse data silos. Customers are leveraging their massive amounts of data, which may include location and time, to build a full picture of not only what is happening, but when and where, through granular visualization of spatiotemporal data. Our technology enables seeing both the forest and the trees. Through the use of Amazon EC2 P4d instances, we were able to reduce the cost to deploy our platform significantly compared to previous-generation GPU instances, thus enabling us to cost-effectively scale to massive datasets. The networking improvements on A100 have increased our efficiency in scaling to billions of rows of data and enabled our customers to glean insights even faster.

    Ray Falcione, VP of US Public Sector, HEAVY.AI
  • Zenotech Ltd.

    Zenotech Ltd. is redefining engineering online through the use of HPC Clouds delivering on demand licensing models together with extreme performance benefits by leveraging GPUs.

At Zenotech, we are developing the tools to enable designers to create more efficient and environmentally friendly products. We work across industries, and our tools provide greater product performance insight through the use of large-scale simulation. The use of Amazon EC2 P4d instances enables us to run our simulations 3.5x faster than the previous generation of GPUs. This speed-up cuts our time to solution significantly, allowing our customers to get designs to market faster or to run higher-fidelity simulations than were previously possible.

    Jamil Appa, Director and Cofounder, Zenotech
  • Aon

    Aon is a leading global professional services firm providing a broad range of risk, retirement and health solutions. Aon PathWise is a GPU-based and scalable HPC risk management solution that insurers and reinsurers, banks, and pension funds can use to address today’s key challenges such as hedge strategy testing, regulatory and economic forecasting, and budgeting. 

At PathWise Solutions Group LLC, our product allows insurance companies, reinsurers, and pension funds to access next-generation technology to rapidly solve today’s key insurance challenges, such as machine learning, hedge strategy testing, regulatory and financial reporting, business planning and economic forecasting, and new product development and pricing. Through the use of Amazon EC2 P4d instances, we are able to deliver amazing improvements in speed for single- and double-precision calculations over previous-generation GPU instances for the most demanding calculations, allowing a new range of calculations and forecasting to be done by clients for the very first time. Speed matters, and we continue to deliver meaningful value and the latest technology to our customers thanks to the new instances from AWS.

    Van Beach, Global Head of Life Solutions, Aon Pathwise Strategy and Technology Group
  • Rad AI

Comprising radiology and AI experts, Rad AI builds products that maximize radiologist productivity, ultimately making healthcare more widely accessible and improving patient outcomes.

    At Rad AI, our mission is to increase access to and quality of healthcare, for everyone. With a focus on medical imaging workflow, Rad AI saves radiologists time, reduces burnout, and enhances accuracy. We use AI to automate radiology workflows and help streamline radiology reporting. With the new EC2 P4d instances, we’ve seen faster inference and the ability to train models 2.4x faster, with higher accuracy than on previous-generation P3 instances. This allows faster, more accurate diagnosis, and greater access to high-quality radiology services provided by our customers across the US.

    Doktor Gurson, Cofounder, Rad AI

Product details

| Instance Size | vCPUs | Instance Memory (GiB) | GPUs (A100) | GPU Memory | Network Bandwidth | GPUDirect RDMA | GPU Peer-to-Peer | Instance Storage | EBS Bandwidth (Gbps) | On-Demand Price/hr | 1-yr Reserved Instance Effective Hourly * | 3-yr Reserved Instance Effective Hourly * |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| p4d.24xlarge | 96 | 1152 | 8 | 320 GB HBM2 | 400 Gbps ENA and EFA | Yes | 600 GB/s NVSwitch | 8 x 1000 GB NVMe SSD | 19 | $32.77 | $19.22 | $11.57 |
| p4de.24xlarge (preview) | 96 | 1152 | 8 | 640 GB HBM2e | 400 Gbps ENA and EFA | Yes | 600 GB/s NVSwitch | 8 x 1000 GB NVMe SSD | 19 | $40.96 | $24.01 | $14.46 |
* Prices shown are for Linux/Unix in the US East (N. Virginia) AWS Region and rounded to the nearest cent. For full pricing details, see Amazon EC2 Pricing.

P4d instances are available in the US East (N. Virginia and Ohio), US West (Oregon), Asia Pacific (Seoul and Tokyo), and Europe (Frankfurt and Ireland) Regions. P4de instances are available in the US East (N. Virginia) and US West (Oregon) Regions.

Customers can purchase P4d and P4de instances as On-Demand Instances, Reserved Instances, Spot Instances, or Dedicated Hosts, or as part of Savings Plans.

Getting started with P4d instances for ML

Amazon SageMaker is a fully managed service for building, training, and deploying ML models. When used together with P4d instances, customers can easily scale to tens, hundreds, or thousands of GPUs to train a model quickly at any scale without worrying about setting up clusters and data pipelines.
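For instance, a multi-instance P4d training job can be described with a handful of estimator arguments. The sketch below collects the kwargs you might pass to the SageMaker Python SDK's `PyTorch` estimator; the IAM role ARN is a placeholder, and the version strings should match your own environment:

```python
# Keyword arguments for sagemaker.pytorch.PyTorch(**estimator_kwargs);
# the IAM role ARN is a placeholder and versions should match your setup.
estimator_kwargs = {
    "entry_point": "train.py",           # your training script
    "role": "arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    "instance_type": "ml.p4d.24xlarge",  # SageMaker's P4d instance type
    "instance_count": 4,                 # 4 instances x 8 A100 GPUs each
    "framework_version": "1.8.1",
    "py_version": "py36",
    # SageMaker's data-parallel library for multi-GPU, multi-node training:
    "distribution": {"smdistributed": {"dataparallel": {"enabled": True}}},
}
print(estimator_kwargs["instance_count"] * 8, "GPUs")  # → 32 GPUs
```

Calling `.fit()` on an estimator built from these kwargs launches the cluster, runs the script on every instance, and tears the cluster down when training completes.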

DLAMI provides ML practitioners and researchers with the infrastructure and tools to accelerate DL in the cloud, at any scale. Deep Learning Containers are Docker images preinstalled with DL frameworks to make it easier to deploy custom ML environments quickly by letting you skip the complicated process of building and optimizing your environments from scratch.

If you prefer to manage your own containerized workloads through container orchestration services, you can deploy P4d instances with Amazon EKS or Amazon ECS.

Getting started with P4d instances for HPC

P4d instances are ideal for running engineering simulations, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other GPU-based HPC workloads. HPC applications often require high network performance, fast storage, large amounts of memory, high compute capabilities, or all of the above. P4d instances support EFA, which enables HPC applications using the Message Passing Interface (MPI) to scale to thousands of GPUs. AWS Batch and AWS ParallelCluster help HPC developers quickly build and scale distributed HPC applications.
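As a sketch of that workflow, an AWS ParallelCluster (v3-style) configuration can declare a Slurm queue of P4d instances with EFA and GPUDirect RDMA enabled. The queue name, subnet ID, and instance counts below are placeholders chosen for illustration:

```yaml
# Hypothetical AWS ParallelCluster (v3) fragment: a Slurm queue of P4d
# instances with EFA enabled so MPI/NCCL traffic can use GPUDirect RDMA.
# The queue name, subnet ID, and counts are placeholders.
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: p4d-queue
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0   # placeholder
        PlacementGroup:
          Enabled: true                # keep instances close for low latency
      ComputeResources:
        - Name: p4d
          InstanceType: p4d.24xlarge
          MinCount: 0
          MaxCount: 16
          Efa:
            Enabled: true
            GdrSupport: true           # GPUDirect RDMA over EFA
```

With a queue like this, an MPI job submitted through Slurm scales across P4d nodes, and ParallelCluster provisions instances up to `MaxCount` on demand.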

Learn more