Why Amazon EC2 R7iz Instances?
Amazon Elastic Compute Cloud (EC2) R7iz instances are memory-optimized, high CPU performance instances. They are the fastest 4th Generation Intel Xeon Scalable-based (Sapphire Rapids) instances in the cloud with 3.9 GHz sustained all-core turbo frequency. R7iz instances deliver up to 20% better performance than previous generation z1d instances. They use DDR5 memory and deliver up to 2.4x higher memory bandwidth than z1d instances. R7iz instances feature an 8:1 ratio of memory to vCPU with up to 128 vCPUs and up to 1,024 GiB of memory. The combination of high CPU performance and high memory footprint makes R7iz instances ideal for frontend Electronic Design Automation (EDA), relational database workloads with high per-core licensing fees, and financial, actuarial, and data analytics simulation workloads.
AWS and Intel Partnership
AWS and Intel continue to collaborate to provide cloud services that are designed to meet current and future computing requirements. For more information, see the AWS and Intel partner page.
Product details
Amazon EC2 R7iz instances are powered by 4th Generation Intel Xeon Scalable processors and are an ideal fit for high CPU and memory-intensive workloads.
| Instance Size | vCPU | Memory (GiB) | Instance Storage (GB) | Network Bandwidth (Gbps) | EBS Bandwidth (Gbps) |
| --- | --- | --- | --- | --- | --- |
| r7iz.large | 2 | 16 | EBS-Only | Up to 12.5 | Up to 10 |
| r7iz.xlarge | 4 | 32 | EBS-Only | Up to 12.5 | Up to 10 |
| r7iz.2xlarge | 8 | 64 | EBS-Only | Up to 12.5 | Up to 10 |
| r7iz.4xlarge | 16 | 128 | EBS-Only | Up to 12.5 | Up to 10 |
| r7iz.8xlarge | 32 | 256 | EBS-Only | 12.5 | 10 |
| r7iz.12xlarge | 48 | 384 | EBS-Only | 25 | 19 |
| r7iz.16xlarge | 64 | 512 | EBS-Only | 25 | 20 |
| r7iz.32xlarge | 128 | 1,024 | EBS-Only | 50 | 40 |
| r7iz.metal-16xl | 64 | 512 | EBS-Only | 25 | 20 |
| r7iz.metal-32xl | 128 | 1,024 | EBS-Only | 50 | 40 |
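The table above can be treated as data when sizing a workload. The sketch below is illustrative, not part of any AWS SDK: it encodes the published R7iz sizes and picks the smallest one that satisfies a vCPU and memory requirement.

```python
# Sketch: choose the smallest R7iz size meeting vCPU and memory needs,
# using the specs from the table above. The helper name and structure
# are illustrative, not an AWS API.

R7IZ_SIZES = [
    # (instance type, vCPU, memory GiB), ordered smallest to largest
    ("r7iz.large", 2, 16),
    ("r7iz.xlarge", 4, 32),
    ("r7iz.2xlarge", 8, 64),
    ("r7iz.4xlarge", 16, 128),
    ("r7iz.8xlarge", 32, 256),
    ("r7iz.12xlarge", 48, 384),
    ("r7iz.16xlarge", 64, 512),
    ("r7iz.32xlarge", 128, 1024),
]

def smallest_r7iz(vcpus_needed, mem_gib_needed):
    """Return the smallest R7iz size meeting both requirements, or None."""
    for name, vcpu, mem in R7IZ_SIZES:
        if vcpu >= vcpus_needed and mem >= mem_gib_needed:
            return name
    return None

# 8 vCPUs and 100 GiB: r7iz.2xlarge has 8 vCPUs but only 64 GiB,
# so the next size up is selected.
print(smallest_r7iz(8, 100))  # r7iz.4xlarge
```

Because every size keeps the fixed 8:1 GiB-to-vCPU ratio, the memory requirement is usually the binding constraint for memory-bound workloads.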
Customer testimonials
Here are examples of how customers and partners have achieved their goals for business agility, price performance, cost savings, and sustainability with Amazon EC2 R7iz instances.
Aiven
Aiven provides an open source cloud data platform for organizations to build a modern data infrastructure.
We help our customers simplify their data infrastructure to drive cost efficiencies and increase software agility. Throughput and latency are critical factors that our customers evaluate when selecting cloud compute options for our data platform. We are excited to push the performance limits on the latest Amazon EC2 R7iz instances to achieve 170% higher throughput and 41% lower average latency than prior generation R6i instances.
Heikki Nousiainen, CTO, Aiven
Astera Labs
Astera Labs is a leader in purpose-built data and memory connectivity solutions that remove performance bottlenecks throughout the data center.
We build our solutions 100% in the cloud for the cloud. This is why we are excited about the potential for using the new Amazon EC2 R7iz instance to provide us access to increased single threaded performance vs R6i instances. During our testing of the new R7iz instances, we were able to realize performance gains up to 25% compared to R6i instances. With access to increased performance, we’ll be able to accelerate our ability to deliver silicon, software, and system-level connectivity solutions that realize the vision of artificial intelligence and machine learning in the cloud.
Jitendra Mohan, CEO, Astera Labs
Nasdaq
Nasdaq is a global electronic exchange for buying and selling securities and other instruments, and a market infrastructure technology provider to 130 other exchanges, regulators, and post-trade organizations in over 50 countries.
We leverage Amazon EC2 high frequency instances to provide reliable, ultra-low latency and high performance at scale to our clients. Amazon EC2 R7iz instances have a new smaller bare metal size with better NUMA affinity that provides excellent throughput for our workloads, simplifies system architecture, and improves determinism by reducing latency. This innovation is a critical component of AWS and Nasdaq’s partnership to build the next generation of cloud-enabled infrastructure for the world's capital markets.
Nikolai Larbalestier, Senior Vice President, NASDAQ
Akamai
Noname Security (Akamai) creates a powerful, complete, and easy-to-use API security platform that helps enterprises discover, analyze, remediate, and test all legacy and modern APIs.
We conduct traffic analysis with AI and machine learning to automatically detect API threats, and it is important for us to deliver low latency and high bandwidth security to our customers. During benchmarking, Amazon EC2 R7iz instances offered near real-time security with 3x faster response times and higher throughput compared to C6i instances. We are also excited to leverage the new Advanced Matrix Extensions (AMX) to accelerate the performance of our machine learning workloads to reduce the risk of API security vulnerabilities and cyberattacks around the world.
Shay Levi, CTO, Noname Security
SingleStoreDB
SingleStoreDB is a cloud-native database built for speed and scale to power real-time applications.
Leading companies across nearly every vertical around the world use SingleStoreDB to enhance customer experience and to improve operations and security. Optimizing the compute performance of the underlying infrastructure is necessary to support constantly growing workloads. When testing Amazon EC2 R7iz instances, our engineering teams saw a 19% improvement in database performance versus prior generation Ice Lake-based instances. We look forward to leveraging Amazon EC2's latest Sapphire Rapids instances to deliver exceptional performance for transactions and analytics.
Rob Weidner, Director of Cloud Partnerships, SingleStoreDB
TotalCAE
TotalCAE's platform supports hundreds of engineering applications and makes it simple for customers to adopt High Performance Computing (HPC) applications in the cloud.
Amazon EC2 R7iz instances combine 1 TB of the newest DDR5 memory and the latest 4th Generation Intel Xeon Scalable processors running at 3.9 GHz to offer next generation performance for applications such as Finite Element Analysis (FEA). We tested several flagship licensed FEA applications on R7iz instances and observed performance gains of up to 19% for the same license cost over previous generation R6id instances. Our clients invest heavily in their FEA application licenses, and we are eager to help them maximize their license investments and accelerate their time to market.
Rod Mach, President, TotalCAE
Amazon Relational Database Service (RDS)
Amazon Relational Database Service (RDS) is a collection of managed services that makes it simple to set up, operate, and scale databases in the cloud.
Amazon EC2 R7iz instances are ideal for relational database workloads that typically have high per-core licensing costs. Our Airline and Banking customers running demanding workloads currently use z1d instances. R7iz's 20% higher compute performance, larger sizes (up to 32xlarge), and 2.4x memory throughput (using the latest DDR5) versus z1d will help these customers achieve superior performance as they continue to scale.
Kambiz Aghili, GM, RDS, DBS Managed Commercial Engines, AWS
Hugging Face
The Hugging Face Hub works as a central place where anyone can share, explore, discover, and experiment with open-source ML.
At Hugging Face, we are proud of our work with Intel to accelerate the latest models on the latest generation of hardware, from Intel Xeon CPUs to Habana Gaudi AI accelerators, for the millions of AI builders using Hugging Face.
The new acceleration capabilities of Intel Xeon 4th Gen, readily available on Amazon EC2, introduce bfloat16 and INT8 support for transformers training and inference, thanks to Advanced Matrix Extensions (AMX).
By integrating Intel Extension for PyTorch (IPEX) into our Optimum-Intel library, we make it super easy for Hugging Face users to get the acceleration benefits with minimal code changes. Using the custom EC2 Gen 7 instances (such as Amazon EC2 R7iz and other instances), we reached an 8x speedup fine-tuning DistilBERT and were able to run inference 3x faster on the same transformers model. Likewise, we achieved a 6.5x speedup when generating images with a Stable Diffusion model.
Ella Charlaix, ML Engineer, Hugging Face
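On Linux guests, the AMX capabilities mentioned above surface as CPU feature flags (amx_tile, amx_bf16, amx_int8) in /proc/cpuinfo. The sketch below is an illustrative way to confirm AMX availability on an instance before enabling AMX-accelerated code paths; the function is an assumption for this example, not an AWS or Intel API.

```python
# Sketch: detect AMX feature flags in a /proc/cpuinfo-style dump.
# The flag names (amx_tile, amx_bf16, amx_int8) are the ones the Linux
# kernel reports on 4th Gen Intel Xeon Scalable (Sapphire Rapids) CPUs.

AMX_FLAGS = {"amx_tile", "amx_bf16", "amx_int8"}

def amx_support(cpuinfo_text):
    """Return the subset of AMX flags present in the 'flags' line."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            present = set(line.split(":", 1)[1].split())
            return AMX_FLAGS & present
    return set()

# On a real host you would read the file directly:
#   amx_support(open("/proc/cpuinfo").read())
sample = "flags\t: fpu sse2 avx512f amx_bf16 amx_tile amx_int8"
print(sorted(amx_support(sample)))
```

If all three flags are present, frameworks that integrate AMX (such as PyTorch with IPEX, per the testimonial above) can use the bfloat16 and INT8 fast paths.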