With Amazon Data Firehose, you pay for the volume of data you ingest into the service. There are no setup fees or upfront commitments. There are four types of on-demand usage with Data Firehose: ingestion, format conversion, VPC delivery, and Dynamic Partitioning. Additional data transfer charges can apply.
Direct PUT and KDS as a source: Ingestion
The base function of a Firehose stream is ingestion and delivery. Ingestion pricing is tiered and billed per GB ingested, with each record rounded up to the nearest 5KB increment (a 3KB record is billed as 5KB, a 12KB record as 15KB, and so on). There are no additional Data Firehose charges for delivery unless optional features are used.
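As a rough sketch, the rounding rule can be expressed as a ceiling to the nearest 5KB (a minimal Python example; the helper name is ours, not part of any AWS SDK):

```python
import math

def billed_kb(record_kb: float, increment_kb: int = 5) -> int:
    """Round a record size up to the nearest billing increment."""
    return math.ceil(record_kb / increment_kb) * increment_kb

print(billed_kb(3))   # 5  -> a 3KB record is billed as 5KB
print(billed_kb(12))  # 15 -> a 12KB record is billed as 15KB
```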
MSK as a source: Ingestion
The base function of a Firehose stream is ingestion and delivery. Ingestion pricing is tiered and billed per GB, based on the higher of ingested bytes and delivered bytes; there is no 5KB record-size rounding.
Vended Logs as a source: Ingestion
For records originating from Vended Logs, ingestion pricing is tiered and billed per GB ingested, with no 5KB increments.
Format Conversion (optional)
You can enable JSON to Apache Parquet or Apache ORC format conversion at a per-GB rate based on GBs ingested in 5KB increments.
VPC Delivery (optional)
For Firehose streams with a destination that resides in an Amazon VPC, you are billed for the amount of data delivered to the destination in the VPC and for every hour that your Firehose stream is active in each AZ. Each partial hour is billed as a full hour.
Dynamic Partitioning for Amazon S3 delivery (optional)
You can enable Dynamic Partitioning to continuously group data by partitioning keys in your records (such as “customer_id”), and deliver the data grouped by the partitioning keys to corresponding Amazon S3 prefixes. With Dynamic Partitioning, you pay based on the amount of data processed through Dynamic Partitioning, and per object delivered to Amazon S3. If you use the JQ parser for Dynamic Partitioning, you pay per hour of processing for the JQ parsing.
Decompression of CloudWatch Logs (optional)
For records originating from CloudWatch Logs, if decompression is enabled, the decompression pricing is billed per GB decompressed.
Snowflake as a destination
For Firehose streams that are configured with Snowflake as a destination, you are billed for the amount of data processed to the destination. Pricing is billed per GB with no 5KB increments, based on the higher of ingested bytes and delivered bytes.
Apache Iceberg Tables as a destination
For Firehose streams that are configured with Apache Iceberg Tables as a destination, you are billed for the amount of data processed to the destination. Pricing is billed per GB ingested with no 5KB increments. If the bytes processed before delivery exceed the ingested bytes due to custom Lambda processing, the additional bytes are also billed. Additional bytes are billed, for all ingestion sources including Direct PUT, at the rate shown for Kinesis Data Streams as a source with Apache Iceberg Tables as a destination.
[Rate table: tiered per-GB prices by source (Direct PUT, Kinesis Data Stream, Vended Logs, MSK) and by destination (Apache Iceberg Tables, Snowflake, other destinations); Vended Logs and MSK sources list rates for Apache Iceberg Tables and other destinations only.]
Pricing examples
Ingestion Pricing for Direct PUT and KDS as a source
Record size of 3KB, rounded up to the nearest 5KB = 5KB billed
Price for first 500 TB / month = $0.029 per GB
GB billed for ingestion data = (100 records/sec * 5 KB/record) / 1,048,576 KB/GB * 30 days / month * 86,400 sec/day = 1,235.96 GB
Monthly ingestion charges = 1,235.96 GB * $0.029/GB = $35.84
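As a minimal sketch of the arithmetic above (the helper and its name are ours; the rates are the example figures quoted in this section):

```python
def monthly_ingested_gb(records_per_sec: float, billed_kb_per_record: float,
                        days_per_month: int = 30) -> float:
    """Monthly ingested volume in GB, using 86,400 sec/day and 1,048,576 KB/GB."""
    kb = records_per_sec * billed_kb_per_record * 86_400 * days_per_month
    return kb / 1_048_576

gb = monthly_ingested_gb(100, 5)   # 3KB records rounded up to 5KB
print(f"{gb:,.2f} GB")             # ~1,235.96 GB
print(f"${gb * 0.029:,.2f}")       # ~$35.84 at $0.029/GB
```

The same helper reproduces the MSK, Vended Logs, Snowflake, and Apache Iceberg examples below by substituting the per-record size (with no 5KB rounding) and the applicable per-GB rate.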
Ingestion Pricing for MSK as a source
Record size of 2KB (no 5KB increments)
Price for first 500 TB / month = $0.055 per GB
GB billed for ingestion data (assuming the delivered volume equals the ingested volume) = (100 records/sec * 2 KB/record) / 1,048,576 KB/GB * 30 days / month * 86,400 sec/day = 494.38 GB
Monthly data volume charges = 494.38 GB * $0.055/GB = $27.19
Ingestion Pricing for Vended Logs as a source
Record size of 0.5KB (500 bytes) = 0.5KB billed (no 5KB increments)
Price for first 500 TB / month = $0.13 per GB
GB billed for ingestion = (100 records/sec * 0.5KB/record) / 1,048,576 KB/GB * 30 days / month * 86,400 sec/day = 123.59 GB
Monthly ingestion charges = 123.59 GB * $0.13/GB = $16.06
Pricing for Snowflake as a destination
Record size of 0.5KB (500 bytes) = 0.5KB billed (no 5KB increments)
Price per GB delivered to Snowflake = $0.071 per GB
GB billed = (100 records/sec * 0.5KB/record) / 1,048,576 KB/GB * 30 days / month * 86,400 sec/day = 123.59 GB
Monthly delivery charges = 123.59 GB * $0.071/GB = $8.77
Pricing for Apache Iceberg Tables as a destination
Record size of 0.5KB (500 bytes) = 0.5KB billed (no 5KB increments)
Price per GB delivered to Apache Iceberg Tables from Kinesis Data Streams as a source = $0.045 per GB
GB billed = (100 records/sec * 0.5KB/record) / 1,048,576 KB/GB * 30 days / month * 86,400 sec/day = 123.59 GB
Monthly delivery charges = 123.59 GB * $0.045/GB = $5.56
Format conversion pricing: JSON to Parquet or ORC (optional)
Data format conversion is an optional add-on to data ingestion and uses the GBs billed for ingestion to compute costs.
Price per ingested GB converted = $0.018
Monthly format conversion charges = 1,235.96 GB * $0.018 / GB converted = $22.25
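Assuming the Direct PUT figures above, the conversion charge chains directly off the ingested GBs (a sketch, not an official calculator):

```python
ingested_gb = 1_235.96   # GBs billed for ingestion (5KB-rounded), from above
conversion_rate = 0.018  # $ per ingested GB converted
print(f"${ingested_gb * conversion_rate:,.2f}")  # ~$22.25
```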
VPC delivery pricing (optional)
Delivery into a VPC is an optional add-on to data ingestion and uses GBs delivered to the destination in VPC to compute costs.
Price per GB delivered to the destination in VPC = $0.01
Price per AZ hour for VPC delivery = $0.01
Monthly VPC processing charges = 1,235.96 GB * $0.01 / GB processed = $12.35
Monthly VPC hourly charges = 24 hours/day * 30 days/month * 3 AZs = 2,160 hours * $0.01 / hour = $21.60
Total monthly VPC charges = $12.35 + $21.60 = $33.95
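The two VPC charge components can be sketched as follows, using the example's figures (note the example above truncates fractional cents):

```python
delivered_gb = 1_235.96       # GB delivered to the destination in the VPC
az_hours = 3 * 24 * 30        # 3 AZs, active every hour for 30 days
total = delivered_gb * 0.01 + az_hours * 0.01
print(f"${total:,.2f}")       # ~$33.96 (the example above truncates to $33.95)
```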
Dynamic partitioning pricing (optional)
Dynamic partitioning is an optional add-on to data ingestion, and uses GB processed through Dynamic Partitioning, the number of objects delivered to S3, and optionally JQ processing hours to compute costs. In this example, we assume 64MB objects are delivered as a result of the Firehose Stream buffer hint configuration.
If you use optional features – such as data transformation using Lambda, format conversion, or compression – in your Firehose stream, the amount of data processed through dynamic partitioning may differ from the amount of data ingested from the source or delivered to the destination, because those additional processing steps run before or after dynamic partitioning.
Price per GB processed through Dynamic Partitioning = $0.020
Price per 1,000 S3 objects delivered = $0.005
Price per JQ processing hour = $0.07
Monthly GB processed through Dynamic Partitioning = (3KB * 100 records / second) / 1,048,576 KB/GB * 86,400 seconds/day * 30 days / month = 741.58 GB
Monthly charges for GB processed through Dynamic Partitioning = 741.58 GB * $0.02 per GB processed through Dynamic Partitioning = $14.83
Number of objects delivered = 741.58 GB * 1024 MB/GB / 64MB object size = 11,866 objects
Monthly charges for objects delivered to S3 = 11,866 objects * $0.005 / 1000 objects = $0.06
Monthly charges for JQ (if enabled) = 70 JQ hours consumed / month * $0.07/ JQ processing hr = $4.90
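Putting the three dynamic partitioning components together (a sketch using the example's figures; the 64MB object size comes from the assumed buffer hint):

```python
import math

processed_gb = 741.58                          # GB processed through Dynamic Partitioning
objects = math.ceil(processed_gb * 1024 / 64)  # 64MB objects -> 11,866
jq_hours = 70                                  # JQ processing hours (if enabled)

total = (processed_gb * 0.020        # per GB processed
         + objects / 1000 * 0.005    # per 1,000 S3 objects delivered
         + jq_hours * 0.07)          # per JQ processing hour
print(f"${total:,.2f}")              # ~$19.79
```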
Decompression of CloudWatch Logs
CloudWatch Logs sends data in gzip-compressed format. The Data Firehose decompression feature decompresses the data, and you are charged per GB of decompressed data.
Monthly usage = 10 TB of CloudWatch Logs data decompressed
Price per GB decompressed = $0.00325/GB in the US East (N. Virginia) Region
Monthly decompression charges = 10,240 GB * $0.00325/GB = $33.28
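The decompression charge is a single multiplication once TB are converted to GB (a sketch with the example's figures):

```python
tb = 10
gb = tb * 1024                  # 10 TB = 10,240 GB decompressed
print(f"${gb * 0.00325:,.2f}")  # $33.28
```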
Service Level Agreement
Learn about the Amazon Data Firehose Service Level Agreement by visiting our FAQs.