Amazon EventBridge FAQs

Overview

Amazon EventBridge is a service that provides real-time access to changes in data in AWS services, your own applications, and software as a service (SaaS) applications without writing code.

To get started, you can choose an event source on the EventBridge console. You can then select a target from AWS services including AWS Lambda, Amazon Simple Notification Service (SNS), and Amazon Kinesis Data Firehose. EventBridge will automatically deliver the events in near real-time.

To start using Amazon EventBridge, follow the six steps below:

1. Log in to your AWS account.

2. Navigate to the EventBridge console.

3. Choose an event source from a list of partner SaaS applications and AWS services. If you are using a partner application, verify that you have configured your SaaS account to emit events, then accept the source in the offered event sources section of the EventBridge console.

4. EventBridge will automatically create an event bus for you to which events will be routed. Alternatively, you can use the AWS SDK to instrument your application to start emitting events to your event bus.

5. Optionally configure a filtering rule and attach a target for your events; for example, this can be a Lambda function.

6. EventBridge will automatically ingest, filter, and send the events to the configured target in a secure and highly available way.

Yes. You can generate custom application-level events and publish them to EventBridge through the service’s API operations. You can also set up scheduled events that are generated on a periodic basis, and can process these events in any of the EventBridge supported targets.
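As a sketch of that publish path, the snippet below builds one entry for the PutEvents API call; the source and detail-type names are illustrative, not part of the service:

```python
import json

def build_put_events_entry(source, detail_type, detail, event_bus="default"):
    """Build one entry for the EventBridge PutEvents API call.
    The Detail field must be a JSON-encoded string."""
    return {
        "Source": source,
        "DetailType": detail_type,
        "Detail": json.dumps(detail),
        "EventBusName": event_bus,
    }

entry = build_put_events_entry(
    "com.example.orders",   # hypothetical application source
    "OrderPlaced",          # hypothetical detail type
    {"orderId": "1234", "total": 42.50},
)
# With AWS credentials configured, you would publish it with boto3:
#   boto3.client("events").put_events(Entries=[entry])
```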

Events use a specific JSON structure. Every event has the same top-level envelope fields, such as the source of the event, timestamp, and Region. This is followed by a detail field, which is the body of the event.

For example, when an Amazon Elastic Compute Cloud (EC2) Auto Scaling group creates a new Amazon EC2 instance, it emits an event with source: “aws.autoscaling” and detail: “EC2 instance created successfully”.
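An event of that kind has roughly the shape below (field values are abbreviated and illustrative; the exact detail-type string varies by service):

```json
{
  "version": "0",
  "id": "12345678-abcd-0000-0000-000000000000",
  "detail-type": "EC2 Instance Launch Successful",
  "source": "aws.autoscaling",
  "account": "111122223333",
  "time": "2024-01-01T12:00:00Z",
  "region": "us-east-1",
  "resources": ["arn:aws:autoscaling:us-east-1:111122223333:autoScalingGroup:..."],
  "detail": {
    "AutoScalingGroupName": "example-asg"
  }
}
```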

You can filter events with rules. A rule matches incoming events for a given event bus and routes them to targets for processing. A single rule can route to multiple targets, all of which are processed in parallel. Rules help different application components look for and process the events that are of interest to them.

A rule can customize an event before it is sent to the target, by passing only certain parts or by overwriting it with a constant. For the example given in the previous question, you can create an event rule that matches on source: “aws.autoscaling” and detail: “EC2 instance created successfully” to be notified anytime an Auto Scaling group successfully creates an EC2 instance.
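The matching behavior can be approximated in a few lines of Python. This is a simplification: real event patterns also support prefix, numeric, anything-but, and other operators.

```python
def matches(pattern, event):
    """Simplified EventBridge pattern matching: every field named in the
    pattern must exist in the event, and its value must equal one of the
    values listed in the pattern (nested objects match recursively)."""
    for key, allowed in pattern.items():
        value = event.get(key)
        if isinstance(allowed, dict):
            if not isinstance(value, dict) or not matches(allowed, value):
                return False
        elif value not in allowed:
            return False
    return True

rule_pattern = {"source": ["aws.autoscaling"]}
event = {"source": "aws.autoscaling", "detail": {"StatusCode": "InProgress"}}
print(matches(rule_pattern, event))             # True
print(matches({"source": ["aws.ec2"]}, event))  # False
```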

EventBridge integrates with AWS Identity and Access Management (IAM) so that you can specify which actions a user in your AWS account can perform. For example, you can create an IAM policy that gives only certain users in your organization permission to create event buses or attach event targets.
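For example, a policy granting just those two permissions might look like the following (hypothetical; scope the Resource to specific ARNs as needed):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "events:CreateEventBus",
        "events:PutTargets"
      ],
      "Resource": "*"
    }
  ]
}
```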

There are over 90 AWS services available as event sources for EventBridge, including AWS Lambda, Amazon Kinesis, AWS Fargate, and Amazon Simple Storage Service (S3). For a full list of AWS service integrations, see the EventBridge documentation.

There are over 15 AWS services available as event targets for EventBridge, including Lambda, Amazon Simple Queue Service (SQS), Amazon SNS, Amazon Kinesis Data Streams, and Kinesis Data Firehose. For a full list of AWS service integrations, see the EventBridge documentation.

Event Replay is a new feature for EventBridge that helps you reprocess past events back to an event bus or a specific EventBridge rule. This feature helps developers debug their applications more easily, extend them by hydrating targets with historic events, and recover from errors. Event Replay gives developers peace of mind that they will always have access to any event published to EventBridge.

API Destinations helps developers send events back to any on-premises or SaaS application, with the ability to control throughput and authentication. You can configure rules with input transformations that map the format of the event to the format of the receiving service, and EventBridge will take care of security and delivery.

When a rule is initiated, EventBridge will transform the event based on the conditions specified. It will then send it to the configured web service with authentication information that was provided when the rule was set up. Security is built in so that developers no longer need to write authentication components for the service that they want to use.
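Conceptually, the input transformation step extracts values from the event and substitutes them into a template. The toy Python simulation below illustrates the idea; it is not the service's implementation, and real input transformers use JSONPath expressions:

```python
import re

def transform(event, paths, template):
    """Extract values from `event` via dotted paths, then substitute
    <name> placeholders in `template` -- a toy version of an
    EventBridge input transformer."""
    def lookup(path):
        value = event
        for part in path.lstrip("$.").split("."):
            value = value[part]
        return value
    values = {name: lookup(path) for name, path in paths.items()}
    return re.sub(r"<(\w+)>", lambda m: str(values[m.group(1)]), template)

event = {"source": "aws.autoscaling", "detail": {"status": "InProgress"}}
paths = {"src": "$.source", "status": "$.detail.status"}
print(transform(event, paths, "Event from <src>: <status>"))
# Event from aws.autoscaling: InProgress
```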

Each API destination uses a Connection that defines the authorization method and credentials to use to connect to the HTTP endpoint. When you configure the authorization settings and create a connection, EventBridge creates a secret in AWS Secrets Manager to store the authorization information securely. You can also add additional parameters to the connection as appropriate for your application.

To set up an API destination, you will need to provide an API destination endpoint, which is an HTTP invocation endpoint target for events. You will need to create a Connection to authorize against this endpoint. You can also optionally define the invocation rate limit, which is the maximum number of invocations per second to send to the API destination endpoint. Learn more about Connections and API destinations.
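As a sketch, these are the request parameters involved in that setup; the names, ARN, endpoint, and key value are placeholders:

```python
# Parameters for CreateConnection: an API-key authorization scheme whose
# credentials EventBridge stores in AWS Secrets Manager.
connection_params = {
    "Name": "example-connection",
    "AuthorizationType": "API_KEY",
    "AuthParameters": {
        "ApiKeyAuthParameters": {
            "ApiKeyName": "X-Api-Key",
            "ApiKeyValue": "replace-with-a-real-key",
        }
    },
}

# Parameters for CreateApiDestination: the HTTP endpoint, method, and the
# optional invocation rate limit (maximum invocations per second).
api_destination_params = {
    "Name": "example-destination",
    "ConnectionArn": "arn:aws:events:us-east-1:111122223333:connection/example-connection/abc",
    "InvocationEndpoint": "https://api.example.com/events",
    "HttpMethod": "POST",
    "InvocationRateLimitPerSecond": 10,
}

# With credentials configured you would run:
#   events = boto3.client("events")
#   events.create_connection(**connection_params)
#   events.create_api_destination(**api_destination_params)
```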

Limits and performance

EventBridge has default quotas on the rate at which you can publish events, the number of rules that can be created on an event bus, and the rate at which targets can be invoked. See the service quotas page for a full list of quotas and how they can be increased.

Typical latency is about half a second. Note that this can vary.

Yes, you can tag rules and event buses.

Default EventBridge quotas can be increased to process hundreds of thousands of events per second. Event bus throughput limits are given in the AWS service quotas page. If you require higher throughput, please request a service limit increase through AWS Support Center by choosing 'Create Case' and then choosing 'Service Limit Increase.'

Yes. AWS will use commercially reasonable efforts to make EventBridge available with a Monthly Uptime Percentage for each AWS Region, during any monthly billing cycle, of at least 99.99%. For details, review the full EventBridge Service Level Agreement.

Schema Registry

A schema represents the structure of an event, and commonly includes information such as the title and format of each piece of data included in the event.

For example, a schema might include fields such as name and phone number, and the fact that the name is a text string, and the phone number is an integer. The schema can also include information on patterns, such as a requirement that the phone number be 10 digits in length. The schema of an event is important because it shows what information is contained in the event, and helps you write code based on that data.
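Expressed as a JSON Schema fragment, that example could look like the following, where the minimum/maximum bounds encode the 10-digit requirement (the registry itself accepts schemas in OpenAPI 3.0 or JSONSchema Draft4 format):

```json
{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "phoneNumber": {
      "type": "integer",
      "minimum": 1000000000,
      "maximum": 9999999999
    }
  },
  "required": ["name", "phoneNumber"]
}
```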

A schema registry stores a searchable collection of schemas so any developer in your organization can more easily access schemas generated by the application. This is in contrast to looking through documentation or finding the schema’s author for this information. You can add a schema to the registry manually or automate this process by turning on the EventBridge schema discovery feature.

Schema discovery automates the processes of finding schemas and adding them to your registry. When schema discovery is enabled for an EventBridge event bus, the schema of each event sent to the event bus is automatically added to the registry. If the schema of an event changes, schema discovery will automatically create a new version of the schema in the registry.

Once a schema is added to the registry, you can generate a code binding for the schema either in the EventBridge console or directly in your integrated development environment (IDE). This helps you represent the event as a strongly typed object in your code. You can then take advantage of IDE features such as validation and autocomplete.

Yes. With schema discovery you can discover events across accounts, giving you full visibility into the schemas of events published to your event buses.

There is no cost to use the schema registry; however, there is a cost per ingested event when you turn on schema discovery.

Schema discovery has a free tier of 5 million ingested events per month, which should cover most development usage. There is a fee of $0.10 per million ingested events for usage beyond the free tier. For more information on pricing, see the EventBridge pricing page.

The schema registry reduces the amount of code you need to write by enabling you to do the following:

  • Identify schemas automatically for any events sent to your EventBridge event bus and store them in the registry, saving you from having to manage your event schemas manually.
  • Generate and download code bindings for schemas, so applications that handle events on your bus can use strongly typed objects directly in your code.

Code bindings reduce the overhead of deserialization, validation, and guesswork for your event handler.

You should use the schema registry to build event-driven applications faster. The schema registry eliminates the time spent coordinating between development teams by automatically finding the available events from any supported event source, including AWS services, third-party applications, and custom applications, and detecting their schemas. It was built to let developers focus solely on their application code, instead of wasting valuable time searching for available events and their structure, and writing code to interpret and translate events.

The schema registry is available through the AWS Toolkit for JetBrains (IntelliJ IDEA, PyCharm, WebStorm, Rider) and Visual Studio Code, as well as in the EventBridge console and APIs. Learn more about using the EventBridge schema registry within your IDE.

Yes, the latest version of the AWS SAM CLI includes an interactive mode that helps you create new serverless applications on EventBridge for any schema as an event type.

Choose the EventBridge Starter App template and the schema of your event, and SAM will automatically generate an application with a Lambda function invoked by EventBridge, including code to handle the event. This means that you can treat the event trigger like a normal object in your code, and use features such as validation and autocomplete in your IDE.

The AWS Toolkit for JetBrains (IntelliJ IDEA, PyCharm, WebStorm, Rider) plugin and the AWS Toolkit for Visual Studio Code also provide functionality to generate serverless applications from this template with a schema as a trigger, directly from these IDEs.

EventBridge code generation is available in Java (8+), Python (3.6+), TypeScript (3.0+), and Go (1+).

The EventBridge schema registry is available in the following Regions:

  • US East (Ohio and N. Virginia)
  • US West (N. California and Oregon)
  • Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, and Tokyo)
  • Canada (Central)
  • Europe (Frankfurt, Ireland, London, Paris, and Stockholm)
  • South America (São Paulo)

Pipes

EventBridge Pipes provides a simpler, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers. Creating a pipe is as simple as selecting a source and a target, with the ability to customize batching, starting position, concurrency, and more. An optional filtering step allows only specific source events to flow into the pipe, and an optional enrichment step using AWS Lambda, AWS Step Functions, API Destinations, or Amazon API Gateway can be used to enrich or transform events before they reach the target. By removing the need to write, manage, and scale undifferentiated integration code, EventBridge Pipes lets you spend your time building applications rather than connecting them.

You can get started by visiting the EventBridge console, selecting the Pipes tab, and choosing Create Pipe. From there, choose from a list of available sources and provide an optional filtering pattern that will be used to transfer only the events you require. For the optional transformation and enrichment step of a pipe, you can provide an API endpoint, such as a SaaS application API or container cluster, a Lambda function, or a Step Functions workflow. The pipe will then make the API request and capture the response once processing is completed. Finally, set a destination service to which the events are delivered, and specify whether you require archiving or DLQ capabilities on the pipe. You can also create a pipe using the AWS CLI, AWS CloudFormation, or the AWS Cloud Development Kit (CDK).
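The same flow can be sketched in code via the Pipes CreatePipe parameters; all names and ARNs below are placeholders:

```python
import json

pipe_params = {
    "Name": "orders-pipe",
    "RoleArn": "arn:aws:iam::111122223333:role/example-pipe-role",
    "Source": "arn:aws:sqs:us-east-1:111122223333:orders-queue",
    # Optional filtering step: only messages whose body has type "order"
    # flow through the pipe.
    "SourceParameters": {
        "FilterCriteria": {
            "Filters": [
                {"Pattern": json.dumps({"body": {"type": ["order"]}})}
            ]
        }
    },
    "Target": "arn:aws:states:us-east-1:111122223333:stateMachine:process-orders",
}

# With credentials configured:
#   boto3.client("pipes").create_pipe(**pipe_params)
```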

EventBridge Pipes introduces Amazon SQS, Amazon Kinesis, Amazon DynamoDB, Amazon Managed Streaming for Apache Kafka (MSK), self-managed Kafka, and Amazon MQ as sources to the EventBridge product suite. EventBridge Pipes supports the same target services as event buses, such as Amazon SQS, AWS Step Functions, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Amazon SNS, Amazon ECS, and event buses themselves.

EventBridge Pipes supports basic transformations using Velocity Template Language (VTL). For more powerful transformations, you can specify a Lambda function or Step Functions workflow to transform your event. If you’d prefer to use a container service such as Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS), you can specify the API endpoint and authentication scheme for your container cluster. EventBridge will then take care of delivering the event for transformation.

No, EventBridge Pipes can be used independently of existing EventBridge features, helping you receive events from other event producers such as Kinesis, SQS, or Amazon MSK without needing to use an EventBridge event bus. It is designed for point-to-point integrations, whereas an event bus is used for many-to-many integrations. If you are already using an EventBridge event bus to route events, you can use EventBridge Pipes to connect a supported source to your event bus by setting the event bus as the target of a pipe.

EventBridge event buses are well suited for many-to-many routing of events between event-driven services. EventBridge Pipes is intended for point-to-point integrations between event publishers and consumers, with support for advanced transformations and enrichments. EventBridge Pipes can use an EventBridge event bus as a target. Migrating from an EventBridge event bus rule to a pipe is straightforward, as filtering and targets remain the same between the two resources.

AWS Lambda’s Event Source Mapping (ESM) and Amazon EventBridge Pipes use the same polling infrastructure to select and send events. ESM is ideal for customers who want to use Lambda as a target to process the received events. Pipes is ideal for customers who would rather not worry about creating, maintaining, and scaling Lambda code and instead prefer a simple, managed resource to connect their source to one of over 14 targets.

Yes, EventBridge Pipes will maintain the order of events received from an event source when sending those events to a destination service.

Yes, for services that support batching events, you can configure your desired batch size when creating a pipe. For sources and targets that don’t support batching, you can still choose to batch events for your enrichment and transformation step. This helps you save compute costs while still delivering events individually to your chosen target.

Yes, to receive a history of EventBridge Pipes API calls made on your account, you need to turn on CloudTrail in the AWS Management Console.

To see the full details of pricing for Amazon EventBridge Pipes, visit the pricing page.

Scheduler

Amazon EventBridge Scheduler is a serverless task scheduler that simplifies creating, executing, and managing millions of schedules across AWS services without provisioning or managing underlying infrastructure.

Log in to your AWS account, navigate to the EventBridge console, and select the Create Schedule button. Follow the step-by-step workflow and fill in the required fields. Select a scheduling format: a time window for the task, a fixed rate, a cron expression, or a specific date and time. Select your target from a list of AWS services and configure retry policies for maximum control over your schedule’s execution. Review your schedule and select Create.
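A schedule like that, expressed through the CreateSchedule API, might look like the following; the ARNs and names are placeholders:

```python
schedule_params = {
    "Name": "example-schedule",
    "ScheduleExpression": "rate(5 minutes)",   # or cron(...) or at(...)
    "ScheduleExpressionTimezone": "America/New_York",
    "FlexibleTimeWindow": {"Mode": "OFF"},
    "Target": {
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:example",
        "RoleArn": "arn:aws:iam::111122223333:role/example-scheduler-role",
        "RetryPolicy": {"MaximumRetryAttempts": 3},
    },
}

# With credentials configured:
#   boto3.client("scheduler").create_schedule(**schedule_params)
```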

EventBridge Scheduler builds upon the scheduling functionality offered within Scheduled Rules. EventBridge Scheduler includes support for time zones, increased scale, customized target payloads, added time expressions, and a dashboard for monitoring schedules. Schedules can be created independently without the need to create an event bus with a scheduled rule.

Scheduled rules will continue to be available; however, EventBridge Scheduler offers a richer feature set, providing more flexibility when creating, executing, and managing your schedules. You can also get started for free; see the pricing page for more details.

EventBridge Scheduler has deep integrations with AWS services and can create schedules for any service with an AWS API action. Configurations for time patterns and retries are uniform across AWS for a consistent scheduling experience. Monitoring schedules is easier through the EventBridge Scheduler console, which delivers a view of your schedules in a dashboard, or with a ListSchedules API request. You will be able to see critical information about your schedules, such as start time, last run, and the assigned AWS target. For more granular detail, you can review execution logs available in CloudWatch Logs, or they can be sent to Amazon S3 or Kinesis Data Firehose.

You can update your schedules in the EventBridge Scheduler console by selecting the schedule to modify. A new panel will display your options.

Yes, with EventBridge Scheduler you can select the time zone in which a schedule will operate. These schedules automatically adjust for Daylight Saving Time (DST) and back to standard time.
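The offset shift that the scheduler accounts for is the same one a time-zone-aware clock observes; a quick illustration using only the Python standard library (unrelated to the EventBridge API itself):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
winter = datetime(2024, 1, 15, 9, 0, tzinfo=tz)   # standard time, UTC-5
summer = datetime(2024, 7, 15, 9, 0, tzinfo=tz)   # daylight time, UTC-4

# A 9:00 local schedule corresponds to different UTC times across the year.
print(winter.utcoffset().total_seconds() / 3600)  # -5.0
print(summer.utcoffset().total_seconds() / 3600)  # -4.0
```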

EventBridge Scheduler provides at-least-once event delivery to targets, meaning that at least one delivery succeeds with a response from the target. Options to set retries, time windows, and timeouts are available to meet your business requirements.

Delete upon completion is available for all currently supported scheduling patterns: cron, rate, and one-time schedules.

Yes, you can update your schedule to configure delete upon completion at any time before the schedule is invoked. After the last schedule invocation time you will not be able to make changes.

If you disable a schedule with delete upon completion prior to the schedule's last invocation, the schedule will remain in your account in a disabled state.

The schedule will continue invoking its target and will not be automatically deleted until an end date is configured.

EventBridge Scheduler does not support non-AWS targets directly. However, you can invoke non-AWS targets using Lambda, ECS and Fargate, or with EventBridge via the API destinations feature.

To see the full details of pricing for Amazon EventBridge scheduler, visit the pricing page.

Global endpoints

Global endpoints make it easier for you to build highly available event-driven applications on AWS. You can replicate your events across primary and secondary Regions to implement failover with minimal data loss. You can also fail over automatically to a backup Region in case of service disruptions. This simplifies the adoption of multi-Region architectures and helps you build resiliency into your event-driven applications.

Global endpoints help provide a better experience for your end-customers by minimizing the amount of data at risk during service disruptions.

You can make your event-driven applications more robust and resilient by having the ability to failover your event ingestion to a secondary Region automatically and without the need for manual intervention. You have the flexibility to configure failover criteria using Amazon CloudWatch Alarms (through Amazon Route 53 health checks) to determine when to failover and when to route events back to the primary Region.

Once you publish events to the global endpoint, the events are routed to the event bus in your primary Region. If errors are detected in the primary Region, your health check is marked as unhealthy and incoming events are routed to the secondary Region. Errors are detected using CloudWatch Alarms (through Route 53 health checks) that you specify. When the issue is mitigated, new events are routed back to the primary Region and processing continues.

Global endpoints are well suited for applications that do not require idempotency, or that can handle idempotency across Regions. They are also well suited for applications that can tolerate up to 420 seconds of events not being replicated; those events remain in the primary Region until the service or Region recovers (this is the Recovery Point Objective).

We have added a new metric that reports the end-to-end latency of EventBridge, helping you more easily determine whether there are errors within EventBridge that require failing over your event ingestion to the secondary Region.

The console makes it easier to get started by providing a pre-populated CloudFormation stack (which you can customize) for creating a CloudWatch alarm and Route 53 health checks. For more details on how to set up the alarms and health checks, see our launch blog and documentation.

We recommend against including subscriber metrics in your health check. This could cause your publisher to failover to the backup Region if a single subscriber encounters an issue, despite all other subscribers being healthy in the primary Region.

If one of your subscribers is failing to process events in the primary Region, you should turn on replication to verify that your subscriber in the secondary Region can process events successfully.

The Recovery Time Objective (RTO) is the time in which the backup Region or target will start receiving new events after a failure. The Recovery Point Objective (RPO) is the measure of the data that will remain unprocessed during a failure. With global endpoints, if you are following our prescriptive guidance for alarm configuration, the RTO and RPO will be 360 seconds (with a maximum of 420). For RTO, the time includes the time period for initiating CloudWatch Alarms and updating statuses for Route 53 health checks. For RPO, the time includes events that are not replicated to the secondary Region and are stuck in the primary Region until the service or Region recovers.

Yes. Turn on replication to minimize the data at risk during a service disruption. Once you set up your custom buses in both Regions and create the global endpoint, you can update your applications to publish your events to the global endpoint. By doing so, your incoming events will be replicated back to the primary Region once the issue is mitigated. You can archive your events in the secondary Region to verify that none of your events is lost during a disruption. To recover quickly from disruptions, you can replicate your architecture in the secondary Region to continue processing your events. You must also turn on replication to verify automatic recovery after the issue has been mitigated.

You should verify that the same quotas have been set up in your primary and secondary Regions. You should turn on replication and process your events in the secondary Region as this verifies not only that you have the right quotas, but also that your application in the secondary Region is configured correctly.

You can use AWS CloudFormation StackSets, which makes it easier to replicate your architecture across AWS Regions. For an example, refer to our documentation.

In the first iteration of the launch, opt-in, China, and GovCloud Regions are not supported; see the list of supported Regions below. Failover and recovery are supported between buses with the same name in the same account across Regions.

Global endpoints are available for custom events only. We will be adding support for events from AWS services, opt-in events from S3 (Amazon S3 Event Notifications), and third-party events in the future.

No, we are not supporting latency-based routing in the first iteration of the launch.

Global endpoints are available at no additional charge. Global endpoints are currently available for custom events only, and custom events published to the global endpoint are billed as custom events. To learn about pricing, visit the EventBridge pricing page.

Yes, you will be charged $1 per million events for replication, the same rate EventBridge charges for cross-Region events.

Global endpoints are available in the following Regions:

  • US East (Ohio and N. Virginia)
  • US West (N. California and Oregon)
  • Asia Pacific (Mumbai, Osaka, Seoul, Singapore, Sydney, and Tokyo)
  • Canada (Central)
  • Europe (Frankfurt, Ireland, London, Paris, and Stockholm)
  • South America (São Paulo)

Cost and billing

Amazon EventBridge offers flexible pay-per-use pricing. You pay only for events published to your event bus, events ingested for schema discovery, Event Replay, and API Destinations. To see examples and more pricing details for EventBridge, visit our pricing page.

No. There are no upfront costs or minimum fees; you pay only for what you use.

Architecture and design

Yes. These are called cross-account events, and you can have a target that is either the default event bus or any other event bus in another account. This can be used to centralize events from multiple accounts into a single event bus to more easily monitor and audit your events, as well as to keep data in sync between accounts.

Yes. CloudFormation support is available in all Regions where Amazon EventBridge is available. To learn more about how to use CloudFormation to provision and manage EventBridge resources, visit our documentation.

Both EventBridge and SNS can be used to develop event-driven applications, and your choice will depend on your specific needs.

Amazon EventBridge is recommended when you want to build an application that reacts to events from your own applications, SaaS applications, and AWS services. EventBridge is the only event-based service that integrates directly with third-party SaaS partners. EventBridge also automatically ingests events from over 200 AWS services without requiring developers to create any resources in their account.

EventBridge uses a defined JSON-based structure for events, and helps you create rules that are applied across the entire event body to select events to forward to a target. EventBridge currently supports over 20 AWS services as targets, including Lambda, SQS, SNS, Amazon Kinesis Data Streams, and Kinesis Data Firehose.

Amazon SNS is recommended for applications that need high fan out (thousands or millions of endpoints). A common pattern that we see is that customers use SNS as a target for their rule to filter the events that they need and fan out to multiple endpoints.

SNS messages are unstructured and can be in any format. SNS supports forwarding messages to six different types of targets, including Lambda, SQS, HTTP/S endpoints, SMS, mobile push, and email. Amazon SNS typical latency is under 30 milliseconds. A wide range of AWS services (more than 30, including Amazon EC2, Amazon S3, and Amazon RDS) can be configured to send SNS messages.

AWS AppFabric is a no-code service that enhances companies’ existing investment in software as a service (SaaS) applications with improved security, management, and productivity. Use AppFabric to aggregate and normalize SaaS log data from apps like Asana, Slack, and Zoom, as well as productivity suites such as Microsoft 365 and Google Workspace, to increase application observability and reduce the operational costs associated with building and maintaining point-to-point integrations. EventBridge is a serverless integration service that uses events to connect application components together, making it easier for developers to build scalable event-driven applications. Use EventBridge to route events from sources such as custom applications, AWS services, and third-party SaaS applications to consumer applications across the organization. EventBridge provides a simple and consistent way to ingest, filter, transform, and deliver events.

Integrations

Amazon EventBridge makes it easier for SaaS vendors to integrate their service into their customers’ event-driven architectures built on AWS.

EventBridge makes your product directly accessible to millions of AWS developers, unlocking new use cases. It offers a fully auditable, secure, and scalable pathway to send events without the SaaS vendor managing any event infrastructure.

SaaS vendors interested in becoming an EventBridge partner should follow self-service instructions at the Amazon EventBridge integrations page to begin publishing events in EventBridge.

SaaS vendors that already support a webhook or other push-based integration modes might take less than five days to integrate with EventBridge.

We support over 45 SaaS integrations; see the full list of supported SaaS integrations for Amazon EventBridge.
