How can I push CloudWatch logs across accounts to Kinesis Data Firehose?


I want to stream Amazon CloudWatch logs through Amazon Kinesis Data Firehose to another AWS account in a different AWS Region.

Short description

To send CloudWatch logs to a Kinesis Data Firehose stream in a different Region, confirm that the destination Region supports Kinesis Data Firehose.

To use Kinesis Data Firehose to stream logs in other accounts and supported Regions, complete the following steps:

  1. Create an Amazon Simple Storage Service (Amazon S3) bucket in the destination account. Create an AWS Identity and Access Management (IAM) role. Then, attach the required permissions for Kinesis Data Firehose to push data to Amazon S3.
  2. Create a destination for Kinesis Data Firehose in the destination account. Create an IAM role for CloudWatch Logs to push data to Kinesis Data Firehose. Then, create a destination delivery stream to push the logs to. 
  3. Turn on Amazon Virtual Private Cloud (Amazon VPC) Flow Logs, and then push the logs to CloudWatch for the source account.
  4. Create a subscription filter in the source account that points to the destination account.
  5. Validate the flow of log events in the S3 bucket that's in the destination account.

Resolution

Note: If you receive errors when running AWS Command Line Interface (AWS CLI) commands, make sure that you're using the most recent AWS CLI version.

This resolution uses the following example values that you must replace with your own values:

  • Destination account: 111111111111
  • Kinesis Data Firehose Region: us-east-1
  • S3 bucket Region: us-west-2
  • Destination Region (receiving logs from source account): us-east-2
  • Source account (where the VPC flow logs are located): 222222222222
  • Amazon CloudWatch log group Region: us-east-2
  • VPC Flow Logs Region: us-east-2

Set up the destination account

1.    Create an S3 bucket.

aws s3api create-bucket --bucket my-bucket --create-bucket-configuration LocationConstraint=us-west-2 --region us-west-2

The location constraint indicates that the bucket is created in the us-west-2 Region.

2.    Create a trust policy in a JSON file (for example, TrustPolicyForFirehose.json) that grants Kinesis Data Firehose permission to assume a role:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {
      "Service": "firehose.amazonaws.com"
    },
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringEquals": {
        "sts:ExternalId": "111111111111"
      }
    }
  }
}

The permissions settings must allow Kinesis Data Firehose to put data into the S3 bucket that you created. 

3.    Create the IAM role, and specify the trust policy file:

aws iam create-role \
    --role-name FirehosetoS3Role \
    --assume-role-policy-document file://~/TrustPolicyForFirehose.json

Note the Role_Arn value to use in a later step.

4.    Create a permissions policy in a JSON file to define the actions that Kinesis Data Firehose can perform in the destination account:

{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:AbortMultipartUpload",
          "s3:GetBucketLocation",
          "s3:GetObject",
          "s3:ListBucket",
          "s3:ListBucketMultipartUploads",
          "s3:PutObject"
        ],
        "Resource": [
          "arn:aws:s3:::my-bucket",
          "arn:aws:s3:::my-bucket/*"
        ]
      }
    ]
  }

5.    Associate the permissions policy with the IAM role:

aws iam put-role-policy --role-name FirehosetoS3Role --policy-name Permissions-Policy-For-Firehose --policy-document file://~/PermissionsForFirehose.json

6.    Create a destination delivery stream for Kinesis Data Firehose:

aws firehose create-delivery-stream --delivery-stream-name 'my-delivery-stream' --s3-destination-configuration RoleARN='arn:aws:iam::111111111111:role/FirehosetoS3Role',BucketARN='arn:aws:s3:::my-bucket' --region us-east-1

Replace RoleARN and BucketARN with the role and bucket ARNs that you created.

Note: When Kinesis Data Firehose delivers an object to Amazon S3, it prefixes the object key with a UTC time format (yyyy/MM/dd/HH/). You can specify an extra custom prefix in front of the time format prefix. If the prefix ends with a forward slash (/), then it appears as a folder in the S3 bucket.
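To illustrate how the resulting object keys look, here is a short sketch of the yyyy/MM/dd/HH/ expansion for a given UTC delivery time. The function name and the example custom prefix are illustrative, not part of any AWS API:

```python
from datetime import datetime, timezone

def s3_key_prefix(custom_prefix: str, ts: datetime) -> str:
    """Sketch of the folder-style key prefix Kinesis Data Firehose builds:
    an optional custom prefix followed by the yyyy/MM/dd/HH/ timestamp."""
    return custom_prefix + ts.strftime("%Y/%m/%d/%H/")

# Example: an object delivered at 2020-11-04 09:30 UTC
print(s3_key_prefix("vpc-flow-logs/", datetime(2020, 11, 4, 9, 30, tzinfo=timezone.utc)))
# vpc-flow-logs/2020/11/04/09/
```

Because the prefix ends with a forward slash, the Amazon S3 console shows vpc-flow-logs/ and each date component as nested folders.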

7.     Run the describe-delivery-stream command to check the DeliveryStreamDescription.DeliveryStreamStatus property:

aws firehose describe-delivery-stream --delivery-stream-name "my-delivery-stream" --region us-east-1

Check the describe-delivery-stream command output to confirm that the stream is active:

{
  "DeliveryStreamDescription": {
    "DeliveryStreamType": "DirectPut",
    "HasMoreDestinations": false,
    "DeliveryStreamEncryptionConfiguration": {
      "Status": "DISABLED"
    },
    "VersionId": "1",
    "CreateTimestamp": 1604484348.804,
    "DeliveryStreamARN": "arn:aws:firehose:us-east-1:111111111111:deliverystream/my-delivery-stream",
    "DeliveryStreamStatus": "ACTIVE",
    "DeliveryStreamName": "my-delivery-stream",
    "Destinations": [
      {
        "DestinationId": "destinationId-000000000001",
        "ExtendedS3DestinationDescription": {
          "RoleARN": "arn:aws:iam::111111111111:role/FirehosetoS3Role",
          "BufferingHints": {
            "IntervalInSeconds": 300,
            "SizeInMBs": 5
          },
          "EncryptionConfiguration": {
            "NoEncryptionConfig": "NoEncryption"
          },
          "CompressionFormat": "UNCOMPRESSED",
          "S3BackupMode": "Disabled",
          "CloudWatchLoggingOptions": {
            "Enabled": false
          },
          "BucketARN": "arn:aws:s3:::my-bucket"
        },
        "S3DestinationDescription": {
          "RoleARN": "arn:aws:iam::111111111111:role/FirehosetoS3Role",
          "BufferingHints": {
            "IntervalInSeconds": 300,
            "SizeInMBs": 5
          },
          "EncryptionConfiguration": {
            "NoEncryptionConfig": "NoEncryption"
          },
          "CompressionFormat": "UNCOMPRESSED",
          "CloudWatchLoggingOptions": {
            "Enabled": false
          },
          "BucketARN": "arn:aws:s3:::my-bucket"
        }
      }
    ]
  }
}

Note the DeliveryStreamDescription.DeliveryStreamARN value to use in a later step.
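If you script this check rather than reading the output by hand, you can parse the command's JSON output. A minimal sketch, using a sample abbreviated from the output above:

```python
import json

# Abbreviated sample of the describe-delivery-stream output shown above
output = json.loads("""
{
  "DeliveryStreamDescription": {
    "DeliveryStreamStatus": "ACTIVE",
    "DeliveryStreamARN": "arn:aws:firehose:us-east-1:111111111111:deliverystream/my-delivery-stream"
  }
}
""")

desc = output["DeliveryStreamDescription"]
# Stop early if the stream isn't ready to receive data yet
assert desc["DeliveryStreamStatus"] == "ACTIVE", "delivery stream is not active yet"
print(desc["DeliveryStreamARN"])
```

In practice you would feed the real command output into the script (for example, via a subprocess call or a pipe) instead of the embedded sample.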

8.    Create a trust policy in a JSON file (for example, TrustPolicyForCWL.json) that grants CloudWatch Logs permission to put data into the Kinesis Data Firehose stream. Make sure to add the Regions where the logs are pushed:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {
      "Service": "logs.us-east-2.amazonaws.com"
    },
    "Action": "sts:AssumeRole",
    "Condition": {
      "StringLike": {
        "aws:SourceArn": [
          "arn:aws:logs:region:sourceAccountId:*",
          "arn:aws:logs:region:recipientAccountId:*"
        ]
      }
    }
  }
}

9.    To create the IAM role and specify the trust policy file, run the create-role command:

aws iam create-role \
    --role-name CWLtoKinesisFirehoseRole \
    --assume-role-policy-document file://~/TrustPolicyForCWL.json

Note the returned Role_Arn value to use in a later step.

10.    Create a permissions policy to define the actions that CloudWatch Logs can perform in the destination account. Use the DeliveryStreamDescription.DeliveryStreamARN and Role_Arn values that you noted from previous steps.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "firehose:ListDeliveryStreams",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111111111111:role/CWLtoKinesisFirehoseRole"
        },
        {
            "Effect": "Allow",
            "Action": [
                "firehose:DescribeDeliveryStream",
                "firehose:PutRecord",
                "firehose:PutRecordBatch"
            ],
            "Resource": "arn:aws:firehose:us-east-1:111111111111:deliverystream/my-delivery-stream"
        }
    ]
}

11.    Run the put-role-policy command to associate the permissions policy with the role:

aws iam put-role-policy --role-name CWLtoKinesisFirehoseRole --policy-name Permissions-Policy-For-CWL --policy-document file://~/PermissionsForCWL.json

12.    Use the put-destination API call to create a destination in the destination account. This is the destination that the source account is sending all the logs to:

aws logs put-destination --destination-name "myDestination" --target-arn "arn:aws:firehose:us-east-1:111111111111:deliverystream/my-delivery-stream" --role-arn "arn:aws:iam::111111111111:role/CWLtoKinesisFirehoseRole" --region us-east-2

Note: You can create a destination for the delivery stream in any Region where Kinesis Data Firehose is supported. However, the Region where you create the destination must be the same as the log source Region.

13.    Create an access policy for the CloudWatch destination:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "222222222222"
      },
      "Action": "logs:PutSubscriptionFilter",
      "Resource": "arn:aws:logs:us-east-2:111111111111:destination:myDestination"
    }
  ]
}

14.    Associate the access policy with the CloudWatch destination:

aws logs put-destination-policy --destination-name "myDestination" --access-policy file://~/AccessPolicy.json --region us-east-2

15.    To verify the destination, run the following command:

aws logs describe-destinations --region us-east-2
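As with the delivery stream, you can verify the destination programmatically by parsing the command's JSON output. A sketch, using a sample abbreviated to the fields this walkthrough uses (field names follow the DescribeDestinations response shape):

```python
import json

# Abbreviated sample of the describe-destinations output
output = json.loads("""
{
  "destinations": [
    {
      "destinationName": "myDestination",
      "arn": "arn:aws:logs:us-east-2:111111111111:destination:myDestination",
      "targetArn": "arn:aws:firehose:us-east-1:111111111111:deliverystream/my-delivery-stream"
    }
  ]
}
""")

# Confirm that the destination created above exists in us-east-2
names = [d["destinationName"] for d in output["destinations"]]
assert "myDestination" in names, "destination was not created"
```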

Set up the source account

Note: You must be the IAM admin user or root user of the source account.

1.    Create an IAM role and trust policy to grant VPC Flow Logs the permissions to send data to the CloudWatch log group:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "vpc-flow-logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

2.    To create the IAM role and specify the trust policy file that you created, run the following command:

aws iam create-role --role-name PublishFlowLogs --assume-role-policy-document file://~/TrustPolicyForVPCFlowLogs.json

Note the returned ARN value to pass on to VPC Flow Logs in a later step.

3.    Create a permissions policy to define the actions that VPC Flow Logs can perform in the source account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}

4.    To associate the permissions policy with the IAM role, run the following command:

aws iam put-role-policy --role-name PublishFlowLogs --policy-name Permissions-Policy-For-VPCFlowLogs --policy-document file://~/PermissionsForVPCFlowLogs.json

5.    Create a CloudWatch log group to configure the destination for the VPC flow logs:

aws logs create-log-group --log-group-name vpc-flow-logs --region us-east-2

6.    To turn on VPC Flow Logs, run the following command:

aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-12345678 --traffic-type ALL --log-group-name vpc-flow-logs --deliver-logs-permission-arn arn:aws:iam::222222222222:role/PublishFlowLogs --region us-east-2

Note: Replace the --resource-ids and --deliver-logs-permission-arn values with your VPC ID and VPC Flow Logs role.

7.    Subscribe the CloudWatch log group to Kinesis Data Firehose in the destination account:

aws logs put-subscription-filter --log-group-name "vpc-flow-logs" --filter-name "AllTraffic" --filter-pattern "" --destination-arn "arn:aws:logs:us-east-2:111111111111:destination:myDestination" --region us-east-2

Update the --destination-arn value, and replace 111111111111 with the destination account number.

8.    Check the S3 bucket to confirm that the logs are published.
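CloudWatch Logs delivers subscription data to Kinesis Data Firehose gzip-compressed, so the objects that land in the S3 bucket must be decompressed before you can read the log events. A minimal sketch of decoding one downloaded object body; the envelope fields below follow the CloudWatch Logs subscription format, but the sample payload itself is fabricated for illustration:

```python
import gzip
import json

# Fabricated example of the payload inside one delivered S3 object:
# CloudWatch Logs wraps events in a JSON envelope, then gzips it.
envelope = {
    "messageType": "DATA_MESSAGE",
    "logGroup": "vpc-flow-logs",
    "logStream": "eni-0123456789abcdef0-all",
    "logEvents": [
        {"id": "1", "timestamp": 1604484348804, "message": "example flow log record"}
    ],
}
raw_object = gzip.compress(json.dumps(envelope).encode())  # stand-in for the S3 object body

# Decompress and parse, as you would after downloading the object
decoded = json.loads(gzip.decompress(raw_object))
for event in decoded["logEvents"]:
    print(event["message"])
```

In a real check, raw_object would be the bytes downloaded from the S3 bucket (for example, with aws s3 cp) rather than the fabricated sample above.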

Related information

DeliveryStreamDescription

AWS OFFICIAL