How can I access other AWS services from my Amazon ECS tasks on Fargate?


I want to access other AWS services from my Amazon Elastic Container Service (Amazon ECS) tasks on AWS Fargate.

Short description

When they call AWS APIs, containerized applications must sign the API requests with AWS credentials. For an Amazon ECS task, use the AWS Identity and Access Management (IAM) task role to sign API requests with AWS credentials. Then, associate the IAM role with an Amazon ECS task definition or a RunTask API operation. After you do this, your containers can use the AWS SDK or AWS Command Line Interface (AWS CLI) to make API requests to authorized AWS services.

Note: If you receive errors when you run AWS CLI commands, make sure that you use the most recent version of the AWS CLI.
In this article, the example resolution is for an application that runs on Fargate and that must access Amazon Simple Storage Service (Amazon S3).
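When a task role is attached, the Amazon ECS agent makes temporary credentials for that role available to your containers through the container credentials endpoint, and the AWS SDKs and AWS CLI pick them up automatically. As a quick check from inside a running container (this assumes that curl is available in your image), you can query the endpoint directly:

# The agent sets this environment variable for tasks that have a task role
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI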

Resolution

Prerequisites

  • Identify the AWS service that your Fargate tasks must access. Then, create an IAM role and specify the policy with the required actions to make the API calls inside the containers.
  • Create a task definition for your application containers, and then use the taskRoleArn IAM parameter to specify the IAM role for your tasks.

Create an IAM policy and role for your tasks

1.    Create an Amazon S3 bucket to store your data. The bucket name must be unique and follow Amazon S3 bucket requirements for bucket names. For more information, see Bucket naming rules.
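For example, the following AWS CLI command creates the bucket. The bucket name kc-test-fargate-app-bucket and the eu-west-1 Region are placeholders that this article uses; replace them with your own values:

aws s3api create-bucket \
  --bucket kc-test-fargate-app-bucket \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1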

2.    Create an IAM policy and role for your tasks. In this example, the application is required to put objects into an S3 bucket and then list those objects:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "S3PutGEList",
    "Effect": "Allow",
    "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucketMultipartUploads", "s3:ListBucketVersions", "s3:ListBucket", "s3:ListMultipartUploadParts"],
    "Resource": ["arn:aws:s3:::*/*", "arn:aws:s3:::kc-test-fargate-app-bucket"]
  }]
}

Note: Replace kc-test-fargate-app-bucket with the name of your S3 bucket.
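Your tasks can use the role only if the role trusts the Amazon ECS tasks service principal. The following is a minimal sketch with the AWS CLI that creates the role used later in this article (s3-access-role). It assumes that the policy above is saved as s3-access-policy.json and that the trust policy is saved as ecs-tasks-trust.json; both file names are placeholders.

The trust policy (ecs-tasks-trust.json):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "ecs-tasks.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}

Create the role, and then attach the permissions policy:

aws iam create-role \
  --role-name s3-access-role \
  --assume-role-policy-document file://ecs-tasks-trust.json

aws iam put-role-policy \
  --role-name s3-access-role \
  --policy-name S3PutGetList \
  --policy-document file://s3-access-policy.json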

Create a task definition for your application and specify the IAM role for your tasks

To assign the role when you create a task definition, use the taskRoleArn parameter:

{
  "containerDefinitions": [{
    "name": "sample-s3-access",
    "image": "public.ecr.aws/aws-cli/aws-cli:latest",
    "memory": 1024,
    "cpu": 512,
    "command": ["s3api", "put-object", "--bucket", "fargate-app-bucket", "--key", "/usr/local/bin/aws"],
    "essential": true
  }],
  "memory": "1024",
  "cpu": "512",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "runtimePlatform": {
    "operatingSystemFamily": "LINUX"
  },
  "family": "s3_access-WITH-ROLE",
  "taskRoleArn": "arn:aws:iam::aws_account_id:role/s3-access-role"
}

Note: Because the base image (public.ecr.aws/aws-cli/aws-cli:latest) includes the AWS CLI, this application can make the API call.

Save the configuration information into a file, and then use the register-task-definition command to register the task definition:

aws ecs register-task-definition --cli-input-json file://task-def1.json --region eu-west-1

Create and run a standalone task

To run a standalone task, use the Fargate launch type. In this example, the container runs the command and then exits.

After the container runs the command, the task returns ExitCode=0 if the taskRoleArn has the required permissions to make the API calls. If the taskRoleArn is missing or has insufficient permissions, then the task returns a nonzero exit code.
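For example, a run-task call similar to the following starts the task. The cluster name, subnet, security group, and task ID are placeholders for your own resources:

aws ecs run-task \
  --cluster my-fargate-cluster \
  --launch-type FARGATE \
  --task-definition s3_access-WITH-ROLE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234],assignPublicIp=ENABLED}" \
  --region eu-west-1

# After the task stops, check the container exit code
aws ecs describe-tasks \
  --cluster my-fargate-cluster \
  --tasks task-id \
  --query "tasks[].containers[].exitCode" \
  --region eu-west-1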

Create a service

Note: For the service to reach a steady state, your task process can't exit right after it starts. In the previous example, the container exits after the command completes, so that example isn't suitable to run as part of a service.

1.    Create a bash script that runs a loop, writes the current date and hostname to a file, and then pushes the file to the Amazon S3 bucket.

In the example below, the bash script is named "run-s3-test.sh" to match the file name that the Dockerfile copies.

#!/bin/bash

while true; do
  TODAY=$(date)
  echo "-----------------------------------------------------"
  echo "Date: $TODAY Host: $HOSTNAME"
  echo "File was added and active on these dates: $TODAY from Host: $HOSTNAME" >> checkfile.txt
  echo "--------------------Add to S3------------------------"
  aws s3 cp checkfile.txt s3://kc-test-fargate-app-bucket
  status_code=$?
  echo "------------Get upload StatusCode=$status_code (0 means that the upload succeeded)---------------"
  #[ $status_code -eq 0 ] || exit 1 #uncomment this if you want the task to stop when the upload fails
  echo "------------Also list the files in the S3 bucket---------------"
  aws s3 ls s3://kc-test-fargate-app-bucket
  status_code=$?
  echo "------------Get the status_code=$status_code after listing objects in bucket---------------"
  #[ $status_code -eq 0 ] || exit 1 #uncomment this if you want the task to stop when the listing fails
  echo "============================================================================="
  sleep 5

  #check the user or role that made the call
  aws sts get-caller-identity
  echo "*****End of loop, restarting"
  sleep 10
done
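If you create the script on your local machine, make sure that the script is executable before you build the image, because the Dockerfile runs the script directly:

chmod +x run-s3-test.sh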

2.    To build a new image that adds the script and runs it, create a Dockerfile:

FROM public.ecr.aws/amazonlinux/amazonlinux:latest
# Install the AWS CLI v2, then copy the script into the image
RUN yum -y install unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
RUN rm -rf aws*
COPY run-s3-test.sh /
CMD ./run-s3-test.sh

3.    To build the image locally, run the following command:

$ docker build -t test-awscli:amz-build-scripts .

4.    Push the image to Amazon Elastic Container Registry (Amazon ECR). Add the image to the task definition that you use to create the service. For more information, see Pushing an image.
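For example, assuming that an Amazon ECR repository named test-s3-ecs already exists in eu-central-1 (replace aws_account_id and the Region with your own values), the push looks similar to the following:

aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.eu-central-1.amazonaws.com

docker tag test-awscli:amz-build-scripts aws_account_id.dkr.ecr.eu-central-1.amazonaws.com/test-s3-ecs:amzlin-build-scripts

docker push aws_account_id.dkr.ecr.eu-central-1.amazonaws.com/test-s3-ecs:amzlin-build-scripts

The following task definition references the pushed image and adds a container health check and an awslogs log configuration: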

{
  "containerDefinitions": [{
    "name": "add-files-to-s3",
    "image": "aws_account_id.dkr.ecr.eu-central-1.amazonaws.com/test-s3-ecs:amzlin-build-scripts",
    "memory": 1024,
    "cpu": 512,
    "healthCheck": {
      "retries": 3,
      "command": ["CMD-SHELL", "aws s3 ls s3://kc-test-fargate-app-bucket || exit 1"],
      "timeout": 5,
      "interval": 10,
      "startPeriod": 5
    },
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "/ecs/test-s3-script",
        "awslogs-region": "eu-central-1",
        "awslogs-create-group": "true",
        "awslogs-stream-prefix": "ecs"
      }
    },
    "essential": true
  }],
  "memory": "1024",
  "cpu": "512",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "runtimePlatform": {
    "operatingSystemFamily": "LINUX"
  },
  "family": "test-s3-script",
  "taskRoleArn": "arn:aws:iam::aws_account_id:role/s3-access-role",
  "executionRoleArn": "arn:aws:iam::aws_account_id:role/ecsTaskExecutionRole"
}

Note: You might receive "Access Denied" errors when you use IAM task roles for your containers. For more information, see How can I configure IAM task roles in Amazon ECS to avoid "Access Denied" errors?
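After you register this task definition, create the service. The following is a minimal sketch with the AWS CLI; task-def2.json, the cluster name, subnets, and security group are placeholders for your own values:

aws ecs register-task-definition --cli-input-json file://task-def2.json --region eu-central-1

aws ecs create-service \
  --cluster my-fargate-cluster \
  --service-name s3-script-service \
  --task-definition test-s3-script \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-abcd1234],securityGroups=[sg-abcd1234],assignPublicIp=ENABLED}" \
  --region eu-central-1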

Related information

Creating a service using the console
