Wednesday, September 28, 2022

CI/CD pipeline for an ECS application


Any modern application today should have an automated deployment process. Usually it's set up via a webhook; other times we need to trigger the deployment manually, sometimes even requiring more than one person to approve.
In this article we'll learn how to build a CI/CD pipeline for an ECS Fargate application using the AWS CodePipeline, CodeBuild and CodeDeploy services.
By the end of this article, we will be able to automatically deploy a new version of our application by simply pushing new commits to the master branch of a GitHub repository.



Starting point

[Diagram: load-balanced ECS Fargate architecture]
As you can see in the diagram, we're starting off with a load-balanced Fargate cluster inside a VPC. The application is secured with an SSL certificate on a custom domain.
Download the code from the previous article about adding an SSL certificate to a Fargate app if you'd like to follow along.
Keep in mind you'll need an SSL certificate and a hosted zone to get this to work.
If you'd like to see the finished code for this article, you can find it on GitHub.



Configuring GitHub Credentials

Starting off, we need to define the source of our application's source code. In this case, we will use GitHub.

// lib/pipeline.stack.ts

interface Props extends StackProps {}
const secretConfig = {
  arn: "arn:aws:secretsmanager:eu-central-1:145719986153:secret:github/token",
  id: "github/token",
}
export class PipelineStack extends Stack {
  constructor(scope: Construct, id: string, private readonly props: Props) {
    super(scope, id, props)
    new GitHubSourceCredentials(this, "code-build-credentials", {
      accessToken: SecretValue.secretsManager(secretConfig.id),
    })
  }
}

As you can see, we're loading a GitHub token from Secrets Manager. Here's a quick guide on how to generate and store a personal access token.
This piece of code tells CodeBuild what credentials to use when talking to the GitHub API. It's important to note that AWS allows only one GitHub credential per account per region.



Code Source

Now that CodeBuild is allowed to talk to GitHub on our behalf, we can define where it should take the code from.

// lib/pipeline.stack.ts
const githubConfig = {
  owner: "exanubes",
  repo: "ecs-fargate-ci-cd-pipeline",
  branch: "master",
}

const source = Source.gitHub({
  owner: githubConfig.owner,
  repo: githubConfig.repo,
  webhook: true,
  webhookFilters: [
    FilterGroup.inEventOf(EventAction.PUSH).andBranchIs(githubConfig.branch),
  ],
})

This is how we subscribe to a GitHub webhook that will trigger an event every time somebody pushes to the master branch in the ecs-fargate-ci-cd-pipeline repo that belongs to exanubes.



Build spec

// lib/pipeline.stack.ts

private getBuildSpec() {
    return BuildSpec.fromObject({
        version: '0.2',
        env: {
            shell: 'bash'
        },
        phases: {
            pre_build: {
                commands: [
                    'echo logging in to AWS ECR',
                    'aws --version',
                    'echo $AWS_STACK_REGION',
                    'echo $CONTAINER_NAME',
                    'aws ecr get-login-password --region ${AWS_STACK_REGION} | docker login --username AWS --password-stdin ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_STACK_REGION}.amazonaws.com',
                    'COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)',
                    'echo $COMMIT_HASH',
                    'IMAGE_TAG=${COMMIT_HASH:=latest}',
                    'echo $IMAGE_TAG'
                ],
            },
            build: {
                commands: [
                    'echo Build started on `date`',
                    'echo Build Docker image',
                    'docker build -f ${CODEBUILD_SRC_DIR}/backend/Dockerfile -t ${REPOSITORY_URI}:latest ./backend',
                    'echo Running "docker tag ${REPOSITORY_URI}:latest ${REPOSITORY_URI}:${IMAGE_TAG}"',
                    'docker tag ${REPOSITORY_URI}:latest ${REPOSITORY_URI}:${IMAGE_TAG}'
                ],
            },
            post_build: {
                commands: [
                    'echo Build completed on `date`',
                    'echo Push Docker image',
                    'docker push ${REPOSITORY_URI}:latest',
                    'docker push ${REPOSITORY_URI}:${IMAGE_TAG}',
                    'printf "[{\\"name\\": \\"$CONTAINER_NAME\\", \\"imageUri\\": \\"$REPOSITORY_URI:$IMAGE_TAG\\"}]" > docker_image_definition.json'
                ]
            }
        },
        artifacts: {
            files: ['docker_image_definition.json']
        },
    })
}

Plenty of cool things happening here. First of all, the build spec is a set of instructions for CodeBuild on how it should handle building our project. AWS provides us with life cycle hooks so we can decide at what point in the build process certain actions should happen. The version key/value pair defines the version of the build spec API; 0.2 is the latest version at the time of writing.

Starting out, we have a pre_build life cycle hook; here we're checking the environment, logging into docker with ECR credentials and creating an image tag from the commit sha.

Moving over to the build stage of the life cycle, it's time to build the Docker image. The repository I'm using has both backend and infrastructure code in it, so we have to define the path to the Dockerfile as well as the execution context, both of which are inside the /backend directory.
Then we just tag it and move over to our post_build commands.

Now we take the REPOSITORY_URI variable and the IMAGE_TAG generated in the pre_build step and push our image under both the latest tag and our own generated tag in order to keep track of our image versions. Last but not least, we generate an image definitions file that holds the name of our container and the image URI.
This is necessary to substitute our Task Definition's image with the new one, and it will also be the artifact/output of CodeBuild.



Build config

// lib/pipeline.stack.ts

const stack = Stack.of(this)
const buildSpec = this.getBuildSpec()

const project = new Project(this, "project", {
  projectName: "pipeline-project",
  buildSpec,
  source,
  environment: {
    buildImage: LinuxBuildImage.AMAZON_LINUX_2_ARM_2,
    privileged: true,
  },
  environmentVariables: {
    REPOSITORY_URI: {
      value: props.repository.repositoryUri,
    },
    AWS_ACCOUNT_ID: {
      value: stack.account,
    },
    AWS_STACK_REGION: {
      value: stack.region,
    },
    GITHUB_AUTH_TOKEN: {
      type: BuildEnvironmentVariableType.SECRETS_MANAGER,
      value: secretConfig.arn,
    },
    CONTAINER_NAME: {
      value: props.container.containerName,
    },
  },
})

Since we know where to look for the application source code and we know how to build the app, we can now create a CodeBuild project to put this all together.

Our Fargate application is running on the ARM64 architecture and unfortunately the version of docker in CodeBuild does not support defining a platform when building an image.
That is why we have to explicitly set the buildImage to a Linux ARM64 platform in the environment settings.

In the buildspec we're using quite a few environment variables, and this is where we set them. Some of them come from other stacks; we also have to use the GitHub secret arn that we created at the beginning, as well as some stack config.
For this, we're going to use Stack.of(this) to get the scope of our current stack.

We also have to update our Props interface as we're relying on components from other stacks.

// lib/pipeline.stack.ts
interface Props extends StackProps {
  repository: IRepository
  service: IBaseService
  cluster: ICluster
  container: ContainerDefinition
}



Permissions

As with everything in AWS, CodeBuild needs the relevant permissions to be able to interface with other services.

// lib/pipeline.stack.ts

project.addToRolePolicy(
  new PolicyStatement({
    actions: ["secretsmanager:GetSecretValue"],
    resources: [secretConfig.arn],
  })
)
props.repository.grantPullPush(project.grantPrincipal)

First off, we have to give the CodeBuild project access to our GitHub access token in Secrets Manager.
It is also going to pull and push images to and from the ECR repository.
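Under the hood, grantPullPush attaches an IAM policy to the project's role. It covers roughly the following statements; this is an approximation for illustration, not the exact policy CDK generates, and the repository ARN is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ],
      "Resource": "arn:aws:ecr:eu-central-1:145719986153:repository/your-repo"
    }
  ]
}
```

GetAuthorizationToken is account-level, which is why it needs a wildcard resource while the pull/push actions are scoped to the repository.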



Defining pipeline actions

We can finally start creating the different actions for our pipeline. We will have a source action that downloads code from GitHub, a build action for building the docker image and pushing it to the ECR repository, and to finish it off a deploy action that takes the output of the build action (the BuildOutput artifact) and uses it to deploy a new version of our application on ECS.

// lib/pipeline.stack.ts
const artifacts = {
  source: new Artifact("Source"),
  build: new Artifact("BuildOutput"),
}
const pipelineActions = {
  source: new GitHubSourceAction({
    actionName: "Github",
    owner: githubConfig.owner,
    repo: githubConfig.repo,
    branch: githubConfig.branch,
    oauthToken: SecretValue.secretsManager("github/cdk-pipeline"),
    output: artifacts.source,
    trigger: GitHubTrigger.WEBHOOK,
  }),
  build: new CodeBuildAction({
    actionName: "CodeBuild",
    project,
    input: artifacts.source,
    outputs: [artifacts.build],
  }),
  deploy: new EcsDeployAction({
    actionName: "ECSDeploy",
    service: props.service,
    imageFile: new ArtifactPath(
      artifacts.build,
      "docker_image_definition.json"
    ),
  }),
}

Let's start by defining our artifacts, the outputs of particular actions in the pipeline, which will be stored in CodePipeline's S3 bucket. Each subsequent action uses the previous action's artifact, so this is important. In this pipeline we only have two artifacts; the first one will be the code downloaded from the repository.
The source artifact will be the input for the build action and, as per the build spec, we will output the image definition file (docker_image_definition.json) as the build output artifact.
That is then used in the deploy action, where we pass the file to be used during deployment.



Defining pipeline stages

Now that we have everything set up, we can finally define the pipeline stages and assign the appropriate actions to them.

// lib/pipeline.stack.ts
const pipeline = new Pipeline(this, "DeployPipeline", {
  pipelineName: `exanubes-pipeline`,
  stages: [
    { stageName: "Source", actions: [pipelineActions.source] },
    { stageName: "Build", actions: [pipelineActions.build] },
    { stageName: "Deploy", actions: [pipelineActions.deploy] },
  ],
})

In this simple example we have just one action per stage, but nothing's stopping us from adding another action that would run integration tests in the Build stage, for example.



Deployment

Now that our pipeline is ready, there are still a few things we need to sort out.
Because our pipeline needs environment variables such as account id and region, we actually have to explicitly pass these to our stacks.

// bin/infrastructure.stack.ts
import {
  getAccountId,
  getRegion,
  resolveCurrentUserOwnerName,
} from "@exanubes/cdk-utils"

async function start(): Promise<void> {
  const owner = await resolveCurrentUserOwnerName()
  const account = await getAccountId()
  const region = await getRegion()
  const env: Environment = { account, region }
  const app = new cdk.App()
  const ecr = new EcrStack(app, EcrStack.name, { env })
  const vpc = new VpcStack(app, VpcStack.name, { env })
  const ecs = new ElasticContainerStack(app, ElasticContainerStack.name, {
    vpc: vpc.vpc,
    repository: ecr.repository,
    env,
  })
  new Route53Stack(app, Route53Stack.name, {
    loadBalancer: ecs.loadBalancer,
    env,
  })
  new PipelineStack(app, PipelineStack.name, {
    repository: ecr.repository,
    service: ecs.service,
    cluster: ecs.cluster,
    container: ecs.container,
    env,
  })
  Tags.of(app).add("owner", owner)
}

start().catch(error => {
  console.log(error)
  process.exit(1)
})

Here we're getting the account and region from utility functions I've imported from @exanubes/cdk-utils, which use AWS SDK v3.
Then I pass them to every stack, as errors often occur when one stack has explicitly set env and others don't, so I recommend passing it to all stacks; these errors can be quite difficult to debug.

It's important to note that we're also passing repository, service, cluster and container to the pipeline stack as it relies on references to those resources.
Do make sure they're available on your ElasticContainerStack instance.

// lib/elastic-container.stack.ts
public readonly container: ContainerDefinition
public readonly service: FargateService
public readonly cluster: Cluster

I've also added an owner tag, your account name, to every resource for good measure.

Now we can build and deploy:

npm run build && npm run cdk:deploy -- --all

Keep in mind you'll have to upload a first image to ECR, otherwise the deployment will hang on ElasticContainerStack. This can be done during the deployment, as stacks are deployed in sequence and ECR is the first one.
After deploying the infrastructure, try making changes in the app.service.ts file and see if the automated deployment works when you push to your repository.

Don't forget to destroy the stack when you're finished:

npm run cdk:destroy -- --all



Summary

[Diagram: load-balanced ECS Fargate with CI/CD pipeline]

In this article we were able to take our existing application infrastructure and add a CI/CD pipeline to it. First we defined how CodeBuild should talk to the GitHub API by defining credentials.
Then we created a buildspec that instructs CodeBuild on how to build our application using the different life cycle hooks of the process.
Last but not least, we output a BuildOutput artifact that is then used by CodeDeploy to update our app.
