Use Docker buildx in AWS CodeBuild to build multi-architecture Container Images
buildspec.yaml can be found at the end of the post.
I’ve spent the last few days building multi-architecture containers in AWS CodeBuild. There was no quick and easy guide, so I want to document my journey with this post.
Since I had worked with buildx before, I wanted to use it so I can build both images on the same host machine. I knew that GitHub Actions, for example, already offers actions to easily integrate buildx into one’s workflows.
I did not find a good introduction to using Docker buildx in CodeBuild on AWS. The only thing I found is a blog post from November 2020 in which AWS recommends multi-architecture builds using three CodeBuild environments: two are used to build the native container images on amd64 and arm64 architecture machines respectively. In the last step, a Docker manifest is created so that the same image tag can be used natively on multiple processor architectures. Everything is connected via a single CodePipeline, which only builds the manifest after both images have been created successfully.
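As a rough sketch, the manifest step of that three-environment pipeline boils down to stitching the two per-architecture tags together with `docker manifest`. The image names below are hypothetical placeholders, not the ones AWS uses:

```shell
# Combine the two architecture-specific images under one shared tag
# (image names are illustrative; older Docker versions may need
# DOCKER_CLI_EXPERIMENTAL=enabled for the manifest subcommands)
docker manifest create myrepo/app:latest \
  myrepo/app:latest-amd64 \
  myrepo/app:latest-arm64

# Push the combined manifest list to the registry
docker manifest push myrepo/app:latest
```

With buildx, as shown later in this post, this whole step disappears because `docker buildx build --push` creates and pushes the manifest list in one go.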
Buildx to the rescue
This was not something I would love to build; especially when multiple different images are created, it sounds like a huge configuration hassle to me. So I thought: why not install buildx in the CodeBuild process and build everything on the same machine, even if it’s not (yet?) officially supported in the CodeBuild Linux images. As you can see in the provided Dockerfile definitions, the AWS images use curl to download the Docker binaries. The buildx plugin is only included when Docker is installed as a DEB or RPM package.
Since this is not the case here, we will have to install it manually when the CodeBuild instance starts, using the official buildx releases on GitHub. Beware that the buildx docs tell you that a manual installation is not officially supported for production environments because there are no automatic security updates. So make sure that you keep an eye on the releases to always use the most up-to-date version. With the buildspec.yaml provided below, CodeBuild will always try to get the newest release of buildx.
You will also have to run a privileged Docker container with the binfmt package. This image installs the QEMU binaries required to emulate the different processor architectures. Since a CodeBuild host natively supports only one architecture (amd64 or arm64), the other one will need to be emulated with QEMU. Depending on the CodeBuild host you decide to use, you will have to install the emulator with --install arm64 or --install amd64.
If you want to install emulators for every supported platform, you can also use --install all.
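To check whether the emulators were registered correctly, you can bootstrap a buildx builder and look at the platforms it reports. This is a quick sanity-check sketch; the builder name is an arbitrary choice:

```shell
# Register QEMU emulators for all supported platforms
docker run --privileged --rm tonistiigi/binfmt --install all

# Create a builder and bootstrap it; the output lists the
# platforms it can build for (e.g. linux/amd64, linux/arm64, ...)
docker buildx create --use --name multiarch
docker buildx inspect --bootstrap
```

If the desired target platform shows up in the `Platforms:` line of the inspect output, the emulation setup worked.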
Quick Tip: You might run into Docker Hub’s pull rate limits when starting your builds and pulling the binfmt image from Docker Hub. So think about implementing a Docker Hub login, or hosting the image in your own registry, to make sure that your CodeBuild runs don’t fail because of this pull limit. This is annoying, I know.
The following CodeBuild buildspec.yaml will download and install buildx in the install phase before logging in to Amazon Elastic Container Registry (ECR). Afterwards, in the main build phase, we will build the container image for amd64 and arm64 and push it into our ECR repository in the same step.
```yaml
version: 0.2
phases:
  install:
    commands:
      - export BUILDX_VERSION=$(curl --silent "https://api.github.com/repos/docker/buildx/releases/latest" | jq -r .tag_name)
      - curl -JLO "https://github.com/docker/buildx/releases/download/$BUILDX_VERSION/buildx-$BUILDX_VERSION.linux-amd64"
      - mkdir -p ~/.docker/cli-plugins
      - mv "buildx-$BUILDX_VERSION.linux-amd64" ~/.docker/cli-plugins/docker-buildx
      - chmod +x ~/.docker/cli-plugins/docker-buildx
      - docker run --privileged --rm tonistiigi/binfmt --install arm64
      # To install all the supported platforms:
      # - docker run --privileged --rm tonistiigi/binfmt --install all
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker buildx create --use --name multiarch
      - docker buildx build --push --platform=linux/amd64,linux/arm64 -t $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG .
```
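Note that this buildspec expects the environment variables `AWS_ACCOUNT_ID`, `IMAGE_REPO_NAME`, and `IMAGE_TAG` to be set on the project, and the build environment must run in privileged mode so the binfmt container can register the QEMU handlers. As a sketch, assuming a hypothetical project name `my-multiarch-build`, privileged mode can be enabled like this:

```shell
# Enable privileged mode on an existing CodeBuild project
# (project name, image, and compute type are placeholders)
aws codebuild update-project \
  --name my-multiarch-build \
  --environment "type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true"
```

The same setting is available as a checkbox ("Privileged") in the CodeBuild console when editing the build environment.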
Thank you for reading,
Creating multi-architecture Docker images to support Graviton2 using AWS CodeBuild and AWS CodePipeline ↩︎
aws-codebuild-docker-images/Dockerfile at master · aws/aws-codebuild-docker-images · GitHub ↩︎