Colin Wilson

09 July 2020, 7:21 AM

Using Google Container Registry with Kubernetes


So GitLab’s container registry went down yesterday. Even though I consider GitLab’s service reliable, it reminded me that I really should practice what I preach and set up an additional private image registry for the sake of redundancy. Google’s Container Registry was the first one that came to mind.

However, I discovered that pushing/pulling container images to/from Google’s Container Registry (GCR) isn’t as straightforward as with other registries. Google doesn’t provide a simple password token for authentication with the docker login command. This tutorial covers how to configure authentication with GCR from Kubernetes or Docker.


Step 1 - Enable the Container Registry API

Run the following command to enable the Container Registry API on your project.

gcloud services enable containerregistry.googleapis.com
Operation "operations/acf.0d1c1d9e-2649-4adb-8eaa-8690626d7834" finished successfully.
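You can optionally confirm the API is now enabled:

```shell
# List enabled services, filtered to the Container Registry API
gcloud services list --enabled --filter="name:containerregistry.googleapis.com"
```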

Step 2 - Create a new Service Account

Google uses Service Accounts to allow applications to make API calls to services running on GCP.

Set some variables for your Google Project ID (not name) and choose a name for the Service Account you’ll create. These variables will be used in commands throughout the rest of this tutorial:

export PROJECT_ID=project-id
export ACCOUNT_NAME=service-account-name

Run the command below to create a new Service Account (For the purpose of this tutorial I set my $ACCOUNT_NAME to gcr-push-pull).

gcloud iam service-accounts create $ACCOUNT_NAME \
  --display-name="<account display name>" \
  --description="<account description>"
Created service account [gcr-push-pull].

The above Service Account creation command generates an associated email address which is required for the next step. So run the following command and store the email in a variable (e.g. $ACCOUNT_EMAIL):

gcloud iam service-accounts list
NAME           EMAIL                                                      DISABLED
gcr-push-pull  gcr-push-pull@arctic-goal-676703.iam.gserviceaccount.com  False
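Alternatively, since Service Account emails follow the fixed pattern `<name>@<project-id>.iam.gserviceaccount.com`, you can derive the variable directly from the values set earlier:

```shell
# Derive the Service Account email from $ACCOUNT_NAME and $PROJECT_ID
export ACCOUNT_EMAIL="$ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com"
```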

Step 3 - Grant the Service Account permissions

Google Container Registry actually leverages their Cloud Storage service by storing images in buckets. In order to push/pull images to/from the Registry, permissions need to be granted to the Service Account to access the bucket used by the Registry.
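The mapping between registry host and backing bucket follows a fixed naming convention, shown here for reference (the bucket is only created on your first push to that host):

```shell
# Registry host -> backing Cloud Storage bucket (created on first push)
#   gcr.io/<project-id>/<image>       -> gs://artifacts.<project-id>.appspot.com
#   us.gcr.io/<project-id>/<image>    -> gs://us.artifacts.<project-id>.appspot.com
#   eu.gcr.io/<project-id>/<image>    -> gs://eu.artifacts.<project-id>.appspot.com
#   asia.gcr.io/<project-id>/<image>  -> gs://asia.artifacts.<project-id>.appspot.com
```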

The following command grants the roles/storage.admin role to the new Service Account. (This is a temporary measure in order to initiate the Container Registry bucket if it doesn’t already exist; see the Note below for more details.)

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$ACCOUNT_EMAIL \
  --role=roles/storage.admin
Updated IAM policy for project [arctic-goal-676703].
bindings:
- members:
  ...
  role: roles/editor
- members:
  ...
  role: roles/owner
- members:
  - serviceAccount:gcr-push-pull@arctic-goal-676703.iam.gserviceaccount.com
  role: roles/storage.admin
etag: BwWqBenQN7o=
version: 1

Note: The above role (roles/storage.admin) allows the Service Account to administer all buckets across the whole project. However, unless the Container Registry bucket responsible for storing images (named artifacts.<project-id>.appspot.com, or <region>.artifacts.<project-id>.appspot.com for regional hosts such as eu.gcr.io) already exists, the storage.admin role is required to initiate this bucket by way of pushing an image to the Container Registry. See Container Registry - Granting Permissions for more details.

Step 4 - Create and Download the Service Account JSON key

A key associated with the Service Account is required by the Docker login command to authenticate with the GCR. Run the command below to create and download this key.

gcloud iam service-accounts keys create --iam-account $ACCOUNT_EMAIL key.json
created key [7f859197ceac92c118b5e4e4417d9ce86eb84ebc] of type [json] as [key.json] for [gcr-push-pull@arctic-goal-676703.iam.gserviceaccount.com]

Warning: Keep your generated Service Account key(s) secure. For all intents and purposes they don’t expire (the expiry date is Dec 31st 9999)! See Best practices for managing credentials.
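You can audit the keys attached to a Service Account (and see their far-future expiry timestamps) with:

```shell
# List keys for the Service Account, including their expiry timestamps
gcloud iam service-accounts keys list --iam-account=$ACCOUNT_EMAIL
```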

Step 5 - Login from Docker and push an image to the Container Registry

With the key downloaded you can now login to the Container Registry.

docker login -u _json_key --password-stdin https://<storage-region-url> < key.json

Where <storage-region-url> is the registry host that determines the location the storage bucket will be created in when you push images: gcr.io or us.gcr.io (United States), eu.gcr.io (European Union), or asia.gcr.io (Asia).

e.g. To create the storage bucket in the EU region:

docker login -u _json_key --password-stdin https://eu.gcr.io < key.json
Login Succeeded

Using the docker tag command, tag an existing local image according to the following syntax:

docker tag <local-image> <storage-region-url>/$PROJECT_ID/<image-name>:<image-tag>

e.g. Tag the busybox image:

docker tag busybox eu.gcr.io/$PROJECT_ID/busybox:latest

Now push this image to the GCR. This will automatically create the storage bucket in the specified region.

docker push eu.gcr.io/$PROJECT_ID/busybox:latest
The push refers to repository [eu.gcr.io/arctic-goal-676703/busybox]
50761fe126b6: Layer already exists
latest: digest: sha256:2131f09e4044327fd101ca1fd4043e6f3ad921ae7ee901e9142e6e36b354a907 size: 527

The following command lists existing images in your Container Registry:

gcloud container images list --repository=eu.gcr.io/$PROJECT_ID
NAME
eu.gcr.io/arctic-goal-676703/busybox
Only listing images in eu.gcr.io/arctic-goal-676703. Use --repository to list images in other repositories.

You can see the recently pushed image is listed.
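As described in Step 3, the images live in a Cloud Storage bucket, so you can also inspect the registry's storage directly with gsutil (assuming the EU bucket created by the push above):

```shell
# List the objects GCR created in the underlying Cloud Storage bucket
gsutil ls gs://eu.artifacts.$PROJECT_ID.appspot.com
```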

Step 6 - Update Service Account permissions

In Step 3 we added admin permissions to the Service Account. As mentioned earlier, this grants the Service Account permission to edit/delete/create buckets across the entire project! Now that the Container Registry bucket has been created we can revoke this permission and replace it with a more fine-grained, bucket-specific role.

gcloud projects remove-iam-policy-binding $PROJECT_ID \
  --member=serviceAccount:$ACCOUNT_EMAIL \
  --role=roles/storage.admin
Updated IAM policy for project [arctic-goal-676703].
bindings:
- members:
  ...
  role: roles/editor
- members:
  ...
  role: roles/owner
etag: BwWqBqrA5oo=
version: 1

If you try to push to the Registry now you’ll receive the following error informing you that you don’t have the necessary permissions to do so.

docker push eu.gcr.io/$PROJECT_ID/busybox:latest
The push refers to repository [eu.gcr.io/arctic-goal-676703/busybox]
50761fe126b6: Preparing
denied: Token exchange failed for project 'arctic-goal-676703'. Caller does not have permission 'storage.buckets.get'. To configure permissions, follow instructions at:

You can grant the Service Account permission to access only the bucket containing your images with the following commands.

To grant the Service Account pull-only permission to the Registry, run:

gsutil iam ch serviceAccount:"$ACCOUNT_EMAIL":legacyBucketReader gs://eu.artifacts.$PROJECT_ID.appspot.com

To grant the Service Account push & pull permissions to the Registry, run:

gsutil iam ch serviceAccount:"$ACCOUNT_EMAIL":legacyBucketWriter gs://eu.artifacts.$PROJECT_ID.appspot.com

Check your chosen role has been successfully added:

gsutil iam get gs://eu.artifacts.$PROJECT_ID.appspot.com
{
  "bindings": [
    {
      "members": [
        "projectEditor:arctic-goal-676703",
        "projectOwner:arctic-goal-676703"
      ],
      "role": "roles/storage.legacyBucketOwner"
    },
    {
      "members": [
        "projectViewer:arctic-goal-676703"
      ],
      "role": "roles/storage.legacyBucketReader"
    },
    {
      "members": [
        "serviceAccount:gcr-push-pull@arctic-goal-676703.iam.gserviceaccount.com"
      ],
      "role": "roles/storage.legacyBucketWriter"
    }
  ],
  "etag": "CAQ="
}

Step 7 - Configure the Container Registry in Kubernetes

Now that the GCR has been set up we can configure Kubernetes to access it. Kubernetes deployments can pull images from private registries using the imagePullSecrets field. This field allows you to set credentials allowing Pods to pull images from a private registry.

The first step is to create the secret (credentials) that the imagePullSecrets field will reference in a deployment. This can be achieved in a number of ways. The easiest method in my opinion is creating a secret of type docker-registry with kubectl.

kubectl -n=example create secret docker-registry gcr-io \
  --docker-server=https://eu.gcr.io \
  --docker-username=_json_key \
  --docker-email=<any-valid-email> \
  --docker-password="$(cat ~/key.json)"
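You can confirm the secret was created with the expected type (kubernetes.io/dockerconfigjson):

```shell
# The TYPE column should read kubernetes.io/dockerconfigjson
kubectl -n example get secret gcr-io
```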


Now that the secret has been successfully created it can be used to pull images from the GCR in a couple of ways.

First, the imagePullSecrets property can be explicitly specified in a deployment.

Example (based on the busybox image pushed to the GCR earlier):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-pod
  namespace: example
spec:
  containers:
  - name: busybox-container
    image: eu.gcr.io/arctic-goal-676703/busybox:latest
  imagePullSecrets:
  - name: gcr-io

The image field references the image location in the GCR, and the name field under imagePullSecrets references the secret we created for authenticating with Google’s Container Registry before pulling the image.
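Assuming the manifest above is saved as busybox-pod.yaml (a hypothetical filename), it can be applied and checked as follows:

```shell
# Apply the Pod manifest and confirm the container image was pulled
# from the private registry (busybox-pod.yaml is an assumed filename)
kubectl -n example apply -f busybox-pod.yaml
kubectl -n example get pod busybox-pod
```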

Alternatively imagePullSecrets can be configured on the default service account. This allows every pod in the defined namespace to access the private GCR.

The kubectl patch command patches the default service account with the imagePullSecrets configuration

kubectl -n=example patch serviceaccount default \
          -p '{"imagePullSecrets": [{"name": "gcr-io"}]}'

Now any deployments to the same namespace will be able to pull images from the GCR without having to specify the imagePullSecrets field in the deployment itself.

Copyright © Colin Wilson 2020