Introduction
So GitLab’s container registry went down yesterday. Even though I consider GitLab’s service reliable, it reminded me that I really should practice what I preach and set up an additional private image registry for the sake of redundancy. Google’s Container Registry was the first one that came to mind.
However, I discovered that pushing/pulling container images to/from Google’s Container Registry (GCR) isn’t as straightforward as with other registries. Google doesn’t provide a simple password token for authentication with the docker login command. This tutorial covers how to configure authentication with GCR from Kubernetes or Docker.
Prerequisites
- Google Cloud Platform Account
- Knowledge of Kubernetes & Docker
- Basic Linux Knowledge
- Command Line tools installed (kubectl, Google Cloud SDK)
Step 1 - Enable the Container Registry API
Run the following command to enable the Container Registry API on your project.
gcloud services enable containerregistry.googleapis.com
Operation "operations/acf.0d1c1d9e-2649-4adb-8eaa-8690626d7834" finished successfully.
Step 2 - Create a new Service Account
Google uses Service Accounts to allow applications to make API calls to services running on GCP.
Set some variables for your Google Project ID (not name) and choose a name for the Service Account you’ll create. These variables will be used in commands throughout the rest of this tutorial:
export PROJECT_ID=project-id
export ACCOUNT_NAME=service-account-name
Run the command below to create a new Service Account (for the purpose of this tutorial I set my $ACCOUNT_NAME to gcr-push-pull).
gcloud iam service-accounts create $ACCOUNT_NAME \
--display-name="<account display name>" \
--description="<account description>"
Created service account [gcr-push-pull].
The above Service Account creation command will generate an associated email address, which is required for the next step. Run the following command and store the email as a variable (e.g. $ACCOUNT_EMAIL).
gcloud iam service-accounts list
NAME EMAIL DISABLED
gcr-push-pull gcr-push-pull@arctic-goal-676703.iam.gserviceaccount.com False
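Since Service Account emails follow the pattern <account-name>@<project-id>.iam.gserviceaccount.com, you can also construct the variable directly instead of copying it from the output above (a small shortcut that assumes the standard naming scheme):
export ACCOUNT_EMAIL="$ACCOUNT_NAME@$PROJECT_ID.iam.gserviceaccount.com"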
Step 3 - Grant the Service Account permissions
Google Container Registry stores images in Google Cloud Storage buckets behind the scenes. In order to push/pull images to/from the Registry, the Service Account needs permission to access the bucket used by the Registry.
The following command grants admin permissions to the new Service Account. This is a temporary measure to create the Container Registry bucket if it doesn’t already exist (see the Note below for more details).
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:$ACCOUNT_EMAIL \
--role=roles/storage.admin
Updated IAM policy for project [arctic-goal-676703].
bindings:
- members:
- serviceAccount:service-271266176074@containerregistry.iam.gserviceaccount.com
role: roles/editor
- members:
- user:rick.sanchez@C-137.com
role: roles/owner
- members:
- serviceAccount:gcr-push-pull@arctic-goal-676703.iam.gserviceaccount.com
role: roles/storage.admin
etag: BwWqBenQN7o=
version: 1
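If you’d rather not scan the full policy output, you can filter it down to just the roles held by the new Service Account (a sketch using gcloud’s flatten/filter flags):
gcloud projects get-iam-policy $PROJECT_ID \
--flatten="bindings[].members" \
--filter="bindings.members:$ACCOUNT_EMAIL" \
--format="table(bindings.role)"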
Note: The above role (roles/storage.admin) allows the Service Account to administer all buckets across the whole project. However, unless the Container Registry bucket responsible for storing images (named artifacts.PROJECT-ID.appspot.com or STORAGE-REGION.artifacts.PROJECT-ID.appspot.com) already exists, the storage.admin role is required in order to create this bucket by pushing an image to the Container Registry. See Container Registry - Granting Permissions for more details.
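If you’re unsure whether the registry bucket already exists, you can probe for it with gsutil before deciding which role to grant (this assumes the US multi-region bucket name; the command returns an error if the bucket doesn’t exist yet):
gsutil ls -b gs://artifacts.$PROJECT_ID.appspot.com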
Step 4 - Create and Download the Service Account JSON key
A key associated with the Service Account is required by the docker login command to authenticate with GCR. Run the command below to create and download this key.
gcloud iam service-accounts keys create --iam-account $ACCOUNT_EMAIL key.json
created key [7f859197ceac92c118b5e4e4417d9ce86eb84ebc] of type [json] as [key.json] for [gcr-push-pull@arctic-goal-676703.iam.gserviceaccount.com]
Warning: Keep your generated Service Account key(s) secure. For all intents and purposes they don’t expire (Dec 31st 9999)! See Best practices for managing credentials
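If a key is ever leaked or no longer needed, you can list the keys attached to the Service Account and delete one by its ID (the <key-id> placeholder comes from the list output):
gcloud iam service-accounts keys list --iam-account $ACCOUNT_EMAIL
gcloud iam service-accounts keys delete <key-id> --iam-account $ACCOUNT_EMAIL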
Step 5 - Login from Docker and push an image to the Container Registry
With the key downloaded you can now login to the Container Registry.
docker login -u _json_key --password-stdin <storage-region-url> < key.json
Where <storage-region-url> is the location the storage bucket will be created in when you push images:
- https://gcr.io for registries in the US region (may change in the future; hosted separately from https://us.gcr.io)
- https://us.gcr.io for registries in the US region
- https://eu.gcr.io for registries in the EU region
- https://asia.gcr.io for registries in the Asia region
e.g. to log in so that the storage bucket will be created in the EU region when you push:
docker login -u _json_key --password-stdin https://eu.gcr.io < key.json
Login Succeeded
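Note that unless a Docker credential helper is configured, docker login stores the resulting credentials base64-encoded (not encrypted) in ~/.docker/config.json, so treat that file with the same care as the key itself. You can see which registries you’re currently logged in to with:
cat ~/.docker/config.json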
Using the docker tag command, tag an existing local image according to the following syntax:
docker tag <local-image> <storage-region>/$PROJECT_ID/<image-name>:<image-tag>
e.g. Tag the busybox image
docker tag busybox gcr.io/arctic-goal-676703/busybox:latest
Now push this image to the GCR. This will automatically create the storage bucket in the specified region.
docker push gcr.io/arctic-goal-676703/busybox:latest
The push refers to repository [gcr.io/arctic-goal-676703/busybox]
50761fe126b6: Layer already exists
latest: digest: sha256:2131f09e4044327fd101ca1fd4043e6f3ad921ae7ee901e9142e6e36b354a907 size: 527
The following command lists existing images in your Container Registry:
gcloud container images list
NAME
gcr.io/arctic-goal-676703/busybox
Only listing images in gcr.io/arctic-goal-676703. Use --repository to list images in other repositories.
You can see the recently pushed image is listed.
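You can also list the tags and digests for a specific image, which is handy for confirming exactly what was pushed (using the busybox image from above):
gcloud container images list-tags gcr.io/$PROJECT_ID/busybox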
Step 6 - Update Service Account permissions
In Step 3 we added admin permissions to the Service Account. As mentioned prior, this grants the Service Account permission to edit/delete/create buckets across the entire project! Now that the Container Registry bucket has been created we can revoke this permission and replace it with a more fine-grained bucket specific role.
gcloud projects remove-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:$ACCOUNT_EMAIL \
--role=roles/storage.admin
Updated IAM policy for project [arctic-goal-676703].
bindings:
- members:
- serviceAccount:service-271266176074@containerregistry.iam.gserviceaccount.com
role: roles/editor
- members:
- user:jane.doe@example.com
role: roles/owner
etag: BwWqBqrA5oo=
version: 1
If you try to push to the Registry now, you’ll receive the following error informing you that you don’t have the necessary permissions to do so.
docker push gcr.io/arctic-goal-676703/busybox:latest
The push refers to repository [gcr.io/arctic-goal-676703/busybox]
50761fe126b6: Preparing
denied: Token exchange failed for project 'arctic-goal-676703'. Caller does not have permission 'storage.buckets.get'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
You can grant the Service Account permission to access only the bucket containing your images with the following commands.
To grant the Service Account Pull-only permission to the Registry, run:
gsutil iam ch serviceAccount:"$ACCOUNT_EMAIL":legacyBucketReader gs://artifacts.$PROJECT_ID.appspot.com
To grant the Service Account Push & Pull permissions to the Registry, run:
gsutil iam ch serviceAccount:"$ACCOUNT_EMAIL":legacyBucketWriter gs://artifacts.$PROJECT_ID.appspot.com
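If you granted one of these roles and later want to switch to the other, gsutil can also remove an existing binding. For example, a sketch of dropping the read-only role before adding the write role:
gsutil iam ch -d serviceAccount:"$ACCOUNT_EMAIL":legacyBucketReader gs://artifacts.$PROJECT_ID.appspot.com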
Check that your chosen role has been successfully added:
gsutil iam get gs://artifacts.$PROJECT_ID.appspot.com
{
"bindings": [
{
"members": [
"projectEditor:arctic-goal-676703",
"projectOwner:arctic-goal-676703"
],
"role": "roles/storage.legacyBucketOwner"
},
{
"members": [
"projectViewer:arctic-goal-676703"
],
"role": "roles/storage.legacyBucketReader"
},
{
"members": [
"serviceAccount:gcr-push-pull@arctic-goal-676703.iam.gserviceaccount.com"
],
"role": "roles/storage.legacyBucketWriter"
}
],
"etag": "CAQ="
}
Step 7 - Configure the Container Registry in Kubernetes
Now that GCR has been set up, we can configure Kubernetes to access it. Kubernetes workloads can pull images from private registries using the imagePullSecrets field, which lets you reference credentials that Pods use to pull images from a private registry.
The first step is to create the secret (credentials) that the imagePullSecrets field will reference in a deployment. This can be achieved in a number of ways; the easiest method in my opinion is creating a secret of type docker-registry with kubectl.
kubectl -n=example create secret docker-registry gcr-io \
--docker-server gcr.io \
--docker-username _json_key \
--docker-email not@val.id \
--docker-password="$(cat key.json)"
Where:
- example is the namespace in which the secret is created
- gcr-io is the chosen name of the secret
- gcr.io is the FQDN of the private registry (in this case the GCR)
- _json_key is the username for authenticating with GCR (any value other than _json_key will result in an error)
- not@val.id can be any valid email address
- "$(cat key.json)" imports your Service Account key from the given path for use by the created secret
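To confirm the secret was created correctly, you can decode it with kubectl; the output should contain the registry URL, the _json_key username and the contents of your key file:
kubectl -n=example get secret gcr-io --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode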
Now that the secret has been successfully created it can be used to pull images from the GCR in a couple of ways.
First, the imagePullSecrets field can be specified explicitly in a Pod spec (or a Deployment’s Pod template).
Example (based on the busybox image pushed to the GCR earlier):
apiVersion: v1
kind: Pod
metadata:
name: busybox-pod
namespace: example
spec:
containers:
- name: busybox-container
image: gcr.io/arctic-goal-676703/busybox:latest
imagePullSecrets:
- name: gcr-io
The image field references the image location in the GCR, and the imagePullSecrets name field references the secret we created for authenticating with Google’s Container Registry before pulling the image.
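Assuming the manifest above is saved as busybox-pod.yaml (a hypothetical filename), you can apply it and check that the image was pulled from the GCR successfully:
kubectl apply -f busybox-pod.yaml
kubectl -n=example get pod busybox-pod
If the Pod gets stuck in ImagePullBackOff, kubectl -n=example describe pod busybox-pod shows the pull events and any authentication errors.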
Alternatively, imagePullSecrets can be configured on the default service account. This allows every Pod in the namespace to pull images from the private GCR.
The kubectl patch command patches the default service account with the imagePullSecrets configuration:
kubectl -n=example patch serviceaccount default \
-p '{"imagePullSecrets": [{"name": "gcr-io"}]}'
Now any deployments to the same namespace will be able to pull images from the GCR without having to specify the imagePullSecrets field in the deployment itself.
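You can verify the patch took effect by inspecting the default service account; the gcr-io secret should now be listed under imagePullSecrets:
kubectl -n=example get serviceaccount default -o yaml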