Cosigned up and running on EKS
This blog post covers how to get started using the cosigned admission controller on Amazon EKS as a proof of concept for preventing unverified containers in your Kubernetes cluster. The cosign sub-project of Sigstore contains tooling for signing and verifying container images that interoperates with the rest of the Sigstore ecosystem.
What is cosigned?
For verification in production environments, we've recently introduced a Kubernetes admission controller called cosigned. This tool is still experimental, but it can be installed and used following the documentation here. The goal of cosigned is to provide a cross-platform, standalone tool that can be used to verify signatures and apply simple policy before allowing containers to run in a Kubernetes environment.
Cosigned works in two main phases: a Mutating webhook and a Validating webhook. The Mutating webhook looks for images in all supported Kubernetes types and resolves the floating tags to sha256 digests, ensuring that what was originally deployed cannot change later. The Validating webhook checks those (resolved) images for signatures that match configured public keys, and rejects images without matching signatures.
Cosigned currently (as of release v1.2.1) supports the Pod, ReplicaSet, Deployment, StatefulSet, DaemonSet, Job, or CronJob types. Higher-level types defined in CRDs will also work as long as they "compile into" one of these base primitives, but the error messages might come a little later and be harder to debug.
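For example, once a namespace is opted in (we will do this in the demo below), you can see the tag resolution by reading back the image reference stored on an admitted object. This is just a sketch; the namespace and image here are placeholders:
# Create a job from a signed image referenced by tag, then read back what was stored.
kubectl -n <protected-namespace> create job resolve-check --image=<registry>/<repo>:latest
kubectl -n <protected-namespace> get job resolve-check -o jsonpath='{.spec.template.spec.containers[0].image}'
# The stored image should be pinned to a sha256 digest instead of the floating tag.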
Okay, why do I want that?
If you can be sure that only code you have authorized runs in your cluster, it becomes much harder to 1) run containers that have not been properly built and validated by your CI systems, and 2) schedule unauthorized containers.
EKS Graviton
Being new to EKS, our first step was to create an AWS account. After several reCAPTCHAs and adding a credit card, we had a brand new AWS account.
Based on this AWS blog post, we are going to create the cluster using the eksctl tool.
First install eksctl:
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
And then the authenticator helper for AWS:
brew install aws-iam-authenticator
And finally, the aws CLI. Then run aws configure following the user guide.
NOTE: The recommended IAM policies for eksctl are here.
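As a quick sanity check that the CLI picked up your credentials, you can ask AWS who you are:
aws sts get-caller-identity
# This should echo back the account and IAM identity your credentials resolve to.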
Then create a cluster using the tool:
NOTE: We are using us-west-2 but you can replace this with whatever you want.
eksctl create cluster -f - <<EOF
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: sig-arm
  region: us-west-2
managedNodeGroups:
  - name: mng-arm0
    instanceType: m6g.medium
    desiredCapacity: 3
EOF
After about 20 minutes, this returned:
2021-10-25 10:32:45 [ℹ] building cluster stack "eksctl-sig-arm-cluster"
2021-10-25 10:32:46 [ℹ] deploying stack "eksctl-sig-arm-cluster"
2021-10-25 10:33:16 [ℹ] waiting for CloudFormation stack "eksctl-sig-arm-cluster"
...
2021-10-25 10:47:50 [ℹ] building managed nodegroup stack "eksctl-sig-arm-nodegroup-mng-arm0"
2021-10-25 10:47:50 [ℹ] deploying stack "eksctl-sig-arm-nodegroup-mng-arm0"
2021-10-25 10:47:50 [ℹ] waiting for CloudFormation stack "eksctl-sig-arm-nodegroup-mng-arm0"
...
2021-10-25 10:52:15 [✔] all EKS cluster resources for "sig-arm" have been created
...
2021-10-25 10:52:15 [ℹ] nodegroup "mng-arm0" has 3 node(s)
...
2021-10-25 10:52:16 [✔] EKS cluster "sig-arm" in "us-west-2" region is ready
Sweet. We have a 3-node ARM cluster.
(╯°□°)╯︵ kubectl get all
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 11m
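To double-check that the managed node group really is running on Graviton, you can list the nodes along with their architecture label:
kubectl get nodes -L kubernetes.io/arch
# Each node should report arm64 in the ARCH column.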
Installing cosigned
Installing cosigned into the cluster using the Helm instructions should work, but at the time of this writing the release is slightly behind. Once the chart catches up, Helm should be an option and the following instructions will not be required.
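For reference, the Helm route looks roughly like the following. The chart lives in the Sigstore helm-charts repo; the chart and release names here are our best guess at the time of writing, so check the chart's README for the current values:
helm repo add sigstore https://sigstore.github.io/helm-charts
helm repo update
helm install cosigned sigstore/cosigned -n cosign-system --create-namespace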
Install from source.
To install this from HEAD of the cosign project, use ko.
First make sure you create a webhook repository in the ECR console and log in to ECR locally. ko takes the same parameters as docker login. To copy and edit the command, visit the repositories page, create or click on the webhook repository, and click "View push commands" to get your login command. Replace docker with ko and it will look something like this:
NOTE: We are using us-west-2 but you can replace this to match your cluster's region.
aws ecr get-login-password --region us-west-2 | ko login --username AWS --password-stdin <your prefix>.dkr.ecr.us-west-2.amazonaws.com
Export KO_DOCKER_REPO to let ko know where to push images:
export KO_DOCKER_REPO=<your prefix>.dkr.ecr.us-west-2.amazonaws.com/
Clone the cosign project locally and, from the repository root, deploy:
ko apply -Bf config/ --platform=all
We just created a multi-arch container using ko, and deployed the cosigned webhook to a new namespace called cosign-system. We can look at the running pods:
(╯°□°)╯︵ kubectl get pods -n cosign-system
NAME READY STATUS RESTARTS AGE
webhook-647c9c7858-gj8cg 1/1 Running 0 34s
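The deploy also registers the admission webhooks themselves. To confirm they were created (the exact configuration names may differ depending on the manifests you deployed):
kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations | grep -i cosign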
By default, the installation will use the public Fulcio certs. If you want more control over the key that signs your images, use cosign to generate a key pair and update the verification-key secret in the cluster:
cosign generate-key-pair k8s://cosign-system/verification-key
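If you do generate your own key pair, you can sanity-check that the secret landed where the webhook expects it. With cosign v1.x this secret should contain the encrypted private key, the public key, and the password:
kubectl get secret verification-key -n cosign-system -o yaml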
Demo
In this demo, we will label a namespace and protect it with cosigned. We can label a namespace to signal to cosigned that we would like to reject unsigned images:
kubectl create namespace demo
kubectl label namespace demo cosigned.sigstore.dev/include=true
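You can double-check the label before deploying anything:
kubectl get namespace demo --show-labels
# Expect cosigned.sigstore.dev/include=true in the LABELS column.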
Let's create a quick app to have something to work with:
NOTE: You will have to create a repository named demo for this image in ECR.
NOTE: Here we also use ko to create the container.
pushd $(mktemp -d)
go mod init example.com/demo
cat <<EOF > main.go
package main

import (
	"fmt"
)

func main() {
	fmt.Println("hello world")
}
EOF
demoimage=`ko publish -B example.com/demo --platform=all`
echo Created image $demoimage
popd
Now we have published an unsigned image. To prove that cosigned will reject this:
kubectl create -n demo job demo --image=$demoimage
Should result in:
(╯°□°)╯︵ kubectl create -n demo job demo --image=$demoimage
error: failed to create job: admission webhook "cosigned.sigstore.dev" denied the request: validation
failed: invalid image signature: spec.template.spec.containers[0].image
...
Now to sign the image:
COSIGN_EXPERIMENTAL=1 cosign sign $demoimage
NOTE: the above command is using the experimental keyless flow for cosign. If you are using your own generated keys, you will have to update the verification-key secret in the cosign-system namespace.
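If you want to check the signature from your workstation before re-running the job, the same keyless flow works for verification:
COSIGN_EXPERIMENTAL=1 cosign verify $demoimage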
And then use the image:
kubectl create -n demo job demo --image=$demoimage
The image was accepted and the job was scheduled, and after a moment, it completed:
(╯°□°)╯︵ kubectl get pods -n demo
NAME READY STATUS RESTARTS AGE
demo-tlkqp 0/1 Completed 0 17s
[03:15:03] n3wscott@book /tmp/demo
(╯°□°)╯︵ kubectl logs -n demo demo-tlkqp
hello world
Conclusion
We know this was a very simple demo, but it shows the power of cosigned running in a cluster where you would like to run only signed images. It should be straightforward to sign your own images in a similar way. We would love to hear from you if you are already using cosigned or cosign in your AWS workloads today: just click the contact us button at the top of the page or email us at interest@chainguard.dev.
We will be posting more content on this topic, so stay tuned!