The Kubernetes Series - Application Management
(Photo by Andrey Konstantinov on Unsplash)
Deployments - Rollouts and Rollbacks
We previously had a look at application management in the post on Deployments, Pods and ReplicaSets.
Rollouts are when you incrementally update existing Pods with new versions of their containers, so that service to connected users isn't interrupted. A Pod only gets replaced with the new version once it has no active connections. In Kubernetes, every rollout creates a Revision.
Kubernetes keeps track of these Revisions so that we can roll back to a previous one if need be.
You can view the status of a rollout with the command:
kubectl rollout status deployment/deployment-name
To view Revisions, use the command
kubectl rollout history deployment/deployment-name
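To inspect the details of a single revision, the history command also takes a --revision flag (the revision number here is just an example):
kubectl rollout history deployment/deployment-name --revision=2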
Deployment Strategies
There are two deployment strategies:
- Recreate (destroy and redeploy) -> results in brief downtime
- Rolling Update (default) -> replaces Pods one by one so that service isn't interrupted (configured as in the sketch below)
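As a minimal sketch of where the strategy lives in a Deployment definition (the my-app label, the replica count and the rollingUpdate numbers are illustrative values, not from this post):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate          # set to Recreate for destroy-and-redeploy
    rollingUpdate:
      maxUnavailable: 1          # at most one Pod down during the update
      maxSurge: 1                # at most one extra Pod above the desired count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.2.0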
Running the kubectl apply command against an updated definition triggers a new rolling update, like so:
kubectl apply -f deployment-definition.yml
or
kubectl set image deployment/deployment-name nginx=nginx:1.2.0
Then you can view the status of the rollout with
kubectl rollout status deployment/deployment-name
Rollback
To roll back to a previous Revision, enter
kubectl rollout undo deployment/deployment-name
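If you need to go further back than the immediately previous revision, --to-revision lets you pick one from the history (again, 2 is just an example number):
kubectl rollout undo deployment/deployment-name --to-revision=2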
Specifying Commands and Arguments for Images on Pod Creation
Some Pods might need you to supply a command and accompanying arguments to run when the Pod gets instantiated. You accomplish this with the command and args properties in your container's definition, like so:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    - name: container-name
      image: image-name
      command: ["ping"]
      args: ["127.0.0.1"]
One thing to note: the command and args properties in your Pod definition override the ENTRYPOINT and CMD settings in your image's Dockerfile: command overrides ENTRYPOINT, and args overrides CMD.
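To make that mapping concrete, here is a hypothetical Dockerfile for image-name (its contents are assumed purely for illustration):
# Hypothetical Dockerfile for image-name
FROM alpine
ENTRYPOINT ["ping"]   # replaced by the Pod's command property
CMD ["8.8.8.8"]       # replaced by the Pod's args property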
Specifying Environment Variables
There are three ways of defining a Pod's environment variables:
- Key/Value pairs
- ConfigMaps
- Secrets
Key/Value Pairs
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    - name: container-name
      image: image-name
      env:
        - name: API_KEY
          value: abcdefgh
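Once the Pod is running, a quick way to confirm the variable made it into the container (assuming the image ships the printenv binary) is:
kubectl exec pod-name -- printenv API_KEY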
ConfigMaps
ConfigMaps are Kubernetes objects that can be used to inject key/value pair dictionaries into a Pod to define environment variables.
You can create a ConfigMap imperatively on the command line with any number of --from-literal declarations:
kubectl create configmap {config-name} --from-literal={key}={value}
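For example, building the app-config-map used later in this post (the extra APP_MODE key is just illustrative):
kubectl create configmap app-config-map --from-literal=API_KEY=abcdefgh --from-literal=APP_MODE=production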
Or you can specify a *.properties file to use instead:
kubectl create configmap {config-name} --from-file={configmap.properties}
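A configmap.properties file would just hold plain key=value lines (the contents below are assumed for illustration). One caveat worth knowing: --from-file stores the whole file under a single key named after the file; if you want each line to become its own key, kubectl also supports --from-env-file:
# configmap.properties (assumed contents)
API_KEY=abcdefgh
APP_MODE=production

kubectl create configmap app-config-map --from-env-file=configmap.properties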
Or use a declarative YAML definition file, just like most other Kubernetes objects. Let's make config-map.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-map
data: # <-- NOTE! data, not spec!
  API_KEY: abcdefgh
You can view ConfigMaps just like any other Kubernetes object:
kubectl get configmaps && kubectl describe configmaps
To reference it in your Pod definition file:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    - name: container-name
      image: image-name
      envFrom:
        - configMapRef:
            name: app-config-map
Or, if you just want to add a single environment variable from the ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    - name: container-name
      image: image-name
      env:
        - name: API_KEY
          valueFrom:
            configMapKeyRef:
              name: app-config-map
              key: API_KEY
Secrets
Secrets are similar to ConfigMaps, except that instead of storing values in plain text, they store them base64-encoded. You would use them to hold things like database passwords in a slightly more controlled manner. It's important to realise that the values are only encoded, not encrypted, so they are not truly secured from prying eyes; they're just more difficult to read at a glance.
Secrets can also be created imperatively via the command line, or declaratively via definition files:
kubectl create secret generic {secret-name} --from-literal={key}={value}
Or you can specify a .properties file to use:
kubectl create secret generic {secret-name} --from-file={secretz.properties}
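For example, to create the Secret used in the YAML below from literal values (kubectl base64-encodes them for you in this case):
kubectl create secret generic app-config-map-secret --from-literal=DATABASE_URL=https://db.com --from-literal=DATABASE_PASSWORD=123hoohaa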
When using a declarative definition file, we create a secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: app-config-map-secret
data:
  DATABASE_URL: aHR0cHM6Ly9kYi5jb20=
  DATABASE_PASSWORD: MTIzaG9vaGFh
Note that, unlike the imperative creation above, you have to base64-encode the values yourself when using a YAML definition file.
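A quick way to produce those values on the command line (the -n flag matters, otherwise a trailing newline ends up in the Secret):
echo -n 'https://db.com' | base64    # aHR0cHM6Ly9kYi5jb20=
echo -n '123hoohaa' | base64         # MTIzaG9vaGFh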
Which would then be referred to in the Pod definition file, just like for ConfigMaps:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    - name: container-name
      image: image-name
      envFrom:
        - secretRef:
            name: app-config-map-secret
Or by referring to a particular key in the Secret:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    - name: container-name
      image: image-name
      env:
        - name: DATABASE_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-config-map-secret
              key: DATABASE_PASSWORD
Multi-container Pods
Sometimes you might want to deploy an application that relies on two containers working together: a web app and an accompanying monitoring service, for instance. The two containers are only ever relevant to each other, so it makes sense to deploy them at the same time and on the same Pod.
That way they share the same network space (they can communicate by referring to localhost), storage volumes and other resources. All you have to do to accomplish this is add the additional containers to the Pod definition file's containers list, as in the sketch below.
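A minimal sketch of such a Pod, with hypothetical web-app and monitoring-agent images standing in for real ones:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  containers:
    - name: web-app
      image: web-app-image            # hypothetical primary container
      ports:
        - containerPort: 8080
    - name: monitoring-agent
      image: monitoring-agent-image   # hypothetical sidecar; reaches the app on localhost:8080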
Multi-container Pod Design Patterns
There are 3 common design patterns for multi-container Pods:
- Sidecar
- Adapter
- Ambassador
Sidecar
The Sidecar design pattern is the pattern used in our example above - a web-app with an accompanying monitoring service. The sidecar container, in this case the monitoring service, works with the primary container - the web app. The web app can function without the sidecar container, but not the other way round.
Adapter
The Adapter pattern is used when you have a container running between your primary container and the rest of the services it needs to interact with. It formats and reshapes the inputs and outputs between your primary container and the rest of your cluster. An example would be a custom logging or data-parsing container.
Ambassador/Proxy
The ambassador container acts as a proxy or gateway, connecting a port on localhost on your primary container with external connections. An example would be to have a proxy handle the connection between your application and a remote database. Your application would always connect to localhost on a certain port, with the proxy handling the connection to the db, which may change between different environments - dev and prod, for instance.
Multi-containers: Init Containers
An init container is a helper container that runs on Pod creation before your primary containers get launched. This might be to delay the launch of an application until some condition is met, or to separate secrets or sensitive tasks from your main application container, in order to reduce your potential attack surface.
You specify them as an array under the initContainers property under spec in your Pod definition file, as in the sketch below.
All init containers must run to completion successfully before the primary container(s) get launched. If an init container fails, the Pod is restarted and tries again until it succeeds. Check the use-cases for init containers in the docs.
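As a minimal sketch, assuming the app depends on a hypothetical my-service being resolvable, an init container that simply waits for it could look like this:
apiVersion: v1
kind: Pod
metadata:
  name: pod-name
spec:
  initContainers:
    - name: wait-for-service
      image: busybox
      command: ["sh", "-c", "until nslookup my-service; do echo waiting for my-service; sleep 2; done"]
  containers:
    - name: container-name
      image: image-name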
Browsing Running Pods
Sometimes you might want to browse or shell into a Pod to check something inside it, its logs for instance. You can use a command similar to the one you would use for a Docker container running on your local machine, like so (assuming a Linux-based container):
kubectl exec -it {pod-name} -- /bin/sh
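Related, and often quicker than opening a shell, is reading the logs directly; -c picks a container in a multi-container Pod:
kubectl logs {pod-name} -c {container-name}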
Conclusion
This post was a bit lengthier than usual, but we covered a lot. We had a look at deployment strategies and how to roll out updates and roll back oopsies. We also looked at different ways to handle environment variables and app secrets in a scalable and safe way, using ConfigMaps and Secrets. Finally we looked at how and why you might want to run multiple containers in the same Pod.
In the next post we'll consider Kubernetes maintenance strategies and how to make sure your cluster stays up and up to date. Thanks for reading!