This is part 3 of a 4-part series on deploying an application to Azure Kubernetes Service. Here’s the full list of posts in this series:
- k8spractice Overview
- k8spractice PostgreSQL Service Deployment
- k8spractice Services (this post)
- k8spractice Ingress Rules
Now that we have a running PostgreSQL server with the necessary database, tables, and roles configured, it’s time to deploy the k8spractice web app itself.
The containers are available from GitHub Packages if you want to skip building them yourself. I want to remind you again that this app was written mainly to provide something to deploy to a Kubernetes cluster. Many security-related liberties were taken; you have been warned.
Deploy Redis Cache
The k8spractice backend service uses Redis Cache for session management. Below is the complete redis.yaml file to deploy a Redis instance:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      role: redis
  template:
    metadata:
      labels:
        role: redis
    spec:
      containers:
      - name: redis
        image: redis:alpine
        imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    role: redis
  ports:
  - port: 6379
    protocol: TCP
```
Just like the PostgreSQL instance, the Redis deployment uses a single replica. Persistent storage is not enabled since Redis is only used as a session cache; for k8spractice, a restart of the Redis server simply means users will have to log in again.
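To roll this out, the manifest can be applied and checked with kubectl (a quick sketch, assuming your kubeconfig already points at the AKS cluster and the file is saved as redis.yaml):

```shell
# Create (or update) the Redis Deployment and Service
kubectl apply -f redis.yaml

# Verify the pod is running and the Service has picked it up as an endpoint
kubectl get pods -l role=redis
kubectl get endpoints redis
```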
Backend Service
The backend service needs access to the PostgreSQL and Redis Cache services that were created in the previous posts. The backend service uses environment variables for its configuration, which are listed below:
- SECRET_KEY: the value Flask needs to sign the session token. See the Flask documentation on SECRET_KEY for more info.
- DB_DSN: the database DSN string to pass to psycopg2.connect().
- REDIS_HOST: hostname of the Redis Cache server.
- REDIS_PORT: port the Redis Cache server is listening on.
For the deployment, the SECRET_KEY and DB_DSN environment variables are stored in Kubernetes Secrets. This was done to prevent exposing their values in the yaml configuration file. The secrets were created on the command line using the commands below:
```shell
kubectl create secret generic backendsecretkey --from-literal="SECRET_KEY=$(python -c 'import os; print(os.urandom(16))')"
```
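Note that printing `os.urandom(16)` yields a Python bytes repr (something like `b'\x8f...'`), which works as an opaque secret but is awkward to quote in a shell. A sketch of an alternative, using the standard-library secrets module to produce a plain hex string instead:

```shell
# Generate a URL-safe secret: secrets.token_hex(16) renders 16 random
# bytes as a 32-character hex string with no quoting pitfalls
SECRET_KEY=$(python3 -c 'import secrets; print(secrets.token_hex(16))')
echo "${#SECRET_KEY}"   # prints 32

kubectl create secret generic backendsecretkey --from-literal="SECRET_KEY=${SECRET_KEY}"
```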
The DB_DSN environment variable should contain the DSN string used to connect to the PostgreSQL service deployed in the previous post. For this series the DSN will look like the following:
```
host=dbserver dbname=<DB NAME> user=<Backend DB user> password=<Backend DB password>
```
<DB NAME>, <Backend DB user>, and <Backend DB password> are the database where the k8spractice.user table lives and the login credentials the backend service should use. The value of host is the name of the Kubernetes Service for the PostgreSQL server. If you opted to deploy an “Azure Database for PostgreSQL”, this can be the name of the ExternalName Service pointing to that server.
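The backend deployment below also references a secret named backenddb that holds DB_DSN. It can be created the same way as backendsecretkey (a sketch; substitute your actual values for the placeholders):

```shell
# Create the backenddb secret referenced by the backend deployment's envFrom;
# replace the <...> placeholders with your actual database values
kubectl create secret generic backenddb \
  --from-literal="DB_DSN=host=dbserver dbname=<DB NAME> user=<Backend DB user> password=<Backend DB password>"
```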
Below is a copy of the backend.yaml from the project repo:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 4
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: k8sbackend
        image: ghcr.io/keithmendozasr/k8spractice/backend:release1
        imagePullPolicy: IfNotPresent
        envFrom:
        - secretRef:
            name: backendsecretkey
            optional: false
        - secretRef:
            name: backenddb
            optional: false
        env:
        - name: REDIS_HOST
          value: redis
        - name: REDIS_PORT
          value: "6379"
        ports:
        - name: "backendport"
          containerPort: 5000
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - name: "backendport"
    targetPort: "backendport"
    port: 5000
```
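As with Redis, the backend can be deployed and sanity-checked with kubectl (assuming the manifest is saved as backend.yaml):

```shell
kubectl apply -f backend.yaml

# All 4 replicas should reach Running; their env comes from the two secrets
kubectl get pods -l app=backend

# Spot-check the service without exposing it: forward a local port to it
kubectl port-forward service/backend 5000:5000
```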
Frontend Service
Below is the full yaml config file for the frontend service, which is also available in the project repo.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 4
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: k8sfrontend
        image: ghcr.io/keithmendozasr/k8spractice/frontend:release1
        imagePullPolicy: Always
        ports:
        - name: "frontendport"
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - name: "frontendport"
    targetPort: "frontendport"
    port: 80
```
It’s a straightforward deployment of a static site running on Apache. The number of replicas is set to 4, it uses the k8spractice frontend container that I created, and it exposes TCP port 80. The matching “frontend” Service resource is also defined.
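Before moving on to ingress, the frontend can be smoke-tested through a port-forward (a quick check, assuming the manifest was saved as frontend.yaml):

```shell
kubectl apply -f frontend.yaml
kubectl get pods -l app=frontend

# Forward local port 8080 to the frontend Service and fetch the index page
kubectl port-forward service/frontend 8080:80 &
curl -I http://localhost:8080/
```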
At this point the k8spractice services are running; however, they’re not accessible to anyone outside the cluster’s network. In the final post of this series, I’ll show you how to configure the ingress rules and set up TLS certificates.