Deploying Applications to Kubernetes
Once you have a Kubernetes cluster running, the next step is deploying applications. This guide walks you through deploying your first application and explains the key concepts you’ll use as you build more complex deployments.
Understanding Kubernetes Deployments
In Kubernetes, you don’t deploy containers directly. Instead, you define the desired state of your application using declarative configuration files, and Kubernetes works to maintain that state. The primary resource for running applications is a Deployment, which manages a set of identical pods.
A Deployment ensures that a specified number of pod replicas are running at all times. If a pod fails, Kubernetes automatically creates a replacement. When you update your application, the Deployment can perform a rolling update, gradually replacing old pods with new ones without downtime. This self-healing and update capability is one of Kubernetes’ most powerful features.
Your First Deployment
Let’s start with a simple example: deploying a web application. We’ll use a basic nginx container to demonstrate the concepts, but the same principles apply to any containerized application.
Creating a Deployment Manifest
Kubernetes uses YAML files to define resources. Create a file called nginx-deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80

This manifest defines a Deployment named nginx-deployment that runs three replicas of the nginx container. The selector field tells Kubernetes which pods belong to this Deployment, and the template section defines what each pod should look like. The replicas field specifies how many identical pods should run.
The containers section lists the containers that run in each pod. Here, we specify the nginx container image and expose port 80, nginx’s default HTTP port. Images are pulled from Docker Hub by default, but you can reference images from other registries, including Thalassa Cloud’s container registry.
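If you pull from a private or third-party registry, include the registry host in the image reference. The host and repository below are placeholders, not a real Thalassa Cloud endpoint; check the container registry documentation for the actual address:

containers:
- name: app
  # Hypothetical registry host and repository -- substitute your own
  image: registry.example.com/my-team/my-app:1.0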
Applying the Deployment
To create the Deployment in your cluster, use the kubectl apply command:
kubectl apply -f nginx-deployment.yaml

Kubernetes will create the Deployment and start scheduling pods. You can watch the progress:
kubectl get deployments
kubectl get pods

The get pods command shows your pods transitioning from “Pending” to “ContainerCreating” to “Running”. Once all pods show “Running”, your Deployment is healthy.
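If you prefer to stream these status changes rather than re-running the command, kubectl can watch for updates (press Ctrl+C to stop):

kubectl get pods --watch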
Understanding Pods
Pods are the smallest deployable units in Kubernetes. Each pod represents one or more containers that share storage, networking, and lifecycle. In most cases, a pod contains a single container, but you can run multiple containers in the same pod when they need to work closely together.
Pods are ephemeral—they can be created, destroyed, and recreated as needed. This is why we use Deployments: they ensure the desired number of pods always exists, even if individual pods fail. Each pod gets its own IP address within the cluster, and containers in the same pod can communicate over localhost.
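To illustrate the multi-container case, here is a minimal pod sketch; the sidecar name and image are hypothetical. Because both containers share the pod’s network namespace, the sidecar could reach nginx at localhost:80:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: nginx
    image: nginx:1.25
  # Hypothetical log-shipping sidecar; it shares the pod's network,
  # so it can reach the nginx container via localhost:80
  - name: log-agent
    image: example/log-agent:1.0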
You can inspect a pod’s details:
kubectl describe pod <pod-name>

This shows the pod’s status, events, and configuration. If a pod isn’t starting correctly, the events section often contains helpful error messages.
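Container logs are the other essential troubleshooting tool. kubectl logs prints a container’s stdout and stderr:

kubectl logs <pod-name>
kubectl logs <pod-name> --previous   # logs from the previous restart, if any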
Exposing Your Application
Pods have IP addresses, but these addresses change when pods are recreated. To provide stable access to your application, you need a Service. A Service creates a stable endpoint that routes traffic to your pods, even as individual pods come and go.
Creating a Service
Create a file called nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP

This Service selects pods with the label app: nginx (matching your Deployment’s pods) and exposes them on port 80. The type: ClusterIP makes the service accessible only within the cluster. Apply it:
kubectl apply -f nginx-service.yaml

Now you can access your nginx application from within the cluster using the service name nginx-service. Kubernetes provides DNS resolution for services, so you can use nginx-service as a hostname from any pod in the same namespace.
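One way to verify the DNS name is to issue a request from a temporary pod inside the cluster; the curlimages/curl image used here is simply a convenient choice:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl \
  -- curl http://nginx-service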
Accessing from Outside the Cluster
For applications that need to be accessible from the internet, use a LoadBalancer service. Thalassa Cloud automatically provisions a load balancer when you create a LoadBalancer service. Update your service:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

After applying this change, Thalassa Cloud creates a load balancer and assigns it a public IP address. You can find the external IP:
kubectl get service nginx-service

The EXTERNAL-IP column shows the address you can use to access your application from the internet. It may take a minute or two for the load balancer to be provisioned and the IP to appear.
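Once the external IP appears, a plain HTTP request to it should return the nginx welcome page:

curl http://<EXTERNAL-IP>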
For more details about load balancers and other service types, see the Service Load Balancers documentation.
Updating Your Application
One of Kubernetes’ strengths is handling application updates with zero downtime. When you need to update your application, you have several options.
Rolling Updates
The simplest approach is to update your Deployment manifest and reapply it. For example, to update to a newer nginx version:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.26 # Updated version
        ports:
        - containerPort: 80

Apply the updated manifest:
kubectl apply -f nginx-deployment.yaml

Kubernetes performs a rolling update: it gradually replaces old pods with new ones, ensuring that some pods are always running to serve traffic. You can watch the rollout:
kubectl rollout status deployment/nginx-deployment

If something goes wrong, you can roll back to the previous version:
kubectl rollout undo deployment/nginx-deployment

Using kubectl set image
For quick image updates without editing files, use kubectl set image:
kubectl set image deployment/nginx-deployment nginx=nginx:1.26

This triggers the same rolling update process.
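Kubernetes keeps a revision history for each Deployment, which is what makes kubectl rollout undo possible. You can inspect it at any time:

kubectl rollout history deployment/nginx-deployment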
Scaling Your Application
Kubernetes makes scaling straightforward. To change the number of replicas, update the replicas field in your Deployment and reapply, or use the kubectl scale command:
kubectl scale deployment/nginx-deployment --replicas=5

Kubernetes will create or delete pods to match the desired count. You can also enable automatic scaling using the Horizontal Pod Autoscaler, which adjusts replica counts based on CPU usage or custom metrics. See the Horizontal Pod Autoscaling documentation for details.
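As a sketch of the autoscaling approach, kubectl autoscale creates a HorizontalPodAutoscaler for the Deployment. The thresholds below are illustrative, and CPU-based scaling assumes your pods declare CPU requests and that the cluster runs a metrics server:

kubectl autoscale deployment/nginx-deployment --min=3 --max=10 --cpu-percent=80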
Working with Configuration and Secrets
Most applications need configuration data and sensitive information like API keys or database passwords. Kubernetes provides ConfigMaps for non-sensitive configuration and Secrets for sensitive data.
Using ConfigMaps
Create a ConfigMap to store configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  ENVIRONMENT: production
  LOG_LEVEL: info

Apply it and reference it in your Deployment:
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    envFrom:
    - configMapRef:
        name: app-config

The configuration becomes available as environment variables in your container. Note that envFrom only exposes keys that are valid environment variable names, which is why the ConfigMap uses keys like ENVIRONMENT and LOG_LEVEL rather than file-style names.
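If your application expects configuration as files rather than environment variables (for example, a config.properties file), you can mount the ConfigMap as a volume instead. This is a minimal sketch; the mount path is an arbitrary choice:

spec:
  containers:
  - name: nginx
    image: nginx:1.25
    volumeMounts:
    - name: config
      # Each ConfigMap key appears as a file in this directory
      mountPath: /etc/app-config
  volumes:
  - name: config
    configMap:
      name: app-config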
Using Secrets
For sensitive data, create a Secret:
kubectl create secret generic app-secret \
  --from-literal=api-key=your-secret-key \
  --from-literal=db-password=your-password

Reference it in your Deployment:
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    envFrom:
    - secretRef:
        name: app-secret

Secrets are base64-encoded but not encrypted by default. For production use, consider additional encryption at rest or use external secret management systems.
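Because the values are only base64-encoded, anyone with read access to the Secret can recover them, as a quick check shows:

kubectl get secret app-secret -o jsonpath='{.data.api-key}' | base64 -d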
Managing Persistent Storage
By default, data written to a container’s filesystem is ephemeral—it disappears when the pod is deleted. For applications that need to persist data, such as databases, use PersistentVolumes.
Kubernetes supports different access modes for persistent storage, which determine how volumes can be mounted. The most common access modes are ReadWriteOnce (RWO), which allows the volume to be mounted as read-write by a single node, and ReadWriteMany (RWX), which allows the volume to be mounted as read-write by many nodes simultaneously. For more details about access modes, see the Kubernetes documentation on access modes.
Block Storage for ReadWriteOnce Volumes
Thalassa Cloud provides block storage that you can mount into your pods. Block storage uses the ReadWriteOnce access mode, making it ideal for single-pod workloads like databases or applications that don’t need shared access. First, create a PersistentVolumeClaim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-storage
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Then mount it in your Deployment:
spec:
  containers:
  - name: nginx
    image: nginx:1.25
    volumeMounts:
    - name: storage
      mountPath: /data
  volumes:
  - name: storage
    persistentVolumeClaim:
      claimName: app-storage

The storage persists even if the pod is recreated. Block storage provides high performance and low latency, making it well-suited for database workloads and other IOPS-intensive applications.
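If your cluster offers more than one storage class, you can request a specific one in the claim. The class name below is a placeholder; run kubectl get storageclass to see what your cluster actually provides:

spec:
  accessModes:
  - ReadWriteOnce
  # Hypothetical class name -- list real ones with: kubectl get storageclass
  storageClassName: block-storage
  resources:
    requests:
      storage: 10Gi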
TFS for ReadWriteMany Volumes
For applications that need shared storage accessible by multiple pods simultaneously, use Thalassa Filesystem Service (TFS). TFS provides NFS-based storage that supports the ReadWriteMany access mode, allowing multiple pods to read and write to the same volume concurrently. This is useful for content management systems, shared caches, or any application where multiple instances need access to the same files.
To use TFS with Kubernetes, you’ll need to set up an NFS-based PersistentVolume that points to your TFS export. Create a PersistentVolume that references your TFS endpoint:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: <tfs-endpoint>
    path: /<export-path>
  persistentVolumeReclaimPolicy: Retain

Then create a PersistentVolumeClaim that matches this volume:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-storage
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
  volumeName: tfs-pv

The empty storageClassName prevents the cluster’s default StorageClass from dynamically provisioning a new volume instead of binding to tfs-pv. Mount this claim in multiple pods, and they can all access the same shared storage. TFS is ideal for shared content, configuration files, or application data that needs to be accessible across multiple pod instances. For more information about setting up and using TFS, see the TFS documentation.
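A quick way to confirm the volume is genuinely shared is to write a file from one pod and read it from another. The pod names are placeholders, and /data is assumed to be the mount path in both pods:

kubectl exec <pod-a> -- sh -c 'echo hello > /data/shared.txt'
kubectl exec <pod-b> -- cat /data/shared.txt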
For more information about storage options and advanced configurations, see the Storage documentation.
Best Practices
As you deploy more applications, follow these practices to ensure reliable, maintainable deployments.
- Always use Deployments rather than creating pods directly. Deployments provide self-healing, rolling updates, and rollback capabilities that you don’t get with standalone pods.
- Use meaningful labels on your resources. Labels help you organize and select resources. Common labels include
app,version, andenvironment. These labels also enable your Services to find the right pods. - Set resource requests and limits for your containers. Resource requests help Kubernetes schedule pods effectively, while limits prevent a single pod from consuming all available resources on a node:
containers:
- name: nginx
  image: nginx:1.25
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: "128Mi"
      cpu: "500m"

- Use namespaces to organize your resources. Namespaces provide logical separation between different applications or environments. Create separate namespaces for development, staging, and production.
- Implement health checks using liveness and readiness probes. These probes help Kubernetes determine when a pod is healthy and ready to receive traffic:
containers:
- name: nginx
  image: nginx:1.25
  livenessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 30
    periodSeconds: 10
  readinessProbe:
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 5
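As noted in the labels practice above, consistent labels pay off when selecting resources: most kubectl commands accept a label selector with -l:

kubectl get pods -l app=nginx
kubectl get pods -l app=nginx,environment=production   # all labels must match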
Next Steps

You now understand the basics of deploying applications to Kubernetes. As you build more complex applications, explore additional topics:
- The Networking documentation explains how to configure advanced networking scenarios and network policies for security.
- The Security documentation covers securing your workloads with RBAC, pod security standards, and network policies.
- For applications that need to run across multiple zones for high availability, see the Highly Available Deployments guide.
- The Storage documentation provides details about persistent volumes, storage classes, and volume snapshots.
- To automate your deployments, consider setting up GitOps workflows. Thalassa Cloud supports GitOps using FluxCD—see the GitOps documentation for details.
- For comprehensive Kubernetes concepts and advanced topics, the official Kubernetes documentation provides detailed explanations that apply to Thalassa Cloud clusters.