Deploy n8n on Thalassa Cloud Kubernetes
n8n is an open-source workflow automation platform that allows you to connect different services and automate tasks without writing code. By deploying n8n on Thalassa Cloud Kubernetes using Cloud Native PostgreSQL, you can run a production-ready workflow automation platform with high availability, automated backups, and seamless integration with Kubernetes features.
This guide walks you through deploying n8n using Kubernetes manifests, configuring it to use Cloud Native PostgreSQL for the database, and setting up ingress with TLS certificates.
Prerequisites
Before deploying n8n, ensure you have a few things in place:

- A running Kubernetes cluster in Thalassa Cloud. If you're new to Thalassa Cloud Kubernetes, see the Getting Started guide for cluster creation and basic setup.
- Cluster access configured using `kubectl`. Use `tcloud kubernetes connect` to configure access, or set up kubeconfig manually. You'll need cluster administrator permissions to create namespaces and deploy resources.
- Cloud Native PostgreSQL installed in your cluster. If you haven't installed it yet, follow the Cloud Native PostgreSQL guide to set it up. This guide assumes Cloud Native PostgreSQL is installed and ready to use.
- Cert Manager installed with Let's Encrypt configured. See the Cert Manager and Let's Encrypt guide for installation and configuration instructions.
- Sufficient cluster resources. n8n requires CPU, memory, and storage for the application and database. Plan for at least one node with adequate resources, and consider using dedicated node pools for database instances. For information about storage options, see the Storage documentation and Persistent Volumes documentation. A quick capacity check is sketched below.
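A minimal way to check node capacity before deploying; the last command assumes the optional metrics-server add-on is installed:

```bash
# List nodes and their status
kubectl get nodes -o wide

# Show allocatable CPU and memory per node
kubectl describe nodes | grep -A 5 Allocatable

# Show current usage (requires the metrics-server add-on)
kubectl top nodes
```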
Setting Up PostgreSQL Database
Before deploying n8n, set up a PostgreSQL database cluster using Cloud Native PostgreSQL. This provides a reliable, high-availability database for your n8n instance. This section provides a quick setup; for PostgreSQL configuration options, high availability setup, and backup configuration, see the Cloud Native PostgreSQL guide.
Step 1: Create Namespace
Create a namespace for n8n resources with pod security labels:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: n8n
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```

Apply the namespace:

```bash
kubectl apply -f namespace.yaml
```

Step 2: Create Database Credentials Secret
Create a Kubernetes Secret for the database user credentials. CloudNativePG will use this secret to set the password for the database user:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: n8n-app
  namespace: n8n
type: Opaque
stringData:
  user: n8n
  password: your-secure-password
```

Secure Password Management
Use a strong, unique password for the database user. Store the password in a Kubernetes Secret rather than hardcoding it. CloudNativePG will use this secret to set up the database user automatically.
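One way to generate this manifest with a random password, assuming `openssl` is available on your workstation:

```bash
# Generate the secret manifest with a random 24-byte password
kubectl create secret generic n8n-app -n n8n \
  --from-literal=user=n8n \
  --from-literal=password="$(openssl rand -base64 24)" \
  --dry-run=client -o yaml > db-credentials-secret.yaml
```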
Apply the secret:
```bash
kubectl apply -f db-credentials-secret.yaml
```

Step 3: Create PostgreSQL Cluster
Create a PostgreSQL cluster for n8n. This example creates a cluster with 2 instances for high availability:
```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: n8n
  namespace: n8n
spec:
  instances: 2
  postgresql:
    parameters:
      max_connections: "100"
  bootstrap:
    initdb:
      database: n8n
      owner: n8n
      secret:
        name: n8n-app
  storage:
    size: 5Gi
    storageClass: tc-block
  resources:
    requests:
      memory: "512Mi"
      cpu: "50m"
    limits:
      memory: "1Gi"
      cpu: "1000m"
```

The `bootstrap.initdb` section tells CloudNativePG to automatically create a database named `n8n` with an owner `n8n`, using the credentials from the secret. CloudNativePG will create the database and user during cluster initialization, so you don't need to run SQL commands manually.
Save this to postgres-cluster.yaml and apply it:
```bash
kubectl apply -f postgres-cluster.yaml
```

Step 4: Wait for Cluster to be Ready
Wait for the cluster to be ready:
```bash
kubectl wait --for=condition=Ready cluster/n8n -n n8n --timeout=300s
```

CloudNativePG automatically creates the database and user during cluster initialization. The credentials are stored in the secret you created, and CloudNativePG uses them to set up the database owner.
Automatic Database and User Creation
CloudNativePG’s bootstrap configuration automatically creates the database and user specified in the bootstrap.initdb section. This eliminates the need to manually run SQL commands to create the database, user, and grant privileges. The database and user are ready to use once the cluster is ready.
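To confirm the database is ready and see the endpoints CloudNativePG created, you can inspect the Cluster resource and its services; the `n8n-rw` (read-write) service is the one the n8n deployment connects to later:

```bash
# Cluster status should report a healthy state with 2 ready instances
kubectl get cluster n8n -n n8n

# CloudNativePG creates n8n-rw, n8n-ro, and n8n-r services
kubectl get svc -n n8n
```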
Creating Persistent Storage
n8n requires persistent storage for workflow data, credentials, and configuration.
Step 1: Create PersistentVolumeClaim
Create a PersistentVolumeClaim:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: n8n-claim0
  namespace: n8n
  labels:
    service: n8n-claim0
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: tc-block
```

Apply the PersistentVolumeClaim:

```bash
kubectl apply -f pvc.yaml
```

Step 2: Verify PVC is Bound
Verify that the PVC is bound:
```bash
kubectl get pvc -n n8n
```

The default storage class `tc-block` provides high-performance block storage suitable for n8n's storage needs. For information about storage classes and resizing volumes, see the Storage Classes documentation and the Resize Persistent Volume guide.
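If you later need more space and the storage class supports online volume expansion (verify this for `tc-block` in the Storage Classes documentation), the volume can be grown by patching the PVC's requested size:

```bash
# Request a larger size; the CSI driver expands the volume only if
# the storage class has allowVolumeExpansion: true
kubectl patch pvc n8n-claim0 -n n8n \
  --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
```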
Deploying n8n
Step 1: Create Deployment Manifest
Create a Deployment for n8n. This example configures n8n with PostgreSQL database connection and proper security context:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
  namespace: n8n
  labels:
    service: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      service: n8n
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: n8n
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        runAsNonRoot: true
      containers:
        - name: n8n
          image: n8nio/n8n:latest
          ports:
            - containerPort: 5678
          resources:
            limits:
              memory: "500Mi"
            requests:
              memory: "250Mi"
              cpu: "50m"
          command:
            - /bin/sh
          args:
            - -c
            - n8n start
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: false
          env:
            - name: DB_TYPE
              value: postgresdb
            - name: DB_POSTGRESDB_HOST
              value: n8n-rw.n8n.svc.cluster.local
            - name: DB_POSTGRESDB_PORT
              value: "5432"
            - name: DB_POSTGRESDB_DATABASE
              value: n8n
            - name: DB_POSTGRESDB_USER
              valueFrom:
                secretKeyRef:
                  name: n8n-app
                  key: user
            - name: DB_POSTGRESDB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: n8n-app
                  key: password
            - name: N8N_PROTOCOL
              value: http
            - name: N8N_PORT
              value: "5678"
            - name: N8N_EDITOR_BASE_URL
              value: https://n8n.example.com
            - name: N8N_HOST
              value: n8n.example.com
            - name: VUE_APP_URL_BASE_API
              value: https://n8n.example.com/
          volumeMounts:
            - mountPath: /home/node/.n8n
              name: n8n-claim0
      restartPolicy: Always
      volumes:
        - name: n8n-claim0
          persistentVolumeClaim:
            claimName: n8n-claim0
```

Replace `n8n.example.com` with your actual domain name. The deployment uses:
- Security context with non-root user
- PostgreSQL database connection using the CloudNativePG cluster
- Persistent storage for workflow data
- Resource limits and requests
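The manifest keeps things minimal and defines no health probes. As an optional hardening step, you could probe n8n's `/healthz` endpoint; this is a sketch only, and the endpoint's availability should be verified for the n8n version you deploy:

```yaml
# Sketch: add under the n8n container spec; adjust timings and
# verify the /healthz endpoint exists in your n8n version.
livenessProbe:
  httpGet:
    path: /healthz
    port: 5678
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 5678
  initialDelaySeconds: 10
  periodSeconds: 5
```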
SMTP Configuration
If you have an SMTP server available, you can add SMTP environment variables to enable email notifications. Add the following environment variables to the deployment:
- `N8N_SMTP_HOST`: Your SMTP server hostname
- `N8N_SMTP_PORT`: SMTP port (typically 25, 587, or 465)
- `N8N_SMTP_SSL`: Set to `"true"` or `"false"`
- `N8N_SMTP_SENDER`: Sender email address
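A sketch of the corresponding `env` entries, using a hypothetical mail server at `smtp.example.com`:

```yaml
# Hypothetical SMTP settings; replace host, port, and sender
# with values for your own mail server.
- name: N8N_SMTP_HOST
  value: smtp.example.com
- name: N8N_SMTP_PORT
  value: "587"
- name: N8N_SMTP_SSL
  value: "false"
- name: N8N_SMTP_SENDER
  value: n8n@example.com
```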
Step 2: Apply the Deployment
Apply the deployment:
```bash
kubectl apply -f deployment.yaml
```

Step 3: Verify n8n is Running
Verify that n8n is running:
```bash
kubectl get pods -n n8n
```

Wait for the pod to be in the Running state before proceeding.
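If the pod stays in `CrashLoopBackOff` or `Pending`, the logs and events usually point at the cause (for example, a failed database connection):

```bash
# Follow the n8n container logs
kubectl logs -n n8n deploy/n8n -f

# Inspect events if the pod is not scheduled or keeps restarting
kubectl describe pod -n n8n -l service=n8n
```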
Creating the Service
Step 1: Create Service Manifest
Create a Service to expose n8n within the cluster:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    service: n8n
  name: n8n
  namespace: n8n
spec:
  type: ClusterIP
  ports:
    - name: "5678"
      port: 5678
      targetPort: 5678
      protocol: TCP
  selector:
    service: n8n
```

Step 2: Apply the Service
Apply the service:
```bash
kubectl apply -f service.yaml
```

Step 3: Verify the Service
Verify the service:
```bash
kubectl get svc -n n8n
```

Configuring Ingress and TLS
n8n needs to be accessible from outside the cluster. Configure an ingress resource to expose n8n, and use Cert Manager to automatically provision TLS certificates.
Step 1: Create Ingress Manifest
Create an ingress resource with TLS and IP whitelisting:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: n8n
  namespace: n8n
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    # replace with your own IP range
    nginx.ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/32"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "20k"
    nginx.ingress.kubernetes.io/client-body-buffer-size: 1M
spec:
  ingressClassName: nginx
  rules:
    - host: n8n.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: n8n
                port:
                  number: 5678
  tls:
    - hosts:
        - n8n.example.com
      secretName: n8n-tls
```

Replace `n8n.example.com` with your actual domain name. The ingress includes:
- Automatic TLS certificate provisioning via Cert Manager
- IP whitelisting for additional security
- SSL redirect to enforce HTTPS
- Proxy configuration for large payloads
IP Whitelisting
The ingress includes IP whitelisting to restrict access to trusted IP addresses. Update the whitelist-source-range annotation with your trusted IP addresses. For production deployments, consider using additional security measures such as VPN access or network policies.
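As a sketch of the network-policy option, the following restricts traffic to the n8n pods to the ingress controller's namespace; it assumes the controller runs in a namespace named `ingress-nginx` and that your CNI enforces NetworkPolicy (both assumptions to verify for your cluster):

```yaml
# Sketch: only pods in the (assumed) ingress-nginx namespace
# may reach the n8n pods on port 5678.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: n8n-ingress-only
  namespace: n8n
spec:
  podSelector:
    matchLabels:
      service: n8n
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - port: 5678
```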
Step 2: Apply the Ingress
Apply the ingress:
```bash
kubectl apply -f ingress.yaml
```

Step 3: Verify TLS Certificate
Cert Manager will automatically create and manage the TLS certificate. Verify the certificate:
```bash
kubectl get certificate -n n8n
```

Wait for the certificate to be ready (status should show Ready). This may take a few minutes.
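If the certificate does not become ready, cert-manager's intermediate resources usually show why (for example, a failing HTTP-01 challenge because DNS does not yet point at the ingress):

```bash
# Inspect the certificate and its events
kubectl describe certificate n8n-tls -n n8n

# Check in-progress ACME orders and challenges
kubectl get certificaterequests,orders,challenges -n n8n
```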
Step 4: Access n8n
Once the certificate is ready, you can access n8n at your configured domain (e.g., https://n8n.example.com).
Initial Setup
Once n8n is accessible, complete the initial setup through the web interface. Navigate to your configured domain in a web browser.
On first access, n8n will prompt you to:
- Create an administrator account
- Configure basic settings
- Start creating workflows
First Login
The initial setup creates the first user account, which becomes the administrator. Make sure to use a strong password and enable additional security features if available.
Backup Configuration
Regular backups are essential for production n8n deployments. Cloud Native PostgreSQL handles database backups automatically when configured. For n8n workflow data, implement a backup strategy.
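One option for workflow data is the n8n CLI's export command, run inside the pod; a sketch (the backup filename here is an arbitrary choice):

```bash
# Export all workflows to a file on the persistent volume
kubectl exec -n n8n deploy/n8n -- \
  n8n export:workflow --all --output=/home/node/.n8n/workflows-backup.json

# Copy the export out of the pod to your local machine
kubectl cp n8n/$(kubectl get pod -n n8n -l service=n8n \
  -o jsonpath='{.items[0].metadata.name}'):/home/node/.n8n/workflows-backup.json \
  ./workflows-backup.json
```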
Database Backups
Configure automated backups for the PostgreSQL cluster. See the Cloud Native PostgreSQL guide for detailed backup configuration, including automated backup schedules, retention policies, and restore procedures.
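Once a backup destination is configured on the Cluster (for example, object storage via `barmanObjectStore`, as covered in that guide), a ScheduledBackup resource triggers recurring backups; a minimal sketch:

```yaml
# Sketch: daily backup at midnight (six-field cron, seconds first).
# Requires spec.backup to be configured on the n8n Cluster first.
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: n8n-daily
  namespace: n8n
spec:
  schedule: "0 0 0 * * *"
  backupOwnerReference: self
  cluster:
    name: n8n
```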
Upgrading n8n
To upgrade n8n to a newer version, update the image tag in the Deployment:
```yaml
containers:
  - name: n8n
    image: n8nio/n8n:1.119.0 # Updated version
```

Apply the updated deployment:
```bash
kubectl apply -f deployment.yaml
```

Because the Deployment uses the `Recreate` strategy (a common choice with `ReadWriteOnce` volumes), Kubernetes stops the old pod before starting the replacement, so expect a brief interruption rather than a zero-downtime rolling update.
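You can watch the rollout and confirm the new version is running:

```bash
# Wait for the new pod to become ready
kubectl rollout status deployment/n8n -n n8n

# Confirm the running image version
kubectl get deployment n8n -n n8n \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```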
Backup Before Upgrades
Always create backups of both the database and workflows before upgrading n8n. This ensures you can roll back if the upgrade causes issues.
Conclusion
You now have n8n deployed on Thalassa Cloud Kubernetes using Cloud Native PostgreSQL. By following this guide, you've set up n8n with persistent storage, a high-availability database, and proper ingress configuration with TLS. This foundation supports a reliable n8n deployment that can scale with your needs.
For more information about n8n features and configuration, see the official n8n documentation.