Deploying Thalassa Kubernetes Service with Terraform
Terraform enables you to define and manage your Thalassa Cloud Kubernetes clusters as code, providing version control, repeatability, and automated infrastructure management. Using the official Thalassa Cloud Kubernetes Terraform module, you can provision production-ready Kubernetes clusters with configurable node pools, networking, and advanced features.
This guide walks you through using Terraform (or OpenTofu) to deploy Thalassa Cloud Kubernetes Service clusters. By following this guide, you’ll understand how to define cluster configurations, manage node pools, and integrate Kubernetes clusters into your infrastructure-as-code workflows.
Prerequisites
Before deploying Kubernetes clusters with Terraform, ensure you have the following:
Terraform or OpenTofu: Terraform 1.0 or later, or OpenTofu installed on your local machine. Both tools are compatible with the Thalassa Cloud provider and module.
Thalassa Cloud Access: Access to a Thalassa Cloud organisation with appropriate permissions to create Kubernetes clusters and manage infrastructure resources.
API Credentials: Thalassa Cloud API credentials configured. You can use either:
- Personal Access Token (PAT)
- Client credentials (access key and secret key)
For production use, consider using a Service Account with dedicated credentials. See the Terraform Getting Started guide for authentication setup.
Networking Resources: A VPC and subnet where you’ll deploy the Kubernetes cluster. If you don’t have these yet, you can create them using Terraform or the Thalassa Cloud Console.
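If you want to create these networking resources with Terraform as well, a minimal sketch could look like the following. Note that the `thalassa_vpc` and `thalassa_subnet` resource names and arguments are assumptions for illustration; check the provider documentation for the exact schema:

```hcl
# Hypothetical resource names and arguments -- verify against the
# Thalassa Cloud provider documentation before using.
resource "thalassa_vpc" "main" {
  name   = "k8s-vpc"
  region = "nl-01"
}

resource "thalassa_subnet" "nodes" {
  name   = "k8s-nodes"
  vpc_id = thalassa_vpc.main.id
  cidr   = "10.0.0.0/20"
}
```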
Provider Configuration
First, configure the Thalassa Cloud Terraform provider in your Terraform configuration:
```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    thalassa = {
      source  = "thalassa-cloud/thalassa"
      version = ">= 0.8"
    }
  }
}

provider "thalassa" {
  organisation_id = var.organisation_id
  token           = var.thalassa_token

  # Optional: API endpoint (defaults to https://api.thalassa.cloud)
  # api = "https://api.thalassa.cloud"
}

variable "organisation_id" {
  description = "Thalassa Cloud organisation ID"
  type        = string
}

variable "thalassa_token" {
  description = "Thalassa Cloud API token"
  type        = string
  sensitive   = true
}
```

Authentication Methods
You can authenticate using environment variables (THALASSA_TOKEN, THALASSA_ORGANISATION_ID) or provider configuration. For production, use service accounts or OIDC federation. See the Terraform Getting Started guide for details.
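For example, when credentials come from the environment, the provider block can stay free of secrets:

```bash
export THALASSA_TOKEN="your-token"
export THALASSA_ORGANISATION_ID="your-org-id"
```

```hcl
# Credentials are read from THALASSA_TOKEN and THALASSA_ORGANISATION_ID,
# so nothing sensitive needs to appear in the configuration itself.
provider "thalassa" {}
```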
Basic Cluster Deployment
The simplest way to deploy a Kubernetes cluster is using the Terraform module with minimal configuration:
module "kubernetes" {
source = "thalassa-cloud/kubernetes/thalassa"
version = "~> 0.3.0"
organisation_id = var.organisation_id
name = "my-cluster"
description = "Production Kubernetes cluster"
region = "nl-01"
subnet_id = var.subnet_id
labels = {
environment = "production"
team = "platform"
}
}
variable "subnet_id" {
description = "Subnet ID for the Kubernetes cluster"
type = string
}This creates a Kubernetes cluster with default settings:
- CNI: Cilium (default)
- Kubernetes version: 1.33 (default)
- No node pools (you’ll need to add them separately or configure them in the module)
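To supply the variable values, you can create a `terraform.tfvars` file (the values below are placeholders). Since `thalassa_token` is marked sensitive, prefer passing it via the standard `TF_VAR_thalassa_token` environment variable rather than writing it to disk:

```hcl
# terraform.tfvars -- placeholder values, replace with your own
organisation_id = "your-org-id"
subnet_id       = "your-subnet-id"
```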
Initializing and Applying
Initialize Terraform to download the provider and module:
```bash
terraform init
```

Review the planned changes:

```bash
terraform plan
```

Apply the configuration to create the cluster:

```bash
terraform apply
```

Advanced Configuration with Node Pools
For production deployments, configure node pools as part of the cluster definition:
module "kubernetes" {
source = "thalassa-cloud/kubernetes/thalassa"
version = "~> 0.3.0"
organisation_id = var.organisation_id
name = "production-cluster"
description = "Production Kubernetes cluster with multiple node pools"
region = "nl-01"
subnet_id = var.subnet_id
# Cluster configuration
cni = "cilium"
cluster_version = "1.33"
labels = {
environment = "production"
team = "platform"
}
# Node pools configuration
nodepools = {
"system" = {
machine_type = "pgp-large"
availability_zones = ["nl-01a", "nl-01b"]
replicas = 2
subnet_id = var.subnet_id
enable_autohealing = true
labels = {
node-pool = "system"
}
}
"workers" = {
machine_type = "pgp-medium"
availability_zones = ["nl-01a", "nl-01b", "nl-01c"]
replicas = 3
subnet_id = var.subnet_id
# Auto scaling
enable_auto_scaling = true
min_replicas = 3
max_replicas = 10
# Auto healing
enable_autohealing = true
# Node labels and taints
node_labels = {
node-pool = "workers"
workload = "general"
}
node_taints = [
{
key = "workload"
value = "general"
effect = "NoSchedule"
}
]
}
}
}Node Pool Configuration Options
Node pools support extensive configuration options:
| Option | Description | Type | Default |
|---|---|---|---|
| `machine_type` | Machine type for nodes (required) | `string` | - |
| `availability_zones` | List of availability zones (required) | `list(string)` | - |
| `replicas` | Number of nodes in the pool | `number` | `1` |
| `subnet_id` | Subnet for the node pool (required) | `string` | - |
| `enable_auto_scaling` | Enable automatic scaling | `bool` | `false` |
| `min_replicas` | Minimum nodes when auto-scaling | `number` | - |
| `max_replicas` | Maximum nodes when auto-scaling | `number` | - |
| `enable_autohealing` | Enable automatic node healing | `bool` | `false` |
| `kubernetes_version` | Kubernetes version for nodes | `string` | cluster version |
| `upgrade_strategy` | Node upgrade strategy | `string` | `"always"` |
| `node_labels` | Labels to apply to nodes | `map(string)` | `{}` |
| `node_annotations` | Annotations to apply to nodes | `map(string)` | `{}` |
| `node_taints` | Taints to apply to nodes | `list(object)` | `[]` |
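For instance, a node pool entry that pins the node Kubernetes version, sets an upgrade strategy, and adds annotations might look like this (the pool name and values are illustrative):

```hcl
"batch" = {
  machine_type       = "pgp-medium"
  availability_zones = ["nl-01a"]
  replicas           = 2
  subnet_id          = var.subnet_id

  # Pin the node version and control how upgrades roll out
  kubernetes_version = "1.33"
  upgrade_strategy   = "always"

  # Annotations applied to the Kubernetes node objects
  node_annotations = {
    "example.com/owner" = "batch-team"
  }
}
```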
Complete Example
Here’s a complete example that includes provider configuration, variables, and a production-ready cluster:
```hcl
terraform {
  required_version = ">= 1.0"

  required_providers {
    thalassa = {
      source  = "thalassa-cloud/thalassa"
      version = ">= 0.8"
    }
  }
}

provider "thalassa" {
  organisation_id = var.organisation_id
  token           = var.thalassa_token
}

variable "organisation_id" {
  description = "Thalassa Cloud organisation ID"
  type        = string
}

variable "thalassa_token" {
  description = "Thalassa Cloud API token"
  type        = string
  sensitive   = true
}

variable "subnet_id" {
  description = "Subnet ID for the Kubernetes cluster"
  type        = string
}

module "kubernetes" {
  source  = "thalassa-cloud/kubernetes/thalassa"
  version = "~> 0.3.0"

  organisation_id = var.organisation_id

  name        = "production-cluster"
  description = "Production Kubernetes cluster"
  region      = "nl-01"
  subnet_id   = var.subnet_id

  cni             = "cilium"
  cluster_version = "1.33"

  labels = {
    environment = "production"
    team        = "platform"
    managed-by  = "terraform"
  }

  nodepools = {
    "system" = {
      machine_type       = "pgp-large"
      availability_zones = ["nl-01a", "nl-01b"]
      replicas           = 2
      subnet_id          = var.subnet_id
      enable_autohealing = true

      labels = {
        node-pool = "system"
      }
    }

    "workers" = {
      machine_type       = "pgp-medium"
      availability_zones = ["nl-01a", "nl-01b", "nl-01c"]
      replicas           = 3
      subnet_id          = var.subnet_id

      enable_auto_scaling = true
      min_replicas        = 3
      max_replicas        = 10
      enable_autohealing  = true

      node_labels = {
        node-pool = "workers"
        workload  = "general"
      }
    }
  }
}

# Outputs
output "cluster_id" {
  description = "The ID of the created Kubernetes cluster"
  value       = module.kubernetes.cluster_id
}

output "cluster_name" {
  description = "The name of the created Kubernetes cluster"
  value       = module.kubernetes.cluster_name
}

output "cluster_region" {
  description = "The region where the cluster is deployed"
  value       = module.kubernetes.cluster_region
}
```

Module Outputs
The Kubernetes module provides several outputs that you can use in other Terraform resources:
output "cluster_id" {
description = "The ID of the created Kubernetes cluster"
value = module.kubernetes.cluster_id
}
output "cluster_name" {
description = "The name of the created Kubernetes cluster"
value = module.kubernetes.cluster_name
}
output "cluster_region" {
description = "The region where the cluster is deployed"
value = module.kubernetes.cluster_region
}
output "cluster_version" {
description = "The Kubernetes version of the cluster"
value = module.kubernetes.cluster_version
}
output "cluster_cni" {
description = "The CNI used by the cluster"
value = module.kubernetes.cluster_cni
}
output "nodepool_ids" {
description = "Map of node pool names to their IDs"
value = module.kubernetes.nodepool_ids
}Using OpenTofu
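For example, other parts of your configuration can consume these outputs. The sketch below uses only Terraform built-ins and the hashicorp/local provider to record cluster metadata; it is an illustration, not part of the module:

```hcl
locals {
  # Derive a reference string from the module outputs
  cluster_ref = "${module.kubernetes.cluster_name}-${module.kubernetes.cluster_region}"
}

# Requires the hashicorp/local provider
resource "local_file" "cluster_info" {
  filename = "${path.module}/cluster-info.txt"
  content  = "id=${module.kubernetes.cluster_id} version=${module.kubernetes.cluster_version}"
}
```

Using OpenTofu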
The Thalassa Cloud Kubernetes module is fully compatible with OpenTofu, an open-source fork of Terraform. You can use OpenTofu as a drop-in replacement:
```bash
# Install OpenTofu (example for macOS)
brew install opentofu

# Use OpenTofu commands instead of Terraform
tofu init
tofu plan
tofu apply
```

The configuration files and module usage are identical between Terraform and OpenTofu.
Managing Cluster Lifecycle
Updating Cluster Configuration
To update cluster configuration, modify your Terraform files and apply:
```bash
terraform plan   # Review changes
terraform apply  # Apply changes
```

Scaling Node Pools
Update the replicas count or auto-scaling configuration:
```hcl
nodepools = {
  "workers" = {
    # ...
    replicas = 5 # Increase from 3

    # Or use auto-scaling
    enable_auto_scaling = true
    min_replicas        = 3
    max_replicas        = 20 # Increase from 10
  }
}
```

Upgrading Kubernetes Version
Update `cluster_version` to upgrade the cluster:
module "kubernetes" {
# ...
cluster_version = "1.34" # Upgrade from 1.33
}You can also leave the cluster_version empty when combined with an upgrade maintenace schedule, which will allow for automatically updating the Kubernetes Cluster version. When no maintenance schedule has been configured, you can update the cluster also through the Console or CLI with tcloud.
Version Compatibility
Ensure your workloads and add-ons are compatible with the target Kubernetes version before upgrading. Test upgrades in non-production environments first.
Destroying Resources
To destroy the cluster and all associated resources:
```bash
terraform destroy
```

Data Loss
Destroying a cluster will delete all workloads and data. Ensure you have backups before destroying production clusters.
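Before destroying production resources, you can preview exactly which resources would be removed by running a speculative destroy plan:

```bash
# Show what `terraform destroy` would remove, without removing anything
terraform plan -destroy
```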
Troubleshooting
Provider Authentication Issues
If you encounter authentication errors:
- Verify your API token is valid and has the required permissions
- Check that `organisation_id` is correct
- Ensure environment variables are set if you're using them:

```bash
export THALASSA_TOKEN="your-token"
export THALASSA_ORGANISATION_ID="your-org-id"
```
Module Version Conflicts
If you see version conflicts:
```bash
# Update the module
terraform init -upgrade
```

Resource Creation Failures
If cluster or node pool creation fails:
- Check the Terraform error messages for specific issues
- Verify subnet and VPC configurations
- Ensure you have sufficient quotas in your organisation
- Review Thalassa Cloud Console for additional error details
Further Reading
For more information, see:
- Thalassa Cloud Kubernetes Terraform Module - Source code and examples
- Terraform Registry - Module documentation
- Terraform Getting Started Guide - General Terraform usage with Thalassa Cloud
- Kubernetes Getting Started Guide - Working with Kubernetes clusters