Creating a Kubernetes Cluster

This guide shows you how to create and configure your first managed Kubernetes cluster in Thalassa Cloud.

Prerequisites

Before you start, you need:

  • Access to your Thalassa Cloud organisation
  • A VPC in your desired region
  • At least one subnet in your VPC
  • Permissions to create Kubernetes clusters

Decide if you need:

  • A public cluster (accessible from the internet)
  • A private cluster (only accessible from your VPC)

Cluster Creation Overview

Creating a cluster involves:

  1. Basic settings: name, region, Kubernetes version
  2. Network: VPC and subnet selection
  3. Node pool: initial compute capacity
  4. Access: who can access the cluster API
  5. Review: check all settings before creating

Step 1: Basic Cluster Configuration

Step 1: Navigate to Kubernetes

  1. Log into your Thalassa Cloud Console
  2. Navigate to Platform → Kubernetes or Compute → Kubernetes
  3. Click “Create Cluster” or “New Cluster”

Step 2: Configure Basic Settings

Enter a name for your cluster (e.g., production-cluster or development-k8s). Add a description if needed. Select the region where you want to deploy (e.g., nl-01 for Netherlands).

Choose the Kubernetes version. Use the latest stable version unless you need a specific version. You can upgrade later, but downgrading is not supported.

Step 3: Continue to Network Configuration

Click “Next” or continue to the network configuration step.

Step 2: Network Configuration

Step 1: Select VPC

Select the VPC for your cluster. Make sure it’s in the same region and has at least one subnet. The VPC provides network isolation for your cluster.

Step 2: Select Subnet

Choose a subnet based on your needs:

  • Private subnet: for internal-only clusters
  • Public subnet: for clusters that need internet access

Consider your security needs when choosing.

Step 3: Configure Networking Options

The CNI plugin is set to Cilium by default, which provides network policy support. Enable Network Policy support if you plan to use Kubernetes Network Policies. Pod CIDR and Service CIDR ranges are usually auto-assigned; you can set custom ranges if needed, as long as they don't overlap with your VPC or other connected networks.
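Network Policies are standard Kubernetes objects enforced by Cilium. As a minimal sketch (the my-app namespace is a placeholder), this default-deny policy blocks all ingress traffic to pods in that namespace once the cluster is up:

# Apply a default-deny ingress policy to an example namespace
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF

You can then add narrower policies that allow only the traffic your workloads actually need.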

Step 4: Continue to Node Pool Configuration

Click “Next” or continue to the node pool configuration.

Step 3: Node Pool Configuration

Step 1: Configure Initial Node Pool

Enter a name for your node pool (e.g., default-pool or worker-pool). Select the instance type based on your workload needs (CPU, memory, storage). Common choices: small, medium, large. You can also use custom sizes.

Set the number of nodes. Start with 2-3 nodes for high availability. You can scale up or down later. Starting small and scaling up is usually cheaper than over-provisioning.

Step 2: Configure Autoscaling (Optional)

Enable autoscaling to automatically adjust node count based on demand. Set the minimum and maximum number of nodes. Choose whether to allow automatic scale-down. Autoscaling helps save costs while handling peak loads.
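Once autoscaling is enabled, you can watch nodes join and leave as demand changes:

# Watch the node list update as the autoscaler adds or removes nodes
kubectl get nodes -w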

Step 3: Configure Additional Settings

You can optionally:

  • Add node labels to organize nodes
  • Configure node taints to control pod scheduling
  • Configure node storage settings

These can be changed after cluster creation, so you can start with defaults; the example below shows how.
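For example, labels and taints can be managed later with standard kubectl commands (the node name and key/value pairs are placeholders):

# Label a node so workloads can target it with a nodeSelector
kubectl label nodes <node-name> workload-type=general

# Taint a node so only pods with a matching toleration are scheduled on it
kubectl taint nodes <node-name> dedicated=database:NoSchedule

# Remove the taint again (note the trailing dash)
kubectl taint nodes <node-name> dedicated=database:NoSchedule-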

Step 4: Continue to Access Configuration

Click “Next” or continue to the access configuration.

Step 4: Access Configuration

Step 1: Configure Cluster Access

Choose how to access your cluster:

  • Public access: API accessible from the internet (protected by authentication). Good for development or access from outside your VPC.
  • Private access: API only accessible from your VPC. More secure, but requires a VPN or bastion host for management.

If you use public access, you can restrict it to specific IP ranges. This blocks access from unknown IP addresses.

Step 2: Configure RBAC (Optional)

Set up role-based access control (see the example after this list):

  • Service Account: Configure default service account settings
  • RBAC Settings: Review default RBAC configuration
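As a sketch using standard Kubernetes RBAC (the namespace and user below are placeholders, not Thalassa-specific values), you can grant read-only access to pods:

# Create a role that can read pods in the my-app namespace
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods -n my-app

# Bind the role to a user
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=jane@example.com -n my-app

# Check what the user is allowed to do
kubectl auth can-i list pods -n my-app --as=jane@example.com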

Step 3: Review and Create

Review all your settings and click “Create Cluster” or “Deploy Cluster”.

Step 5: Wait for Cluster Provisioning

After creating the cluster:

  1. Monitor Status: The cluster will be in a provisioning or creating state
  2. Wait for Completion: Cluster creation typically takes 5-15 minutes
  3. Check Status: Monitor the cluster status in the console, or from the CLI as shown below
  4. Cluster Ready: When status changes to active or running, your cluster is ready
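To check from the CLI, list your clusters and watch for the status to change (this assumes the list output includes a status column):

# List clusters and their current status
tcloud kubernetes list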

Step 6: Configure Cluster Access

Once your cluster is created, configure access:

Using tcloud CLI

# List your clusters
tcloud kubernetes list

# Connect to your cluster
tcloud kubernetes connect <cluster-id>

# Or use interactive selection
tcloud kubernetes connect

After connecting, you can use kubectl commands directly.
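For example, confirm that kubectl now points at the new cluster:

# Show which cluster/context kubectl is currently using
kubectl config current-context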

Using kubectl Directly

# Get kubeconfig
tcloud kubernetes kubeconfig <cluster-id> > ~/.kube/config-cluster

# Set KUBECONFIG environment variable
export KUBECONFIG=~/.kube/config-cluster

# Verify access
kubectl get nodes

Verifying Your Cluster

Test that your cluster is working correctly (a quick smoke test follows these checks):

# Check cluster nodes
kubectl get nodes

# Check cluster info
kubectl cluster-info

# Check system pods
kubectl get pods --all-namespaces

# Check cluster version
kubectl version
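Beyond these read-only checks, a quick smoke test confirms the cluster can schedule and run a workload. This sketch uses the public nginx image as an example:

# Run a test deployment and wait for it to become available
kubectl create deployment smoke-test --image=nginx
kubectl rollout status deployment/smoke-test

# Confirm the pod is running, then clean up
kubectl get pods -l app=smoke-test
kubectl delete deployment smoke-test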

Post-Creation Configuration

After your cluster is running, consider:

  • Creating additional node pools for different workloads (e.g., high-memory for databases, GPU for machine learning)
  • Configuring storage classes for persistent volumes (see the example below)
  • Setting up monitoring and metrics
  • Configuring Kubernetes Network Policies for security
  • Installing additional components (Ingress controllers, cert-manager, etc.)
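For example, you can list the available storage classes and verify dynamic provisioning with a small test PersistentVolumeClaim (the claim name and size are placeholders; it uses the cluster's default storage class):

# List available storage classes (the default is marked)
kubectl get storageclass

# Create a small test claim using the default storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# Check that the claim is bound, then clean up
# (with WaitForFirstConsumer storage classes, the claim stays Pending until a pod uses it)
kubectl get pvc test-claim
kubectl delete pvc test-claim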

Troubleshooting

Cluster Creation Fails

If creation fails:

  1. Check you have enough quota for instances and resources
  2. Verify the VPC and subnet are configured correctly
  3. Check cluster creation logs for error messages
  4. Verify you have permissions to create clusters

Cannot Access Cluster

For public clusters:

  • Check authorized networks are configured correctly
  • Verify your IP is in the allowed range

For private clusters:

  • Make sure you’re accessing from within the VPC
  • Or configure VPN access

For both (the commands below can help):

  • Verify your credentials and permissions
  • Check network connectivity to the API endpoint
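A few standard kubectl commands help narrow down where access is failing:

# Confirm which cluster and credentials kubectl is using
kubectl config current-context
kubectl config view --minify

# Check connectivity and permissions against the API
kubectl cluster-info
kubectl auth can-i list pods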

Nodes Not Joining

If nodes aren’t joining (see also the kubectl checks after this list):

  1. Check node pool status in the console
  2. Verify instances can be created in the subnet
  3. Check for quota issues
  4. Ensure security groups allow Kubernetes traffic
  5. Check for network policy restrictions
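From a machine that can reach the API, these standard kubectl checks help pinpoint missing or unhealthy nodes:

# Compare registered nodes against the expected node pool size
kubectl get nodes -o wide

# Inspect conditions and events on a node that is present but not Ready
kubectl describe node <node-name>

# Look for cluster-wide events such as failed registrations
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp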

Best Practices

Start small and scale up as needed rather than over-provisioning from the start. Deploy nodes across multiple availability zones for high availability. Choose instance types based on your workload needs, considering both current and future requirements.

Enable autoscaling to save costs. Keep your Kubernetes version up to date, and test upgrades in non-production first. Export and back up your cluster configuration regularly (a sketch follows). Set up monitoring and alerts to catch issues early.
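As a rough starting point for configuration exports (kubectl get all covers only common resource types; it omits ConfigMaps, Secrets, and CRDs, so use dedicated backup tooling for production):

# Export namespaced workload resources to a YAML snapshot
kubectl get all --all-namespaces -o yaml > cluster-snapshot.yaml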

Related Documentation