Working with TFS
This guide explains how to create, configure, and use the Thalassa Filesystem Service (TFS) in Thalassa Cloud. TFS provides highly available, multi-availability-zone NFS storage supporting the NFSv4 and NFSv4.1 protocols for shared storage across virtual machines and Kubernetes workloads.
Beta
Thalassa Filesystem Service is currently in Beta.
Prerequisites
- You need an existing VPC and subnet in Thalassa Cloud to deploy TFS. See the VPC documentation for details.
- Make sure you can SSH into your virtual machines, as this is needed to mount NFS exports.
- To use TFS with Kubernetes, you’ll need a working Kubernetes cluster and network access to TFS.
- You must have IAM permissions to create and manage TFS instances. See the IAM permissions documentation if you’re not sure about your access.
Creating TFS and mounting the filesystem
Step 1: Create a TFS Instance
Access the Console
- Log into the Thalassa Cloud Console
- Navigate to IaaS → Storage → TFS
- Click Create TFS Instance
Configure TFS Instance
Configure your TFS instance:
- Name: Choose a descriptive name (e.g., `production-tfs`)
- VPC: Select the VPC where the TFS instance will be deployed
- Subnet: Select a subnet within the VPC
- Description: Optional description of the TFS instance purpose
Configure Security Groups (Optional)
Attach a security group that allows NFS traffic:
- Protocol: TCP
- Port: 2049 (NFS)
- Source: Subnet CIDR or specific IP ranges that need access
Ensure the security group allows inbound NFS traffic from:
- Your VPC subnets (for VM access)
- Kubernetes node subnets (for Kubernetes integration)
- Sources can be specified as a CIDR, an IP address, or another security group
Create NFS Exports
After the TFS instance is created, create NFS exports:
- Navigate to your TFS instance
- Click Create Export
- Configure the export:
  - Export Path: Path for the export (e.g., `/app-data`, `/shared-files`)
  - Description: Optional description
You can create multiple exports from a single TFS instance to organise data by application, team, or purpose.
Step 2: Wait for TFS to be Ready
Check TFS Status
In the console, monitor the TFS instance status. The instance will show:
- Status: `Creating` → `Ready`
- Endpoint: Available when ready
- Port: Typically `2049` for NFS
Wait until the status changes to Ready before attempting to mount.
Verify Endpoint Information
Once ready, note the following information:
- TFS Endpoint: IP address or hostname (e.g., `10.0.1.100` or `tfs-12345678.internal`)
- Port: NFS port (typically `2049`)
- Export Path: The path you configured (e.g., `/app-data`)
Step 3: Mount TFS in a Virtual Machine
Install NFS Client
SSH into your virtual machine and install the NFS client:
Ubuntu/Debian:
```shell
sudo apt update
sudo apt install -y nfs-common
```

CentOS/RHEL/Fedora:

```shell
sudo yum install -y nfs-utils
# or for newer versions
sudo dnf install -y nfs-utils
```

Create Mount Point
Create a directory where the TFS export will be mounted:
```shell
sudo mkdir -p /mnt/tfs
```

Use a descriptive path that indicates the export’s purpose (e.g., `/mnt/app-data`, `/mnt/shared-files`).
Mount the TFS Export
Mount the TFS export using NFSv4.1:
```shell
sudo mount -t nfs4 -o vers=4.1 <tfs-endpoint>:/<export-path> /mnt/tfs
```

Replace:
- `<tfs-endpoint>` with your TFS endpoint IP or hostname
- `<export-path>` with your export path (e.g., `/app-data`)

Example:

```shell
sudo mount -t nfs4 -o vers=4.1 10.0.1.100:/app-data /mnt/tfs
```

Verify Mount
Verify the mount is successful:
```shell
df -h | grep /mnt/tfs
mount | grep /mnt/tfs
```

Test write access:

```shell
sudo touch /mnt/tfs/test-file
sudo rm /mnt/tfs/test-file
```

Configure Persistent Mount
To mount TFS automatically after VM reboots, add it to /etc/fstab:
```shell
sudo nano /etc/fstab
```

Add the following line:

```shell
<tfs-endpoint>:/<export-path> /mnt/tfs nfs4 vers=4.1,defaults,_netdev 0 0
```

Example:

```shell
10.0.1.100:/app-data /mnt/tfs nfs4 vers=4.1,defaults,_netdev 0 0
```

Mount options explained:
- `vers=4.1`: Use the NFSv4.1 protocol
- `defaults`: Use the default mount options
- `_netdev`: Wait for the network to be available before mounting

Test the fstab configuration:

```shell
sudo mount -a
```

If there are no errors, the configuration is correct.
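After a reboot (or after running `sudo mount -a`), the mount can be sanity-checked with a small script before applications start using it. This is an illustrative sketch, not part of TFS itself: the `check_rw_mount` helper name is invented here, and `/mnt/tfs` is the mount point from the examples above.

```shell
#!/usr/bin/env bash
# check_rw_mount DIR — confirm DIR exists and is writable before use.
check_rw_mount() {
  local dir="$1"
  local probe="$dir/.tfs-write-probe.$$"
  if [ ! -d "$dir" ]; then
    echo "FAIL: $dir does not exist"
    return 1
  fi
  # Create and remove a small probe file to verify write access.
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "OK: $dir is writable"
    return 0
  fi
  echo "FAIL: cannot write to $dir"
  return 1
}

# Demonstration against a temporary directory; on your VM you would run:
#   check_rw_mount /mnt/tfs
check_rw_mount "$(mktemp -d)"
```

Running this from a boot-time unit or a monitoring check catches cases where the network came up late and the NFS mount silently failed.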
Integrate TFS with Kubernetes
TFS can be used with Kubernetes through the NFS CSI driver. This lets users provision PersistentVolumeClaims (PVCs) with the ReadWriteMany access mode, so a single PVC can be shared across multiple pods within a namespace.
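As an alternative to the dynamic provisioning flow covered below, a PersistentVolume can also be declared statically using Kubernetes’ built-in NFS volume type. This is a sketch only; the endpoint (`10.0.1.100`), export path (`/app-data`), and size are placeholder values from the earlier examples.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tfs-static-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.1
  nfs:
    server: 10.0.1.100   # placeholder TFS endpoint
    path: /app-data      # placeholder export path
```

A PVC can then bind to this volume by requesting a matching capacity and access mode (with `storageClassName: ""` to opt out of dynamic provisioning).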
Using NFS CSI Driver (Dynamic Provisioning)
Install and configure the Kubernetes NFS CSI driver.
Install NFS CSI Driver
Install the NFS CSI driver using Helm:
```shell
helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm repo update
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs \
  --namespace kube-system \
  --version v4.1.0
```

Verify the installation:

```shell
kubectl get pods -n kube-system | grep csi-nfs
```

Create StorageClass
Create a StorageClass for TFS:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tfs-nfs
provisioner: nfs.csi.k8s.io
parameters:
  server: <tfs-endpoint>
  share: /<export-path>
allowVolumeExpansion: true
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1,_netdev,hard,noatime,timeo=20,retrans=3
```

Replace:
- `<tfs-endpoint>` with your TFS endpoint IP or hostname
- `<export-path>` with your export path
Example:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tfs-nfs
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.1.100
  share: /app-data
allowVolumeExpansion: true
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1,_netdev,hard,noatime,timeo=20,retrans=3
```

Apply the StorageClass:

```shell
kubectl apply -f tfs-storageclass.yaml
```

Create PersistentVolumeClaim
Create a PersistentVolumeClaim using the StorageClass:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: tfs-nfs
  resources:
    requests:
      storage: 100Gi
```

Apply the PersistentVolumeClaim:

```shell
kubectl apply -f tfs-pvc.yaml
```

The NFS CSI driver will automatically create a PersistentVolume and mount it when the claim is attached to a Pod.
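To see ReadWriteMany in action, the same claim can be mounted by several pods at once. The following Deployment is a hedged sketch: the image and mount path are arbitrary placeholders, while `tfs-pvc` matches the claim created above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-data-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: shared-data-demo
  template:
    metadata:
      labels:
        app: shared-data-demo
    spec:
      containers:
        - name: app
          image: nginx:1.27   # placeholder image
          volumeMounts:
            - name: shared-data
              mountPath: /usr/share/nginx/html
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: tfs-pvc
```

All three replicas mount the same NFS-backed volume, so a file written by one pod is immediately visible to the others.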
Best Practices
Mount Options
Common NFS mount options:
- `nfsvers=4.1`: Specifies NFS version 4.1 (recommended)
- `_netdev`: Ensures the mount happens after the network is available
- `hard`: Retries I/O operations indefinitely until the server responds
- `noatime`: Disables updating access time on reads for better performance
- `timeo=20`: Sets the NFS timeout to 2 seconds (the value is in tenths of a second)
- `retrans=3`: Attempts 3 retransmissions before failing
Example with additional options:
```shell
sudo mount -t nfs4 -o vers=4.1,soft,timeo=600,retrans=2 \
  10.0.1.100:/app-data /mnt/tfs
```

Note that this example uses `soft` rather than the recommended `hard`: with `soft`, I/O operations fail after the retries are exhausted instead of blocking indefinitely, which can risk data loss on writes.

References
- TFS Documentation — Overview of TFS capabilities
- Kubernetes NFS CSI Driver — Official NFS CSI driver documentation
- Kubernetes Persistent Volumes — Working with persistent volumes
- Deploying Applications — TFS integration examples