# Metrics Server in Thalassa Cloud Kubernetes
The Metrics Server is a vital component in Thalassa Cloud Kubernetes, providing CPU and memory usage data for running Pods and Nodes. It is primarily used by the Horizontal Pod Autoscaler (HPA) and the `kubectl top` command to enable real-time monitoring and scaling decisions.

Thalassa Cloud pre-installs and fully manages the Metrics Server, ensuring seamless access to resource metrics without requiring additional configuration.
## How Metrics Server Works
Metrics Server collects resource usage statistics from the Kubernetes kubelet API on each node. It then aggregates and exposes this data through the Kubernetes Metrics API, making it available for tools and controllers that require real-time resource monitoring.
Key functions of Metrics Server:

- **Provides CPU and memory metrics**: allows `kubectl top` to display resource consumption.
- **Enables Horizontal Pod Autoscaling (HPA)**: supplies real-time metrics for automated scaling decisions.
- **Lightweight and efficient**: designed to work with large clusters with minimal overhead.
- **Cluster-wide visibility**: ensures access to aggregated resource usage data for monitoring tools.
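As an example of autoscaling driven by these metrics, a minimal HorizontalPodAutoscaler manifest might look like the following. The Deployment name `web-app` and the thresholds are illustrative assumptions, not values from your cluster:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app          # assumes a Deployment with this name exists
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target average CPU utilization in percent
```

The HPA controller reads the CPU utilization of the target Pods through the Metrics API that Metrics Server serves; without Metrics Server, resource-based HPAs cannot function.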
## Key Features
| Feature | Description |
|---|---|
| Real-Time Resource Metrics | Provides live CPU and memory usage of Pods and Nodes. |
| Enables Autoscaling | Supplies data for the Kubernetes Horizontal Pod Autoscaler (HPA). |
| `kubectl top` Integration | Allows real-time monitoring with `kubectl top nodes` and `kubectl top pods`. |
| Lightweight and Efficient | Minimal performance overhead, suitable for production workloads. |
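The HPA's scaling decision is based on a documented formula: `desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue)`. A quick sketch in Python, purely to illustrate the arithmetic:

```python
import math

def desired_replicas(current_replicas: int, current_value: float, target_value: float) -> int:
    """Kubernetes HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentValue / targetValue)."""
    return math.ceil(current_replicas * current_value / target_value)

# 2 Pods averaging 150m CPU against a 100m target -> scale out to 3 replicas
print(desired_replicas(2, 150, 100))  # 3
```

Metrics Server supplies the `current_value` side of this calculation for CPU- and memory-based autoscaling.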
## Usage
### Viewing Node Metrics

To check resource usage for nodes:

```shell
kubectl top nodes
```

Example output:

```
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-node-1   250m         5%     1024Mi          30%
k8s-node-2   300m         6%     1100Mi          32%
```
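The quantities in this output follow Kubernetes conventions: `250m` means 250 millicores and `1024Mi` means 1024 MiB. A small Python sketch parsing such output into structured values (the sample text is copied from the example above; a real script would capture the command's stdout instead):

```python
import re

# Sample output of `kubectl top nodes` (from the example above).
SAMPLE = """\
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-node-1   250m         5%     1024Mi          30%
k8s-node-2   300m         6%     1100Mi          32%
"""

def parse_cpu_millicores(value: str) -> int:
    """Convert a CPU quantity like '250m' or '2' to millicores."""
    return int(value[:-1]) if value.endswith("m") else int(float(value) * 1000)

def parse_memory_mib(value: str) -> int:
    """Convert a memory quantity like '1024Mi' or '2Gi' to MiB."""
    units = {"Ki": 1 / 1024, "Mi": 1, "Gi": 1024}
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(float(value[: -len(suffix)]) * factor)
    raise ValueError(f"unrecognized memory quantity: {value}")

def parse_top_nodes(output: str) -> dict:
    """Parse `kubectl top nodes` output into {node: (cpu_millicores, memory_mib)}."""
    rows = {}
    for line in output.strip().splitlines()[1:]:  # skip the header row
        name, cpu, _cpu_pct, mem, _mem_pct = re.split(r"\s+", line.strip())
        rows[name] = (parse_cpu_millicores(cpu), parse_memory_mib(mem))
    return rows

print(parse_top_nodes(SAMPLE))
# {'k8s-node-1': (250, 1024), 'k8s-node-2': (300, 1100)}
```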
### Viewing Pod Metrics

To check resource usage for Pods in a given namespace:

```shell
kubectl top pods -n my-namespace
```

Example output:

```
NAME        CPU(cores)   MEMORY(bytes)
web-app-1   100m         256Mi
web-app-2   150m         300Mi
```
### Checking if Metrics Server is Running

To verify that the Metrics Server is active in your cluster, run:

```shell
kubectl get deployment -n kube-system metrics-server
```

In a healthy cluster, the deployment reports all of its replicas as ready and available.
## Troubleshooting Metrics Server

If the Metrics Server is not providing data, consider the following checks:

- Ensure the Metrics API is available:

  ```shell
  kubectl get apiservices | grep metrics
  ```

- Check the Metrics Server logs for errors:

  ```shell
  kubectl logs -n kube-system deployment/metrics-server
  ```

- Ensure API communication is not blocked by network policies.