Mastering Production TiDB Cluster Management: Access, Scaling, and Upgrades
This guide walks through accessing a production TiDB cluster via pod IP, Service ClusterIP, or DNS, initializing users and databases, and performing scaling and version upgrades by editing the cluster's YAML configuration in Kubernetes.
Production TiDB Access
After deploying TiDB, two tasks remain: accessing the cluster and initializing it. The cluster can be reached via pod IP, Service ClusterIP, or the service DNS name (${service name}.${namespace}.svc.cluster.local). The commands below use kubectl to list pods and services, then connect with mysql using each method.
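As a quick illustration of the DNS pattern above, the in-cluster name is simply assembled from the service name and namespace; the names below match this guide's examples.

```shell
# Compose the in-cluster DNS name: ${service name}.${namespace}.svc.cluster.local
service="tidb-test-tidb"
namespace="tidb-test"
dns="${service}.${namespace}.svc.cluster.local"
echo "$dns"   # tidb-test-tidb.tidb-test.svc.cluster.local
```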
<code># kubectl get pods -n tidb-test -o wide | grep tidb-test-tidb
# Example output:
# tidb-test-tidb-0 Running 10.xxx.xxx.1 db01
# tidb-test-tidb-1 Running 10.xxx.xxx.162 db03
# tidb-test-tidb-2 Running 10.xxx.xxx.183 db08
# mysql -h 10.xxx.xxx.1 -uroot -p -P4000
</code>
<code># kubectl get services -n tidb-test -o wide | grep tidb-test-tidb
# Example output:
# tidb-test-tidb        NodePort   172.xxx.xxx.213  <none>  4000:32569/TCP,10080:30286/TCP
# tidb-test-tidb-peer   ClusterIP  None             <none>  10080/TCP
# mysql -h 172.xxx.xxx.213 -uroot -p -P4000
</code>
<code># mysql -h tidb-test-tidb-peer.tidb-test.svc.cluster.local -uroot -p -P4000
</code>
For production, connect via the DNS name: it stays stable, whereas a Service ClusterIP can change whenever the Service is recreated.
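For ad-hoc access from outside the cluster (for example, from an operator workstation), kubectl port-forward is another option. This is a sketch for testing only, not a production access path:
<code># kubectl port-forward -n tidb-test svc/tidb-test-tidb 4000:4000
# mysql -h 127.0.0.1 -uroot -p -P4000
</code>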
Initialize Permissions and Create Databases
The default root password is empty and should be changed immediately. Then create a business database and a dedicated user, and grant that user only the privileges it needs:
<code>set password for 'root'@'%' = 'xxxx';
create database user_db;
create user 'user_db_sdml'@'10.%' identified by 'xxxx';
grant select, insert, update, delete on user_db.* to 'user_db_sdml'@'10.%';
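-- Optional check that the account and its grants exist (uses the user created above):
show grants for 'user_db_sdml'@'10.%';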
</code>
TiDB Cluster Scaling
Scaling decisions are driven by monitoring CPU, memory, and storage usage. Horizontal scaling adds more TiKV instances; vertical scaling raises the resources of each instance. In either case, edit the relevant fields in tidb-test.yaml (replicas, requests, limits) and apply the change:
<code>tikv:
  baseImage: pingcap/tikv:v8.4.0
  replicas: 5                   # changed from 3 to 5
  storageClassName: local-storage
  requests:
    cpu: 6000m
    memory: 12Gi
    storage: 1760Gi
  limits:
    cpu: 20000m                 # changed from 14000m to 20000m
    memory: 32Gi
    storage: 1760Gi
</code>
<code># kubectl apply -f tidb-test.yaml -n tidb-test
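# Then watch the rollout until the new TiKV pods are Running
# (a sketch using the namespace from this guide):
# kubectl get pods -n tidb-test -o wide -w | grep tikv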
</code>
TiDB Cluster Upgrade
To upgrade the cluster, change the baseImage fields in tidb-test.yaml to the target version and apply the manifest. Pull the images into an internal registry beforehand so the DB servers need no external network access.
<code>tidb:
  baseImage: pingcap/tidb:v8.4.0
tikv:
  baseImage: pingcap/tikv:v8.4.0
pd:
  baseImage: pingcap/pd:v8.4.0
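# After editing all three baseImage fields, apply and let the operator
# perform the rolling upgrade, then confirm the version from a SQL client.
# These commands are a sketch; replace <tidb-host> with your endpoint:
# kubectl apply -f tidb-test.yaml -n tidb-test
# mysql -h <tidb-host> -uroot -p -P4000 -e "select tidb_version();"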
</code>
Conclusion
Managing a TiDB cluster on Kubernetes is straightforward: all operations are driven by changes to a single YAML file. Using Git to version‑control the YAML mitigates accidental deletions and enables safe rollbacks, leading to faster and more reliable maintenance.
Xiaolei Talks DB
Sharing daily database operations insights, from distributed databases to cloud migration. Author: Dai Xiaolei, with 10+ years of DB ops and development experience. Your support is appreciated.