
Deploying Nginx Ingress on Tencent Cloud TKE: Overview, Deployment Options, and Best Practices

This article provides a comprehensive guide to Nginx Ingress on Tencent Cloud TKE, covering its fundamentals, three deployment architectures, selection criteria, internal and external load‑balancer configurations, bandwidth considerations, Ingress creation, and monitoring with Prometheus.

Author Chen Peng, a Tencent engineer, introduces Nginx Ingress, a widely used Ingress controller for Kubernetes, and explains its role in L7 traffic forwarding.

Nginx Ingress watches Ingress resources, converts rules into Nginx configuration, and forwards traffic at the application layer. Two implementations exist: the community project (ingress‑nginx) and the official Nginx version; the article focuses on the community implementation.

Deployment schemes on TKE

Scheme 1: Deployment + LoadBalancer – Deploy the controller as a Deployment and expose it via a LoadBalancer Service (a NodePort-based CLB). Example installation commands:

```shell
kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-deployment.yaml -n nginx-ingress
```
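The manifest above creates the controller Deployment plus a LoadBalancer Service. A minimal sketch of what that Service roughly looks like (selector labels and port numbers here are assumed from common ingress-nginx conventions, not copied from the manifest):

```yaml
# Sketch: LoadBalancer Service in front of the controller Deployment.
# TKE provisions a CLB that forwards to the Service's NodePorts.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress
    component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```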

Scheme 2: DaemonSet + hostNetwork + LoadBalancer – Run the controller as a DaemonSet with hostNetwork, bind the CLB directly to node IPs, and label dedicated edge nodes for placement. Installation steps include labeling the nodes and applying the DaemonSet manifest:

```shell
kubectl label node 10.0.0.3 nginx-ingress=true
kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-daemonset-hostnetwork.yaml -n nginx-ingress
```
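The DaemonSet fields this scheme relies on can be sketched as follows (an illustrative excerpt, not the full manifest):

```yaml
# Sketch: the parts of the DaemonSet pod spec that make scheme 2 work.
spec:
  template:
    spec:
      hostNetwork: true               # pods bind ports 80/443 directly on the node
      dnsPolicy: ClusterFirstWithHostNet  # keep cluster DNS despite hostNetwork
      nodeSelector:
        nginx-ingress: "true"         # matches the node label applied above
```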

Scheme 3: Deployment + LoadBalancer directly to Pods (ENI) – Use VPC-CNI (ENI) so the CLB can bind directly to pod IPs, eliminating the NodePort hop and enabling automatic scaling. Installation commands:

```shell
kubectl create ns nginx-ingress
kubectl apply -f https://raw.githubusercontent.com/TencentCloudContainerTeam/manifest/master/nginx-ingress/nginx-ingress-deployment-eni.yaml -n nginx-ingress
```
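One way to sanity-check the result (assuming the namespace and Service name used above; run against your own cluster):

```shell
# Verify the controller pods received VPC (ENI) IPs that the CLB can bind to
kubectl get pods -n nginx-ingress -o wide

# Confirm the Service was assigned a CLB address
kubectl get svc -n nginx-ingress nginx-ingress-controller
```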

Selection guidance

Scheme 1 is simple and suitable for small‑scale workloads with modest performance requirements.

Scheme 2 offers better performance with hostNetwork but requires manual CLB and node management.

Scheme 3 provides the best performance and automatic scaling, recommended when the cluster supports VPC‑CNI.

Supporting internal Ingress – For schemes that create a CLB, add the annotation service.kubernetes.io/qcloud-loadbalancer-internal-subnetid with the internal subnet ID to the nginx-ingress-controller Service.
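A sketch of the annotated Service (the subnet ID `subnet-xxxxxxxx` is a placeholder for a real VPC subnet ID, and other fields are elided):

```yaml
# Internal CLB: annotate the controller Service with the target subnet.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    # Placeholder subnet ID; replace with a subnet in your VPC
    service.kubernetes.io/qcloud-loadbalancer-internal-subnetid: subnet-xxxxxxxx
```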

Reusing an existing CLB – Add the annotation service.kubernetes.io/tke-existed-lbid with the CLB ID to the Service definition, e.g.:

```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.kubernetes.io/tke-existed-lbid: lb-6swtxxxx
  labels:
    app: nginx-ingress
    component: controller
  name: nginx-ingress-controller
```

Public bandwidth considerations – Available bandwidth depends on the account type (bandwidth-upstream or not) and on whether the CLB is bound to nodes or to ENI pods. For non-bandwidth-upstream accounts, public bandwidth aggregates across the bound nodes; for bandwidth-upstream accounts, traffic is capped by the CLB's purchased bandwidth (10 Mbps by default).
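As a rough illustration of the non-bandwidth-upstream case (node count and per-node caps here are assumed example values, not TKE defaults):

```shell
# Non-bandwidth-upstream account: public bandwidth aggregates across
# the nodes the CLB is bound to (illustrative numbers).
NODES=3            # nodes bound to the CLB
PER_NODE_MBPS=100  # purchased public bandwidth per node
echo $((NODES * PER_NODE_MBPS))  # aggregate Mbps behind the CLB
```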

Creating an Ingress resource – Since TKE does not yet provide a UI for Nginx Ingress, create Ingress objects via YAML and specify the class annotation:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: "*"
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-v1
          servicePort: 80
```

Monitoring – The controller exposes a metrics port that can be scraped by Prometheus. Example ServiceMonitor and raw Prometheus scrape configuration are provided.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
  labels:
    app: nginx-ingress
    component: controller
spec:
  endpoints:
  - port: metrics
    interval: 10s
  namespaceSelector:
    matchNames:
    - nginx-ingress
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
```

Additionally, a native Prometheus job configuration and Grafana dashboards (nginx.json, request‑handling‑performance.json) are referenced.
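For clusters without the Prometheus Operator, a raw scrape job roughly equivalent to the ServiceMonitor above might look like this (the `app: nginx-ingress` label and `metrics` port name are assumed to match the controller manifests):

```yaml
# Sketch: native Prometheus scrape job for the controller endpoints.
scrape_configs:
- job_name: nginx-ingress
  scrape_interval: 10s
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - nginx-ingress
  relabel_configs:
  # Keep only endpoints of Services labeled app=nginx-ingress
  - source_labels: [__meta_kubernetes_service_label_app]
    regex: nginx-ingress
    action: keep
  # Keep only the port named "metrics"
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    regex: metrics
    action: keep
```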

Conclusion – The article summarizes the three deployment options for Nginx Ingress on TKE, offers practical advice on selection, internal/external load‑balancer usage, bandwidth, Ingress creation, and monitoring, and hints at upcoming productized one‑click deployment support.

Tags: Cloud Native, Kubernetes, Prometheus, TKE, LoadBalancer, Nginx Ingress
Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
