How to Deploy and Troubleshoot Kong & Konga on Kubernetes
This guide walks through deploying Kong and its UI, Konga, on Kubernetes: configuring PostgreSQL, handling common initialization errors, correcting environment variables, and verifying a successful startup, with complete YAML manifests and kubectl commands.
Background
With the maturity of Kubernetes, many users adopt ingress controllers such as ingress‑nginx, Traefik, APISIX, and Kong. Kong can serve as a Kubernetes ingress or as a standalone API gateway.
When used as an ingress, Kong requires both the kong image and the kong/kubernetes-ingress-controller image, plus a ServiceAccount, RBAC rules, and CRDs. When used solely as a gateway, only the Kong image and a ServiceAccount are needed.
Konga supports MySQL, MongoDB, and PostgreSQL as its backing store (Kong itself supports PostgreSQL and Cassandra); this guide uses PostgreSQL and deploys both components inside Kubernetes.
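The manifests below assume the kong namespace already exists, along with the image-pull secrets they reference. A minimal sketch (the namespace and secret names are taken from the manifests; the registry credentials themselves are placeholders):

```yaml
# Namespace used by every Kong/Konga resource in this guide
apiVersion: v1
kind: Namespace
metadata:
  name: kong
```

The registry-auth and registry-auth-kong-pro pull secrets referenced later would typically be created in this namespace with kubectl create secret docker-registry.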
Deploy Konga
<code>cat kong-ui-pre.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    appName: konga-ui-aggre-sit
    appEnv: sit
  name: konga-ui-aggre-sit
  namespace: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      appName: konga-ui-aggre-sit
      appEnv: sit
  template:
    metadata:
      labels:
        appName: konga-ui-aggre-sit
        appEnv: sit
    spec:
      imagePullSecrets:
        - name: registry-auth
      containers:
        - env:
            - name: NODE_ENV
              value: "production"
            - name: DB_ADAPTER
              value: "postgres"
            - name: DB_HOST
              value: "<your-pgsql-host>"
            - name: DB_PORT
              value: "<your-pgsql-port>"
            - name: DB_USER
              value: "kong"
            - name: DB_PASSWORD
              value: "<your-pgsql-password>"
            - name: DB_DATABASE
              value: "konga"
            - name: TOKEN_SECRET
              value: "<random-string>"
            - name: NO_AUTH
              value: "false"
            - name: NODE_TLS_REJECT_UNAUTHORIZED
              value: "0"
          image: registry.ayunw.cn/kube-system/pantsel/konga:0.14.9
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: 1337
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: konga-ui-aggre-sit
          ports:
            - containerPort: 1337
              name: kong-ui
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /
              port: 1337
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      serviceAccountName: kong-serviceaccount</code>
When NODE_ENV=production, the official Konga documentation requires a manual database-preparation step before first startup. The correct initialization command is:
<code>node ./bin/konga.js prepare --adapter postgres --uri postgresql://<user>:<password>@<host>:<port>/konga</code>
Common pitfalls: the connection URI must start with postgresql:// (not postgres://), and any # character in the password must be URL-encoded as %23, or the URI will be parsed incorrectly.
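The percent-encoding can be scripted rather than done by hand. A minimal sketch, assuming a hypothetical password p#ss and placeholder host, port, and user values:

```shell
#!/bin/sh
# Replace every '#' in the password with '%23' so the connection URI parses correctly.
PASSWORD='p#ss'                                        # hypothetical password containing '#'
ENCODED=$(printf '%s' "$PASSWORD" | sed 's/#/%23/g')   # -> p%23ss
echo "postgresql://kong:${ENCODED}@pg.example.com:5432/konga"
# prints postgresql://kong:p%23ss@pg.example.com:5432/konga
```

Note that other URI-reserved characters (@, :, /) would also need encoding; the sed call here handles only the # case the article calls out.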
Deploy Kong
<code>---
# Source: kong-custom-pre-master/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-custom-pre-master
  namespace: kong
  labels:
    appEnv: pre
    appName: kong-custom
    appGroup: kong
spec:
  replicas: 1
  progressDeadlineSeconds: 1800
  minReadySeconds: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 50%
  selector:
    matchLabels:
      appEnv: pre
      appName: kong-custom
      appGroup: kong
  template:
    metadata:
      labels:
        appEnv: pre
        appName: kong-custom
        appGroup: kong
    spec:
      dnsPolicy: ClusterFirst
      terminationGracePeriodSeconds: 10
      serviceAccountName: kong-serviceaccount
      imagePullSecrets:
        - name: registry-auth-kong-pro
      initContainers:
        - name: wait-for-migrations
          image: "registry.ayunw.cn/kong/kong-custom:398-c44f9085"
          command:
            - /bin/sh
            - -c
            # retry until the migration actually succeeds; testing the real
            # exit status (rather than a constant) keeps failures visible
            - until kong migrations bootstrap; do sleep 2; done
          env:
            - name: KONG_DATABASE
              value: "kong" # <-- should be "postgres"
            - name: KONG_PG_USER
              value: "kong"
            - name: KONG_PG_PORT
              value: "<pgsql-port>"
            - name: KONG_PG_PASSWORD
              value: "<pgsql-password>"
            - name: KONG_PG_HOST
              value: "<pgsql-host>"
      containers:
        - name: kong-custom-pre-master
          image: "registry.ayunw.cn/kong/kong-custom:398-c44f9085"
          ports:
            - name: proxy
              containerPort: 8000
              protocol: TCP
            - name: proxy-ssl
              containerPort: 9443
              protocol: TCP
            - name: metrics
              containerPort: 8100
              protocol: TCP
            - name: admin-url
              containerPort: 8444
              protocol: TCP
          resources:
            limits:
              cpu: "5000m"
              memory: "1024Mi"
            requests:
              cpu: "100m"
              memory: "512Mi"
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - -c
                  - kong quit
          env:
            - name: KONG_PROXY_LISTEN
              value: "0.0.0.0:8000, 0.0.0.0:9443 ssl http2"
            - name: KONG_PORT_MAPS
              value: "80:8000, 443:8443"
            - name: KONG_ADMIN_LISTEN
              value: "0.0.0.0:8444 ssl"
            - name: KONG_STATUS_LISTEN
              value: "0.0.0.0:8100"
            - name: KONG_NGINX_WORKER_PROCESSES
              value: "2"
            - name: KONG_ADMIN_ACCESS_LOG
              value: "/dev/stdout"
            - name: KONG_ADMIN_ERROR_LOG
              value: "/dev/stderr"
            - name: KONG_PROXY_ERROR_LOG
              value: "/dev/stderr"
            - name: KONG_DATABASE
              value: "kong" # <-- should be "postgres"
            - name: KONG_PG_USER
              value: "kong"
            - name: KONG_PG_PORT
              value: "<pgsql-port>"
            - name: KONG_PG_PASSWORD
              value: "<pgsql-password>"
            - name: KONG_PG_HOST
              value: "<pgsql-host>"
</code>
Viewing Kong Logs and Errors
<code>kubectl logs -f --tail=20 -n kong kong-custom-sit-9c5cf7b69-4q29l</code>
The logs show a migration error because the KONG_DATABASE environment variable was set to kong instead of postgres. KONG_DATABASE selects the datastore type, not the database name; valid values are postgres, cassandra, or off (DB-less mode). After correcting the variable, Kong starts normally.
Final Note
The Konga manifests are not directly available as raw YAML; they were generated from the Helm chart and then adapted to the specific environment. Careful review of documentation and correct configuration of environment variables can prevent many of the issues described above.
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.