
Implementing Transparent Encrypted Communication with mTLS Using Nginx and Self‑Signed Certificates

This article explains how to secure cross‑data‑center traffic by encrypting it with TLS/mTLS, covering the principles of TLS, certificate authority roles, generating self‑signed certificates with OpenSSL, and configuring Nginx proxies for both HTTP and TCP streams to provide transparent encrypted channels without modifying applications.

When services are deployed across data centers without a dedicated line, their traffic must travel over the public Internet, which calls for two safeguards: restrict access to known IPs (firewall/iptables rules) and encrypt the traffic in transit.

TLS/SSL Basics

TLS encrypts data in transit so that a man‑in‑the‑middle cannot read or modify it. It uses asymmetric cryptography (public/private key pairs) to prove identity and to negotiate a symmetric session key for fast encryption.

Data encrypted with the public key can only be decrypted with the private key.

The private key can sign data; the public key can verify the signature.

In the classic RSA key exchange, the server publishes its public key as a certificate. The client generates a random premaster secret, encrypts it with the server’s public key, and sends it to the server. After the server decrypts it, both sides derive a shared secret for symmetric encryption. (TLS 1.3 negotiates the session key with ephemeral Diffie–Hellman instead, but the trust model is the same.)
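
Both asymmetric properties can be observed directly with OpenSSL. The sketch below (hypothetical file names, a throwaway 2048-bit key) encrypts a small secret with the public key and decrypts it with the private key, then signs a message with the private key and verifies it with the public key:

```shell
# Throwaway RSA key pair (file names are illustrative).
openssl genrsa -out demo.key 2048
openssl rsa -in demo.key -pubout -out demo.pub

# Confidentiality: encrypt with the public key, decrypt with the private key.
echo "premaster secret" > secret.txt
openssl pkeyutl -encrypt -pubin -inkey demo.pub -in secret.txt -out secret.enc
openssl pkeyutl -decrypt -inkey demo.key -in secret.enc -out secret.dec
cmp -s secret.txt secret.dec && echo "secret recovered"

# Authenticity: sign with the private key, verify with the public key.
openssl dgst -sha256 -sign demo.key -out secret.sig secret.txt
openssl dgst -sha256 -verify demo.pub -signature secret.sig secret.txt
```

Only the holder of demo.key can recover the secret or produce the signature; anyone with demo.pub can verify it.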

Beyond confidentiality, the client must also verify that the certificate truly belongs to the server. This is achieved by a trusted Certificate Authority (CA) that signs certificates only after confirming the owner’s identity.

The CA’s private root key is highly protected (key ceremonies, HSMs, physical security) because compromise would allow issuance of fraudulent certificates.

Transparent Encrypted Channel Solution

Background

We need to secure communication between two custom services running in different data‑centers over the public Internet. Requirements:

Encrypt the traffic.

Mutually authenticate both ends.

Make the encryption transparent to the applications (no code changes).

mTLS

Mutual TLS adds client-certificate verification to the usual server-side TLS handshake: the server also demands a certificate from the client and validates it against a trusted CA. Conceptually this resembles SSH public-key authentication, where each side must prove possession of a trusted key.
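
Mutual authentication can be seen in action before involving Nginx, using OpenSSL's built-in test server and client. The sketch below (throwaway self-signed certificates, port 8443 assumed free) has the server demand a client certificate with -Verify, while the client verifies the server's certificate in turn:

```shell
# Throwaway self-signed certs for each side (illustrative names).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=localhost" \
    -keyout srv.key -out srv.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=client" \
    -keyout cli.key -out cli.crt

# Server: require and verify a client certificate, serve one connection.
openssl s_server -accept 8443 -naccept 1 -quiet \
    -cert srv.crt -key srv.key -Verify 1 -CAfile cli.crt &
sleep 1

# Client: present its certificate, verify the server's, fail hard on mismatch.
echo ping | openssl s_client -connect 127.0.0.1:8443 -quiet \
    -cert cli.crt -key cli.key -CAfile srv.crt -verify_return_error \
    && echo "mTLS handshake OK"
wait
```

Dropping -cert/-key from the client makes the handshake fail, since -Verify on the server makes the client certificate mandatory.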

Solution Architecture

Deploy an Nginx instance in each data center. The local Nginx accepts plain HTTP from the application and forwards it to the remote Nginx over mTLS; the remote Nginx terminates TLS and passes plain HTTP to the backend:

app -> local Nginx (HTTP) -> public Internet (mTLS) -> remote Nginx (HTTP) -> app

Applications continue to talk to a local endpoint as if everything were in the same LAN.

Step‑by‑Step Setup

1. Prepare Certificates (self‑signed CA)

Generate a CA private key (password‑protected):

$ openssl genrsa -des3 -out ca.key 4096

Create a self‑signed CA certificate:

$ openssl req -new -x509 -days 365 -key ca.key -out ca.crt
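
To sanity-check the result, the certificate can be inspected with openssl x509. The sketch below uses a throwaway, unencrypted key and a fixed -subj so it runs non-interactively (the article's -des3 key would prompt for a password); for a self-signed CA, subject and issuer are identical:

```shell
# Throwaway CA (unencrypted key, fixed subject, so no prompts).
openssl genrsa -out demo-ca.key 4096
openssl req -new -x509 -days 365 -key demo-ca.key -subj "/CN=Demo Root CA" -out demo-ca.crt

# Inspect: for a self-signed certificate, subject == issuer.
openssl x509 -in demo-ca.crt -noout -subject -issuer -dates
```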

Generate a server key and CSR, then sign it with the CA (without -days, openssl x509 issues a certificate valid for only 30 days):

$ openssl genrsa -out server.key 4096
$ openssl req -new -key server.key -out server.csr
$ openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

Generate a client key and CSR, and sign it with the CA in the same way to obtain client.crt and client.key.
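
The client-side commands mirror the server flow. A sketch, shown with a throwaway unencrypted CA key and fixed subjects so it runs non-interactively (with the article's -des3-protected ca.key you would be prompted for its password and would reuse the existing ca.crt):

```shell
# Throwaway CA standing in for the one created earlier (no password, no prompts).
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 365 -key ca.key -subj "/CN=Demo Root CA" -out ca.crt

# Client key and CSR, signed by the CA (distinct serial from the server cert).
openssl genrsa -out client.key 4096
openssl req -new -key client.key -subj "/CN=client.example.com" -out client.csr
openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt

# The client certificate should chain back to the CA.
openssl verify -CAfile ca.crt client.crt
```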

2. Configure Remote Nginx (HTTPS)

server {
    listen 443 ssl;
    ssl_certificate /home/vagrant/cert/server.crt;
    ssl_certificate_key /home/vagrant/cert/server.key;
    location / { proxy_pass http://127.0.0.1:8000; }
}

Test with curl, trusting the CA and using the server name from the certificate (proxy.example.com here); --resolve maps that name to the remote IP so SNI and hostname verification line up:

$ curl --cacert ca.crt --resolve proxy.example.com:443:<remote-ip> https://proxy.example.com/

3. Enable Client Certificate Verification

server {
    listen 443 ssl;
    ssl_certificate /home/vagrant/cert/server.crt;
    ssl_certificate_key /home/vagrant/cert/server.key;
    ssl_verify_client on;
    ssl_client_certificate /home/vagrant/cert/ca.crt;
    location / { proxy_pass http://127.0.0.1:8000; }
}

Now curl must also present the client certificate and key:

$ curl --cacert ca.crt --cert client.crt --key client.key --resolve proxy.example.com:443:<remote-ip> https://proxy.example.com/

Without them, the TLS layer succeeds no further: Nginx rejects the request with a 400 "No required SSL certificate was sent" error.

4. Configure Local Nginx (HTTP → Remote HTTPS)

upstream remote { server 127.0.0.1:443; }  # both proxies run on one host in this demo; use the remote data center's address in practice
server {
    listen 80;
    location / {
        proxy_pass https://remote;
        proxy_ssl_trusted_certificate /home/vagrant/cert/ca.crt;
        proxy_ssl_verify on;
        proxy_ssl_server_name on;
        proxy_ssl_name proxy.example.com;
        proxy_ssl_certificate /home/vagrant/cert/client.crt;
        proxy_ssl_certificate_key /home/vagrant/cert/client.key;
    }
}

Now a plain HTTP request to the local Nginx is forwarded over an encrypted mTLS channel to the remote Nginx, while the backend FastAPI service receives the original plaintext request.

TCP Stream Proxy (e.g., Redis)

For raw TCP traffic such as Redis, move the configuration from Nginx's http block to a stream block (the ssl_* and proxy_ssl_* directives below belong to the stream context) and reuse the same certificates. Example remote server configuration:

server {
    listen 443 ssl;
    proxy_pass 127.0.0.1:6379;
    ssl_certificate /home/vagrant/cert/server.crt;
    ssl_certificate_key /home/vagrant/cert/server.key;
    ssl_verify_client on;
    ssl_client_certificate /home/vagrant/cert/ca.crt;
}

Local client configuration:

upstream remote { server 127.0.0.1:443; }
server {
    listen 80;
    proxy_pass remote;
    proxy_ssl_trusted_certificate /home/vagrant/cert/ca.crt;
    proxy_ssl_verify on;
    proxy_ssl_server_name on;
    proxy_ssl_name proxy.example.com;
    proxy_ssl_certificate /home/vagrant/cert/client.crt;
    proxy_ssl_certificate_key /home/vagrant/cert/client.key;
}

After reloading Nginx, redis-cli -p 80 on the local host reaches the remote Redis instance over the encrypted channel, with redis-cli itself unaware that TLS is involved.


Tags: nginx, encryption, OpenSSL, TLS, Certificate Authority, mTLS
Written by Architect