
Integrating Zipkin Distributed Tracing into Node.js Applications

This guide shows how to set up Zipkin with Docker‑Compose, configure Elasticsearch storage, and integrate the zipkin and zipkin‑transport‑http npm packages into a Node.js app—using either ExplicitContext or the simpler Zone‑Context—to collect, send, store, and visualize OpenTracing‑compatible distributed traces.

vivo Internet Technology

This article continues from the previous piece on full‑stack tracing in Node.js and explains how to store and visualize tracing data using the OpenTracing‑compatible Zipkin solution.

Background: Distributed tracing systems originated from Google's Dapper paper on large-scale distributed system tracing. Implementations such as Zipkin and Jaeger follow the OpenTracing standard, which provides a lightweight abstraction layer between application code and tracing back-ends.

OpenTracing Overview: OpenTracing standardizes tracing APIs, making it easy to switch between tracing systems. It acts like a universal connector (much as a Type-C port works across phones), allowing developers to add tracing with minimal code changes.

Zipkin: Developed by Twitter, Zipkin follows OpenTracing and consists of four main components:

Collector – receives, validates, stores, and indexes trace data.

Storage – default in‑memory, with optional Elasticsearch or MySQL back‑ends.

Search – provides a JSON API for querying traces.

Web UI – visualizes trace data using the Search API.
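To make the collector's role concrete, here is a sketch of the v2 span JSON that instrumented services POST to the collector's /api/v2/spans endpoint. The field names follow Zipkin's v2 format; all values below are illustrative, not taken from a real trace:

```javascript
// A single span in Zipkin's v2 JSON format; all values are illustrative.
const exampleSpan = {
  traceId: 'd9cda95b652f4a1592b449d5929fda1b', // shared by every span in the trace
  id: 'bd7a977555f6b982',                      // unique per span
  parentId: 'bf396325699c84bf',                // links this span to its caller
  name: 'get /api/users',                      // operation name shown in the UI
  kind: 'SERVER',                              // SERVER, CLIENT, PRODUCER, or CONSUMER
  timestamp: 1580000000000000,                 // start time, microseconds since epoch
  duration: 35000,                             // microseconds
  localEndpoint: { serviceName: 'zipkin-node-service' },
  tags: { 'http.status_code': '200' }
};

// The collector accepts an array of such spans in one request body.
const payload = JSON.stringify([exampleSpan]);
```

The collector validates and indexes these fields (notably traceId, serviceName, and name) so the Search API and UI can query them later.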

Zipkin Architecture (simplified):

1. Full-stack information acquisition (using zone-context instead of Zipkin's built-in instrumentation).
2. Transport layer – sends trace data to Zipkin via HTTP.
3. Core Zipkin components (collector, storage, search, UI).

Environment Setup: The article uses Docker and Docker-Compose to spin up an Elasticsearch instance and a Zipkin server. The docker-compose.yml file is as follows:

version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
    container_name: elasticsearch
    restart: always
    ports:
      - 9200:9200
    healthcheck:
      test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - TZ=Asia/Shanghai
    ulimits:
      memlock:
        soft: -1
        hard: -1
  zipkin:
    image: openzipkin/zipkin:2.21
    container_name: zipkin
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
    restart: always
    ports:
      - 9411:9411
    environment:
      - TZ=Asia/Shanghai
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=elasticsearch:9200

Run docker-compose up -d in the directory containing this file to start the services. Once both containers are up, the Zipkin UI is accessible at http://localhost:9411 and Elasticsearch at http://localhost:9200.

Node.js Integration:

1. Full‑stack information acquisition – covered in the previous article.

2. Transport layer – uses the official zipkin and zipkin-transport-http npm packages. Core code for the transport layer:

const { BatchRecorder, jsonEncoder: { JSON_V1, JSON_V2 } } = require('zipkin');
const { HttpLogger } = require('zipkin-transport-http');

// Configuration object
const options = {
  serviceName: 'zipkin-node-service',
  targetServer: 'http://127.0.0.1:9411', // HttpLogger needs a full URL, scheme included
  targetApi: '/api/v2/spans',
  jsonEncoder: 'v2'
};

// HTTP transport: batches finished spans and POSTs them to the Zipkin collector
function recorder({ targetServer, targetApi, jsonEncoder }) {
  return new BatchRecorder({
    logger: new HttpLogger({
      endpoint: `${targetServer}${targetApi}`,
      jsonEncoder: jsonEncoder.toLowerCase() === 'v2' ? JSON_V2 : JSON_V1,
    })
  });
}

const baseRecorder = recorder({
  targetServer: options.targetServer,
  targetApi: options.targetApi,
  jsonEncoder: options.jsonEncoder
});
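BatchRecorder does not POST every span individually; it buffers finished spans and the logger flushes them in batches, which keeps collector traffic low under load. A minimal sketch of that batching idea (not the real zipkin-js implementation, which also handles flush intervals and partially finished spans):

```javascript
// Illustrative batching logger: collects spans and flushes them as one payload.
class SketchBatchLogger {
  constructor(flushFn, batchSize = 10) {
    this.flushFn = flushFn;     // e.g. an HTTP POST to /api/v2/spans
    this.batchSize = batchSize;
    this.queue = [];
  }
  logSpan(span) {
    this.queue.push(span);
    if (this.queue.length >= this.batchSize) this.flush();
  }
  flush() {
    if (this.queue.length === 0) return;
    const batch = this.queue.splice(0, this.queue.length);
    this.flushFn(JSON.stringify(batch)); // one request carries many spans
  }
}

// Usage: capture flushed payloads instead of sending them over HTTP.
const sent = [];
const logger = new SketchBatchLogger(payload => sent.push(payload), 2);
logger.logSpan({ id: 'a1', name: 'span-1' });
logger.logSpan({ id: 'a2', name: 'span-2' }); // reaching batchSize triggers a flush
```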

3. Tracer setup – two approaches are shown:

ExplicitContext approach (requires manual propagation):

const { Tracer, ExplicitContext } = require('zipkin');

const ctxImpl = new ExplicitContext();
const tracer = new Tracer({ ctxImpl, recorder: baseRecorder, localServiceName: options.serviceName });
// Trace IDs must be read from and written to HTTP headers manually

Zone‑Context approach (implicit, less intrusive):

// ZoneContext comes from the zone-context package introduced in the previous article
const ZoneContext = require('zone-context');

const zoneContextImpl = new ZoneContext();
const tracer = new Tracer({ ctxImpl: zoneContextImpl, recorder: baseRecorder, localServiceName: options.serviceName });
// Context propagates across async boundaries automatically; no extra header handling required

Using Zone‑Context simplifies integration by automatically propagating trace context.
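What "propagating trace context" means in practice: Zipkin carries the trace across process boundaries in the B3 headers (X-B3-TraceId, X-B3-SpanId, X-B3-ParentSpanId, X-B3-Sampled). With ExplicitContext you attach these headers to every outgoing request and parse them on every incoming one yourself; a zone-based context does that bookkeeping for you. A hand-rolled sketch of the manual step, with a hypothetical helper (toB3Headers is not part of zipkin-js):

```javascript
// Hypothetical helper: serialize a trace context into Zipkin's B3 headers.
function toB3Headers({ traceId, spanId, parentId, sampled }) {
  const headers = {
    'X-B3-TraceId': traceId,
    'X-B3-SpanId': spanId,
    'X-B3-Sampled': sampled ? '1' : '0'
  };
  if (parentId) headers['X-B3-ParentSpanId'] = parentId; // root spans have no parent
  return headers;
}

// With ExplicitContext, every outgoing HTTP call needs something like this:
const headers = toB3Headers({
  traceId: 'd9cda95b652f4a15',
  spanId: 'bd7a977555f6b982',
  parentId: 'bf396325699c84bf',
  sampled: true
});
```

The downstream service parses the same headers to continue the trace, which is exactly the boilerplate a zone-based context eliminates.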

4. Data collection, storage, and visualization – Zipkin provides a built-in collector and UI, and supports in-memory, MySQL, or Elasticsearch storage. The article uses the Elasticsearch back-end configured via Docker-Compose.

Conclusion: The article demonstrates a complete Node.js solution for distributed tracing based on the OpenTracing standard, combining Zipkin, Docker, and optional Elasticsearch storage. Readers should now have a clear picture of how to acquire, transmit, store, and visualize trace data in a Node.js application.

Tags: backend, Docker, Node.js, OpenTracing, Distributed Tracing, Docker-Compose, Zipkin