How Containerization Transformed a Securities Trading Platform: Lessons from OpenTrading
This article explains how a securities firm adopted Docker and cloud‑native practices to overcome scaling, upgrade, and reliability challenges, detailing its four‑layer platform architecture, massive container deployments, orchestration strategies, and immutable DevOps best practices for high‑frequency trading environments.
Overview
The securities industry has become highly internet‑enabled, with online trading accounting for over 90% of transactions by the end of 2014 and more than 55 firms piloting internet securities by mid‑2015. Guangfa Securities, a pioneer in this space, built a four‑layer platform (access control, business pre‑layer, transaction bus, virtualization/container layer) supporting multiple languages such as Java, Go, Node.js, Lua, and C++.
Docker Adoption and Scale
Research on Docker began around 2013, moving from Docker‑1.3.2 to Docker‑1.10.0, with large‑scale production use starting in 2015. Today the firm runs over 4,000 instances for the market‑data cloud (supporting >30 million concurrent connections and 1.5 Gbps throughput) and more than 300 instances for the trading cloud (handling >1 million daily DMA requests and peak transaction volumes exceeding 3 billion RMB).
Market‑Data Cloud: 4,000+ instances across six IDC sites, serving over 2 million customers.
Trading Cloud: 300+ instances in a Guangzhou data center, processing >1 million daily trades with average daily turnover around 800 million RMB.
Why Containerize?
High‑frequency trading, real‑time risk control, stringent regulatory zero‑tolerance, and massive market data (peaking at 2 trillion RMB in a single day) demand ultra‑low latency, high consistency, and high availability. Traditional monolithic deployments caused resource waste, lengthy upgrade windows, patch‑induced instability, slow test environment provisioning, and inability to roll back.
Benefits of Containerization
① Lightweight engine, efficient virtualization
② Second‑level deployment, easy migration and scaling
③ Portable, elastically scalable, simple management
④ Early cloud capabilities
⑤ Enables micro‑services
⑥ Standardized server‑side deliverables
⑦ Crucial step toward DevOps
Container Technology Meets Cloud
Docker is not a virtual machine but a process‑level isolation tool, and a cloud can be viewed as a distributed multi‑process system that requires remote orchestration and scheduling. As an example, a monitoring stack combining InfluxDB, Grafana, and StatsD is declared in a single Docker Compose file:
influxdb:
  image: "docker.gf.com.cn/gfcloud/influxdb:0.9"
  ports:
    - "8083:8083"
    - "8086:8086"
  expose:
    - "8090"
    - "8099"
  volumes:
    - "/var/monitor/influxdb:/data"
  environment:
    - "PRE_CREATE_DB=influxdb"
    - "ADMIN_USER=root"
    - "INFLUXDB_INIT_PWD=root"
grafana:
  image: "docker.gf.com.cn/gfcloud/grafana"
  ports:
    - "3000:3000"
  volumes:
    - "/var/monitor/grafana:/var/lib/grafana"
statsd:
  image: "docker.gf.com.cn/gfcloud/statsd"
  ports:
    - "8125:8125/udp"
  links:
    - "influxdb:influxdb"
  volumes:
    - "/var/monitor/statsd/log:/var/log"
  environment:
    - INFLUXDB_HOST=influxdb
    - INFLUXDB_PORT=8086
    - INFLUXDB=influxdb
    - INFLUXDB_USERNAME=root
    - INFLUXDB_PASSWORD=root

Container orchestration is achieved with tools such as Kubernetes, Mesos + Marathon, and Rancher, enabling cross‑host management and efficient resource utilization.
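The source does not show the firm's orchestration manifests; as an illustration only, the same StatsD service could be described to Kubernetes with a minimal Deployment such as the sketch below (the replica count and labels are assumptions, the image name reuses the registry from the Compose file above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: statsd
spec:
  replicas: 3          # assumed; scaled by the orchestrator, not hand-managed
  selector:
    matchLabels:
      app: statsd
  template:
    metadata:
      labels:
        app: statsd
    spec:
      containers:
        - name: statsd
          image: docker.gf.com.cn/gfcloud/statsd
          ports:
            - containerPort: 8125
              protocol: UDP
```

The orchestrator, rather than an operator, then decides which hosts the three replicas land on, which is what makes cross‑host management and bin‑packing of resources possible.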
Transparent Deployment and Immutable Operations
Applications are designed to be stateless, fault‑tolerant, and unaware of deployment specifics, similar to LEGO bricks that can be recombined without the application knowing its physical arrangement. The platform runs entirely on a private cloud, replicated across multiple IDC sites, using consistent‑hash sharding for services like Redis.
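The article does not include the firm's sharding code; a minimal sketch of consistent‑hash sharding of the kind described for Redis might look like the following (the class name, virtual‑node count, and node addresses are illustrative assumptions):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring: keys map to nodes, and adding or
    removing a node only remaps the keys adjacent to it on the ring."""

    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes        # virtual nodes per physical node, for balance
        self._hashes = []           # sorted hash positions on the ring
        self._nodes = {}            # hash position -> physical node
        for node in nodes:
            self.add_node(node)

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.vnodes):
            h = self._hash(f"{node}#{i}")
            bisect.insort(self._hashes, h)
            self._nodes[h] = node

    def get_node(self, key):
        # First ring position clockwise of the key's hash, wrapping at the end.
        h = self._hash(key)
        idx = bisect.bisect(self._hashes, h) % len(self._hashes)
        return self._nodes[self._hashes[idx]]

ring = ConsistentHashRing(["redis-1:6379", "redis-2:6379", "redis-3:6379"])
shard = ring.get_node("customer:42")   # same key always lands on the same shard
```

Because a key's position on the ring is fixed, replacing one Redis instance disturbs only the keys that hashed near it, which suits the "recombine the bricks without the application noticing" model described above.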
Financial systems require high timeliness, strong consistency, and near‑zero downtime. The solution combines centralized deployment of critical business data (e.g., customer assets) with distributed deployment of market data and static resources at edge IDC locations, ensuring low latency and high availability through DNS‑based failover.
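The failover mechanics are not spelled out in the source; in a DNS‑based scheme the name for a service resolves to addresses in several IDC sites, and a client simply tries each resolved address until one answers. A hedged sketch of that client‑side behavior (function name and timeout are assumptions):

```python
import socket

def connect_with_failover(host, port, timeout=1.0):
    """Try every address the DNS name resolves to, in order, until one
    accepts a TCP connection - the client-side half of DNS-based failover."""
    last_err = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            # addr is (ip, port) for IPv4 or (ip, port, flow, scope) for IPv6
            return socket.create_connection(addr[:2], timeout=timeout)
        except OSError as err:
            last_err = err          # this site is down; try the next address
    raise ConnectionError(f"no reachable address for {host}:{port}") from last_err
```

When one IDC site is withdrawn from DNS (or stops answering), clients fall through to the remaining sites, which is what keeps the market‑data and static‑resource edge deployments available.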
Docker Best Practices (Immutable Ops)
Never modify a running container; redeploy for any change.
Use tagged images instead of latest to enable rollbacks.
Separate build environment from runtime image.
Remove temporary packages in the same RUN step to keep images small.
Avoid running daemons inside containers; rely on external monitoring for restarts.
FROM docker.gf.com.cn/gfcloud/ubuntu
RUN echo "deb http://nginx.org/packages/ubuntu trusty nginx" >> /etc/apt/sources.list && \
    echo "deb-src http://nginx.org/packages/ubuntu trusty nginx" >> /etc/apt/sources.list
# Install nginx, then purge the temporary wget package and apt caches in the
# same RUN step so the removed files never persist in an image layer.
RUN apt-get update && apt-get install -y wget && \
    wget http://nginx.org/keys/nginx_signing.key && apt-key add nginx_signing.key && \
    apt-get update && apt-get install -y nginx && \
    rm -rf /var/lib/apt/lists/* && \
    apt-get purge -y --auto-remove wget
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
WORKDIR /etc/nginx
EXPOSE 80
EXPOSE 443
# "daemon off;" keeps nginx in the foreground so the container's lifetime
# tracks the nginx process; it must not also be appended to nginx.conf,
# or nginx aborts with a duplicate-directive error.
CMD ["nginx", "-g", "daemon off;"]