Designing a US Presidential Election Voting System: 1M TPS, 10M QPS, Immutable and Non‑Duplicate Votes
This article presents a comprehensive architectural design for a high‑throughput US presidential voting platform that must handle 1 million write transactions per second and 10 million read queries per second. The design guarantees vote immutability and one‑person‑one‑vote enforcement, and delivers real‑time result aggregation and scalable storage using microservices, Kafka, Redis, Bloom filters, and blockchain anchoring.
The voting scenario requires absorbing a spike of up to 1 million TPS during the final minute of voting and up to 10 million QPS for result queries after the election closes. If all 50 million voters cast ballots in that final minute, the peak write load is 50,000,000 / 60 ≈ 833,000 votes per second; provisioning for 1 million TPS leaves roughly a 20 % safety margin.
Core APIs are defined as POST /api/v1/vote for vote submission, GET /api/v1/votes/{voter_id} for personal vote lookup, and GET /api/v1/results for aggregated results. These endpoints sit behind a two‑tier front end: an L4 (transport‑layer) load balancer for IP‑based traffic distribution and an L7 API gateway that performs authentication, rate limiting, and request routing.
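As a rough illustration, the three endpoints could be wired up as in the following Spring Boot sketch; the controller, DTO, and service names are hypothetical assumptions, not part of the original design.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

// Illustrative DTOs and service interfaces (names are assumptions, not from the article).
record VoteRequest(String voterId, String candidateId, String stateCode) {}
record VoteRecord(String voterId, String candidateId, String stateCode, long timestamp) {}
interface VoteWriteService { void enqueue(VoteRequest req); }        // publishes to Kafka
interface VoteQueryService { VoteRecord findByVoterId(String id); }  // Caffeine -> Redis -> DB
interface ResultService { Object currentSnapshot(); }                // pre-aggregated stats

@RestController
@RequestMapping("/api/v1")
class VoteController {
    private final VoteWriteService write;
    private final VoteQueryService query;
    private final ResultService results;

    VoteController(VoteWriteService w, VoteQueryService q, ResultService r) {
        this.write = w; this.query = q; this.results = r;
    }

    // POST /api/v1/vote — acknowledged as soon as the vote is queued; persistence is async.
    @PostMapping("/vote")
    ResponseEntity<String> submitVote(@RequestBody VoteRequest req) {
        write.enqueue(req);
        return ResponseEntity.accepted().body("queued");
    }

    // GET /api/v1/votes/{voter_id} — personal vote lookup.
    @GetMapping("/votes/{voterId}")
    VoteRecord getVote(@PathVariable String voterId) {
        return query.findByVoterId(voterId);
    }

    // GET /api/v1/results — aggregated results.
    @GetMapping("/results")
    Object getResults() {
        return results.currentSnapshot();
    }
}
```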
Service Layer follows a micro‑service architecture with three main services: a stateless vote‑write service that validates requests and publishes messages to a high‑throughput Kafka topic; a vote‑query service that checks a local Caffeine cache first, then a distributed Redis cache, and finally falls back to the database; and a result service that reads pre‑aggregated data from a dedicated statistics store (TiDB or ClickHouse) and its cache.
Write Path uses asynchronous processing: after basic validation, the vote is placed into Kafka, allowing the front end to respond immediately. A batch worker consumes messages and performs bulk inserts with INSERT … ON DUPLICATE KEY UPDATE; a unique index on voter_id guarantees idempotency even when messages are redelivered.
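A minimal sketch of such a batch worker, assuming a comma‑separated message format and a vote_record table with a unique index on voter_id (both assumptions for illustration):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Batch worker: drains the vote topic and bulk-inserts into MySQL.
// The unique index on voter_id plus ON DUPLICATE KEY UPDATE makes replays idempotent.
public class VoteBatchWorker {
    private static final String SQL =
        "INSERT INTO vote_record (voter_id, candidate_id, state_code, created_at) " +
        "VALUES (?, ?, ?, NOW()) " +
        "ON DUPLICATE KEY UPDATE voter_id = voter_id"; // no-op when the vote already exists

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "vote-batch-workers");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Connection conn = DriverManager.getConnection("jdbc:mysql://db:3306/votes")) {
            consumer.subscribe(List.of("votes"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(200));
                if (records.isEmpty()) continue;
                try (PreparedStatement ps = conn.prepareStatement(SQL)) {
                    for (ConsumerRecord<String, String> rec : records) {
                        String[] f = rec.value().split(","); // voterId,candidateId,stateCode
                        ps.setString(1, f[0]);
                        ps.setString(2, f[1]);
                        ps.setString(3, f[2]);
                        ps.addBatch();
                    }
                    ps.executeBatch();          // one round trip for the whole batch
                }
                consumer.commitSync();          // commit offsets only after the DB write
            }
        }
    }
}
```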
Idempotency Strategy follows the "first lock, then check, then update" pattern popularized by Alipay (often rendered as "one lock, two checks, three updates"). Step one acquires a distributed Redis lock that serialises concurrent requests for the same voter; step two checks the cache to quickly reject obvious duplicates, then queries the database for the authoritative verdict; step three performs the insert and cache update only when no duplicate was found.
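A condensed sketch of the pattern, assuming Jedis and illustrative key names and TTLs:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

// "Lock, check, then update" sketch: acquire a per-voter Redis lock, check the
// cache, check the database, and only then insert. Key names and TTLs are assumptions.
public class IdempotentVoteGuard {
    private final Jedis jedis;

    public IdempotentVoteGuard(Jedis jedis) { this.jedis = jedis; }

    public boolean tryRecordVote(String voterId, Runnable dbInsert) {
        String lockKey = "lock:vote:" + voterId;
        // Step 1 (lock): SET NX PX serialises concurrent requests for the same voter.
        String ok = jedis.set(lockKey, "1", SetParams.setParams().nx().px(3000));
        if (ok == null) return false;               // another request holds the lock
        try {
            // Step 2 (check): fast cache check, then authoritative DB check.
            if (jedis.exists("voted:" + voterId)) return false;
            if (existsInDatabase(voterId)) return false;
            // Step 3 (update): insert the vote, then mark the cache.
            dbInsert.run();
            jedis.set("voted:" + voterId, "1");
            return true;
        } finally {
            jedis.del(lockKey);
        }
    }

    private boolean existsInDatabase(String voterId) {
        // SELECT 1 FROM vote_record WHERE voter_id = ? — elided for brevity.
        return false;
    }
}
```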
Bloom‑Filter Based Duplicate Detection replaces a naïve per‑voter key in Redis. Using RedisBloom, 50 shards of roughly 2.4 MB each (about 19 bits per element) store 1 million voter IDs apiece at a 0.01 % false‑positive rate, consuming only ~120 MB of memory in total. The shard keys are distributed across the Redis Cluster by key hashing, and a second‑level exact check in MySQL eliminates the remaining false positives.
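The sizing follows from the standard Bloom filter formulas m = −n·ln(p)/(ln 2)² and k = (m/n)·ln 2. The sketch below reproduces the arithmetic and shows one plausible shard‑routing scheme; the 50‑way hash routing and key naming are assumptions.

```java
// Bloom filter sizing: for n = 1,000,000 and p = 0.0001 the formulas give
// ~19.2 Mbit (~2.4 MB) per shard and k ≈ 13 hash functions, so 50 shards
// total roughly 120 MB.
public class BloomSizing {
    public static void main(String[] args) {
        long n = 1_000_000L;       // voter IDs per shard
        double p = 0.0001;         // target false-positive rate (0.01 %)

        double bits = -n * Math.log(p) / (Math.log(2) * Math.log(2));
        long hashes = Math.round((bits / n) * Math.log(2));

        System.out.printf("bits per shard: %.0f (%.2f MB), hash functions: %d%n",
            bits, bits / 8 / 1e6, hashes);
        System.out.printf("total for 50 shards: %.1f MB%n", 50 * bits / 8 / 1e6);

        // Route a voter to one of 50 filter keys; Redis Cluster then spreads the
        // keys across nodes by its own key hashing.
        String voterId = "CA-123456789";
        int shard = Math.floorMod(voterId.hashCode(), 50);
        System.out.println("RedisBloom key: bf:voters:" + shard); // use with BF.ADD / BF.EXISTS
    }
}
```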
Read Path relies on a two‑level cache hierarchy: a local Caffeine cache for hot personal‑vote lookups and a Redis cluster for global result queries. Hot‑spot detection promotes frequently accessed state‑level results to additional cache nodes, preventing single‑node overload.
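A sketch of the two‑level read path, assuming Caffeine and Jedis with illustrative key names, sizes, and TTLs:

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.time.Duration;
import redis.clients.jedis.Jedis;

// Read path sketch: local Caffeine cache first, Redis second, database last.
public class VoteQueryCache {
    private final Cache<String, String> local = Caffeine.newBuilder()
        .maximumSize(1_000_000)                    // hot personal-vote lookups
        .expireAfterWrite(Duration.ofSeconds(30))  // short TTL keeps results fresh
        .build();
    private final Jedis redis;

    public VoteQueryCache(Jedis redis) { this.redis = redis; }

    public String findVote(String voterId) {
        // Level 1: in-process cache, no network hop.
        String v = local.getIfPresent(voterId);
        if (v != null) return v;

        // Level 2: shared Redis cluster.
        v = redis.get("vote:" + voterId);
        if (v == null) {
            // Level 3: database fallback, then backfill Redis.
            v = loadFromDatabase(voterId);
            if (v != null) redis.setex("vote:" + voterId, 60, v);
        }
        if (v != null) local.put(voterId, v);
        return v;
    }

    private String loadFromDatabase(String voterId) {
        // SELECT ... FROM vote_record WHERE voter_id = ? — elided for brevity.
        return null;
    }
}
```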
Storage Design combines a relational DB (MySQL/TiDB) for transactional vote records with a NoSQL store (HBase) for massive historical data. The relational schema includes a snowflake‑generated vote_id, unique voter_id, candidate ID, state code, timestamps, and optional blockchain hash. Data is sharded by state_code (first level) and vote_id hash (second level) across 16 databases and 64 tables per database, yielding 1 024 physical tables.
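One plausible routing function for the two‑level sharding scheme; the exact hash choices are assumptions, and any stable hash works:

```java
// Two-level shard routing sketch: state_code picks the database, the snowflake
// vote_id picks the table. 16 databases x 64 tables = 1,024 physical tables.
public class ShardRouter {
    private static final int DB_COUNT = 16;
    private static final int TABLES_PER_DB = 64;

    // First level: map the state code onto one of 16 databases.
    public static int dbIndex(String stateCode) {
        return Math.floorMod(stateCode.hashCode(), DB_COUNT);
    }

    // Second level: map the vote_id onto one of 64 tables in that database.
    public static int tableIndex(long voteId) {
        return (int) Math.floorMod(voteId, (long) TABLES_PER_DB);
    }

    public static String physicalTable(String stateCode, long voteId) {
        return String.format("db_%02d.vote_record_%02d",
            dbIndex(stateCode), tableIndex(voteId));
    }

    public static void main(String[] args) {
        // e.g. a California vote with a snowflake ID lands in a deterministic table.
        System.out.println(physicalTable("CA", 7234981234567890123L));
    }
}
```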
Immutability via Blockchain computes a SHA‑256 hash of the core vote fields, batches hashes into a Merkle tree, and writes the root hash to an Ethereum‑compatible consortium chain every minute. The on‑chain hash provides tamper‑evidence; any post‑election audit recomputes the hash and compares it to the stored root.
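A minimal sketch of the leaf hashing and Merkle‑root computation; the field serialization format is an assumption:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.HexFormat;
import java.util.List;

// Merkle root sketch: hash each vote's core fields with SHA-256, then pair-wise
// hash up to a single root that is anchored on-chain every minute.
public class MerkleAnchor {
    static byte[] sha256(byte[] data) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }

    static byte[] leaf(String voterId, String candidateId, String stateCode, long ts)
            throws Exception {
        String core = voterId + "|" + candidateId + "|" + stateCode + "|" + ts;
        return sha256(core.getBytes(StandardCharsets.UTF_8));
    }

    static byte[] merkleRoot(List<byte[]> leaves) throws Exception {
        List<byte[]> level = leaves;
        while (level.size() > 1) {
            List<byte[]> next = new ArrayList<>();
            for (int i = 0; i < level.size(); i += 2) {
                byte[] left = level.get(i);
                // Duplicate the last hash when a level has an odd count.
                byte[] right = (i + 1 < level.size()) ? level.get(i + 1) : left;
                byte[] pair = new byte[left.length + right.length];
                System.arraycopy(left, 0, pair, 0, left.length);
                System.arraycopy(right, 0, pair, left.length, right.length);
                next.add(sha256(pair));
            }
            level = next;
        }
        return level.get(0);
    }

    public static void main(String[] args) throws Exception {
        List<byte[]> leaves = List.of(
            leaf("V001", "C1", "CA", 1730800000000L),
            leaf("V002", "C2", "TX", 1730800000001L));
        // This root is written to the consortium chain; an audit recomputes it
        // from the raw votes and compares.
        System.out.println(HexFormat.of().formatHex(merkleRoot(leaves)));
    }
}
```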
Monitoring and Resilience covers Kafka lag alerts, database write‑rate throttling, auto‑scaling of batch workers, and a dead‑letter queue for failed messages. MySQL binlog replication via Canal keeps the relational store and HBase eventually consistent.
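A sketch of the dead‑letter path, assuming a votes.DLQ topic (an illustrative name):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Dead-letter handling sketch: a vote message that fails processing is
// republished to a DLQ topic so the main partition keeps flowing.
public class DeadLetterHandler {
    private final KafkaProducer<String, String> producer;

    public DeadLetterHandler(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    public void handle(ConsumerRecord<String, String> rec, Runnable process) {
        try {
            process.run();                       // e.g. the batch insert shown earlier
        } catch (Exception e) {
            // Preserve the original key so the record stays traceable to its voter.
            producer.send(new ProducerRecord<>("votes.DLQ", rec.key(), rec.value()));
            // Operators alert on DLQ depth alongside Kafka consumer lag.
        }
    }
}
```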
Overall, the design meets the 1 M TPS write and 10 M QPS read targets, guarantees vote immutability and one‑person‑one‑vote, and provides real‑time result visibility while remaining horizontally scalable and fault‑tolerant.
