
Unifying Card and Coin Payments: KaiwuDB’s Dual‑Mode Solution for Amusement Parks

This article presents a detailed technical case study of using KaiwuDB’s multi‑model database to unify card‑based and coin‑based payment processing in amusement parks, covering architecture, schema design, SQL implementations, offline handling, cross‑model analytics, hot‑cold data tiering, visualization, monitoring, security, and high‑availability strategies.


Background

Modern amusement parks often support both smart‑card (or wrist‑band) swipe payments and traditional token/coin payments. A medium‑sized park may operate 50‑200 devices and generate tens of thousands of transactions per day, with peak traffic 5‑8 times the daily average. The core pain points are fragmented device management, real‑time monitoring, dual‑mode billing, owner revenue sharing, and intelligent operations.

Industry Challenges

Roles and Data Needs

Visitor: needs fast swipe verification, real‑time balance queries and traceable consumption records.

Device/Booth Owner: requires clear revenue statistics, settlement details and utilization rates.

Operations Staff: monitors device online status, fault alerts, coin‑box and reader status.

Operations Manager: wants overall revenue, hotspot rankings, time‑slot visitor flow and booth performance comparisons.

Specificities of Amusement‑Park Consumption

Dual‑mode consumption : card, coin and hybrid modes must be accounted for in a single accounting system.

Mixed online/offline operation : devices work offline, store up to 10,000 records and sync later.

Multi‑level revenue sharing : devices may belong to the park or third‑party vendors, requiring proportional revenue split.

High concurrency and strong consistency : peak periods demand millisecond‑level card payment processing and strict balance consistency.

Technical Pain Points

Heterogeneous device protocols and data formats.

Massive real‑time transaction writes (hundreds per second).

Cold‑hot data disparity: real‑time transactions need fast access, historical data needs cheap long‑term storage.

Offline data back‑fill consistency.

Complex revenue‑sharing calculations.

High‑cost traditional stack (MySQL + Redis + MongoDB) with high operational complexity.

KWDB Solution Architecture

Overall Design

KWDB’s multi‑model capability allows a unified platform that stores relational tables (members, devices, card types, split rules) and time‑series tables (transactions, device heartbeats, offline transactions) in the same cluster. Cross‑model joins are supported, although cross‑model transactions are not available in version 3.1.0.

[Figure: KWDB architecture diagram]

Core Table Structures

Relational tables: members, devices, card_types, etc.

Time‑series tables: transactions, device_heartbeats, offline_transactions.

Materialized view booth_daily_revenue aggregates daily revenue per booth.

CREATE TABLE members (
    card_id VARCHAR(32) PRIMARY KEY,
    member_name VARCHAR(50),
    phone VARCHAR(20),
    card_type VARCHAR(20),
    balance DECIMAL(10,2),
    total_recharge DECIMAL(10,2),
    total_bonus DECIMAL(10,2),
    create_date DATE,
    expire_date DATE,
    status VARCHAR(10),
    INDEX idx_phone (phone),
    INDEX idx_card_type (card_type)
);
CREATE TABLE card_types (
    card_type_id INT PRIMARY KEY,
    type_name VARCHAR(50),
    discount_rate DECIMAL(3,2),
    need_password BOOLEAN,
    daily_limit DECIMAL(10,2),
    per_trans_limit DECIMAL(10,2),
    bonus_rate DECIMAL(5,2),
    description VARCHAR(200)
);
CREATE TABLE devices (
    device_id VARCHAR(32) PRIMARY KEY,
    device_name VARCHAR(100),
    device_type VARCHAR(30),
    booth_id VARCHAR(20),
    owner_type VARCHAR(10),
    split_ratio DECIMAL(4,2),
    location VARCHAR(100),
    price_per_play DECIMAL(6,2),
    coins_per_play INT,
    install_date DATE,
    status VARCHAR(10),
    INDEX idx_booth (booth_id),
    INDEX idx_type (device_type)
);
CREATE TABLE device_heartbeats (
    report_time TIMESTAMP,
    coin_box_level INT,
    temperature FLOAT,
    coins_increment INT,
    transactions_increment INT,
    voltage FLOAT,
    heartbeat_id VARCHAR(32),
    device_id VARCHAR(32),
    online_status BOOLEAN,
    card_reader_status VARCHAR(20),
    printer_paper BOOLEAN,
    error_code INT,
    firmware_version VARCHAR(20),
    TAGS(heartbeat_id, device_id, online_status, card_reader_status, printer_paper, error_code, firmware_version)
);
CREATE TABLE transactions (
    transaction_time TIMESTAMPTZ(3) NOT NULL DEFAULT now(),
    amount INT,
    coins_used INT,
    balance_before INT,
    balance_after INT,
    game_duration INT,
    transaction_id VARCHAR(64) NOT NULL,
    device_id VARCHAR(32) NOT NULL,
    card_id VARCHAR(32),
    transaction_type VARCHAR(10) NOT NULL,
    result_code INT NOT NULL,
    sync_status VARCHAR(10) NOT NULL,
    TAGS(transaction_id)
);
CREATE TABLE offline_transactions (
    capture_time TIMESTAMPTZ(3) NOT NULL DEFAULT now(),
    transaction_count INT DEFAULT 1,
    total_amount INT,
    data_size INT,
    upload_time TIMESTAMP,
    TAGS(transaction_id VARCHAR(64) NOT NULL, batch_id VARCHAR(64), device_id VARCHAR(32), upload_status VARCHAR(10), sync_version INT)
);

Core Business Scenarios

Dual‑Mode Consumption (Card + Coin)

Visitors can start a game either by swiping a stored‑value card/wrist‑band (card mode) or by inserting physical/e‑coins (coin mode). The two modes generate different data flows.

Traditional Architecture Pain Points

Card transactions stored in MySQL, coin counts stored in InfluxDB/Redis – data is fragmented.

Complex consistency logic for card balance deduction (cache‑first → DB transaction → cache update).

# Pseudo‑code of the traditional flow: cache read, DB transaction, cache write‑back
def card_payment(card_id, device_id):
    cached_balance = redis.get(f"balance:{card_id}")  # fast pre-check only; DB stays authoritative
    conn.begin()
    try:
        # Row lock so concurrent swipes cannot double-spend
        real_balance = conn.execute("SELECT balance FROM members WHERE card_id = ? FOR UPDATE", card_id)
        price = get_device_price(device_id)
        if real_balance < price:
            conn.rollback()
            return "Insufficient balance"
        conn.execute("UPDATE members SET balance = balance - ? WHERE card_id = ?", price, card_id)
        conn.execute("INSERT INTO transactions ...")
        conn.commit()
        # Cache write-back only after commit; a crash here leaves the cache stale
        redis.set(f"balance:{card_id}", real_balance - price)
        return "Success"
    except Exception:
        conn.rollback()
        return "Failure"

KWDB Integrated Solution

Design Idea

Store member and device information in relational tables and transaction logs in a time‑series table. All data lives in a single KWDB cluster, eliminating cross‑system joins.

Card‑Mode Implementation

-- 1. Query member balance with row lock
SELECT balance FROM main.members WHERE card_id = 'CARD-628001' FOR UPDATE;
-- 2. Get device price
SELECT price_per_play FROM main.devices WHERE device_id = 'device-001' AND status = 'normal';
-- 3. Insert transaction (time‑series)
INSERT INTO "trace".transactions (transaction_id, device_id, card_id, transaction_type, transaction_time, result_code, sync_status, amount, coins_used, balance_before, balance_after, game_duration)
VALUES ('20250901100000D001P00000001', 'device-001', 'CARD-628001', 'CARD', now(), 0, 'online', 20, NULL, 100, 80, 5);
-- 4. Update member balance (relational)
UPDATE main.members SET balance = 80 WHERE card_id = 'CARD-628001';
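Wrapped in application code, the four steps above form one short routine. The following is a minimal sketch: it uses plain Python dicts in place of the main.members, main.devices, and trace.transactions tables so the control flow is visible without a live KWDB connection (the function and dict shapes are illustrative, not a KWDB API):

```python
from datetime import datetime, timezone

def card_payment(members, devices, transactions, card_id, device_id):
    """Sketch of the four card-mode steps against in-memory stand-ins
    for main.members, main.devices, and trace.transactions."""
    member = members[card_id]       # step 1: read balance (SELECT ... FOR UPDATE in SQL)
    device = devices[device_id]     # step 2: read device price
    price = device["price_per_play"]
    if member["balance"] < price:
        return {"result": "insufficient_balance"}
    before = member["balance"]
    after = before - price
    transactions.append({           # step 3: append the time-series transaction row
        "transaction_time": datetime.now(timezone.utc),
        "device_id": device_id,
        "card_id": card_id,
        "transaction_type": "CARD",
        "amount": price,
        "balance_before": before,
        "balance_after": after,
    })
    member["balance"] = after       # step 4: update the relational balance
    return {"result": "ok", "balance_after": after}
```

In a real deployment, steps 1 and 4 would run inside one relational transaction while step 3 is an independent time‑series append, since cross‑model transactions are not available in version 3.1.0.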

Coin‑Mode Implementation

-- 1. Get required coin count (optional)
SELECT coins_per_play, device_name FROM main.devices WHERE device_id = 'device-003';
-- 2. Insert coin transaction (no balance update)
INSERT INTO "trace".transactions (transaction_id, device_id, card_id, transaction_type, transaction_time, result_code, sync_status, amount, coins_used, balance_before, balance_after, game_duration)
VALUES ('20250901100000D003P00000201', 'device-003', NULL, 'COIN', now(), 0, 'online', NULL, 2, NULL, NULL, 10);
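The transaction IDs in the inserts above appear to encode a second‑resolution timestamp, a short device code, and a zero‑padded sequence number. Assuming that format (the article does not spell it out), a small generator keeps IDs unique per device and stable across retries:

```python
from datetime import datetime

def make_transaction_id(ts: datetime, device_code: str, seq: int) -> str:
    """Compose 'YYYYMMDDHHMMSS' + device code + 'P' + 8-digit sequence,
    matching IDs like '20250901100000D001P00000001' (format assumed)."""
    return f"{ts:%Y%m%d%H%M%S}{device_code}P{seq:08d}"
```

Because the terminal derives the ID deterministically, re‑sending the same transaction produces the same ID, which is what makes the offline back‑fill deduplication later in this article possible.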

Operational Analysis Example

A single SQL statement can combine card and coin usage with device status, removing the need for data export.

SELECT d.device_type, t.transaction_type,
       COUNT(*) AS play_count,
       AVG(t.game_duration) AS avg_duration,
       AVG(h.temperature) AS avg_device_temp,
       SUM(CASE WHEN h.error_code > 0 THEN 1 ELSE 0 END) AS error_count
FROM trace.transactions t
JOIN main.devices d ON t.device_id = d.device_id
LEFT JOIN trace.device_heartbeats h ON t.device_id = h.device_id
       AND h.report_time BETWEEN t.transaction_time - INTERVAL '1 MINUTE' AND t.transaction_time + INTERVAL '1 MINUTE'
WHERE t.transaction_time > NOW() - INTERVAL '365 DAY'
GROUP BY d.device_type, t.transaction_type
ORDER BY d.device_type, t.transaction_type;

Cross‑Model Query for Device Health & Revenue

Example 1 – Device Offline but Still Generating Transactions

SELECT t.device_id,
       MAX(d.device_name) AS device_name,
       COUNT(*) AS transaction_count
FROM trace.transactions t
LEFT JOIN trace.device_heartbeats h ON t.device_id = h.device_id
       AND h.report_time > NOW() - INTERVAL '10 MINUTE'
JOIN main.devices d ON d.device_id = t.device_id
WHERE t.transaction_time > NOW() - INTERVAL '1 HOUR'
  AND h.device_id IS NULL
GROUP BY t.device_id;

Example 2 – Online Device with Zero Transactions (Possible Fault)

SELECT h.device_id, h.report_time, h.temperature, h.coin_box_level, d.device_name
FROM device_heartbeats h
INNER JOIN main.devices d ON d.device_id = h.device_id
WHERE h.report_time > NOW() - INTERVAL '5 MINUTE'
  AND h.online_status = true
  AND NOT EXISTS (
        SELECT 1 FROM transactions t
        WHERE t.device_id = h.device_id
          AND t.transaction_time > NOW() - INTERVAL '30 MINUTE'
      );

Example 3 – Coin Box Near Full

SELECT h.device_id, d.device_name, d.location,
       h.coin_box_level AS box_level,
       h.report_time,
       COALESCE(SUM(t.coins_used),0) AS coins_last_hour,
       COALESCE(COUNT(t.transaction_id),0) AS tx_last_hour
FROM device_heartbeats h
JOIN devices d ON h.device_id = d.device_id
LEFT JOIN transactions t ON h.device_id = t.device_id
       AND t.transaction_time > NOW() - INTERVAL '1 HOUR'
WHERE h.coin_box_level > 80
  AND (h.device_id, h.report_time) IN (
        SELECT device_id, MAX(report_time)
        FROM device_heartbeats
        GROUP BY device_id)
GROUP BY h.device_id, d.device_name, d.location, h.coin_box_level, h.report_time
ORDER BY h.coin_box_level DESC;

Example 4 – Comprehensive Health & Revenue Prioritization

SELECT d.device_id, d.device_name, d.location, d.booth_id,
       COALESCE(rev.yesterday_revenue,0) AS yesterday_revenue,
       f.failure_count_7d,
       h.coin_box_level, h.online_status, h.temperature, h.last_report_time,
       (COALESCE(rev.yesterday_revenue,0)/100)*0.4 +
       (f.failure_count_7d)*10*0.3 +
       (CASE WHEN h.coin_box_level>80 THEN 50
             WHEN h.coin_box_level>60 THEN 30
             ELSE 0 END)*0.3 AS priority_score
FROM main.devices d
LEFT JOIN (
        SELECT device_id, SUM(amount) AS yesterday_revenue
        FROM trace.transactions
        WHERE transaction_time >= CURRENT_DATE - INTERVAL '1 DAY'
          AND transaction_time < CURRENT_DATE
          AND transaction_type IN ('CARD','COIN')
        GROUP BY device_id) rev ON d.device_id = rev.device_id
LEFT JOIN (
        SELECT device_id, COUNT(*) AS failure_count_7d
        FROM trace.device_heartbeats
        WHERE report_time >= NOW() - INTERVAL '7 DAY' AND error_code > 0
        GROUP BY device_id) f ON d.device_id = f.device_id
LEFT JOIN (
        SELECT device_id, coin_box_level, online_status, temperature, report_time AS last_report_time
        FROM trace.device_heartbeats
        WHERE (device_id, report_time) IN (
                SELECT device_id, MAX(report_time)
                FROM trace.device_heartbeats
                GROUP BY device_id)) h ON d.device_id = h.device_id
WHERE d.status = 'normal'
  AND (h.coin_box_level>60 OR f.failure_count_7d>0 OR h.online_status = false)
ORDER BY priority_score DESC
LIMIT 20;
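The weighted priority_score above can be mirrored in application code, which is handy for unit‑testing the weights before changing the SQL. A sketch (the helper name is ours; the weights and thresholds come directly from the query):

```python
def priority_score(yesterday_revenue: float,
                   failure_count_7d: int,
                   coin_box_level: int) -> float:
    """Replicate the SQL scoring expression: 40% revenue signal (per 100
    currency units), 30% failure signal (10 points per 7-day failure),
    30% coin-box fill bonus (50 if >80% full, 30 if >60%, else 0)."""
    revenue_part = (yesterday_revenue / 100) * 0.4
    failure_part = failure_count_7d * 10 * 0.3
    box_bonus = 50 if coin_box_level > 80 else 30 if coin_box_level > 60 else 0
    return revenue_part + failure_part + box_bonus * 0.3
```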

Offline Operation & Automatic Back‑Fill

In areas with poor network, terminals store up to 10,000 offline records. When connectivity returns, records are uploaded with idempotent logic to avoid duplicates or loss.
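The article does not show the back‑fill code itself, but the idempotency requirement reduces to deduplicating on transaction_id before insert. A minimal sketch of the upload handler (names are illustrative; in production the "already synced" set would be a database lookup or a unique‑key constraint rather than an in‑memory set):

```python
def backfill_offline_batch(synced_ids: set, batch: list) -> list:
    """Accept only records whose transaction_id has not been seen before,
    so a retried or partially re-sent batch never creates duplicates."""
    accepted = []
    for record in batch:
        tid = record["transaction_id"]
        if tid in synced_ids:
            continue  # duplicate from an earlier (possibly interrupted) upload
        synced_ids.add(tid)
        accepted.append({**record, "sync_status": "offline_synced"})
    return accepted
```

Re‑running the same batch is a no‑op, which is exactly the property needed when a terminal reconnects and cannot know how much of its previous upload succeeded.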

Project Hotness Analysis & Dynamic Pricing

WITH hourly AS (
    SELECT d.device_id, d.device_name, d.device_type,
           t.transaction_time::date AS play_date,
           EXTRACT(DOW FROM t.transaction_time) AS day_of_week,
           EXTRACT(HOUR FROM t.transaction_time) AS hour_of_day,
           COUNT(*) AS play_count,
           SUM(t.amount) AS revenue
    FROM trace.transactions t
    JOIN main.devices d ON t.device_id = d.device_id
    WHERE t.transaction_time >= NOW() - INTERVAL '14 DAY'
    GROUP BY d.device_id, d.device_name, d.device_type, play_date, day_of_week, hour_of_day
)
SELECT device_name, device_type, hour_of_day, play_count, revenue,
       play_count - LAG(play_count) OVER (
           PARTITION BY device_id, day_of_week, hour_of_day
           ORDER BY play_date) AS vs_last_week
FROM hourly
WHERE play_date >= CURRENT_DATE - INTERVAL '7 DAY'
ORDER BY play_count DESC;

Hot‑Cold Data Tiering (Enterprise Edition)

Business Challenge

Transaction volume can exceed 10 million rows per year, and heartbeats add roughly 100,000 rows per device annually. Storing all of it in a single tier degrades query performance and inflates storage cost.

Tiered Storage Mechanism

Data is automatically moved among three directories based on age: hot (0‑30 days), warm (30‑90 days) and cold (>90 days). The tier configuration is defined in ts_tier.cfg:

/data/kwdb/hot,0
/data/kwdb/warm,1
/data/kwdb/cold,2

Tier Management Commands

SET CLUSTER SETTING ts.tier.duration = "30d,90d";

Hot data stays in level 0, warm in level 1, cold in level 2. The system migrates data automatically as it ages.
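The policy reduces to mapping a record's age onto a tier level. Expressed as a tiny helper (an illustration of the "30d,90d" setting above, not a KWDB API):

```python
def storage_tier(age_days: int, hot_days: int = 30, warm_days: int = 90) -> int:
    """Map record age to the tier levels listed in ts_tier.cfg:
    0 = hot (< 30 days), 1 = warm (30-90 days), 2 = cold (>= 90 days)."""
    if age_days < hot_days:
        return 0
    if age_days < warm_days:
        return 1
    return 2
```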

Big‑Screen Visualization

Dashboard Design

The dashboard shows global KPIs, device health alerts, hot‑device rankings, booth revenue, and a geographic heat map.

Global KPI Queries

-- Today's total revenue
SELECT SUM(amount) AS today_revenue FROM trace.transactions WHERE transaction_time::date = CURRENT_DATE AND result_code = 0;

-- Today's unique visitors
SELECT COUNT(DISTINCT card_id) AS today_visitors FROM trace.transactions WHERE transaction_time::date = CURRENT_DATE;

-- Device online rate
SELECT ROUND(100.0 * SUM(CASE WHEN online_status THEN 1 ELSE 0 END) / COUNT(*),2) AS online_rate
FROM (SELECT device_id, online_status FROM trace.device_heartbeats
      WHERE (device_id, report_time) IN (SELECT device_id, MAX(report_time) FROM trace.device_heartbeats GROUP BY device_id)) t;
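The same KPI can also be computed application‑side from each device's latest heartbeat, for example in a dashboard service that caches heartbeats in memory (a sketch; the data shape is an assumption):

```python
def online_rate(latest_heartbeats: dict) -> float:
    """Percentage of devices whose most recent heartbeat reports online,
    mirroring the SQL online-rate KPI (rounded to two decimals)."""
    if not latest_heartbeats:
        return 0.0
    online = sum(1 for hb in latest_heartbeats.values() if hb["online_status"])
    return round(100.0 * online / len(latest_heartbeats), 2)
```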

Device Monitoring (Real‑Time Alerts)

-- Faulty devices
SELECT d.device_name, d.location, h.error_code, h.report_time AS last_heartbeat
FROM device_heartbeats h
JOIN main.devices d ON h.device_id = d.device_id
WHERE h.error_code > 0
  AND (h.device_id, h.report_time) IN (SELECT device_id, MAX(report_time) FROM device_heartbeats GROUP BY device_id)
ORDER BY h.report_time DESC;

-- Coin box >80%
SELECT d.device_name, d.location, h.coin_box_level
FROM device_heartbeats h
JOIN main.devices d ON h.device_id = d.device_id
WHERE h.coin_box_level > 80
  AND (h.device_id, h.report_time) IN (SELECT device_id, MAX(report_time) FROM device_heartbeats GROUP BY device_id);

Business Analysis (Top Devices)

-- Top 10 devices by play count today
SELECT d.device_name, COUNT(*) AS play_count, SUM(t.amount) AS revenue
FROM trace.transactions t
JOIN main.devices d ON t.device_id = d.device_id
WHERE t.transaction_time::date = CURRENT_DATE
GROUP BY d.device_name
ORDER BY play_count DESC
LIMIT 10;

-- Hourly heat (last 7 days)
SELECT EXTRACT(HOUR FROM t.transaction_time) AS hour, COUNT(*) AS play_count
FROM trace.transactions t
WHERE t.transaction_time >= NOW() - INTERVAL '7 DAY'
GROUP BY hour
ORDER BY hour;

Booth Revenue Board

SELECT d.booth_id,
       SUM(t.amount) AS revenue,
       SUM(t.amount * d.split_ratio) AS park_share,
       SUM(t.amount * (1 - d.split_ratio)) AS booth_share
FROM trace.transactions t
JOIN main.devices d ON t.device_id = d.device_id
WHERE t.transaction_time::date = CURRENT_DATE
GROUP BY d.booth_id
ORDER BY revenue DESC;
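Per the board query above, split_ratio is treated as the park's share of each transaction. The same split can be applied per transaction in application code, e.g. when generating an owner's settlement statement (helper name is ours):

```python
def split_revenue(amount: float, split_ratio: float) -> tuple:
    """Split one transaction as in the booth revenue board:
    the park gets amount * split_ratio, the booth owner the remainder."""
    park_share = round(amount * split_ratio, 2)
    booth_share = round(amount - park_share, 2)
    return park_share, booth_share
```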

Geographic Heat Map

SELECT d.location, COUNT(*) AS recent_plays
FROM trace.transactions t
JOIN main.devices d ON t.device_id = d.device_id
WHERE t.transaction_time > NOW() - INTERVAL '30 MINUTE'
GROUP BY d.location;

Monitoring & Observability

Stack Overview

Prometheus – collects KWDB metrics.

Alertmanager – processes alerts from Prometheus.

Grafana – visualizes metrics and dashboards.

cAdvisor – monitors container resource usage.

Node Exporter – gathers host‑level metrics.

Service Deployment

kwdb-prometheus: image prom/prometheus:v2.53.0, port 9090

kwdb-alertmanager: image prom/alertmanager:v0.26.0, port 9093

kwdb-grafana: image grafana/grafana:10.2.2, port 3000, default login admin / admin

kwdb-cadvisor: image gcr.io/cadvisor/cadvisor:latest, port 8088, privileged mode

Grafana Dashboards

Pre‑built dashboards cover overall overview, hardware, runtime, SQL, storage, replication, distribution, queue and slow‑query monitoring. They display QPS, connection counts, memory usage, transaction latency and error rates.

Security Design

KWDB supports SSL/TLS encryption for data in transit, username/password authentication, host‑based access control, fine‑grained role‑based permissions and comprehensive audit logging, meeting financial‑grade security requirements.

High Availability, Disaster Recovery & Backup

KWDB uses a Raft‑based leader‑follower consensus with a default three‑replica configuration, providing strong consistency and automatic failover. Write‑Ahead Logging (WAL) enables point‑in‑time recovery. Backup is performed via logical export/import at database or table level.

Conclusion & Outlook

Unified multi‑model data platform eliminates data silos and simplifies dual‑mode billing.

Cross‑model queries enable powerful operational analytics with a single SQL statement.

Hot‑cold tiering reduces storage cost while keeping recent data fast.

Built‑in monitoring, alerting and HA ensure 24/7 reliability.

Future directions include edge‑KWDB Lite for richer offline logic, AI‑driven visitor flow prediction, personalized offers, digital twins of devices, and multi‑park data federation.

Tags: Time Series Database, Data Integration, Amusement Park, Dual-Mode Payments, KaiwuDB
Written by

Wukong Talks Architecture

Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.
