How to Rigorously Test Lottery Modules for Reliability and Performance
This article explains comprehensive methods for testing lottery modules—including probability verification, high‑concurrency simulation with JMeter, and real‑time data monitoring using Grafana—to ensure stable, accurate prize distribution during large‑scale promotional events.
Background Introduction
We encounter many online operational activities daily, such as lotteries, voting, and user invitations. Lotteries are among the most common and effective formats, which makes them a priority for testing. A faulty lottery can cause serious issues, including incorrect prize distribution, economic loss, and poor user experience.
Probability errors may lead to incorrect prize allocation, reducing activity effectiveness or causing over‑distribution of high‑value prizes.
Insufficiently robust code can cause exceptions under concurrency, potentially resulting in prize over‑issuance.
Failure to award prizes after a successful draw severely harms user experience.
To address these problems, we must conduct thorough lottery module testing, monitor online data after launch, and respond quickly to any issues.
The article explores lottery testing from three dimensions:
Lottery probability testing
Lottery concurrency testing
Online data monitoring
Lottery Module Testing
Lottery mechanisms generally fall into three categories: probability‑based draws, "X‑th" wins, and physical random‑number draws. This article focuses on the most common probability‑based approach.
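To make the probability-based approach concrete, here is a minimal sketch of a weighted draw. The prize table and its probabilities are hypothetical, not the actual configuration of any real activity:

```python
import random

# Hypothetical prize table: (prize name, configured probability).
# Probabilities must sum to 1.0, with 'none' absorbing the losing draws.
PRIZES = [('grand', 0.01), ('coupon', 0.19), ('none', 0.80)]

def draw():
    """One probability-based draw: walk the cumulative weights and
    return the first prize whose bucket contains the random value."""
    r = random.random()
    cum = 0.0
    for name, p in PRIZES:
        cum += p
        if r < cum:
            return name
    return PRIZES[-1][0]  # guard against floating-point rounding at the top end
```

Over many draws, the observed frequencies should converge toward the configured weights, which is exactly what the probability test below verifies.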
Lottery Probability Testing
Testing must consider user eligibility, draw probability, and prize issuance. Manual testing of probabilities is time‑consuming; automated testing via APIs is more efficient.
The actual probability of a prize equals prize wins / total draws. If the deviation from the configured probability is within ±1%, the implementation is considered correct.
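The ±1% acceptance rule can be expressed as a small helper, a sketch to show the check rather than the team's actual assertion code:

```python
def within_tolerance(observed_wins, total_draws, configured_p, tol=0.01):
    """Pass if |observed probability - configured probability| <= tol (±1%)."""
    actual_p = observed_wins / total_draws
    return abs(actual_p - configured_p) <= tol
```

For example, 105 wins in 1,000 draws against a configured 10% passes (deviation 0.5%), while 130 wins fails (deviation 3%).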
We use Python to simulate 1,000 draws via the lottery API. The script is shown below:
<code>import random
import requests

def rq(count, moneyRange):
    # Fire `count` draw requests against the lottery API and tally the results.
    result = {}
    if moneyRange != []:
        for c in range(count):
            money = random.choice(moneyRange)
            resp = requests.get('http://xxx?money=%d' % int(money))
            if float(resp.text) > 0:
                result_money = float(resp.text)
                percent = str(result_money / int(money))
                if money not in result:
                    result[money] = {'total': 1, percent: 1}
                elif percent not in result[money]:
                    result[money]['total'] += 1
                    result[money][percent] = 1
                else:
                    result[money]['total'] += 1
                    result[money][percent] += 1
    return result

def analysis(dic):
    # Print the observed probability of each payout ratio per coupon value.
    for money in dic:
        total = dic[money]['total']
        for k, v in dic[money].items():
            if k != 'total':
                print('%s cash coupon %s times probability: %.2f%%' % (money, k, v * 100.0 / total))
</code>Running the script with 1,000 iterations yields probabilities that closely match the design. It is recommended to run the simulation three times to ensure accuracy, especially when prize quantities are limited.
Lottery Concurrency Testing
Lottery is a classic high‑concurrency scenario. During large promotions like Double‑Eleven, traffic spikes can cause multiple users to compete for the last prize, potentially leading to over‑issuance or negative inventory.
Key concurrency test points include:
Ensuring each draw consumes the correct number of points.
Verifying that users marked as winners receive the prize.
Confirming that the total number of awarded prizes does not exceed the configured limit.
After validating these points, the load can be gradually increased, and repeated draws within short intervals (e.g., 10 draws per user within 10 seconds) can be tested.
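Before driving real load with JMeter, the core invariant can be checked in-process. The sketch below uses a local stand-in for the prize service (not the real implementation) and races several threads for a single remaining prize; the lock is what prevents two threads from both seeing stock == 1 and over-issuing:

```python
import threading

class PrizePool:
    """In-memory stand-in for the prize inventory (hypothetical, for illustration)."""
    def __init__(self, stock):
        self.stock = stock
        self.lock = threading.Lock()
        self.winners = []

    def draw(self, user):
        # Atomic check-and-decrement: without the lock, two threads could
        # both observe stock == 1 and the last prize would be issued twice.
        with self.lock:
            if self.stock > 0:
                self.stock -= 1
                self.winners.append(user)
                return True
            return False

# Three users race for the last remaining prize.
pool = PrizePool(stock=1)
threads = [threading.Thread(target=pool.draw, args=(u,)) for u in ('u1', 'u2', 'u3')]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the race, exactly one user should appear in `pool.winners` and the stock should be zero, which mirrors the assertion the JMeter scenario makes against the real API.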
Concurrency Test Case
Using JMeter, we simulate multiple users attempting to claim the last prize simultaneously. The transaction includes point verification, prize generation, and point deduction. If a user completes the transaction after the prize has been taken, the transaction rolls back and the user receives a failure response.
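The transaction-with-rollback pattern described above can be sketched with SQLite standing in for the real database. The schema and table names are hypothetical; the point is that point verification, prize generation, and point deduction happen in one transaction, so a late arrival loses nothing:

```python
import sqlite3

# isolation_level=None -> autocommit mode, so we manage BEGIN/COMMIT/ROLLBACK explicitly.
conn = sqlite3.connect(':memory:', isolation_level=None)
conn.execute('CREATE TABLE prizes (id INTEGER PRIMARY KEY, remaining INTEGER)')
conn.execute('CREATE TABLE points (user TEXT PRIMARY KEY, balance INTEGER)')
conn.execute('INSERT INTO prizes VALUES (1, 1)')
conn.execute("INSERT INTO points VALUES ('alice', 100)")

def claim_prize(user, cost=10):
    """Verify stock, take the prize, and deduct points in one transaction.
    If the prize is already gone, roll back so the user keeps their points."""
    try:
        conn.execute('BEGIN IMMEDIATE')
        remaining = conn.execute('SELECT remaining FROM prizes WHERE id = 1').fetchone()[0]
        if remaining <= 0:
            raise RuntimeError('prize exhausted')
        conn.execute('UPDATE prizes SET remaining = remaining - 1 WHERE id = 1')
        conn.execute('UPDATE points SET balance = balance - ? WHERE user = ?', (cost, user))
        conn.execute('COMMIT')
        return True
    except Exception:
        conn.execute('ROLLBACK')
        return False
```

The first claim succeeds and deducts points; a second claim finds the stock exhausted, rolls back, and returns a failure response without touching the user's balance.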
Using JMeter for Concurrency Testing
JMeter, an open-source load-testing tool from the Apache Software Foundation, provides a Synchronizing Timer that creates a "rendezvous point": threads accumulate until a defined count is reached, then all fire simultaneously. By configuring three users to hit the lottery API at the same moment, we can observe whether more than one prize is issued.
If three prizes are issued when only one was available, the implementation needs optimization.
Online Data Monitoring
Why Monitor Online Data
Even after thorough testing, issues may arise in production. Early detection via monitoring minimizes impact. Relying solely on user feedback leads to delayed response.
Grafana can be used to visualize key metrics in real time.
Monitoring Metrics
Lottery success rate
Number of successful draws
Total prizes issued
Prize distribution per prize type
Number of participating users (to detect cheating)
Lottery API response time
These metrics help assess both functional stability and the effectiveness of the promotional activity.
Setting Alert Thresholds
Alert thresholds should be based on historical data. For example, a success‑rate threshold of 80% can trigger alerts via email, WeChat, SMS, or other channels.
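The threshold check itself is simple; the sketch below mimics what a Grafana alert rule would evaluate. The 80% figure follows the example above, but real thresholds should come from historical data:

```python
# Hypothetical threshold; in practice derive it from historical success rates.
SUCCESS_RATE_THRESHOLD = 0.80

def check_success_rate(successful_draws, total_draws):
    """Return an alert message when the lottery success rate falls below
    the threshold; return None when everything looks healthy."""
    rate = successful_draws / total_draws if total_draws else 1.0
    if rate < SUCCESS_RATE_THRESHOLD:
        return 'ALERT: lottery success rate %.1f%% below %.0f%%' % (
            rate * 100, SUCCESS_RATE_THRESHOLD * 100)
    return None
```

A returned message would then be routed to email, WeChat, SMS, or whichever notification channel the team uses.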
Data Monitoring Example
Comparing successful draw counts with prize issuance reveals discrepancies. In one case, the success count slightly exceeded the number of prizes issued, indicating users who drew without receiving a prize. The development team identified the affected accounts and resolved the issue promptly.
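Locating the affected accounts is a set difference between the two logs. A minimal sketch, assuming each log is a list of user IDs (the real pipeline would read these from the monitoring store):

```python
def find_unrewarded(success_log, issue_log):
    """Users who appear in the draw-success log but not in the
    prize-issuance log - i.e., they won but never received a prize."""
    return sorted(set(success_log) - set(issue_log))
```

For example, if `u1`, `u2`, and `u3` drew successfully but only `u1` and `u3` received prizes, the function flags `u2` for follow-up.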
Summary
The article covered three key aspects of lottery module testing: probability testing, concurrency testing, and online data monitoring. Together, they ensure the reliability and stability of lottery features, which are critical to the success of large‑scale promotional activities.
Guarantee core functionality: verify draw flow, probability, prize issuance, limits, and concurrency.
Maintain activity stability: continuously monitor online data and respond quickly to issues.
By abstracting common testing patterns and reusing scripts, testing efficiency improves for future lottery implementations.
Baixing.com Technical Team