
How Lalamove Built a Scalable Localization Testing Platform with Prism

Facing resource constraints, complex market variations, and limited tooling, Lalamove’s QA team designed the Prism platform to automate multi‑market, multi‑language data testing, push verification, and comprehensive localization validation, leveraging Faker for data generation, custom pipelines, and integrated reporting to boost test efficiency and coverage.

Huolala Tech

Background and Challenges

In a globalized business environment, companies increasingly prioritize product localization to meet diverse regional user needs and habits. Localization involves language, culture, platform compatibility, legal compliance, user preferences, time zones, currency management, and frequent content changes. Test engineers must verify these aspects to deliver an excellent user experience, facing challenges such as limited resources, complex scenarios, subtle regional differences, and a lack of suitable tools.

Resource constraints: Localization testing requires professional translators and test engineers, which many companies lack.

Complex scenarios: Includes translation, special character handling, currency conversion, time‑zone adjustment, UX optimization, and compliance testing, leading to time‑consuming configurations.

Subtle differences: Regional user expectations vary, making functional consistency across markets a major challenge.

Technical integration: Few off‑the‑shelf tools meet our needs, requiring custom development.

Solution

To reduce repetitive regression work and improve testing efficiency, we built the Prism testing platform. Prism offers a unified front‑end, supporting localization data testing, multi‑market push testing, multi‑language translation testing, and automated time‑zone validation, aiming to become a comprehensive internationalization testing hub.

Implementation

Below, we introduce the tools we developed during our localization testing work.

3.1 Localization Data Testing

As Lalamove expands globally, the data and text to be tested grow more diverse. Traditional UI and API automation struggle to detect unknown data defects efficiently. We built a data‑generation solution on top of the company's MTC UI automation and QACI API automation capabilities to automatically detect potential multi‑market data‑compatibility defects.

3.1.1 Data Prototype Collection

User feedback: Collected via surveys, online feedback, and support lines, revealing real‑world data issues such as name, address, and contact fields.

Market research: Leveraged industry reports to understand market trends and competitor data handling.

Known defects: Extracted from historical test records (e.g., input format errors, character parsing, language mismatches).

3.1.2 Requirement Analysis

Based on product modules and workflows, we defined the languages and text types to test, creating datasets for English, Traditional Chinese, Thai, Vietnamese, etc., and considering different stylistic expressions to validate app text handling and user experience.

3.1.3 Tool Selection

We chose the widely used multi‑language data library Faker. Faker provides rich language resources and grammar rules, supporting global locales and locale‑specific fields such as street names and postal codes.

Faker also generates random data, ensuring each automation run uses varied test scenarios to uncover more potential defects.

3.1.4 Data Construction

We focused on core user and driver input types (first name, last name, full name, address, email, phone, etc.).

import com.github.javafaker.Faker;
import java.util.Locale;

Faker faker = new Faker(new Locale("en", "US"));

// Generate a random name
String firstName = faker.name().firstName();
String lastName = faker.name().lastName();
String fullName = faker.name().fullName();

// Generate a random address
String streetAddress = faker.address().streetAddress();
String city = faker.address().city();
String state = faker.address().state();
String country = faker.address().country();
String zipCode = faker.address().zipCode();

// Generate a random email
String email = faker.internet().emailAddress();

We also needed to handle varying ID formats across countries. Faker offers letterify, numerify, and bothify to create random alphabetic, numeric, or alphanumeric strings.

FakeValuesService requires a Locale and a RandomService:

import com.github.javafaker.service.FakeValuesService;
import com.github.javafaker.service.RandomService;
import java.util.Locale;

public String generateIdentityID() {
    FakeValuesService fakeValuesService = new FakeValuesService(new Locale("en-GB"), new RandomService());
    return fakeValuesService.bothify("????####??");
}

This replaces "?" with random letters and "#" with random digits.

Similarly, regexify generates strings matching a given regular expression:

@Test
public void givenValidService_whenRegexifyCalled_checkPattern() {
    FakeValuesService fakeValuesService = new FakeValuesService(new Locale("en-GB"), new RandomService());
    String alphaNumericString = fakeValuesService.regexify("[a-z1-9]{10}");
    Matcher alphaNumericMatcher = Pattern.compile("[a-z1-9]{10}").matcher(alphaNumericString);
    assertTrue(alphaNumericMatcher.matches());
}

3.1.5 Interaction Design

Transmission efficiency: Adopted efficient data transfer protocols and compression, optimizing network flow.

Transmission security: Used HTTPS encryption and retry mechanisms for UI automation data transfer.

Process flow:

Prism platform provides predefined data templates and a friendly front‑end for test engineers.

Automation platforms (mobile cloud test and API automation) request data from Prism, integrate it into test scenarios, and report results.

QA engineers use Prism‑generated data during functional testing.
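The template side of this flow can be sketched as a simple model: a named set of fields bound to a market and language, filled with values on request. All class and field names here are illustrative, not Prism's actual API; a real implementation would delegate generation to a locale‑aware library such as Faker.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Hypothetical sketch of a Prism data template.
public class DataTemplate {
    private final String market;
    private final String language;
    private final List<String> fields;

    public DataTemplate(String market, String language, List<String> fields) {
        this.market = market;
        this.language = language;
        this.fields = fields;
    }

    // Fill every field with a placeholder value; a real implementation
    // would call a locale-aware generator instead.
    public Map<String, String> fill(Random random) {
        Map<String, String> values = new LinkedHashMap<>();
        for (String field : fields) {
            values.put(field, field + "-" + random.nextInt(10000));
        }
        return values;
    }

    public String getMarket() { return market; }
    public String getLanguage() { return language; }
}
```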

3.1.6 Assertion and Result Presentation

Assertion rules: Defined detailed rules covering text content, format, and special symbols for both UI and end‑to‑end API validation.

Result display: Automated test reports present data details, assertion descriptions, outcomes, and error messages. Visual analysis uses charts to show defect distribution and trends.
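A minimal sketch of how such an assertion rule might look, assuming three hypothetical checks: content equality, absence of mojibake (U+FFFD replacement characters, a common symptom of encoding defects), and an optional format pattern.

```java
import java.util.regex.Pattern;

// Sketch of a localization text assertion (hypothetical rule set).
public class TextAssertion {
    public static boolean passes(String rendered, String expected, Pattern format) {
        if (!rendered.equals(expected)) return false;           // content rule
        if (rendered.indexOf('\uFFFD') >= 0) return false;      // special-symbol rule: no mojibake
        return format == null || format.matcher(rendered).matches(); // optional format rule
    }
}
```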

3.2 Multi‑Market Push Testing

Traditional API automation focuses on response values and data persistence. Push verification requires checking generation, transmission, reception, and processing across components. We map the push flow, from the push center through downstream services to Kafka, then to FCM or MQTT for delivery.

3.2.1 Test Scenario Construction

Manual generation: Developers or testers trigger pushes during normal app usage.

Platform generation: Automated tasks create push messages linked to test cases.

Scenario assembly: Reuse and combine cases from existing automation projects via XML routing.

3.2.2 Task Integration

Three concepts link the automation projects and the platform:

Push assertion: Configurable expected results for each market.

Test scenario: Associates automation steps, assertions, and visual scripts.

Test task: Binds scenario, market, language, environment, and accounts; serves as execution entry point.

3.2.3 Message Persistence

We store only business‑relevant push events, filtering out device‑related noise. Kafka messages are categorized as successful, failed, or exceptional, and persisted according to the chosen strategy.
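The filter described above can be sketched roughly as follows; the event‑type prefix and status‑code semantics are illustrative assumptions, not the actual message schema.

```java
// Sketch of the persistence filter: drop device-related noise and
// bucket the remaining push events by outcome (hypothetical fields).
public class PushEventFilter {
    public enum Category { SUCCESSFUL, FAILED, EXCEPTIONAL, IGNORED }

    public static Category categorize(String eventType, Integer statusCode) {
        if (eventType.startsWith("device.")) return Category.IGNORED; // device noise, not persisted
        if (statusCode == null) return Category.EXCEPTIONAL;          // malformed or incomplete message
        return statusCode == 0 ? Category.SUCCESSFUL : Category.FAILED;
    }
}
```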

3.2.4 Comparison Assertion

Real‑time comparison: Immediate matching of incoming Kafka messages against filter rules.

Scheduled comparison: Periodic scans of stored messages that missed real‑time checks.
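The core of both comparison modes is the same matching step, sketched here under the assumption that an incoming message has been parsed to a flat key→value map: a message matches a rule when every configured expected field is present with the expected value.

```java
import java.util.Map;

// Sketch of the comparison assertion: check an incoming message against
// a configured rule of expected field values (field names illustrative).
public class PushRuleMatcher {
    public static boolean matches(Map<String, String> message, Map<String, String> rule) {
        for (Map.Entry<String, String> expected : rule.entrySet()) {
            if (!expected.getValue().equals(message.get(expected.getKey()))) {
                return false; // field missing or value differs
            }
        }
        return true; // all expected fields matched
    }
}
```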

3.2.5 Result Polling

Different push triggers (synchronous, asynchronous, scheduled) require configurable timeouts. For example, a marketing message must be sent within three minutes after user registration; the system records a timeout if the assertion is not met.
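A minimal sketch of this polling loop: evaluate the assertion repeatedly until it passes or the per‑trigger timeout elapses, recording a timeout (returning false) otherwise. The method and parameter names are illustrative.

```java
import java.util.function.BooleanSupplier;

// Sketch of configurable result polling with a per-trigger timeout.
public class ResultPoller {
    public static boolean poll(BooleanSupplier assertion, long timeoutMillis, long intervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (assertion.getAsBoolean()) return true;
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false; // treat interruption as a failed poll
            }
        }
        return assertion.getAsBoolean(); // one final check at the deadline
    }
}
```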

3.2.6 Result Notification

Upon test completion, results are pushed to a Feishu bot with a task link for easy lookup.

3.3 Multi‑Language Testing

Lalamove frequently validates translations across markets. We built and continuously improved a multi‑language testing tool integrated into Prism, offering automated checks for language detection, translation comparison, spelling, and long‑text handling.

3.3.1 Language Detection

We detect untranslated keys by parsing translation files and applying an N‑Gram language detection model built from Wikipedia and Twitter corpora.
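As a toy illustration of the N‑Gram idea (a production detector trains probabilistic profiles on large corpora, not the tiny sets here), one can build character trigram sets per language and score a string by trigram overlap, picking the best‑scoring language:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy sketch of character-trigram language detection.
public class NGramDetector {
    private final Map<String, Set<String>> profiles = new HashMap<>();

    // Add a language profile built from a training corpus.
    public void train(String language, String corpus) {
        profiles.computeIfAbsent(language, k -> new HashSet<>()).addAll(trigrams(corpus));
    }

    // Return the language whose profile shares the most trigrams with the text.
    public String detect(String text) {
        Set<String> textGrams = trigrams(text);
        String best = null;
        int bestScore = -1;
        for (Map.Entry<String, Set<String>> profile : profiles.entrySet()) {
            int score = 0;
            for (String gram : textGrams) {
                if (profile.getValue().contains(gram)) score++;
            }
            if (score > bestScore) {
                bestScore = score;
                best = profile.getKey();
            }
        }
        return best;
    }

    private static Set<String> trigrams(String text) {
        Set<String> grams = new HashSet<>();
        for (int i = 0; i + 3 <= text.length(); i++) {
            grams.add(text.substring(i, i + 3));
        }
        return grams;
    }
}
```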

3.3.2 Translation Comparison

We compare current branch translations with previous releases and with product‑provided CSV/Excel files to ensure consistency.
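The consistency check reduces to a diff of two key→text maps; a sketch, assuming translation files are already parsed into flat maps:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the translation comparison: diff the current branch's
// key->text map against a baseline (previous release or product sheet).
public class TranslationDiff {
    public static List<String> missingOrChanged(Map<String, String> current, Map<String, String> baseline) {
        List<String> issues = new ArrayList<>();
        for (Map.Entry<String, String> entry : baseline.entrySet()) {
            String currentText = current.get(entry.getKey());
            if (currentText == null) {
                issues.add("missing: " + entry.getKey());
            } else if (!currentText.equals(entry.getValue())) {
                issues.add("changed: " + entry.getKey());
            }
        }
        return issues;
    }
}
```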

3.3.3 Spell Check

Full‑text spell checking identifies misspellings in translated content.

3.3.4 Long‑Text Validation

We set dynamic length thresholds per market; for example, a string that fits in two lines in English may require three lines in Vietnamese, risking UI overflow. The tool flags texts exceeding the threshold.
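The per‑market threshold can be expressed as an expansion factor over the English source length; the factors below are illustrative placeholders, not Lalamove's actual configuration.

```java
import java.util.Map;

// Sketch of the dynamic per-market length check (illustrative factors).
public class LongTextChecker {
    private static final Map<String, Double> EXPANSION = Map.of(
            "VN", 1.5,   // Vietnamese often runs longer than English
            "TH", 1.4,
            "HK", 1.0);

    // Flag a translated string when it exceeds the market's allowance
    // relative to the English source length.
    public static boolean exceedsThreshold(String market, int englishLength, int translatedLength) {
        double allowed = englishLength * EXPANSION.getOrDefault(market, 1.3);
        return translatedLength > allowed;
    }
}
```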

Future Outlook

Localization testing has shown early success, but the journey continues. Future work includes deepening platform tool development, enhancing data accuracy and diversity, integrating with other automation suites, expanding detection types (currency, compliance), combining long‑text checks with UI automation, and adopting advanced machine translation and NLP for automated quality assessment.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.
