
Refactoring a Legacy Payment System to Microservices: Challenges, Decisions, and Lessons Learned

This article recounts how a rapidly growing payment team, newly split off from its original department, hit scaling, performance, and maintainability limits with a legacy SSH-based (Struts, Spring, Hibernate) Java architecture and chose a microservice redesign. It details the problems encountered, the restructuring approach, and the lessons learned.


Original Architecture

From a technical viewpoint, the original system was a traditional Java multi-layer architecture built on the SSH stack (Struts, Spring, Hibernate): Apache Struts for the presentation layer, Spring (with AOP) for the service layer, Hibernate for data access, and a single MySQL database with multiple tables.

The codebase felt dated: about four years old, but built on a stack that was already a decade old. All data lived in a single MySQL instance with no read/write separation.

Architectural Problems

DAO Layer: Hibernate abstracted database access, which simplified development but hid the underlying SQL, making performance tuning difficult in high-volume scenarios.
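A common symptom of hidden SQL is the N+1 query pattern, where lazy loading silently issues one statement per entity. The sketch below simulates this with plain in-memory stand-ins (not real Hibernate) purely to show why query counts can balloon unnoticed; the names and counts are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Simulated ORM layer: each lazy association access costs one "query",
// which stays invisible until you actually count statements.
public class NPlusOneDemo {
    static int queriesIssued = 0;

    // Stand-in for a lazily loaded association (e.g., order -> customer).
    static String loadCustomerName(int orderId) {
        queriesIssued++;                      // SELECT name FROM customer ...
        return "customer-" + orderId;
    }

    public static void main(String[] args) {
        queriesIssued++;                      // SELECT * FROM orders (1 query)
        List<Integer> orderIds = new ArrayList<>();
        for (int i = 0; i < 50; i++) orderIds.add(i);

        // ORM-style iteration: 1 list query + 50 lazy loads = 51 queries.
        for (int id : orderIds) loadCustomerName(id);
        System.out.println("queries with lazy loading: " + queriesIssued);

        // Hand-written SQL equivalent: a single JOIN fetches everything.
        queriesIssued = 0;
        queriesIssued++;                      // SELECT o.*, c.name ... JOIN ...
        System.out.println("queries with explicit JOIN: " + queriesIssued);
    }
}
```

The point is not that ORMs are wrong, but that tuning requires seeing the SQL the abstraction generates.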

Service Layer: The service layer was tightly coupled to the controller layer, passed generic Map parameters instead of domain objects, over-relied on factory patterns without proper domain modeling, and injected Spring BeanFactory or Facade services excessively, leading to unclear dependencies and bloated controllers.
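The cost of generic Map parameters is concrete: callers must know magic string keys, and mistakes surface only at runtime. A minimal sketch of the contrast, using a hypothetical PaymentRequest type (not a class from the original system):

```java
import java.util.HashMap;
import java.util.Map;

public class ServiceParamsDemo {

    // Legacy style: magic keys and unchecked casts; typos fail at runtime.
    static String payWithMap(Map<String, Object> params) {
        Long amount = (Long) params.get("amount");    // unchecked cast
        String orderId = (String) params.get("orderId");
        return "paid " + amount + " for " + orderId;
    }

    // Domain-modeled style: the compiler enforces the contract.
    record PaymentRequest(String orderId, long amountInCents) {}

    static String pay(PaymentRequest req) {
        return "paid " + req.amountInCents() + " for " + req.orderId();
    }

    public static void main(String[] args) {
        Map<String, Object> params = new HashMap<>();
        params.put("amount", 500L);
        params.put("orderId", "A-1");
        System.out.println(payWithMap(params));
        System.out.println(pay(new PaymentRequest("A-1", 500)));
    }
}
```

With the typed version, a misspelled field or wrong type is a compile error rather than a production NullPointerException.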

Controller Layer: Implemented with Apache Struts, which has a history of security vulnerabilities and slow patching. Because of the service-layer issues, controllers also contained business logic, making them hard to test.

Functional Problems

The system provided only basic payment functions and did not separate services for different clients or for operational management, resulting in a monolithic design.

Implementation Problems

Poor Scalability and Performance: All database operations ran against a single MySQL instance with no read/write separation. The largest table held 50 million rows, and peak QPS was around 1,000, sustained mainly by simple queries and SSD storage. Anticipated traffic spikes (e.g., flash sales, 秒杀) would exceed MySQL's capacity, requiring a shift to in-memory stores and finer-grained SQL optimization.
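Read/write separation, one of the techniques the single-instance setup ruled out, can be sketched at the application layer as a router that sends writes to the primary and rotates reads across replicas. This is a minimal illustration with string stand-ins for data sources; the instance names are invented for the example.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ReadWriteRouter {
    private final String primary;
    private final List<String> replicas;
    private final AtomicInteger next = new AtomicInteger();

    public ReadWriteRouter(String primary, List<String> replicas) {
        this.primary = primary;
        this.replicas = replicas;
    }

    // All writes go to the primary to keep a single source of truth.
    public String routeWrite() { return primary; }

    // Reads rotate round-robin across replicas to spread read QPS.
    public String routeRead() {
        int i = Math.floorMod(next.getAndIncrement(), replicas.size());
        return replicas.get(i);
    }

    public static void main(String[] args) {
        ReadWriteRouter r = new ReadWriteRouter("mysql-primary",
                List.of("mysql-replica-1", "mysql-replica-2"));
        System.out.println(r.routeWrite());  // mysql-primary
        System.out.println(r.routeRead());   // mysql-replica-1
        System.out.println(r.routeRead());   // mysql-replica-2
    }
}
```

A real deployment would route actual DataSource objects and account for replication lag on reads that must be fresh.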

System Bloat and Long Learning Curve: The codebase comprised three projects with over 100 interfaces and thousands of classes (the largest project exceeded 1,300 classes). New developers struggled with the massive codebase, encountering "dinosaur-level" code, huge classes of 2,000+ lines, and duplicated logic.

High Collaboration Cost: Frequent branch merges and code conflicts cost significant time; five or six branches were often active simultaneously, forcing daily conflict resolution.

Testing Difficulty: Setting up a test environment for each branch was costly, and merged changes often required retesting, slowing progress.

Deployment Risk: Complex interdependencies meant that even a small change could jeopardize the whole system, requiring long deployment windows.

Difficulty Introducing New Technologies: The SSH framework limited adoption of newer techniques such as caching and read/write splitting, making technical upgrades costly and difficult.

Conway’s Law

Initially, a handful of developers maintained the system, mixing external interfaces and internal operations. As the team grew, the existing collaboration model no longer fit, illustrating Conway’s Law: the system’s structure mirrors the organization’s communication structure.

Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.

A new mechanism was needed to support the evolving system.

Layered Architecture and Shared Libraries

To avoid massive changes, the team first leveraged the existing layered design, encapsulating business logic, data access, and presentation layers into independent libraries, splitting the interface layer by business domain, and establishing separate code repositories for each library.
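The library split described above maps naturally onto a multi-module build, with each layer and business domain as an independently versioned artifact. A hypothetical Maven parent POM sketch (module names are illustrative, not from the original project):

```xml
<!-- Hypothetical parent POM: each layer / business domain becomes an
     independently versioned module with its own repository. -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example.payment</groupId>
  <artifactId>payment-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>
  <modules>
    <module>payment-dao</module>          <!-- data access layer -->
    <module>payment-service</module>      <!-- core business logic -->
    <module>payment-web</module>          <!-- presentation layer -->
    <module>payment-api-merchant</module> <!-- interface layer, split by domain -->
  </modules>
</project>
```

Each module can then be assigned to a different owner, which is what enables the role split described next.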

This approach reduced learning time because each layer could be tackled independently: newcomers start with the DAO layer, senior engineers handle core business logic, and front‑end developers focus on the presentation layer.

However, this model required solid upfront architectural design and introduced high communication overhead when interfaces changed, which is often unrealistic in fast‑moving internet companies.

Microservices

Before refactoring the payment system, the company had already migrated its data-warehouse project to a microservice architecture, and that experience was shared with the DockOne community.

Adopting microservices addressed many of the earlier problems:

Performance: High-traffic interfaces could be accelerated with dedicated caches.

Learning Curve: Small, highly cohesive services with simple business logic could be understood in one to two hours.

Collaboration Cost: Services interact only through well-defined APIs, allowing independent development, testing, and deployment.

Version Control: Each service lives in its own repository; with mostly single-developer ownership, merge conflicts are virtually eliminated, enabling trunk-based development.

Testing: Each service can be deployed and tested independently, reducing duplicated test effort.

Deployment Risk: A failure affects only the impacted service, not the entire system.

New Technologies: After a quarter of microservice migration, technologies such as Spark, Hadoop, HBase, Couchbase, and Redis were introduced to address big-data and caching challenges.
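The "dedicated caches" point above is usually implemented as the cache-aside pattern: check the cache, fall through to the database on a miss, and populate the cache for subsequent reads. A self-contained sketch in which a ConcurrentHashMap stands in for Redis and a loader function stands in for the database query (both are stand-ins, not the article's actual implementation):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class CacheAsideDemo {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> loader;
    int dbHits = 0;  // counts how often we fall through to the "database"

    public CacheAsideDemo(Function<String, String> loader) {
        this.loader = loader;
    }

    public String get(String key) {
        String v = cache.get(key);
        if (v == null) {              // cache miss: load and populate
            dbHits++;
            v = loader.apply(key);
            cache.put(key, v);
        }
        return v;
    }

    public static void main(String[] args) {
        CacheAsideDemo orders = new CacheAsideDemo(k -> "status-of-" + k);
        orders.get("order-1");        // miss -> 1 db hit
        orders.get("order-1");        // hit  -> still 1 db hit
        System.out.println("db hits: " + orders.dbHits);
    }
}
```

A production version against Redis would also need a TTL and an invalidation strategy on writes, which is where most of the real design effort goes.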

Nevertheless, microservices bring their own drawbacks: more services mean greater operational complexity and harder troubleshooting. These will be explored in future posts.

For readers who wish to stay updated with the author’s latest work, please follow the WeChat public account “凤凰牌老熊”.

This article may be reproduced provided the source is credited to the WeChat public account “凤凰牌老熊”.

Tags: software architecture, performance optimization, microservices, backend development, team scaling, legacy system
Written by

DevOps

Shares premium content and events on trends, applications, and practices in development efficiency, AI, and related technologies. The IDCF (International DevOps Coach Federation) trains end-to-end development-efficiency talent, connecting high-performing organizations and individuals in the pursuit of excellence.
