How UAI-Train Accelerated Face Recognition Model Training by 85% for a FinTech Leader

The UAI-Train distributed GPU platform cut a 7‑million‑image face‑recognition training cycle from a week to a day, slashed GPU costs by up to 90%, and boosted algorithm optimization efficiency by 85.7% for the fintech company Paipaidai.

UCloud Tech

What is UAI-Train

UAI-Train is a large‑scale distributed computing platform for AI training tasks, built on GPU cloud instances such as P40 and V100. By scaling out across clusters, it can deliver up to 192 TFlops of single‑precision performance and offers a one‑stop service that automates node scheduling, environment setup, data transfer, and fault tolerance, with pay‑as‑you‑go pricing.
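As a rough sanity check on the quoted figure, the 192 TFlops corresponds to roughly sixteen P40-class cards, assuming the commonly cited ~12 TFlops FP32 peak per Tesla P40 (the exact GPU count and instance mix on UAI-Train are assumptions here):

```python
# Back-of-the-envelope check of the 192 TFlops figure quoted above.
# The ~12 TFlops per-card FP32 peak is the commonly cited Tesla P40
# number; the implied cluster size is an inference, not a UAI-Train spec.
P40_FP32_TFLOPS = 12.0    # approximate FP32 peak of one Tesla P40
CLUSTER_TFLOPS = 192.0    # aggregate figure quoted for UAI-Train

implied_cards = CLUSTER_TFLOPS / P40_FP32_TFLOPS
print(implied_cards)  # 16.0
```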

Impact on Paipaidai

Using the platform, Paipaidai reduced the training time for a 7‑million‑image face dataset from one week to one day, improving overall algorithm optimization efficiency by 85.7%. GPU costs dropped from over ten thousand yuan per month to a few thousand yuan, and resource utilization reached 100%.

UAI-Train vs. purchased GPU comparison

About Paipaidai

Paipaidai is a leading fintech company that heavily invests in AI technologies, applying computer vision, speech analysis, natural language processing, and network analysis to various business scenarios such as face recognition, OCR, fraud detection, and intelligent chatbots, thereby enhancing risk control and operational efficiency.

Face Recognition

The company’s self‑developed face‑recognition algorithm is trained on 7 million images covering diverse ages, poses, expressions, and environments. It explores network architectures like Inception‑v3 and optimized ResNet, and loss functions such as triplet_loss, sphere, cosine, and arc_loss to improve verification, search, and clustering accuracy.
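The arc_loss mentioned above refers to the additive angular margin used in ArcFace: instead of classifying on raw cosine similarity, a fixed angular margin is added to the target-class angle, forcing tighter identity clusters. A minimal NumPy sketch of that idea (the function name, scale `s=64`, and margin `m=0.5` are illustrative defaults, not Paipaidai's actual hyperparameters):

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=64.0, m=0.5):
    """ArcFace-style logits with an additive angular margin.

    embeddings: (N, d) L2-normalized feature vectors
    weights:    (C, d) L2-normalized class centers
    labels:     (N,)   ground-truth class indices
    s: feature scale; m: angular margin in radians
    """
    # cosine similarity between each embedding and each class center
    cos = np.clip(embeddings @ weights.T, -1.0, 1.0)   # (N, C)
    theta = np.arccos(cos)
    # add the margin m only to the angle of each sample's true class
    target = np.zeros_like(cos, dtype=bool)
    target[np.arange(len(labels)), labels] = True
    theta = np.where(target, theta + m, theta)
    # re-project to cosine space and scale; feed these to softmax + CE
    return s * np.cos(theta)
```

Because cosine decreases with angle, the margin shrinks the target-class logit during training, so the network must pull same-identity embeddings closer together to compensate.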

Face recognition application scenarios

Challenges

Training on a single GPU required about a week per iteration, and scaling by purchasing more GPUs led to linear cost growth while utilization remained low.

Adopting UAI-Train

After learning about UAI-Train at a technical meetup, Paipaidai chose the platform for its on‑demand GPU rental and multi‑machine, multi‑card distributed training capabilities. UCloud’s AI team released an Insightface case on GitHub to help convert single‑machine face‑recognition code to distributed training.

Insightface Integration

Insightface, an open‑source MXNet‑based face‑recognition project, was adapted by UCloud and published on GitHub. Paipaidai engineers leveraged this code, completing development and debugging within one week and launching iterative training on UAI-Train.
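The core of converting single-machine training code to multi-machine, multi-card training is synchronous data parallelism: each worker computes gradients on its own data shard, the gradients are averaged across workers, and every worker applies the same update (in MXNet this is what a `dist_sync` kvstore provides). A pure-Python sketch of one such step, purely illustrative of the semantics rather than the actual UAI-Train/Insightface implementation:

```python
# Sketch of one synchronous data-parallel update, as performed by a
# 'dist_sync'-style parameter server: average the per-worker gradients,
# then apply one shared SGD step. The function name and signature are
# illustrative; MXNet's kvstore handles this internally.

def dist_sync_step(weights, worker_grads, lr=0.01):
    """Average gradients from all workers, then apply one SGD update."""
    n_workers = len(worker_grads)
    # element-wise average of each parameter's gradient across workers
    avg = [sum(g[i] for g in worker_grads) / n_workers
           for i in range(len(weights))]
    return [w - lr * g for w, g in zip(weights, avg)]

# Two workers holding different data shards produce different gradients,
# but both end up with identical weights after the synchronized step.
new_weights = dist_sync_step([1.0, 2.0], [[0.5, 1.0], [1.5, 3.0]])
print(new_weights)  # [0.99, 1.98]
```

Because every worker processes a different shard of the 7 million images in parallel while staying in lock-step on the weights, wall-clock time per epoch drops roughly with the number of cards, which is what compressed the week-long cycle to a day.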

Face recognition algorithm integration process

Results and Future Plans

After multiple optimization cycles, the model achieved test‑set accuracies of 99.8% on LFW, 97% on CFP‑FP, and 98.2% on AgeDB‑30, along with over 99% accuracy in production, further improving risk monitoring and anti‑fraud efficiency. Paipaidai and UCloud plan deeper collaboration across more algorithmic and application scenarios.

Tags: face recognition, AI training, MXNet, Insightface, UAI-Train, distributed GPU
Written by

UCloud Tech

UCloud is a leading neutral cloud provider in China, developing its own IaaS, PaaS, AI service platform, and big data exchange platform, and delivering comprehensive industry solutions for public, private, hybrid, and dedicated clouds.
