Brainwave and Behavior Recognition: Multi‑Modal Biometric Authentication with Adversarial Contrastive Transfer Learning
This article presents Ant Security's research on two novel biometric methods, brainwave (脑纹, "brain print") recognition and behavior recognition, covering their scientific background, data collection, multi-modal deep-learning algorithms, adversarial and contrastive training strategies, experimental results, and practical applications for inclusive, secure identity verification.
In recent years, biometric technologies have rapidly evolved, leveraging deep‑learning algorithms to achieve high accuracy, yet they face security risks such as physical cloning and accessibility issues for visually impaired users. Ant Security's Tianji Lab proposes brainwave and behavior recognition as new biometric modalities to mitigate these challenges.
Brainwave Recognition Background: Brain-computer interfaces are emerging as a key frontier for human-machine interaction. Current brainwave research suffers from limited data, single-session recordings, and a lack of standardized benchmarks, hindering broader adoption.
Data: A 64-channel EEG cap captures raw brain signals. Common experimental paradigms include motor imagery, SSVEP (steady-state visual evoked potentials), and P300, each eliciting distinct EEG responses. The dataset is illustrated in Figure 1.
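To make the STFT preprocessing concrete, the sketch below converts a raw 1-D EEG trace into a time-frequency image with NumPy. The sampling rate, window length, and hop size here are illustrative assumptions, not values from the article.

```python
import numpy as np

def stft_spectrogram(signal, win_len=64, hop=32):
    """Magnitude spectrogram via a short-time Fourier transform.

    Slides a Hann window over the 1-D EEG signal and takes the
    magnitude of the real FFT of each frame, yielding a
    (n_frames, win_len // 2 + 1) time-frequency image.
    """
    window = np.hanning(win_len)
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + win_len] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))

# A 2-second mock EEG trace sampled at 256 Hz with a 10 Hz alpha-band component.
fs = 256
t = np.arange(2 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(len(t))
spec = stft_spectrogram(eeg)
print(spec.shape)  # (15, 33)
```

The resulting 2-D image is the kind of spectral input a 2-D CNN encoder can consume; the wavelet-transform and mel-spectrogram paths mentioned above would produce analogous time-frequency representations.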
Technical Solution: The proposed multi-modal brainwave algorithm first preprocesses EEG signals (wavelet transform, STFT, mel-spectrogram) and then extracts features with separate encoders: an M5 1-D CNN for temporal features and an SE-ResNet101 2-D CNN for spectral features. The feature representations are fused and enhanced with metric learning and a contrastive loss. Adversarial perturbation (AMP) is applied during training to improve robustness, and a co-teaching strategy mitigates noisy pseudo-labels during domain transfer.
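The contrastive component can be sketched as an NT-Xent-style loss in which the temporal and spectral encodings of the same EEG segment form a positive pair and all other pairs in the batch act as negatives. This is a minimal NumPy sketch under assumed shapes and temperature; the system's actual loss formulation and encoder outputs may differ.

```python
import numpy as np

def ntxent_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss over two batches of embeddings.

    z1[i] and z2[i] are two views of the same segment (e.g. the
    temporal- and spectral-encoder outputs); z1[i] vs z2[j], i != j,
    are negatives. Returns the mean cross-entropy of picking the
    matching view.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # positives on the diagonal

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
aligned = anchor + 0.01 * rng.normal(size=(8, 16))   # near-identical views
shuffled = rng.normal(size=(8, 16))                  # unrelated views
print(ntxent_loss(anchor, aligned) < ntxent_loss(anchor, shuffled))  # True
```

The loss is low when matching views are closer to each other than to any other sample, which is exactly the property that makes the fused embedding discriminative across subjects.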
Experimental Results: On the public M3CV dataset, the baseline accuracy of 68% rose to 71.5% with contrastive learning, 74% with adversarial training, and finally 76.9% after model-weight fusion, earning first place in the China Telecom Brain-Computer Interface Challenge.
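The article does not specify how the model-weight fusion was performed; a common, minimal interpretation is per-parameter averaging of trained model snapshots, sketched below with plain NumPy arrays standing in for model weights.

```python
import numpy as np

def fuse_weights(state_dicts, coeffs=None):
    """Fuse trained models by (optionally weighted) parameter averaging.

    Each entry of `state_dicts` maps parameter names to arrays of the
    same shape; the fused model is their element-wise weighted mean.
    """
    if coeffs is None:
        coeffs = [1.0 / len(state_dicts)] * len(state_dicts)
    fused = {}
    for name in state_dicts[0]:
        fused[name] = sum(c * sd[name] for c, sd in zip(coeffs, state_dicts))
    return fused

# Two toy "checkpoints" with identical parameter structure.
ckpt_a = {"w": np.array([1.0, 2.0]), "b": np.array([0.0])}
ckpt_b = {"w": np.array([3.0, 4.0]), "b": np.array([2.0])}
fused = fuse_weights([ckpt_a, ckpt_b])
print(fused["w"], fused["b"])  # [2. 3.] [1.]
```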
Behavior Recognition Background: Action-interaction recognition, using motion sensors (accelerometer, gyroscope) and screen data, enables identity verification and activity detection, and is especially valuable for users with visual impairments.
Data: Collected sensor streams include 3-axis accelerometer and gyroscope readings, touch coordinates, touch pressure, and screen-capture images (Figures 11-13).
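One multi-modal capture window might be structured as below; the field names, shapes, and units are illustrative assumptions, not the production schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class BehaviorSample:
    """One capture window of multi-modal behavior data (hypothetical schema)."""
    accel: np.ndarray     # (T, 3) accelerometer readings, m/s^2
    gyro: np.ndarray      # (T, 3) gyroscope readings, rad/s
    touch_xy: np.ndarray  # (N, 2) touch coordinates, pixels
    pressure: np.ndarray  # (N,)   touch pressure per event
    screen: np.ndarray    # (H, W, 3) screen-capture image

sample = BehaviorSample(
    accel=np.zeros((128, 3)),
    gyro=np.zeros((128, 3)),
    touch_xy=np.zeros((16, 2)),
    pressure=np.zeros(16),
    screen=np.zeros((224, 224, 3)),
)
print(sample.accel.shape)  # (128, 3)
```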
Technical Solution: A unified AI framework processes the multi-modal inputs: DeepSense (CNN + RNN) for motion sensors, ConvBERT (dynamic convolution + self-attention) for screen sensors, and EfficientNet + Swin Transformer for screen images. Feature fusion employs Concat Attention, Cross Attention, and Cross-Contrastive Learning, followed by downstream tasks such as identity verification for visually impaired users.
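The Cross Attention fusion step can be sketched as single-head scaled dot-product attention in which one modality's features query another's. The shapes and the single-head simplification below are assumptions for illustration; the deployed framework's fusion layers are not specified in this detail.

```python
import numpy as np

def cross_attention(query_feats, context_feats):
    """Single-head cross attention: one modality attends over another.

    `query_feats` (e.g. motion-sensor embeddings, shape (Tq, d)) queries
    `context_feats` (e.g. screen-image embeddings, shape (Tc, d));
    returns context summarized per query step, shape (Tq, d).
    """
    d = query_feats.shape[1]
    scores = query_feats @ context_feats.T / np.sqrt(d)  # (Tq, Tc)
    scores -= scores.max(axis=1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)        # row-wise softmax
    return weights @ context_feats                       # (Tq, d)

rng = np.random.default_rng(1)
motion = rng.normal(size=(5, 8))   # 5 motion-sensor time steps, dim 8
screen = rng.normal(size=(7, 8))   # 7 screen-feature tokens, dim 8
fused = cross_attention(motion, screen)
print(fused.shape)  # (5, 8)
```

In a full model, learned query/key/value projections would precede this step; Concat Attention would instead attend over the concatenation of both modalities' features.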
Privacy Protection: Computation can be performed on-device, with privacy-preserving pipelines that transmit only non-sensitive embeddings to the cloud, keeping the data usable but not visible.
Business Applications & Competitions: The sensor-based authentication system has been deployed in Alipay, marking the first financial-grade biometric interaction. It achieved second place in the 2022 CCF Innovation Application Case Competition.
Summary & Outlook: By integrating brainwave and behavior recognition, Ant Security addresses data-leakage and accessibility gaps in existing biometrics. Future work includes co-creating accessibility standards, open-sourcing patents, and advancing industry-wide security standards for biometric technologies.