2024 Security and Trusted AI Research Highlights from Alibaba, Tsinghua, Zhejiang, and Partner Institutions
This article presents sixteen peer‑reviewed research papers published or accepted at top conferences and journals in 2024, covering trusted AI, large‑model applications, network security, adversarial training, deep‑fake detection, secure inference, and related topics from collaborations among Alibaba, Tsinghua, Zhejiang, and other leading institutions.
In 2024, the accelerating pace of global digital transformation heightened the importance of security, prompting Alibaba and partners—including Tsinghua University, Zhejiang University, the National Key Laboratory of Cryptography, Shanghai Jiao Tong University, East China Normal University, and others—to publish sixteen papers across trusted AI, large‑model applications, and network security.
1. Sophon: Non‑Fine‑Tunable Learning to Restrain Task Transferability For Pre‑trained Models Authors: Jiangyi Deng, Shengyuan Pang, Yanjiao Chen, Liangming Xia, Yijie Bai, Haiqin Weng, Wenyuan Xu Venue: S&P 2024 (CCF A) Abstract: Proposes the SOPHON framework that introduces a non‑fine‑tunable learning paradigm to prevent misuse of pre‑trained models while preserving original task performance, significantly increasing fine‑tuning cost and resisting various fine‑tuning attacks. Read paper
2. Exploring ChatGPT's Capabilities on Vulnerability Management Authors: Peiyu Liu, Junming Liu, Lirong Fu, Kangjie Lu, Yifan Xia, Xuhong Zhang, Wenzhi Chen, Haiqin Weng, Shouling Ji, Wenhai Wang (et al.) Venue: USENIX Security Symposium 2024 (CCF A) Abstract: Evaluates ChatGPT on six vulnerability‑management sub‑tasks using a dataset of 70,346 samples, comparing it with state‑of‑the‑art methods, analyzing prompt effects, and identifying challenges and future research directions. Read paper
3. RACONTEUR: A Knowledgeable, Insightful, and Portable LLM‑Powered Shell Command Explainer Authors: Jiangyi Deng, Xinfeng Li, Yanjiao Chen, Yijie Bai, Haiqin Weng, Yan Liu, Tao Wei, Wenyuan Xu Venue: NDSS 2025 Abstract: Introduces RACONTEUR, an LLM‑driven shell command explainer that integrates domain knowledge to provide comprehensive functional and intent explanations, maps explanations to MITRE ATT&CK techniques, and employs a document retriever for unseen private commands. Read paper
4. Alchemy: Data‑Free Adversarial Training Authors: Yijie Bai, Zhongming Ma, Yanjiao Chen, Jiangyi Deng, Shenyuan Pang, Yan Liu, Wenyuan Xu Venue: CCS 2024 (CCF A) Abstract: Presents the first data‑free adversarial training framework that reconstructs robust training data without accessing original datasets, enhancing model robustness while maintaining high accuracy. Read paper
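Adversarial training, data‑free or otherwise, hinges on crafting perturbations that raise the model's loss. A minimal sketch of that core step, using the standard FGSM update on a toy linear scorer (this is the generic technique, not the paper's data‑free reconstruction pipeline):

```python
import numpy as np

# The heart of an adversarial-training loop: perturb an input in the
# direction that increases the loss. This is plain FGSM on a toy
# linear model; all values below are illustrative.

def fgsm(x, grad, eps):
    """Perturb x by eps in the direction of the loss gradient's sign."""
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0])    # toy linear scorer: f(x) = w . x
x = np.array([0.5, 0.5])
grad = w                     # df/dx for a linear model
x_adv = fgsm(x, grad, eps=0.1)
assert np.allclose(x_adv, [0.6, 0.4])
assert w @ x_adv > w @ x     # the score (loss proxy) increased
```

Training on such perturbed samples alongside clean ones is what hardens the model; Alchemy's contribution is doing this without access to the original training data.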
5. ProFake: Detecting Deepfakes in the Wild against Quality Degradation with Progressive Quality‑adaptive Learning Authors: Huiyu Xu, Yaopeng Wang, Zhibo Wang, Zhongjie Ba, Wenxin Liu, Lu Jin, Haiqin Weng, Tao Wei, Kui Ren (et al.) Venue: CCS 2024 (CCF A) Abstract: Analyzes the impact of real‑world image quality degradation on deep‑fake detection and proposes ProFake, a progressive quality‑adaptive learning framework that improves robustness against varied compression levels. Read paper
6. Course‑Correction: Safety Alignment Using Synthetic Preferences Authors: Rongwu Xu, Yishuo Cai, Zhenhong Zhou, Renjie Gu, Haiqin Weng, Liu Yan, Tianwei Zhang, Wei Xu, Han Qiu Venue: EMNLP (Industry Track) 2024 (CCF B) Abstract: Addresses harmful content generation in large language models by synthesizing preference data to improve model self‑correction capabilities without sacrificing performance. Read paper
7. DKCF: A Trustworthy Framework for Large‑Model Applications Authors: Wei Tao, Liu Yan, Weng Haiqin, Zhong Zhenyu, Zhu Zhetou, Wang Yu, Wang Meiqin (et al.) Venue: Information Security Research, Dec 2024 Abstract: Proposes the DKCF framework integrating data, knowledge, collaboration, and feedback to ensure trustworthy outcomes for large‑model deployments in finance, healthcare, and security domains. Read paper
8. Constructing SDN Covert Timing Channels between Hosts with Unprivileged Attackers Authors: Yixiong Ji, Jiahao Cao, Qi Li, Yan Liu, Tao Wei, Ke Xu, Jianping Wu Venue: IEEE/ACM Transactions on Networking (CCF A) Abstract: Introduces a novel covert timing channel in software‑defined networks that operates without privileged access, and presents CovertGuard, a detection and mitigation system based on timing characteristics. Read paper
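The core idea of a covert timing channel can be sketched in a few lines: the sender modulates inter‑packet delays, and the receiver thresholds the observed gaps back into bits. The gap values below are illustrative placeholders, not the paper's parameters:

```python
# Sketch of a covert timing channel: bits are encoded as inter-packet
# delays (a short gap for 0, a long gap for 1). Real channels must
# also survive network jitter, which this toy version ignores.

SHORT_GAP = 0.01   # seconds, encodes bit 0 (illustrative value)
LONG_GAP = 0.05    # seconds, encodes bit 1 (illustrative value)
THRESHOLD = 0.03   # receiver-side decision boundary

def encode(bits):
    """Map a bit string to a sequence of inter-packet delays."""
    return [LONG_GAP if b == "1" else SHORT_GAP for b in bits]

def decode(gaps):
    """Recover bits from observed inter-packet delays."""
    return "".join("1" if g > THRESHOLD else "0" for g in gaps)

message = "1011001"
assert decode(encode(message)) == message
```

Detection systems such as the paper's CovertGuard work from the other side of this picture, flagging flows whose delay distributions look engineered rather than natural.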
9. TypeFSL: Type Prediction from Binaries via Inter‑procedural Data‑flow Analysis and Few‑shot Learning Authors: Zi‑rui Song, Yu‑tong Zhou, Shuai‑ke Dong, Ke Zhang, Ke Huan (et al.) Venue: ASE 2024 Abstract: Proposes a few‑shot learning framework leveraging inter‑procedural data‑flow information to recover variable types from stripped binaries, achieving high accuracy and resilience to obfuscation. Read paper
10. A Fast, Performant, Secure Distributed Training Framework For LLM Authors: Huang Wei, Wang Yinggui, Cheng Anda, Zhou Aihui, Yu Chaofan, Wang Lei Venue: ICASSP 2024 Abstract: Presents a secure distributed LLM training framework using model slicing, TEEs, and lightweight One‑Time‑Pad encryption to prevent data and parameter leakage while maintaining performance. Read paper
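The One‑Time‑Pad step can be sketched as a simple XOR over a serialized model slice; because XOR is its own inverse, the same operation encrypts and decrypts. Pad distribution and TEE attestation are outside the scope of this illustration:

```python
import secrets

# Sketch of One-Time-Pad protection for a serialized model slice:
# XOR with a fresh random pad of equal length. The variable names
# here are illustrative, not the framework's API.

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """XOR data with a pad of the same length (encrypts and decrypts)."""
    assert len(pad) == len(data), "pad must match data length"
    return bytes(d ^ p for d, p in zip(data, pad))

slice_bytes = b"model-slice-parameters"
pad = secrets.token_bytes(len(slice_bytes))
ciphertext = otp_xor(slice_bytes, pad)
assert ciphertext != slice_bytes
assert otp_xor(ciphertext, pad) == slice_bytes  # round-trips exactly
```

The appeal of OTP here is its near-zero compute cost, which is why it suits protecting tensors in transit between slices without hurting training throughput.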
11. Enhanced Face Recognition using Intra‑class Incoherence Constraint Authors: Huang Yuanqing, Wang Yinggui, Yang Le, Wang Lei Venue: ICLR 2024 (spotlight) Abstract: Improves face recognition by orthogonal decomposition of features from a weaker model, followed by magnitude adjustment and vector addition, and introduces an intra‑class incoherence constraint to further boost performance. Read paper
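The orthogonal decomposition at the heart of the method is plain linear algebra: split a feature vector into its component along a reference feature and the residual orthogonal to it. A sketch with toy vectors rather than real face embeddings:

```python
import numpy as np

# Sketch of orthogonal feature decomposition: project feature f onto
# reference feature g, then take the residual. The vectors are toy
# data; real use operates on high-dimensional face embeddings.

def decompose(f, g):
    """Split f into components parallel and orthogonal to g."""
    g_unit = g / np.linalg.norm(g)
    parallel = np.dot(f, g_unit) * g_unit
    orthogonal = f - parallel
    return parallel, orthogonal

f = np.array([3.0, 4.0])
g = np.array([1.0, 0.0])
par, orth = decompose(f, g)
assert np.allclose(par, [3.0, 0.0])
assert np.allclose(orth, [0.0, 4.0])
assert abs(np.dot(par, orth)) < 1e-9  # components are orthogonal
```

In the paper's setting, the orthogonal residual carries information the weaker model's feature lacks; rescaling and re-adding it is what improves the stronger model's embedding.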
12. LMSanitator: Defending Prompt‑Tuning Against Task‑Agnostic Backdoors Authors: Wei Chengkun, Meng Wenlong, Zhang Zhikun, Chen Min, Zhao Minghu, Fang Wenjing, Wang Lei, Zhang Zihui, Chen Wenzhi (et al.) Venue: NDSS 2024 Abstract: Detects and removes task‑agnostic backdoors in transformer models by reversing predefined attack vectors and leveraging frozen pretrained weights for efficient inference‑time monitoring. Read paper
13. Ditto: Quantization‑aware Secure Inference of Transformers upon MPC Authors: Wu Haoqi, Fang Wenjing, Zheng Yancheng, Ma Junming, Tan Jin, Wang Lei Venue: ICML 2024 Abstract: Implements layer‑wise static quantization and optimized MPC operators to achieve 2‑4× speed‑up for secure inference of BERT and GPT‑2 without significant accuracy loss. Read paper
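"Static" quantization means each layer's scale is fixed before inference rather than computed on the fly. A minimal numeric sketch of symmetric int8 quantization with a per‑layer static scale (the paper's MPC fixed‑point machinery is omitted; the array is toy data):

```python
import numpy as np

# Sketch of layer-wise static symmetric quantization: a fixed scale,
# chosen ahead of time from the layer's weight range, maps floats to
# int8 and back with bounded error.

def quantize(x, scale):
    """Map floats to int8 using a fixed symmetric scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def dequantize(q, scale):
    """Map int8 values back to approximate floats."""
    return q.astype(np.float32) * scale

weights = np.array([0.5, -1.2, 0.03, 2.0], dtype=np.float32)
scale = np.max(np.abs(weights)) / 127   # static: fixed before inference
restored = dequantize(quantize(weights, scale), scale)
assert np.max(np.abs(restored - weights)) <= scale  # error bounded by scale
```

Working in low-bit integers is what lets Ditto replace expensive secure floating-point operators with much cheaper fixed-point MPC protocols.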
14. SecretFlow‑SCQL: A Secure Collaborative Query Platform Authors: Fang Wenjing, Cao Shunde, Hua Guojin, Ma Junming, Yu Yongqiang, Huang Qunshan, Feng Jun, Tan Jin (et al.) Venue: VLDB 2024 Abstract: Provides a SQL‑compatible secure multi‑party computation platform that supports flexible security‑performance trade‑offs for cross‑institutional data analysis. Read paper
15. Nimbus: Secure and Efficient Two‑Party Inference for Transformers Authors: Li Zhengyi, Yang Kang, Tan Jin, Lu Wenjie, Wu Haoqi, Wang Xiao, Yu Yu, Zhao Derun, Zheng Yancheng (et al.) Venue: NeurIPS 2024 Abstract: Introduces a 2‑PC framework that uses an outer‑product based matrix multiplication protocol and low‑degree polynomial approximations for GELU/Softmax, achieving 2.7‑4.7× speed‑up over prior two‑party inference with <0.1% accuracy loss. Read paper
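Why approximate GELU with a polynomial? Under MPC, additions and multiplications are cheap while the error function is not, so a low‑degree polynomial is far friendlier. A sketch of the idea using a degree‑6 least‑squares fit computed here on [-3, 3] (our own fit for illustration, not the paper's approximation):

```python
import math
import numpy as np

# Sketch of swapping GELU for a low-degree polynomial, the kind of
# MPC-friendly substitution secure-inference frameworks rely on.

def gelu(x):
    """Exact GELU via the Gaussian error function."""
    return np.array([0.5 * v * (1 + math.erf(v / math.sqrt(2))) for v in x])

xs = np.linspace(-3.0, 3.0, 601)
coeffs = np.polyfit(xs, gelu(xs), 6)    # fit a degree-6 polynomial
max_err = np.max(np.abs(np.polyval(coeffs, xs) - gelu(xs)))
assert max_err < 0.1                    # small error on the fit range
```

Evaluating the polynomial needs only multiply-and-add, exactly the operations MPC protocols handle efficiently; the engineering challenge the paper addresses is keeping the accuracy loss of such substitutions below 0.1%.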
16. Privacy Evaluation Benchmarks for NLP Models Authors: Huang Wei, Wang Yinggui, Chen Cen Venue: EMNLP 2024 Abstract: Proposes a comprehensive benchmark for evaluating privacy attacks and defenses on NLP models, including a chainable attack framework and insights into data‑augmentation effects on attack strength. Read paper