WDTA Releases International Standards for Generative AI and Large Language Model Safety Testing at the 27th UN CSTD Annual Meeting
At the 27th UN CSTD Annual Meeting in Geneva, the World Digital Technology Academy unveiled two pioneering international standards—one for generative AI application security testing and another for large language model security testing—crafted by experts from leading AI firms to establish a new global benchmark for AI safety.
At the 27th UN CSTD Annual Meeting, held in Geneva, the World Digital Technology Academy (WDTA) released two international standards: the “Generative AI Application Security Testing Standard” and the “Large Language Model Security Testing Method.” The standards were drafted by experts from OpenAI, Ant Group, iFlytek, Google, Microsoft, Nvidia, Baidu, Tencent, and dozens of other organizations.
Ant Group led the drafting of the “Large Language Model Security Testing Method” and also participated in the generative AI standard. This marks the first time an international organization has issued standards in the large‑model safety field. Huang Lianjin, leader of WDTA’s AI STR (Safety, Trustworthy, Responsible) Working Group, said the standards set a new benchmark for AI safety assessment and testing worldwide.
WDTA is an international non‑governmental organization registered in Geneva, operating under the UN framework to promote digital technology globally. Its AI STR program aims to ensure AI systems are safe, trustworthy, and responsible. Members include Ant Group, Huawei, iFlytek, the International Data Spaces Association (IDSA), Fraunhofer Institute, China Electronics, among others.
Since the surge of large‑language‑model (LLM) technology last year, the safety of these models has become a global focus. Nations and leading vendors are investing heavily in LLM safety research and governance, yet no unified standards existed—until now. The two new standards fill this gap, offering a unified testing framework that gives AI enterprises clear testing requirements, helps improve system safety, reduces potential risks, and promotes responsible AI development.
The first standard, “Generative AI Application Security Testing Standard,” is led by WDTA with Ant Group and other partners. It provides a framework for testing and verifying the security of generative AI applications, covering everything from base model selection, embeddings, vector databases, Retrieval‑Augmented Generation (RAG), to runtime security, ensuring comprehensive security and compliance throughout an application’s lifecycle.
The second standard, “Large Language Model Security Testing Method,” is led by Ant Group. It offers a comprehensive, practical structure for assessing LLM security, including risk classification, attack‑type grading, testing methods, and a novel classification of four attack‑strength levels. It details evaluation metrics, capability grading, test‑data‑set construction requirements, and testing processes.
This method addresses the inherent complexity of LLMs by testing resistance to various adversarial attacks—L1 random attacks, L2 blind‑box attacks, L3 black‑box attacks, and L4 white‑box attacks—enabling developers and organizations to identify and mitigate vulnerabilities, thereby enhancing the safety and reliability of AI systems built on LLMs.
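The four attack‑strength levels above form an ordered scale based on how much access and feedback the attacker has. As a purely illustrative sketch (the mapping logic and names here are assumptions, not text from the standard), the levels and a rough triage rule could be modeled like this:

```python
from enum import IntEnum


class AttackStrength(IntEnum):
    """Hypothetical encoding of the four attack-strength levels
    described in the Large Language Model Security Testing Method."""
    L1_RANDOM = 1     # unguided, randomly generated adversarial inputs
    L2_BLIND_BOX = 2  # hand-crafted prompts, no feedback from the model
    L3_BLACK_BOX = 3  # adaptive attacks guided by model outputs only
    L4_WHITE_BOX = 4  # attacks with access to model internals (weights, gradients)


def classify_attack(uses_internals: bool, uses_model_output: bool,
                    crafted: bool) -> AttackStrength:
    """Illustrative triage: access to internals implies white-box;
    output-guided adaptation implies black-box; crafted prompts
    without feedback are blind-box; everything else is random."""
    if uses_internals:
        return AttackStrength.L4_WHITE_BOX
    if uses_model_output:
        return AttackStrength.L3_BLACK_BOX
    if crafted:
        return AttackStrength.L2_BLIND_BOX
    return AttackStrength.L1_RANDOM
```

A test suite built on such a scale could then report, per level, how many adversarial cases a model withstands, giving a graded picture of robustness rather than a single pass/fail result.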
Ant Group has been investing in trustworthy AI since 2015, establishing a comprehensive LLM security governance system and developing the industry’s first integrated LLM security solution, “Ant Tianjian.” The solution supports AIGC safety and authenticity evaluation, LLM risk control, AI robustness testing, and explainability testing. Wang Weiqiang, head of Ant Group’s Machine Intelligence Department and chief scientist of the Ant Security Lab, spoke at the meeting, emphasizing the need for responsible AI development, industry standards, and collaborative tools to build a safe, ethical AI ecosystem.
Wang concluded that while generative AI can unleash massive productivity, it also brings new risks that must be vigilantly managed. Large tech companies should take responsibility for safe, responsible AI development by providing clear standards, open security tools, and fostering industry‑wide governance.