Open Privacy Computing Protocol SS‑LR: A Secret‑Sharing Based Logistic Regression Framework
The SS‑LR open protocol describes a secret‑sharing based logistic regression algorithm split into four layers—machine learning, secure operators, cryptographic protocol, and network transmission—enabling interoperable, privacy‑preserving data flow and secure multi‑party model training across institutions.
Data element flow includes analysis, feature preprocessing, model training, evaluation, reporting, and online/offline prediction; privacy computing is applied to each step to secure cross‑institution data circulation, enabling more accurate risk models and efficient business decisions.
However, insufficient interoperability and heterogeneous architectures among enterprises hinder data exchange, making a unified, trusted data network a practical necessity.
The privacy‑computing open protocol suite is a collection of transparent algorithm specifications (e.g., ECDH‑PSI, SS‑LR). By clearly defining each algorithm’s computation flow and interaction information, platforms can independently implement compatible solutions. The open protocol offers transparency, controllable security, easy integration, and reduced audit costs.
SS‑LR (Secret Sharing based Logistic Regression) is a privacy‑preserving logistic regression algorithm built on secret sharing. Its design decomposes the protocol into four layers:
Machine Learning layer: defines the LR logic, gradient calculation, weight updates, regularization, etc.
Secure Operator layer: provides privacy‑preserving implementations of basic operators such as Sigmoid and MatMul.
Cryptographic Protocol layer: built on the SPDZ Semi2K secret‑sharing protocol; details the execution of the secret‑sharing scheme.
Network Transmission layer: specifies communication interfaces, data formats, and networking links for the algorithm.
This layered decomposition decouples complex logic, achieving high cohesion and low coupling for each component.
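To make the decoupling concrete, the sketch below writes the Machine Learning layer purely against an abstract operator interface, with a cleartext mock standing in for the Secure Operator layer. All class and method names here are illustrative assumptions, not part of the published SS‑LR specification:

```python
# Hypothetical sketch of the layering: the ML layer expresses an LR gradient
# step only in terms of an operator interface, so a secret-sharing backend
# could be swapped in for ClearOps without touching the ML logic.
import math


class ClearOps:
    """Stand-in Secure Operator layer (cleartext mock, for illustration)."""
    def matmul(self, a, b):
        # a: list of rows, b: vector -> vector of dot products
        return [sum(x * y for x, y in zip(row, b)) for row in a]

    def sigmoid(self, z):
        return [1.0 / (1.0 + math.exp(-v)) for v in z]


class LogisticRegression:
    """Machine Learning layer: gradient descent written via ops only."""
    def __init__(self, ops, dim, lr=0.5):
        self.ops, self.w, self.lr = ops, [0.0] * dim, lr

    def step(self, X, y):
        # Forward pass and gradient of the cross-entropy loss.
        pred = self.ops.sigmoid(self.ops.matmul(X, self.w))
        err = [p - t for p, t in zip(pred, y)]
        n = len(X)
        for j in range(len(self.w)):
            grad = sum(err[i] * X[i][j] for i in range(n)) / n
            self.w[j] -= self.lr * grad
```

In the real protocol, the operator layer would execute MatMul and Sigmoid on secret shares; the ML layer's update rule stays unchanged.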
The protocol design comprises two phases: a handshake negotiation phase and the main algorithm execution phase. The handshake aligns algorithm versions and parameters, enhancing extensibility and compatibility. Participants exchange a HandshakeRequest and HandshakeResponse containing parameters for each layer, summarized below.
Machine Learning parameters: epoch_num, batch_size, optimizer params (sgd/momentum/adam…), L0/L1/L2 norm
Secure Operator parameters: MatMul, Sigmoid (minimax/taylor‑3…)
Cryptographic Protocol parameters: SS protocol (Semi2k/ABY3…), CSPRNG params (AES128CTR/SM4CTR…), scale & truncation mode, SS field type (Ring32/64/128), triple service config
Network Communication parameters: protocol (grpc/brpc/https…), role (rank 0/1), buffer format
During the execution phase, SS‑LR employs mini‑batch gradient descent. For each batch, the protocol runs under the SPDZ Semi2K secret‑sharing scheme, ensuring every variable remains in an arithmetic sharing state; each party sees only its local share, preventing any leakage of raw data. A detailed algorithm description is available in the linked PDF.
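The core arithmetic-sharing idea can be shown in miniature: each value lives as two additive shares modulo 2^64, and multiplications are done with Beaver triples. This is a didactic sketch, not the Semi2K protocol itself; in particular, the "triple service" here is simulated by a local dealer, and real deployments run the two parties on separate machines:

```python
# Minimal 2-party additive secret sharing over the ring Z_2^64, in the
# spirit of Semi2K. The dealer-generated Beaver triple stands in for the
# triple service negotiated during the handshake.
import secrets

MASK = (1 << 64) - 1  # arithmetic modulo 2^64


def share(x: int) -> tuple:
    """Split x into two additive shares: x = s0 + s1 mod 2^64.
    Each party sees only its own share, which is uniformly random."""
    s0 = secrets.randbits(64)
    return s0, (x - s0) & MASK


def reveal(s0: int, s1: int) -> int:
    return (s0 + s1) & MASK


def beaver_mul(x_sh, y_sh):
    """Multiply two shared values using a Beaver triple (a, b, c=a*b)."""
    a, b = secrets.randbits(64), secrets.randbits(64)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) & MASK)
    # Both parties open d = x - a and e = y - b; since a, b are uniform,
    # the opened values leak nothing about x or y.
    d = reveal((x_sh[0] - a_sh[0]) & MASK, (x_sh[1] - a_sh[1]) & MASK)
    e = reveal((y_sh[0] - b_sh[0]) & MASK, (y_sh[1] - b_sh[1]) & MASK)
    # xy = de + d*b + e*a + c, so each party computes its share locally
    # (the public d*e term is added by party 0 only).
    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) & MASK
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) & MASK
    return z0, z1


assert reveal(*beaver_mul(share(6), share(7))) == 42
```

The full protocol layers fixed-point encoding, truncation, and vectorized MatMul/Sigmoid on top of this primitive to run each mini-batch gradient step entirely on shares.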
Guided by the Privacy Computing Alliance of the China Academy of Information and Communications Technology, China Telecom, China Mobile, China Unicom, Ant Group, and Dongjian Technology jointly completed the SS‑LR protocol specification, code development, and practical deployment. On July 7, 2023, at the World AI Conference’s Data Elements and Privacy Computing Summit, the "Privacy Computing Cross‑Platform Interoperability Open Protocol: SS‑LR" was officially released, enabling cross‑industry interoperability among the three major telecom operators and the internet, with reference implementations published on the SecretFlow community.
We believe the open privacy‑computing protocol suite will help build more open, transparent, and secure privacy‑computing platforms, protecting data security and fostering a sustainable, stable digital‑economy ecosystem.
AntTech