Tag: trustworthy AI

16 articles collected around this technical thread.

AntTech
Sep 11, 2024 · Artificial Intelligence

2024 Inclusion·Bund Conference Forum: Exploring the Creative Boundaries and Application Imagination of Large Models

The 2024 Inclusion·Bund Conference hosted a forum on "Large Model Creativity Boundaries and Application Imagination," where leading AI experts discussed agents, multimodal technology, and knowledge graphs; the forum also announced a new industry alliance, unveiled three large-model products, and presented a trustworthy AI framework report for the finance, healthcare, and government sectors.

AI · Financial AI · industry alliance
0 likes · 6 min read
AntTech
Sep 6, 2024 · Artificial Intelligence

Large Model Industry Trustworthy Application Framework Research Report

Ant Group and the China Academy of Information and Communications Technology released a research report outlining a trustworthy application framework for large models in rigorous sectors such as finance and healthcare, detailing technical safeguards, industry case studies, and guidance for scalable, secure AI deployment.

AI deployment · AI governance · Healthcare AI
0 likes · 3 min read
AntTech
Aug 12, 2024 · Artificial Intelligence

DKCF Trustworthy Framework for Large Model Applications and AI Security Practices

The article outlines the DKCF (Data‑Knowledge‑Collaboration‑Feedback) trustworthy framework presented at the 2024 Shanghai Cybersecurity Expo, detailing challenges of large AI models, four key trust factors, and Ant Group's practical security implementations for professional AI deployments.

AI Safety · DKCF · feedback loops
0 likes · 10 min read
AntTech
Aug 6, 2024 · Artificial Intelligence

Trustworthy Alignment of Retrieval‑Augmented Large Language Models via Reinforcement Learning

The article explains how recent research tackles large language model hallucinations by combining retrieval‑augmented generation with reinforcement learning, achieving significant accuracy and reliability gains and paving the way for safe AI deployment in critical sectors such as finance and healthcare.

ICML 2024 · Retrieval-Augmented Generation · hallucination
0 likes · 5 min read
Sohu Tech Products
Jul 31, 2024 · Artificial Intelligence

MMEvalPro: A Trustworthy Benchmark for Evaluating Multimodal Large Models

MMEvalPro, a new benchmark created by researchers from Peking University, Chinese Academy of Medical Sciences, CUHK and Alibaba, augments existing multimodal datasets with perception and knowledge questions and introduces a Genuine Accuracy metric, revealing that top multimodal models still lag far behind humans and exposing shortcut‑driven performance on prior tests.
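
The Genuine Accuracy metric, as summarized above, credits a model only when an original question and its companion perception and knowledge probes are all answered correctly. A minimal sketch of that scoring rule (the triplet grouping and field names here are illustrative assumptions, not MMEvalPro's actual schema):

```python
def genuine_accuracy(results):
    """Score a benchmark run grouped into question triplets.

    Each triplet pairs an original multiple-choice question with its
    companion perception and knowledge probes; the model earns credit
    for a triplet only when all three answers are correct.
    """
    if not results:
        return 0.0
    solid = sum(
        1 for t in results
        if t["origin"] and t["perception"] and t["knowledge"]
    )
    return solid / len(results)

# Plain accuracy on the original questions alone would be 2/3 here,
# but only the first triplet is answered correctly end to end.
triplets = [
    {"origin": True,  "perception": True,  "knowledge": True},
    {"origin": True,  "perception": False, "knowledge": True},
    {"origin": False, "perception": True,  "knowledge": True},
]
print(round(genuine_accuracy(triplets), 3))  # 0.333
```

Because one failed probe zeroes out the whole triplet, a score like this is strictly harder than per-question accuracy, which is how the benchmark exposes shortcut-driven answers.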

Benchmark · MMEvalPro · large language models
0 likes · 11 min read
AntTech
Jul 9, 2024 · Artificial Intelligence

2024 Large Model Security Practice Whitepaper Unveiled at the World AI Conference

The jointly authored 2024 Large Model Security Practice whitepaper, released at the World AI Conference, outlines a comprehensive safety framework covering security, reliability, and controllability, presents industry case studies, and proposes a five‑dimensional governance model to guide high‑quality development of large AI models.

AI Safety · Whitepaper · industry practice
0 likes · 7 min read
AntTech
Dec 26, 2023 · Artificial Intelligence

Key Insights from Wang Weiqiang’s Speech on Large‑Model Security at the AI Innovation and Governance Conference

Wang Weiqiang, chief scientist of Ant Group’s Security Lab, highlighted the urgent need for both rapid detection and long‑term trustworthy safeguards for large AI models, outlining Ant’s data‑detox, guard‑rail, and detection platforms as core solutions to emerging risks such as hallucinations, bias, and data leakage.

AI Safety · AI governance · Ant Group
0 likes · 10 min read
AntTech
Sep 12, 2023 · Artificial Intelligence

Ensuring Trustworthy and Secure AI: Insights from the 2023 Pujiang Innovation Forum

The 2023 Pujiang Innovation Forum highlighted the rapid rise of generative AI, its associated security and privacy risks, and presented Ant Group's multi‑stage, multi‑layered approach—including data, training, and inference controls and three core defense technologies—to achieve safe, reliable, and open knowledge sharing in the era of large language models.

AI Safety · Information Security · knowledge sharing
0 likes · 10 min read
DataFunTalk
Jun 19, 2023 · Artificial Intelligence

Rensselaer Polytechnic Institute (RPI) Computer Science Faculty, Resources, and PhD/Intern Recruitment Overview

The announcement introduces RPI's prestigious computer science department, its extensive GPU resources, its collaborations with IBM Research, and detailed profiles of three incoming faculty members, highlighting their research areas in graph neural networks, trustworthy AI, data-centric AI, generative models for drug design, and neural-symbolic reasoning, and inviting PhD and intern applications with full scholarships and funding support.

Data-Centric AI · Drug Design · Graph Neural Networks
0 likes · 8 min read
DataFunTalk
May 1, 2023 · Artificial Intelligence

Trustworthy Intelligent Decision-Making: Framework, Counterfactual Reasoning, Complex Payoffs, Predictive Fairness, and Regulated Decisions

This article presents a comprehensive overview of trustworthy intelligent decision-making, introducing a decision framework and discussing counterfactual reasoning, complex reward modeling, predictive fairness, and regulatory constraints, while highlighting practical methods and recent research advances in each sub‑area.
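
One concrete building block behind the policy-evaluation and counterfactual-reasoning topics above is the inverse-propensity-scoring (IPS) estimator, which reweights logged rewards to answer "what would this reward stream have looked like under a different policy?" A minimal sketch (the log format and policy interface are illustrative, not the article's own code):

```python
def ips_value(logs, target_policy):
    """Estimate the value of target_policy from logged interactions.

    Each log entry records the context, the action the logging policy
    took, the probability it assigned that action, and the observed
    reward; reweighting each reward by the probability ratio between
    the target and logging policies yields an unbiased counterfactual
    estimate (when the logging policy has full support).
    """
    total = 0.0
    for context, action, logged_prob, reward in logs:
        total += target_policy(context, action) / logged_prob * reward
    return total / len(logs)

# The logging policy chose uniformly between two ads; the target
# policy always shows ad "b", so only "b" interactions carry weight.
logs = [
    ("user1", "a", 0.5, 0.0),
    ("user2", "b", 0.5, 1.0),
    ("user3", "b", 0.5, 0.0),
    ("user4", "a", 0.5, 1.0),
]
always_b = lambda context, action: 1.0 if action == "b" else 0.0
print(ips_value(logs, always_b))  # 0.5
```

In practice the raw ratio is usually clipped or combined with a reward model (doubly robust estimation) to control variance, which is where much of the research the article surveys comes in.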

Policy Evaluation · causal inference · counterfactual reasoning
0 likes · 29 min read
AntTech
Dec 26, 2022 · Artificial Intelligence

AntSec MLOps: Building a Scalable, Automated, and Trustworthy AI Risk‑Control Platform

This article describes the challenges, overall architecture, data development, model monitoring, continuous training, security and trustworthiness safeguards, and future roadmap of Ant Security's intelligent risk-control platform, illustrating how AI, big data, and cloud computing are integrated into a scalable, automated MLOps solution for dynamic fraud detection and mitigation.

AI · Automation · MLOps
0 likes · 28 min read
AntTech
Sep 28, 2022 · Artificial Intelligence

Advancing Trustworthy AI to Industrial-Scale Applications: Insights from Ant Group

The article outlines Ant Group's comprehensive approach to promoting trustworthy AI in large‑scale industrial settings, detailing the four core pillars of robustness, explainability, privacy protection, and fairness, and describing practical methodologies, open platforms, and ecosystem collaborations that drive responsible AI deployment.

AI Safety · explainability · fairness
0 likes · 13 min read
AntTech
Jul 18, 2022 · Artificial Intelligence

Trusted AI Research at Ant Group: Advances in Computer Vision, Watermark Defense, Robust Machine Learning, and Explainable NLG

Ant Group’s security labs present a series of cutting‑edge AI research achievements—including hierarchical multi‑granular classification for computer vision, watermark‑vaccine defenses, multi‑modal document understanding, robust and explainable machine learning, and logic‑driven data‑to‑text generation—highlighting their commitment to trustworthy and secure AI applications.

AI Safety · Data2Text · Robust Machine Learning
0 likes · 12 min read
AntTech
Mar 31, 2022 · Artificial Intelligence

Trustworthy AI in the Digital Economy: Practices and Explorations by Ant Group

In a keynote at the Machine Heart AI Technology Conference, Ant Group's Zhou Jun presented the concept of trustworthy AI, detailing its integration with privacy, security, graph learning, explainable and adversarial machine learning, and large‑scale privacy‑preserving techniques to enhance financial risk control in the digital economy.

Explainable Machine Learning · Graph Neural Networks · adversarial learning
0 likes · 20 min read
Alimama Tech
Aug 25, 2021 · Artificial Intelligence

Calibration Techniques for User Response Prediction in Online Advertising

Alimama's talk explains how calibrated probability models, evolving from simple Platt scaling to Bayesian isotonic regression and real-time wave-adjusted variants, improve click-through and conversion predictions, enabling more accurate bidding, stable auctions, and fairer ad allocation despite data drift and sparsity.
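
Platt scaling, the starting point of the evolution described above, fits a two-parameter sigmoid that maps raw model scores to calibrated probabilities. A minimal sketch using plain gradient descent on the log loss (a simplified illustration; production fits typically use regularized targets and a proper solver, and this is not Alimama's code):

```python
import math

def platt_scale(scores, labels, lr=0.1, epochs=5000):
    """Fit p(click) = sigmoid(a * score + b) to binary labels by plain
    gradient descent on the log loss, returning a calibration function."""
    a, b = 1.0, 0.0
    n = len(scores)
    for _ in range(epochs):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s / n  # d(mean log loss)/da
            gb += (p - y) / n      # d(mean log loss)/db
        a -= lr * ga
        b -= lr * gb
    return lambda s: 1.0 / (1.0 + math.exp(-(a * s + b)))

# Raw scores whose overall level drifts away from the observed click
# rate get squashed back onto the probability scale the data supports.
scores = [0.1, 0.3, 0.5, 0.6, 0.8, 0.9]
clicks = [0, 0, 1, 0, 1, 1]
cal = platt_scale(scores, clicks)
```

Because the fitted mapping is monotone when the slope is positive, Platt scaling preserves the model's ranking of candidates while correcting the overall probability scale, which is exactly what auction bidding needs.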

CTR prediction · algorithm · calibration
0 likes · 20 min read
DataFunTalk
Aug 9, 2021 · Artificial Intelligence

Calibration Techniques for User Behavior Prediction in Online Advertising: Background, Algorithm Evolution, and Engineering Practice

This article introduces the concept of calibration in trustworthy machine learning, explains why accurate probability estimates are crucial for online advertising, reviews related research and evaluation metrics, and details the evolution of calibration algorithms such as Smoothed Isotonic Regression, Bayes‑SIR, real‑time optimizations, and post‑click conversion models, concluding with engineering deployment and future directions.
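
The Smoothed Isotonic Regression family traced above extends plain isotonic regression, which is classically fit with the pool-adjacent-violators (PAV) algorithm. A minimal PAV sketch on (score, click) pairs (the textbook algorithm, not the article's production variant):

```python
def pav_calibrate(scores, labels):
    """Pool-adjacent-violators: fit a monotone non-decreasing step
    function mapping sorted scores to calibrated click probabilities."""
    pairs = sorted(zip(scores, labels))
    # Each block holds [sum of labels, count]; adjacent blocks whose
    # means violate monotonicity are pooled into one.
    blocks = []
    for _, y in pairs:
        blocks.append([float(y), 1])
        while (len(blocks) > 1
               and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    # Expand block means back out to one probability per sorted example.
    fitted = []
    for total, count in blocks:
        fitted.extend([total / count] * count)
    return [s for s, _ in pairs], fitted

xs, probs = pav_calibrate([0.9, 0.1, 0.5, 0.7, 0.3], [1, 0, 0, 1, 1])
print(probs)  # [0.0, 0.5, 0.5, 1.0, 1.0] — non-decreasing in score
```

The "smoothed" variants the article covers interpolate between these step levels so that nearby scores receive nearby probabilities, trading a little bias for stability on sparse ad traffic.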

algorithm optimization · calibration · click‑through rate
0 likes · 18 min read