Tag: MPS


Kuaishou Tech
Jul 18, 2024 · Artificial Intelligence

Multidimensional Preference Model (MPS) for Text-to-Image Generation: Dataset, Architecture, and Experimental Analysis

This article introduces the Multidimensional Preference Model (MPS), the first multi‑dimensional scoring system for evaluating text‑to‑image generation. Built on the newly released MHP dataset, which provides extensive human annotations across aesthetics, semantic alignment, detail quality, and overall preference, MPS demonstrates superior performance in comprehensive experiments and in RLHF integration.

AI evaluation · MHP dataset · MPS
10 min read
Architects' Tech Alliance
Jul 7, 2023 · Fundamentals

Factors Affecting PCIe Link Performance: Encoding, Link Layer, MPS, and MRRS

This article reviews the key factors that influence PCIe link performance—including data encoding schemes, link‑layer and physical‑layer overhead, and the configuration of Maximum Payload Size (MPS) and Maximum Read Request Size (MRRS)—and explains how each impacts real‑world throughput.
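The throughput impact of MPS described above can be sketched as a quick back‑of‑the‑envelope calculation. This is an illustrative estimate only: it assumes PCIe Gen3 (8 GT/s per lane, 128b/130b encoding) and roughly 24 bytes of per‑TLP overhead (header, sequence number, LCRC, framing); exact overhead varies with configuration.

```python
# Rough PCIe Gen3 effective-throughput estimate (illustrative assumptions).
def effective_gbps(lanes, gt_per_s, mps_bytes, overhead_bytes=24):
    raw = lanes * gt_per_s * (128 / 130)                  # line rate after 128b/130b encoding, Gb/s
    efficiency = mps_bytes / (mps_bytes + overhead_bytes)  # payload share of each TLP
    return raw * efficiency

# Larger MPS amortizes the fixed TLP overhead over more payload bytes:
for mps in (128, 256, 512):
    print(f"MPS={mps}: ~{effective_gbps(8, 8.0, mps):.1f} Gb/s on a Gen3 x8 link")
```

Under these assumptions, raising MPS from 128 B to 512 B recovers several Gb/s of effective bandwidth on the same physical link.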

Data Encoding · Link Layer · MPS
8 min read
Python Programming Learning Circle
Mar 22, 2023 · Artificial Intelligence

Overview of PyTorch 2.0 Features and New APIs

The article provides a detailed overview of PyTorch 2.0, highlighting its stable and beta features such as torch.compile, accelerated transformers, MPS backend, new quantization support, and prototype parallelism tools, while emphasizing performance improvements for dynamic shapes, distributed training, and CPU/GPU inference.
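The MPS backend mentioned here is PyTorch's Metal Performance Shaders backend for Apple silicon. A minimal sketch of the usual pattern, selecting MPS when available and falling back to CPU otherwise:

```python
import torch

def pick_device() -> torch.device:
    # torch.backends.mps exists on recent PyTorch builds; guard for older ones
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

x = torch.ones(4, device=pick_device())
# In PyTorch 2.0, torch.compile(fn) can additionally wrap a function or
# module for graph capture and backend compilation.
```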

AI · Accelerated Transformers · MPS
6 min read
DataFunSummit
Nov 3, 2022 · Artificial Intelligence

Applying NVIDIA MPS to Boost GPU Utilization for Recommendation Inference

This article explains why traditional CPU inference and naïve GPU usage are inefficient for recommendation workloads, introduces NVIDIA Multi‑Process Service (MPS) technology, describes VIVO's custom Rust‑based inference engine and deployment strategies, and presents performance and cost benefits along with practical deployment considerations.
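Operationally, NVIDIA MPS is enabled by starting its control daemon before the inference processes launch. A hedged deployment‑script sketch (the pipe/log directories are arbitrary choices; it assumes the driver's `nvidia-cuda-mps-control` binary is on PATH and a GPU is present):

```python
import os
import subprocess

def start_mps(pipe_dir="/tmp/nvidia-mps", log_dir="/tmp/nvidia-mps-log"):
    # Clients locate the daemon through CUDA_MPS_PIPE_DIRECTORY, so the same
    # value must be exported in the inference processes' environment.
    env = dict(os.environ,
               CUDA_MPS_PIPE_DIRECTORY=pipe_dir,
               CUDA_MPS_LOG_DIRECTORY=log_dir)
    subprocess.run(["nvidia-cuda-mps-control", "-d"], env=env, check=True)

def stop_mps(pipe_dir="/tmp/nvidia-mps"):
    # The control daemon is stopped by sending "quit" on its control pipe.
    env = dict(os.environ, CUDA_MPS_PIPE_DIRECTORY=pipe_dir)
    subprocess.run(["nvidia-cuda-mps-control"], input=b"quit", env=env, check=True)
```

On Volta and later GPUs, `CUDA_MPS_ACTIVE_THREAD_PERCENTAGE` can additionally cap each client's share of SMs, which is how per‑pod limits are typically enforced in Kubernetes deployments.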

GPU inference · Kubernetes · MPS
13 min read
Baidu Geek Talk
Jul 18, 2022 · Artificial Intelligence

GPU Container Virtualization for AI Heterogeneous Computing: Architecture and Best Practices

The article surveys GPU container virtualization for AI heterogeneous computing, detailing utilization challenges, historical architectures, various virtualization methods, Baidu's dual-engine user- and kernel-space design with isolation and scheduling features, performance benefits, best‑practice scenarios, and deployment guidance, concluding with a technical Q&A.

AI computing · Cloud Native · Containerization
30 min read
Tencent Cloud Developer
Apr 15, 2020 · Cloud Computing

Building Community Features with Tencent Cloud: COS, CI, MPS, and CDN Solutions

This guide shows how to build community and feed‑stream features on Tencent Cloud by combining COS for scalable media storage, CI for image processing and anti‑hotlinking, MPS for video transcoding and moderation, and CDN for fast global delivery, all configured via URL parameters.
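The URL‑parameter configuration mentioned above works by appending CI processing directives to the COS object URL. An illustrative sketch (the bucket name and region are made‑up placeholders):

```python
# Hypothetical COS object URL; bucket/region are placeholders, not real endpoints.
base = "https://demo-125000000.cos.ap-guangzhou.myqcloud.com/feed/photo.jpg"

# imageMogr2 is CI's image-processing directive: request a 400x400 thumbnail.
thumbnail_url = base + "?imageMogr2/thumbnail/400x400"

# Directives can be chained: resize, then convert to WebP for cheaper CDN delivery.
webp_url = base + "?imageMogr2/thumbnail/800x/format/webp"
```

Because the transform lives in the URL itself, the CDN can cache each variant independently with no extra application code.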

CDN · COS · Cloud Infinite
13 min read