Tag: Intel MKL


58 Tech
Dec 8, 2021 · Artificial Intelligence

dl_inference: A General Deep Learning Inference Service with TensorRT and Intel MKL Acceleration

The article introduces dl_inference, an open‑source deep learning inference platform that integrates TensorRT GPU acceleration, Intel MKL CPU optimization, and Caffe support, detailing its features, model conversion workflow, deployment steps, performance gains, and how developers can contribute.

Deep Learning · Docker · Inference
12 min read
58 Tech
Oct 28, 2020 · Artificial Intelligence

Optimizing Resource Utilization of 58.com Deep Learning Platform: Practices and Techniques

This article details how 58.com’s end‑to‑end deep learning platform was optimized for higher CPU and GPU inference performance using Intel MKL, OpenVINO, mixed TensorFlow deployment, GPU virtualization, and a Prometheus–Grafana monitoring system, achieving a 37% reduction in GPU count and a 146% increase in average GPU utilization.

Deep Learning · GPU virtualization · Intel MKL
12 min read