Tag

low-power AI


DataFunTalk
Nov 10, 2020 · Artificial Intelligence

Low‑Power ADAS on Didi’s JueShi Devices Reduces Traffic Accidents

This article describes how Didi’s vehicle‑vision team built an ultra‑low‑power ADAS solution on the JueShi dash‑cam platform, using lightweight detection models, temporal fusion, camera‑calibration techniques, and data‑driven optimization to cut rear‑end collision rates by over 11% and improve overall traffic safety.

ADAS · camera calibration · computer vision
15 min read
Didi Tech
Nov 9, 2020 · Artificial Intelligence

Ultra-Low-Power ADAS on DiDi's JueShi Devices for Reducing Traffic Accidents

DiDi’s ultra‑low‑power JueShi ADAS combines lightweight vision models, temporal‑fusion Kalman filtering, and camera‑calibration techniques to deliver real‑time forward‑collision warnings and brake‑light alerts, cutting rear‑end crashes by over 11% and overall accidents by 9% through continuous edge‑AI learning.

ADAS · collision avoidance · computer vision
15 min read
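The temporal‑fusion Kalman filtering mentioned in the abstract above can be illustrated with a minimal one‑dimensional, constant‑velocity sketch that smooths noisy per‑frame distance estimates to a lead vehicle. The state layout, noise values, frame rate, and function name here are illustrative assumptions, not details of DiDi's implementation:

```python
import numpy as np

def kalman_smooth_distances(measurements, dt=1 / 15, meas_var=4.0, accel_var=1.0):
    """Fuse noisy per-frame distance estimates with a constant-velocity
    Kalman filter. State x = [distance, relative velocity] (assumed layout)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity motion model
    H = np.array([[1.0, 0.0]])                       # only distance is observed
    Q = accel_var * np.array([[dt**4 / 4, dt**3 / 2],
                              [dt**3 / 2, dt**2]])   # process noise (random accel)
    R = np.array([[meas_var]])                       # measurement noise
    x = np.array([[measurements[0]], [0.0]])         # initial state
    P = np.eye(2) * 10.0                             # initial uncertainty
    out = []
    for z in measurements:
        # predict: propagate state and uncertainty one frame forward
        x = F @ x
        P = F @ P @ F.T + Q
        # update: blend the new measurement in, weighted by the Kalman gain
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

Fusing estimates across frames this way suppresses single‑frame detection jitter, which matters for deciding when a forward‑collision warning should actually fire.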
Architects' Tech Alliance
Jan 4, 2020 · Artificial Intelligence

In‑Memory Computing: Overcoming the Memory Wall for AI Chips

The article explains how the memory‑wall limitation of traditional von Neumann architectures hampers AI chip performance and describes two in‑memory computing approaches: circuit‑level modifications and new memory devices. It also highlights recent conference trends and showcases a Chinese startup’s 8‑bit low‑power in‑memory AI chip that could enable ubiquitous AI on edge devices.

AI chips · in-memory computing · low-power AI
12 min read
Alibaba Cloud Infrastructure
Sep 20, 2018 · Artificial Intelligence

High‑Efficiency Neural Network Computing Architectures and the Thinker AI Chip Family by Prof. Yin Shouyi

Prof. Yin Shouyi of Tsinghua University presented a reconfigurable, low‑bit quantized neural‑network architecture and the Thinker‑I, Thinker‑II, and Thinker‑S chips, demonstrating ultra‑low power consumption and high energy efficiency for AI deployment on edge devices.

AI hardware · Thinker chip · low-power AI
4 min read