Four Alibaba Papers Accepted at AAAI 2021: Bandits, Video Adaptation, Sentiment, Segmentation

AAAI 2021, the premier AI conference with a 21.4% acceptance rate, accepted four papers from Alibaba Entertainment covering non‑stationary stochastic bandits with graph feedback, spatial‑temporal causal inference for image‑to‑video adaptation, a unified MRC framework for aspect‑based sentiment analysis, and amodal segmentation using shape priors.

Youku Technology

AAAI 2021, organized by the Association for the Advancement of Artificial Intelligence, received 9,034 submissions; 7,911 went to review and 1,692 were accepted, a 21.4% acceptance rate that earned it the reputation of the “strictest AAAI ever”. Alibaba Entertainment Group had four papers accepted.

Stochastic Bandits with Graph Feedback in Non‑stationary Environments

Authors: Shiyin Lu, Yao Hu, Lijun Zhang.

This work studies the stochastic multi-armed bandit problem with graph feedback, where pulling an arm also reveals the rewards of its neighboring arms. Existing studies assume stationary reward distributions, an unrealistic assumption for recommendation systems and online advertising, where user preferences drift over time. The authors propose a suite of algorithms for non-stationary settings. When the number of reward-distribution changes is known in advance, one algorithm achieves a dynamic regret bound that matches the lower bound, i.e., the optimal rate. They also design an adaptive algorithm that attains the optimal coverage-dependent regret without prior knowledge of the number of changes.
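To make the setting concrete, here is a minimal sketch of one standard way to combine graph feedback with a non-stationarity fix: a sliding-window UCB that discards stale observations. This is an illustrative baseline under assumed interfaces (`neighbors`, `reward_fn`), not the paper's algorithm.

```python
import math
from collections import deque

def sliding_window_ucb_graph(neighbors, reward_fn, horizon, window=200):
    """Sliding-window UCB on a feedback graph (illustrative sketch only).

    Pulling arm i reveals the rewards of i and of every arm in
    neighbors[i]; only observations from the last `window` rounds are
    kept, so statistics from a stale reward distribution are forgotten.
    """
    n = len(neighbors)
    obs = [deque() for _ in range(n)]  # recent (time, reward) pairs per arm

    def index(i, t):
        # optimistic UCB index; never-observed arms get top priority
        if not obs[i]:
            return float("inf")
        mean = sum(r for _, r in obs[i]) / len(obs[i])
        return mean + math.sqrt(2.0 * math.log(t) / len(obs[i]))

    for t in range(1, horizon + 1):
        arm = max(range(n), key=lambda i: index(i, t))
        # graph feedback: the pull also reveals the neighbors' rewards
        for j in set(neighbors[arm]) | {arm}:
            obs[j].append((t, reward_fn(j, t)))
        # forget observations that fell out of the sliding window
        for q in obs:
            while q and q[0][0] <= t - window:
                q.popleft()
        yield arm
```

The graph feedback shows up in the inner loop: a single pull updates several arms' statistics, which is what lets such algorithms explore faster than a standard bandit.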

Spatial‑temporal Causal Inference for Partial Image‑to‑video Adaptation

Authors: Jin Chen, Xinxiao Wu, Yao Hu, Jiebo Luo.

The task is to leverage labeled images to improve learning on unlabeled video data, reducing the cost of training video models from scratch. Compared with image-only adaptation, image-to-video adaptation faces two domain shifts: a spatial shift due to appearance differences between images and video frames, and a temporal shift because images lack motion information. Moreover, the impact of these shifts varies across video categories. The paper introduces a spatial-temporal causal graph and uses counterfactual reasoning to quantify the influence of each shift. Based on the causal estimates, a bidirectional heterogeneous mapping between images and videos is learned, and a class-alignment module addresses the partial setting, where the videos cover only a subset of the image categories. Experiments on multiple video datasets demonstrate the effectiveness of the proposed causal inference approach.
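The counterfactual idea can be stated very compactly: compare the model's factual prediction against the prediction obtained after swapping one input for a reference value; the difference measures that input's causal contribution. The sketch below uses hypothetical stand-in names (`model`, `spatial_reference`) rather than the paper's actual causal-graph machinery.

```python
def shift_effect(model, spatial_feat, temporal_feat, spatial_reference):
    """Counterfactual estimate of how much the spatial input drives a
    prediction: replace the spatial feature by a reference value
    (e.g. a zero or mean feature) and measure the change in output.
    Illustrative sketch only; all names are hypothetical."""
    factual = model(spatial_feat, temporal_feat)
    counterfactual = model(spatial_reference, temporal_feat)
    return factual - counterfactual
```

Running the same comparison with the temporal input swapped instead yields the temporal shift's contribution, which is how a per-category weighting of the two shifts could be derived.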

A Joint Training Dual‑MRC Framework for Aspect‑Based Sentiment Analysis

Authors: Yue Mao, Yi Shen, Chao Yu, Longjun Cai.

The authors reformulate all subtasks of aspect-based sentiment analysis (ABSA) as question-answering problems and propose a unified machine-reading-comprehension (MRC) framework that jointly trains the tasks. The approach achieves state-of-the-art results on several public ABSA benchmarks.
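To illustrate the reformulation, the sketch below builds MRC-style (query, context) pairs for the usual ABSA subtasks: extracting aspects, extracting opinions about each aspect, and classifying polarity. The query wording is a hypothetical illustration, not the paper's exact templates.

```python
def build_mrc_pairs(sentence, aspects=()):
    """Recast ABSA subtasks as machine-reading-comprehension queries over
    the same context sentence (illustrative query templates only)."""
    pairs = [("What aspect terms are mentioned?", sentence)]  # aspect extraction
    for aspect in aspects:
        # opinion extraction and polarity classification for a known aspect
        pairs.append((f"What opinions are expressed about {aspect}?", sentence))
        pairs.append((f"What is the sentiment polarity toward {aspect}?", sentence))
    return pairs
```

Because every subtask becomes the same (query, context) format, a single MRC model can answer all of them, which is what makes joint training natural in this framing.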

Amodal Segmentation Based on Visible Region Segmentation and Shape Prior

Authors: Yuting Xiao, Yanyu Xu, Ziming Zhong, Weixin Luo, Jiawei Li, Shenghua Gao.

To segment the invisible (occluded) parts of objects, the paper proposes a unified framework that first predicts coarse masks of the visible and occluded regions, then combines visible-region cues with a category shape prior to refine the occluded estimate. The shape prior improves robustness, and the model outperforms existing methods on three datasets.
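The fusion step can be caricatured per pixel: blend the coarse occluded-region estimate with the shape prior, threshold it, and take the union with the visible mask. This toy sketch operates on flat lists of probabilities and is an assumption-laden simplification, not the paper's network.

```python
def amodal_from_visible(visible, coarse_occluded, shape_prior,
                        alpha=0.5, thresh=0.5):
    """Toy amodal-mask fusion (hypothetical, per-pixel).

    visible, coarse_occluded, shape_prior: flat lists of per-pixel
    probabilities in [0, 1]. The occluded estimate is blended with the
    shape prior, thresholded, and unioned with the visible mask.
    """
    amodal = []
    for v, c, p in zip(visible, coarse_occluded, shape_prior):
        occluded = alpha * c + (1.0 - alpha) * p
        amodal.append(1.0 if v >= 0.5 or occluded > thresh else 0.0)
    return amodal
```

The union with the visible mask guarantees the amodal prediction never contradicts what is directly observed, while the prior fills in plausible occluded shape.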

Tags: Artificial Intelligence, Sentiment Analysis, Domain Adaptation, Bandit Algorithms, AAAI 2021, Alibaba Research, Amodal Segmentation