Improving Machine Translation: Addressing Exposure Bias, Efficient Decoding, and Non‑Autoregressive Models
This article reviews recent machine translation research that tackles the training‑inference distribution gap, exposure bias, and slow autoregressive decoding. The techniques covered include scheduled sampling, differentiable sequence‑level losses, cube pruning, and sequence‑aware non‑autoregressive decoding, which together deliver BLEU gains and significant speedups.
In a recent talk, Feng Yang, an associate researcher at the Chinese Academy of Sciences, outlined the challenges facing current neural machine translation (NMT) models: the reliance on teacher forcing, the exposure bias it creates, and the inefficiency of strictly sequential decoding.
The first set of solutions targets the training‑inference mismatch in two ways: scheduled sampling, the method behind an ACL 2019 best paper, which randomly mixes model‑predicted (oracle) tokens with ground‑truth tokens during training; and differentiable sequence‑level loss functions that weight n‑gram probabilities rather than relying on non‑differentiable argmax‑based BLEU scores.
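The core of scheduled sampling is a per‑position coin flip, with the probability of feeding the ground‑truth token decaying as training proceeds. The sketch below is illustrative rather than the speaker's implementation; the function names and the inverse‑sigmoid decay schedule (one common choice) are assumptions.

```python
import math
import random

def teacher_forcing_prob(step, mu=12.0):
    # Inverse-sigmoid decay: close to 1 early in training,
    # approaching 0 as the step count grows.
    return mu / (mu + math.exp(step / mu))

def scheduled_sampling_inputs(gold_tokens, model_predictions, train_step, rng=random):
    """For each decoder position, choose whether the decoder sees the
    ground-truth token or the model's own prediction (the oracle word)."""
    p = teacher_forcing_prob(train_step)
    return [
        gold if rng.random() < p else pred
        for gold, pred in zip(gold_tokens, model_predictions)
    ]
```

Early in training the decoder mostly sees gold tokens (stable learning); later it increasingly sees its own predictions, so the training-time input distribution moves toward what the model will face at inference.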
To accelerate decoding, a cube‑pruning algorithm restructures traditional beam search from a two‑dimensional into a three‑dimensional search: hypotheses are grouped by their last token and candidate scores are approximated, cutting per‑step computation from beam_size × |V| scorings to a much smaller number and achieving 3.3×–4.2× speedups on GPU and CPU.
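The heart of cube pruning is lazy best-first exploration of the hypothesis × vocabulary score grid: instead of scoring every cell, a heap pops the best frontier cell and only then exposes its neighbors. A minimal sketch of that extraction step is below; it assumes an idealized grid whose rows and columns are both sorted in descending order (in real NMT decoding the columns are only approximately sorted, which is exactly why the method is an approximation), and the function name `cube_prune` is illustrative.

```python
import heapq

def cube_prune(grid, k):
    """Return the k best cells from a grid where grid[i][j] is the score
    of expanding hypothesis i with the j-th best word, assuming rows and
    columns are both sorted in descending order."""
    rows, cols = len(grid), len(grid[0])
    heap = [(-grid[0][0], 0, 0)]   # max-heap via negated scores
    seen = {(0, 0)}
    best = []
    while heap and len(best) < k:
        neg, i, j = heapq.heappop(heap)
        best.append((-neg, i, j))
        # Lazily expose only the right and down neighbors of the popped cell.
        for ni, nj in ((i + 1, j), (i, j + 1)):
            if ni < rows and nj < cols and (ni, nj) not in seen:
                seen.add((ni, nj))
                heapq.heappush(heap, (-grid[ni][nj], ni, nj))
    return best

# Rows and columns both descending; only a frontier of cells is ever scored.
grid = [[9, 7, 5],
        [8, 6, 3],
        [4, 2, 1]]
top3 = cube_prune(grid, 3)  # scores 9, 8, 7
```

The payoff is that extracting the beam's k survivors touches O(k) cells plus their neighbors rather than all beam_size × |V| candidates.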
Further speed gains come from non‑autoregressive translation (NAT) decoding enhanced with sequence information: a fertility predictor copies source tokens according to predicted target‑side fertilities, reinforcement learning with top‑K sampling injects sequence‑level training signals, and a hybrid NAT+AR architecture adds an autoregressive top layer to improve fluency.
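The fertility mechanism is simple to state concretely: each source token is copied into the decoder input as many times as its predicted fertility, so the total fertility fixes the target length and all positions can then be decoded in parallel. A minimal sketch, with the function name `fertility_copy` as an illustrative assumption:

```python
def fertility_copy(source_tokens, fertilities):
    """Build the non-autoregressive decoder input by repeating each source
    token according to its predicted fertility (the number of target tokens
    it is expected to produce). A fertility of 0 drops the token."""
    decoder_input = []
    for tok, fert in zip(source_tokens, fertilities):
        decoder_input.extend([tok] * fert)
    return decoder_input

# A token with fertility 2 is copied twice; fertility 0 omits it entirely.
example = fertility_copy(["wo", "hen", "gaoxing"], [1, 0, 2])
# example == ["wo", "gaoxing", "gaoxing"]
```

Because the decoder input length equals the sum of fertilities, the model sidesteps the separate target-length prediction problem that plagues plain NAT.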
Experimental results on Chinese–English and English–German news datasets show BLEU gains of up to 2.3 points over an RNN‑Search baseline and 1.5 points over a Transformer baseline, along with 3.3–4.2× faster decoding and up to 10× faster training when using the proposed sequence‑aware methods.
The talk concludes with acknowledgments and references to the research group and personal webpages.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.