Tag: non-autoregressive


DataFunTalk
Sep 23, 2023 · Artificial Intelligence

Paraformer: An Industrial Non‑Autoregressive End‑to‑End Speech Recognition Model and Its Deployment on ModelScope

This article introduces the Paraformer non‑autoregressive end‑to‑end speech recognition model released by Alibaba DAMO Academy, details its architecture, training strategies, large‑scale performance, and provides step‑by‑step guidance for using and fine‑tuning the model on the ModelScope platform with the FunASR toolkit.

ASR · ModelScope · Paraformer
13 min read
DataFunSummit
Jun 15, 2023 · Artificial Intelligence

Paraformer: An Industrial Non‑Autoregressive End‑to‑End Speech Recognition Model

This article introduces the Paraformer model released by Alibaba DAMO Academy on ModelScope, detailing its non‑autoregressive architecture, training strategies, performance on benchmark datasets, and step‑by‑step guidance for fine‑tuning and deploying the model using FunASR and ModelScope pipelines.

ASR · ModelScope · Paraformer
13 min read
DataFunSummit
Jul 18, 2022 · Artificial Intelligence

Advances in Natural Language Generation: ProphetNet, Knowledge‑Enhanced Generation, Non‑Autoregressive Pre‑training, Long‑Text Modeling, and Efficient Attention

This talk presents recent research on natural language generation, covering the ProphetNet pre‑trained generation model, external‑knowledge integration for generation, non‑autoregressive pre‑training (BANG), the Poolingformer long‑text architecture, EL‑attention for faster decoding, and a new multi‑task generation benchmark.

Natural Language Generation · Pretraining · efficient attention
22 min read
DataFunTalk
Apr 6, 2021 · Artificial Intelligence

Advances in Text Summarization: Pointer-Generator, Coverage Mechanisms, Entity Knowledge Integration, and Non-Autoregressive Models

This article reviews recent advances in abstractive summarization, covering pointer‑generator networks with coverage loss, integration of entity knowledge, strategies to mitigate repetition such as unlikelihood training and nucleus sampling, and emerging non‑autoregressive approaches like the Levenshtein Transformer.

Coverage · NLP · entity-knowledge
15 min read