Highlights and Paper Summaries from ACL 2018 Conference
An extensive overview of ACL 2018, featuring acceptance statistics, award-winning papers, tutorial insights, and concise summaries of notable research across machine translation, semantic parsing, question answering, domain adaptation, text classification, summarization, dialogue systems, generation, and related tools.
The 2018 ACL conference was held in Melbourne, Australia, from July 15–20, attracting about 1,500 participants. A total of 1,544 submissions were received, with 258 long papers and 126 short papers accepted, yielding an overall acceptance rate of 24.9%.
Best Paper Awards: The award-winning works focused on new problem formulation and dataset creation, including SQuAD 2.0 (unanswerable questions), ranking clarification questions from StackExchange, and detecting adverbial presupposition triggers.
Tutorial – Neural Semantic Parsing: The tutorial covered classic datasets (GeoQuery, ATIS, CoNLL-A), decoding algorithms, training methods (maximum margin, structured learning, reinforcement learning), and practical tools such as AllenNLP for building semantic parsers.
Machine Translation: Highlighted papers combined RNN-based NMT with self-attention mechanisms, introduced dynamic sentence sampling for efficient training, and proposed coverage-aware NMT models that incorporate attention-based coverage features.
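The coverage idea can be illustrated with a toy attention loop: a coverage vector accumulates past attention weights, and re-attending to an already-covered source position is discouraged. This is only a minimal sketch with a hypothetical fixed penalty; the actual models learn how coverage enters the attention score.

```python
import numpy as np

def coverage_attention(scores_per_step):
    """Toy attention with a coverage penalty. The coverage vector
    accumulates past attention weights; subtracting it from the raw
    scores (a hypothetical fixed penalty, for illustration only)
    discourages attending twice to the same source position."""
    src_len = len(scores_per_step[0])
    coverage = np.zeros(src_len)                 # accumulated attention mass
    attended = []
    for raw in scores_per_step:
        penalized = np.array(raw, dtype=float) - coverage
        w = np.exp(penalized - penalized.max())  # stable softmax
        w /= w.sum()
        coverage += w                            # update coverage
        attended.append(w)
    return attended, coverage

# same raw scores each step: without coverage, position 0 would
# dominate every step; with coverage, its weight decays.
steps = [[2.0, 1.0, 0.1]] * 3
weights, cov = coverage_attention(steps)
```

Each step's weights sum to one, so after three steps the coverage vector holds exactly three units of attention mass spread over the source.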
Semantic Parsing: Summarized approaches included coarse-to-fine encoder-decoder decoding, structured VAE for latent semantic representations, and graph-based generation of semantic parses using RNNs.
Question Answering: Discussed methods that integrate commonsense knowledge via external memory, multi-paragraph reading comprehension with cross-passage answer verification, and analyses of model understanding using attribution techniques.
Domain Adaptation: Presented a strong baseline using tri-training for semi-supervised learning under domain shift, showing competitive performance against adversarial methods.
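Tri-training itself is simple enough to sketch: three models are trained on the labeled data (the original algorithm bootstrap-samples it; this sketch starts all three from the full set for determinism), and an unlabeled example is pseudo-labeled for one model whenever the other two agree on it. The `nn_train` helper and toy data below are hypothetical stand-ins for a real classifier and corpus.

```python
from collections import Counter

def tri_train(labeled, unlabeled, train, rounds=3):
    """Minimal tri-training sketch. `train` maps a list of (x, y)
    pairs to a predict function. Each round, an unlabeled x is
    pseudo-labeled for model i when the other two models agree."""
    models = [train(labeled) for _ in range(3)]
    for _ in range(rounds):
        new_models = []
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            pseudo = [(x, models[j](x)) for x in unlabeled
                      if models[j](x) == models[k](x)]  # two teachers agree
            new_models.append(train(list(labeled) + pseudo))
        models = new_models
    def predict(x):  # final prediction: majority vote of the three models
        return Counter(m(x) for m in models).most_common(1)[0][0]
    return predict

# toy 1-nearest-neighbour "trainer" over 1-D inputs (hypothetical)
def nn_train(pairs):
    data = list(pairs)
    return lambda x: min(data, key=lambda p: abs(p[0] - x))[1]

labeled = [(0.0, "a"), (1.0, "a"), (9.0, "b"), (10.0, "b")]
unlabeled = [0.5, 9.5]
clf = tri_train(labeled, unlabeled, nn_train)
```

The pseudo-labeled points (here 0.5 and 9.5) densify each model's training set, which is where the semi-supervised gain comes from.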
Machine Learning: Described the SPIGOT algorithm for back-propagating through structured argmax operations, enabling differentiable parsing and other structured prediction tasks.
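The core trick can be sketched for the simplest case, a categorical argmax: on the backward pass, take a gradient step from the one-hot argmax output, project the result back onto the probability simplex, and use the difference as a surrogate gradient. This is a hedged toy version; the paper handles general structured argmax over larger polytopes (e.g. parse trees), not just a single categorical choice.

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection of v onto the probability simplex
    (standard sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def spigot_grad(scores, downstream_grad, eta=1.0):
    """SPIGOT-style surrogate gradient for a categorical argmax (toy
    sketch). Forward: hard one-hot argmax. Backward: step against the
    downstream gradient, project back onto the simplex, and return the
    difference as the gradient w.r.t. the scores."""
    z = np.zeros_like(scores)
    z[np.argmax(scores)] = 1.0                         # forward: one-hot argmax
    z_tilde = simplex_projection(z - eta * downstream_grad)
    return z - z_tilde                                 # surrogate gradient
```

Because both `z` and its projection lie on the simplex, the surrogate gradient sums to zero, i.e. it only redistributes probability mass among the choices.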
Text Classification: Covered the integration of regular expressions with neural networks, joint word-label embeddings, universal language model fine-tuning (ULMFiT), and unsupervised random-walk sentence embeddings.
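The random-walk line of sentence embeddings (SIF-style) reduces to a few lines: average word vectors with weights a/(a + p(w)), then remove the projection onto the first principal component. A minimal sketch assuming pre-trained vectors and a word-frequency table; the toy data below is hypothetical.

```python
import numpy as np

def sif_embedding(sentences, vectors, word_freq, a=1e-3):
    """SIF-style sentence embeddings (a hedged sketch of the
    random-walk family): each sentence is the average of its word
    vectors weighted by a/(a + p(w)); the component along the first
    principal component (the common discourse direction) is removed."""
    total = sum(word_freq.values())
    emb = np.array([
        np.mean([vectors[w] * (a / (a + word_freq[w] / total))
                 for w in s], axis=0)
        for s in sentences
    ])
    u = np.linalg.svd(emb, full_matrices=False)[2][0]  # first principal dir.
    return emb - np.outer(emb @ u, u)

# hypothetical toy vocabulary: "the" is frequent, so it is down-weighted
vecs = {"cat": np.array([1.0, 0.0]), "dog": np.array([0.0, 1.0]),
        "the": np.array([1.0, 1.0])}
freq = {"cat": 1, "dog": 1, "the": 10}
sents = [["the", "cat"], ["the", "dog"]]
sent_emb = sif_embedding(sents, vecs, freq)
```

Removing the first principal component collapses the direction shared by all sentences, which is why the resulting matrix loses one rank.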
Summarization: Reviewed soft-template based neural summarization with retrieval, reranking, and rewriting, as well as reinforcement-learning-driven sentence rewriting for abstractive summarization of long documents.
Dialogue Systems: Presented exemplar encoder-decoder models for conversation generation, memory-augmented task-oriented dialog (Mem2Seq), and planning-integrated reinforcement learning for dialogue policy (Deep Dyna-Q).
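Deep Dyna-Q builds on the classic tabular Dyna-Q loop, in which the agent learns both from real transitions and from transitions replayed out of a learned environment model. A minimal tabular sketch (the deep version replaces the Q-table and model with neural networks, and the dialogue environment is a user simulator; the `chain` environment here is hypothetical):

```python
import random

def dyna_q(step_env, n_states, n_actions, episodes=200,
           planning_steps=10, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Dyna-Q sketch. `step_env(s, a) -> (reward, next_state)`,
    with next_state = None marking the end of an episode. After each
    real step, the agent also runs `planning_steps` updates from
    transitions replayed out of the learned (here: memorized) model."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}                                   # (s, a) -> (r, s2)
    for _ in range(episodes):
        s = 0
        while s is not None:
            a = (rng.randrange(n_actions) if rng.random() < eps
                 else max(range(n_actions), key=lambda a: Q[s][a]))
            r, s2 = step_env(s, a)               # real experience
            target = r + (gamma * max(Q[s2]) if s2 is not None else 0.0)
            Q[s][a] += alpha * (target - Q[s][a])
            model[(s, a)] = (r, s2)
            for _ in range(planning_steps):      # planning from the model
                (ps, pa), (pr, ps2) = rng.choice(list(model.items()))
                pt = pr + (gamma * max(Q[ps2]) if ps2 is not None else 0.0)
                Q[ps][pa] += alpha * (pt - Q[ps][pa])
            s = s2
    return Q

def chain(s, a):
    """Toy 3-state chain: action 1 moves right; reaching state 2 ends
    the episode with reward 1, action 0 stays put. (Hypothetical.)"""
    if a == 1:
        return (1.0, None) if s + 1 == 2 else (0.0, s + 1)
    return (0.0, s)

Q = dyna_q(chain, n_states=2, n_actions=2)
```

The planning loop is what makes the method sample-efficient: each real interaction is reused many times through the model, which matters when real interactions (user dialogues) are expensive.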
Generation: Introduced cooperative discriminators that evaluate generated text on repetition, entailment, relevance, and lexical style to improve language generation quality.
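At decoding time this amounts to reranking candidate continuations by mixing the language-model score with the discriminator scores. A toy sketch with hypothetical weights and hand-set scores (the paper learns the mixture weights and uses trained discriminators):

```python
def cooperative_score(lm_score, disc_scores, weights):
    """Mix a language-model log-probability with named discriminator
    scores (repetition, relevance, ...) under per-discriminator
    weights. All names and values here are illustrative."""
    return lm_score + sum(weights[name] * s for name, s in disc_scores.items())

# hypothetical candidates: (lm_score, discriminator scores)
candidates = {
    "the cat sat on the mat": (-4.0, {"repetition": 0.9, "relevance": 0.8}),
    "the the the the":        (-3.5, {"repetition": -2.0, "relevance": 0.1}),
}
weights = {"repetition": 1.0, "relevance": 0.5}
best = max(candidates, key=lambda c: cooperative_score(*candidates[c], weights))
```

Note how the degenerate repetition wins on raw language-model score (-3.5 vs. -4.0) but loses once the repetition discriminator is mixed in.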
Data: Described MojiTalk, a large-scale dataset of emoji-annotated Twitter conversations for training emotional response generators.
Tools: Linked Marian, an open-source neural machine translation toolkit written in C++.