Yu Wu
2020
MuTual: A Dataset for Multi-Turn Dialogue Reasoning
Leyang Cui | Yu Wu | Shujie Liu | Yue Zhang | Ming Zhou
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Non-task-oriented dialogue systems have achieved great success in recent years thanks to widely accessible conversation data and advances in deep learning techniques. Given a context, current systems are able to yield a relevant and fluent response, but they sometimes make logical mistakes because of weak reasoning capabilities. To facilitate conversation reasoning research, we introduce MuTual, a novel dataset for Multi-Turn dialogue Reasoning, consisting of 8,860 manually annotated dialogues based on Chinese students' English listening comprehension exams. Compared to previous benchmarks for non-task-oriented dialogue systems, MuTual is much more challenging since it requires a model that can handle various reasoning problems. Empirical results show that state-of-the-art methods reach only 71% accuracy, far behind human performance of 94%, indicating that there is ample room for improving reasoning ability.
A Retrieve-and-Rewrite Initialization Method for Unsupervised Machine Translation
Shuo Ren | Yu Wu | Shujie Liu | Ming Zhou | Shuai Ma
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The commonly used framework for unsupervised machine translation builds initial translation models for both translation directions and then performs iterative back-translation to jointly boost their translation performance. The initialization stage is very important, since a bad initialization may wrongly squeeze the search space, and too much noise introduced at this stage may hurt the final performance. In this paper, we propose a novel retrieve-and-rewrite method to better initialize unsupervised translation models. We first retrieve semantically comparable sentences from the monolingual corpora of the two languages and then rewrite the target side to narrow the semantic gap between the source and retrieved targets with a dedicated rewriting model. The rewritten sentence pairs are used to initialize SMT models, which in turn generate pseudo data for two NMT models, followed by iterative back-translation. Experiments show that our method builds better initial unsupervised translation models and improves the final translation performance by over 4 BLEU points. Our code is released at https://github.com/Imagist-Shuo/RRforUNMT.git.
Curriculum Pre-training for End-to-End Speech Translation
Chengyi Wang | Yu Wu | Shujie Liu | Ming Zhou | Zhenglu Yang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
End-to-end speech translation places a heavy burden on the encoder, which must transcribe, understand, and learn cross-lingual semantics simultaneously. To obtain a powerful encoder, traditional methods pre-train it on ASR data to capture speech features. However, we argue that pre-training the encoder only through simple speech recognition is not enough, and that high-level linguistic knowledge should also be considered. Inspired by this, we propose a curriculum pre-training method that includes an elementary course for transcription learning and two advanced courses for understanding the utterance and mapping words between the two languages. The difficulty of these courses increases gradually. Experiments show that our curriculum pre-training method leads to significant improvements on En-De and En-Fr speech translation benchmarks.
Co-authors
- Shujie Liu 3
- Ming Zhou 3
- Leyang Cui 1
- Yue Zhang 1
- Shuo Ren 1
Venues
- ACL 3