Yoshua Bengio (2020)
Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach
Wenyu Du | Zhouhan Lin | Yikang Shen | Timothy J. O’Donnell | Yoshua Bengio | Yue Zhang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
It is commonly believed that knowledge of syntactic structure should improve language modeling. However, incorporating syntactic structure into neural language models both effectively and efficiently has remained challenging. In this paper, we use a multi-task objective: the model simultaneously predicts words and ground-truth parse trees in a form called “syntactic distances”, with the two objectives sharing the same intermediate representation. Experimental results on the Penn Treebank and Chinese Treebank datasets show that when ground-truth parse trees are provided as additional training signals, the model achieves lower perplexity and induces trees of better quality.
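The multi-task objective described above can be sketched as a weighted sum of a word-prediction loss and a syntactic-distance loss, both computed from the same shared hidden states. This is a minimal illustrative sketch, not the paper's implementation; the function and parameter names (`multitask_loss`, `w_lm`, `w_dist`, `alpha`) are hypothetical.

```python
import numpy as np

def multitask_loss(hidden, w_lm, w_dist, targets, distances, alpha=1.0):
    """Combine a language-modeling loss with a syntactic-distance loss,
    both read off the same shared hidden states (hypothetical sketch)."""
    # Language-modeling head: softmax cross-entropy over the vocabulary.
    logits = hidden @ w_lm                               # (T, vocab)
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    lm_loss = -np.log(probs[np.arange(len(targets)), targets]).mean()
    # Distance head: regress one scalar "syntactic distance" per position,
    # supervised by distances derived from the ground-truth parse tree.
    pred_dist = hidden @ w_dist                          # (T,)
    dist_loss = ((pred_dist - distances) ** 2).mean()
    # Weighted sum: gradients from both tasks flow into the shared hidden states.
    return lm_loss + alpha * dist_loss
```

Because both heads read the same `hidden` matrix, training on the distance loss shapes the representation the language-modeling head also uses, which is the mechanism the abstract attributes the perplexity gains to.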
Compositional Generalization by Factorizing Alignment and Translation
Jacob Russin | Jason Jo | Randall O’Reilly | Yoshua Bengio
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic generalization outside of the training distribution. However, human learners readily generalize in this way, e.g. by applying known grammatical rules to novel words. Inspired by work in cognitive science suggesting a functional distinction between systems for syntactic and semantic processing, we implement a modification to an existing approach in neural machine translation, imposing an analogous separation between alignment and translation. The resulting architecture substantially outperforms standard recurrent networks on the SCAN dataset, a compositional generalization task, without any additional supervision. Our work suggests that learning to align and to translate in separate modules may be a useful heuristic for capturing compositional structure.
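The separation between alignment and translation described above can be sketched as an attention step in which the attention weights are computed from one stream of features while the attended content comes from a separate stream. This is a minimal sketch of the idea under assumed names (`query_syn`, `keys_syn`, `values_sem`), not the paper's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def factorized_step(query_syn, keys_syn, values_sem):
    """One decoding step where alignment (attention) is computed purely
    from a 'syntactic' stream, while the translated content is read from
    a separate 'semantic' stream (hypothetical names, illustrative only)."""
    # Alignment: attention weights depend only on the syntactic features.
    weights = softmax(keys_syn @ query_syn)   # (S,)
    # Translation: the context vector mixes only the semantic features.
    context = weights @ values_sem            # (d,)
    return weights, context
```

Because the weights never see the semantic stream, swapping a novel word's semantic embedding changes what is emitted but not where attention points, which is one way a model could reuse known grammatical rules on novel words.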