Qinliang Su
2020
Generative Semantic Hashing Enhanced via Boltzmann Machines
Lin Zheng | Qinliang Su | Dinghan Shen | Changyou Chen
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Generative semantic hashing is a promising technique for large-scale information retrieval thanks to its fast retrieval speed and small memory footprint. For tractability of training, existing generative-hashing methods mostly assume a factorized form for the posterior distribution, enforcing independence among the bits of hash codes. From the perspectives of both model representation and code space size, however, independence is not always the best assumption. In this paper, to introduce correlations among the bits of hash codes, we propose to employ the distribution of a Boltzmann machine as the variational posterior. To address the intractability of training, we first develop an approximate method to reparameterize the distribution of a Boltzmann machine by augmenting it as a hierarchical concatenation of a Gaussian-like distribution and a Bernoulli distribution. Based on that, an asymptotically exact lower bound is further derived for the evidence lower bound (ELBO). With these novel techniques, the entire model can be optimized efficiently. Extensive experimental results demonstrate that by effectively modeling correlations among different bits within a hash code, our model can achieve significant performance gains.
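The hierarchical augmentation described in the abstract is in the spirit of the Gaussian integral trick: a Boltzmann distribution p(b) ∝ exp(½ bᵀWb + hᵀb) with a PSD coupling W can be written as the marginal of a Gaussian variable z, conditioned on which the bits become independent Bernoullis. The sketch below (not the paper's code; W, h, and all sizes are arbitrary toy values) checks this identity numerically by Monte Carlo on a 3-bit code:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.normal(size=(d, d)) * 0.3
W = A.T @ A                    # PSD pairwise-coupling matrix
h = rng.normal(size=d) * 0.5
S = np.linalg.cholesky(W)      # W = S S^T, so S plays the role of W^{1/2}

# All 2^d binary codes as rows
codes = np.array([[(b >> i) & 1 for i in range(d)] for b in range(2 ** d)], float)

# Exact Boltzmann probabilities: p(b) ∝ exp(0.5 b^T W b + h^T b)
logits = 0.5 * np.einsum('bi,ij,bj->b', codes, W, codes) + codes @ h
p_exact = np.exp(logits)
p_exact /= p_exact.sum()

# Gaussian augmentation: exp(0.5 b^T W b) = E_{z ~ N(0, I)}[exp(b^T S z)],
# since b^T S z is Gaussian with variance ||S^T b||^2 = b^T W b
z = rng.normal(size=(200_000, d))
est = np.exp(codes @ S @ z.T).mean(axis=1) * np.exp(codes @ h)
p_mc = est / est.sum()

print(np.max(np.abs(p_exact - p_mc)))  # max deviation; shrinks with more samples
```

Conditioned on z, the unnormalized weight factorizes over bits, which is what makes the conditional a product of Bernoullis and the overall posterior reparameterizable.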
Low-Resource Generation of Multi-hop Reasoning Questions
Jianxing Yu | Wei Liu | Shuang Qiu | Qinliang Su | Kai Wang | Xiaojun Quan | Jian Yin
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
This paper focuses on generating multi-hop reasoning questions from raw text in a low-resource setting. Such questions must be syntactically valid and logically connected to their answers through reasoning over multiple relations spanning several sentences of the text. Specifically, we first build a multi-hop generation model and guide it toward logical rationality with a reasoning chain extracted from the given text. Since the labeled data is too limited for training, we propose to learn the model with the help of large-scale unlabeled data, which is much easier to obtain. Such data contains rich expressive forms of questions, with structural patterns in syntax and semantics. These patterns can be estimated by a neural hidden semi-Markov model with latent variables. Using the latent patterns as a prior, we can regularize the generation model and produce better results. Experimental results on the HotpotQA dataset demonstrate the effectiveness of our model. Moreover, applying the generated questions to machine reading comprehension yields significant performance improvements.
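The hidden semi-Markov model mentioned in the abstract scores a sequence by summing over segmentations into variable-length segments, each emitted by one latent state. Below is a minimal tabular sketch of the semi-Markov forward recursion that such models build on (a toy model with made-up parameters, not the paper's neural parameterization):

```python
import numpy as np

# Toy sizes: S latent pattern states, T observation steps, Dmax max segment length
S, T, Dmax = 2, 5, 3
rng = np.random.default_rng(1)

pi = np.full(S, 1.0 / S)                 # initial state distribution
A = np.full((S, S), 1.0 / S)             # state-to-state transition matrix
dur = rng.dirichlet(np.ones(Dmax), S)    # per-state segment-duration distribution
emit = rng.dirichlet(np.ones(4), S)      # per-state emission over 4 symbols
obs = rng.integers(0, 4, size=T)         # an observed symbol sequence

# alpha[t, s] = prob. of obs[0..t] with a segment in state s ending exactly at t
alpha = np.zeros((T, S))
for t in range(T):
    for s in range(S):
        for d in range(1, min(Dmax, t + 1) + 1):
            seg = emit[s, obs[t - d + 1: t + 1]].prod()   # segment emission prob.
            if t - d < 0:   # segment starts the sequence
                alpha[t, s] += pi[s] * dur[s, d - 1] * seg
            else:           # segment follows a previous segment
                alpha[t, s] += (alpha[t - d] @ A[:, s]) * dur[s, d - 1] * seg

likelihood = alpha[T - 1].sum()           # total prob. over all segmentations
```

In the paper's setting the duration and emission tables would be produced by neural networks, and the same dynamic program makes marginalizing over latent segment patterns tractable.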