Tongfei Chen
2020
Hierarchical Entity Typing via Multi-level Learning to Rank
Tongfei Chen | Yunmo Chen | Benjamin Van Durme
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We propose a novel method for hierarchical entity classification that embraces ontological structure at both training and prediction time. At training, our novel multi-level learning-to-rank loss compares positive types against negative sibling types according to the type tree. During prediction, we define a coarse-to-fine decoder that restricts viable candidates at each level of the ontology based on already predicted parent type(s). Our approach significantly outperforms prior work on strict accuracy, demonstrating the effectiveness of our method.
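The coarse-to-fine decoding idea can be illustrated with a minimal sketch: starting from the root of the type tree, only children of already-predicted parent types are considered as candidates at the next level. The tree, scores, and threshold below are toy assumptions for illustration, not the paper's actual model or ontology.

```python
def coarse_to_fine_decode(children, scores, root="entity", threshold=0.0):
    """Greedily descend the type tree: at each level, candidates are
    restricted to children of already-predicted parent types, and a
    child is kept only if its score exceeds the threshold."""
    predicted = []
    frontier = [root]
    while frontier:
        next_frontier = []
        for parent in frontier:
            for child in children.get(parent, []):
                if scores.get(child, float("-inf")) > threshold:
                    predicted.append(child)
                    next_frontier.append(child)
        frontier = next_frontier
    return predicted

# Toy type tree and per-type scores (hypothetical values).
children = {"entity": ["/person", "/location"],
            "/person": ["/person/artist", "/person/athlete"]}
scores = {"/person": 2.1, "/location": -0.5,
          "/person/artist": 1.3, "/person/athlete": -0.2}
print(coarse_to_fine_decode(children, scores))  # ['/person', '/person/artist']
```

Note that `/person/athlete` is never predicted despite being reachable, and children of `/location` would not even be scored as candidates, since their parent was pruned at the coarser level.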
Uncertain Natural Language Inference
Tongfei Chen | Zhengping Jiang | Adam Poliak | Keisuke Sakaguchi | Benjamin Van Durme
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
We introduce Uncertain Natural Language Inference (UNLI), a refinement of Natural Language Inference (NLI) that shifts away from categorical labels, targeting instead the direct prediction of subjective probability assessments. We demonstrate the feasibility of collecting annotations for UNLI by relabeling a portion of the SNLI dataset under a probabilistic scale, where even items with the same categorical label differ in how likely people judge them to be true given a premise. We describe a direct scalar regression modeling approach, and find that existing categorically labeled NLI data can be used in pre-training. Our best models correlate well with humans, demonstrating that models are capable of more subtle inferences than the categorical bin assignment employed in current NLI tasks.
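The scalar regression approach can be sketched in miniature: instead of classifying a premise-hypothesis pair into discrete bins, a regression head maps a pooled feature vector to a probability in [0, 1] via a sigmoid. The feature vector, weights, and bias below are toy illustrations, not the paper's actual encoder or learned parameters.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def unli_score(features, weights, bias):
    """Linear layer followed by a sigmoid: the output is a scalar
    subjective-probability estimate rather than a categorical label."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return sigmoid(z)

# Hypothetical pooled encoding of a (premise, hypothesis) pair.
features = [0.5, -1.2, 0.3]
weights = [1.0, 0.8, -0.4]
prob = unli_score(features, weights, bias=0.2)
print(round(prob, 3))  # a value strictly between 0 and 1
```

In training, such a head would be fit with a regression loss (e.g. squared error) against the human probability annotations, which is what lets the model separate items that share a categorical NLI label but differ in judged likelihood.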