Min-Yen Kan
2020
Expertise Style Transfer: A New Task Towards Better Communication between Experts and Laymen
Yixin Cao | Ruihao Shui | Liangming Pan | Min-Yen Kan | Zhiyuan Liu | Tat-Seng Chua
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
The curse of knowledge can impede communication between experts and laymen. We propose a new task of expertise style transfer and contribute a manually annotated dataset with the goal of alleviating such cognitive biases. Solving this task not only simplifies the professional language, but also improves the accuracy and expertise level of laymen's descriptions using simple words. This is a challenging task, unaddressed in previous work, as it requires the models to have expert intelligence in order to modify text with a deep understanding of domain knowledge and structures. We establish the benchmark performance of five state-of-the-art models for style transfer and text simplification. The results demonstrate a significant gap between machine and human performance. We also discuss the challenges of automatic evaluation, to provide insights into future research directions. The dataset is publicly available at https://srhthu.github.io/expertise-style-transfer/.
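
As a rough illustration of the task above (and of why surface-level word substitution falls short of it), here is a minimal, hypothetical baseline sketch in Python: it rewrites an expert sentence using a tiny hand-made expert-to-layman glossary. The glossary and example sentence are invented for illustration and are not drawn from the paper's dataset or method.

# Hypothetical expert-to-layman glossary; entries are invented examples.
LAY_GLOSSARY = {
    "myocardial infarction": "heart attack",
    "hypertension": "high blood pressure",
    "renal": "kidney",
}

def naive_layman_rewrite(expert_sentence: str) -> str:
    """Replace known expert terms with lay equivalents (case-insensitive, first match only)."""
    rewritten = expert_sentence
    for expert_term, lay_term in LAY_GLOSSARY.items():
        start = rewritten.lower().find(expert_term)
        if start != -1:
            rewritten = rewritten[:start] + lay_term + rewritten[start + len(expert_term):]
    return rewritten

if __name__ == "__main__":
    src = "The patient suffered a myocardial infarction secondary to hypertension."
    print(naive_layman_rewrite(src))
    # -> The patient suffered a heart attack secondary to high blood pressure.

Such a lexical baseline ignores syntax, discourse, and domain knowledge, which is exactly the gap between machine and human performance that the benchmark is meant to expose.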
Semantic Graphs for Generating Deep Questions
Liangming Pan | Yuxi Xie | Yansong Feng | Tat-Seng Chua | Min-Yen Kan
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
This paper proposes the problem of Deep Question Generation (DQG), which aims to generate complex questions that require reasoning over multiple pieces of information about the input passage. In order to capture the global structure of the document and facilitate reasoning, we propose a novel framework that first constructs a semantic-level graph for the input document and then encodes the semantic graph by introducing an attention-based GGNN (Att-GGNN). Afterward, we fuse the document-level and graph-level representations to perform joint training of content selection and question decoding. On the HotpotQA deep-question centric dataset, our model greatly improves performance on questions requiring reasoning over multiple facts, leading to state-of-the-art performance. The code is publicly available at https://github.com/WING-NUS/SG-Deep-Question-Generation.
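
The following is a minimal PyTorch sketch of one attention-weighted gated graph propagation step, in the spirit of the Att-GGNN named above. It is not the authors' implementation (see the linked repository); the dense adjacency matrix, single edge type, and toy dimensions are simplifying assumptions made here for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttGGNNLayer(nn.Module):
    """One propagation step: attention over neighbors, then a gated (GRU) state update."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.message = nn.Linear(hidden_dim, hidden_dim)   # transforms neighbor states into messages
        self.attn = nn.Linear(2 * hidden_dim, 1)           # scores each (receiver, sender) edge
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)       # gated node-state update

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        """
        h:   (num_nodes, hidden_dim) node states
        adj: (num_nodes, num_nodes) binary adjacency matrix (1 = edge)
        """
        n = h.size(0)
        receiver = h.unsqueeze(1).expand(n, n, -1)           # receiver[i, j] = h[i]
        sender = h.unsqueeze(0).expand(n, n, -1)             # sender[i, j] = h[j]
        scores = self.attn(torch.cat([receiver, sender], dim=-1)).squeeze(-1)  # (n, n)
        scores = scores.masked_fill(adj == 0, float("-inf")) # only attend along existing edges
        alpha = F.softmax(scores, dim=-1)                     # attention over each node's neighbors
        alpha = torch.nan_to_num(alpha)                       # isolated nodes get all-zero rows
        msg = alpha @ self.message(h)                         # (n, d) aggregated neighbor messages
        return self.gru(msg, h)

# Toy usage on a 4-node graph with hypothetical features.
h = torch.randn(4, 16)
adj = torch.tensor([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 0, 0, 1],
                    [0, 1, 1, 0]], dtype=torch.float)
layer = AttGGNNLayer(16)
print(layer(h, adj).shape)  # torch.Size([4, 16])

In the full framework described above, the node states produced by such graph layers would be fused with the document-level encoding before content selection and question decoding.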
It’s Morphin’ Time! Combating Linguistic Discrimination with Inflectional Perturbations
Samson Tan | Shafiq Joty | Min-Yen Kan | Richard Socher
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Training on only perfect Standard English corpora predisposes pre-trained neural networks to discriminate against minorities from non-standard linguistic backgrounds (e.g., African American Vernacular English, Colloquial Singapore English, etc.). We perturb the inflectional morphology of words to craft plausible and semantically similar adversarial examples that expose these biases in popular NLP models, e.g., BERT and Transformer, and show that adversarially fine-tuning them for a single epoch significantly improves robustness without sacrificing performance on clean data.
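
To make the idea of an inflectional perturbation concrete, here is a small, self-contained Python sketch: it swaps a word's inflected form for another inflection of the same lemma, so each candidate sentence stays semantically close to the original. The tiny inflection table and the exhaustive enumeration are illustrative stand-ins; the paper's attack instead searches for the inflection that most degrades a target model's predictions.

# Hypothetical lemma -> inflected-forms table (a real attack would use a
# morphological resource covering the full vocabulary).
INFLECTIONS = {
    "be":   ["be", "is", "are", "was", "were", "been", "being"],
    "go":   ["go", "goes", "went", "gone", "going"],
    "book": ["book", "books"],
}

# Reverse index: surface form -> lemma.
FORM_TO_LEMMA = {form: lemma for lemma, forms in INFLECTIONS.items() for form in forms}

def inflectional_perturbations(sentence: str):
    """Yield candidate sentences that differ from the input by a single inflection swap."""
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        lemma = FORM_TO_LEMMA.get(tok.lower())
        if lemma is None:
            continue
        for alt in INFLECTIONS[lemma]:
            if alt != tok.lower():
                yield " ".join(tokens[:i] + [alt] + tokens[i + 1:])

if __name__ == "__main__":
    for candidate in inflectional_perturbations("she go to the library and reads books"):
        print(candidate)
    # Each candidate keeps the original lemmas, mimicking the inflectional
    # variation found in dialects such as Colloquial Singapore English.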