Jonathan May
2020
Grounding Conversations with Improvised Dialogues
Hyundong Cho | Jonathan May
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Effective dialogue involves grounding, the process of establishing mutual knowledge that is essential for communication between people. Modern dialogue systems are not explicitly trained to build common ground, and therefore overlook this important aspect of communication. Improvisational theater (improv) intrinsically contains a high proportion of dialogue focused on building common ground, and makes use of the yes-and principle, a strong grounding speech act, to establish coherence and an actionable objective reality. We collect a corpus of more than 26,000 yes-and turns, transcribing them from improv dialogues and extracting them from larger, but more sparsely populated movie script dialogue corpora, via a bootstrapped classifier. We fine-tune chit-chat dialogue systems with our corpus to encourage more grounded, relevant conversation and confirm these findings with human evaluations.
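The bootstrapped extraction step described above can be illustrated with a minimal sketch: train a scorer on a small seed set of labeled yes-and turns, then iteratively add confidently scored turns from an unlabeled pool back into the training data. The unigram odds scorer, thresholds, and toy data below are illustrative assumptions, not the paper's actual features or corpora.

```python
# Minimal bootstrapping sketch (assumed toy model, not the paper's classifier).
from collections import Counter

def train_unigram_scores(pairs):
    """Count unigrams in positive (yes-and) vs. negative turns."""
    pos, neg = Counter(), Counter()
    for text, label in pairs:
        (pos if label else neg).update(text.lower().split())
    return pos, neg

def score(text, pos, neg):
    """Simple odds score: positive counts minus negative counts per token."""
    return sum(pos[tok] - neg[tok] for tok in text.lower().split())

def bootstrap(seed, unlabeled, rounds=2, threshold=2):
    """Iteratively move confidently scored unlabeled turns into the training set."""
    labeled = list(seed)
    for _ in range(rounds):
        pos, neg = train_unigram_scores(labeled)
        remaining = []
        for text in unlabeled:
            s = score(text, pos, neg)
            if s >= threshold:
                labeled.append((text, True))    # confident yes-and
            elif s <= -threshold:
                labeled.append((text, False))   # confident non-yes-and
            else:
                remaining.append(text)          # stays unlabeled this round
        unlabeled = remaining
    return labeled
```

In practice the scorer would be a trained neural classifier over turn pairs; the loop structure (seed, score, self-label above a confidence threshold, retrain) is the part this sketch is meant to show.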
Cross-lingual Structure Transfer for Zero-resource Event Extraction
Di Lu | Ananya Subburathinam | Heng Ji | Jonathan May | Shih-Fu Chang | Avi Sil | Clare Voss
Proceedings of The 12th Language Resources and Evaluation Conference
Most current cross-lingual transfer learning methods for Information Extraction (IE) have only been applied to name tagging. To tackle more complex tasks such as event extraction, we need to transfer graph structures (an event trigger linked to multiple arguments with various roles) across languages. We develop a novel share-and-transfer framework to reach this goal with three steps: (1) Convert each sentence in any language to a language-universal graph structure; in this paper we explore two approaches based on universal dependency parses and complete graphs, respectively. (2) Represent each node in the graph structure with a cross-lingual word embedding so that sentences in multiple languages can be represented in one shared semantic space. (3) Using this common semantic space, train event extractors from English training data and apply them to languages that do not have any event annotations. Experimental results on three languages (Spanish, Russian and Ukrainian) without any annotations show that this framework achieves performance comparable to a state-of-the-art supervised model trained on more than 1,500 manually annotated event mentions.
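The core of steps (2) and (3) can be sketched in a few lines: once words from every language live in one shared embedding space, an extractor fitted only on English examples can score non-English inputs directly. The aligned vectors, vocabulary, and the nearest-centroid "extractor" below are invented for illustration; the paper's actual models operate over full graph structures with learned cross-lingual embeddings.

```python
# Hypothetical shared space: aligned English/Spanish word vectors (toy values).
SHARED_EMB = {
    "attack":  (1.0, 0.0),   # English trigger word
    "ataque":  (0.9, 0.1),   # Spanish word aligned near "attack"
    "meeting": (0.0, 1.0),
    "reunion": (0.1, 0.9),
}

def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_centroids(labeled):
    """Average the shared-space vectors per event type (English data only)."""
    sums, counts = {}, {}
    for word, etype in labeled:
        v = SHARED_EMB[word]
        s = sums.setdefault(etype, [0.0] * len(v))
        for i, x in enumerate(v):
            s[i] += x
        counts[etype] = counts.get(etype, 0) + 1
    return {t: tuple(x / counts[t] for x in s) for t, s in sums.items()}

def classify(word, centroids):
    """Nearest-centroid event type in the shared space; language-agnostic."""
    v = SHARED_EMB[word]
    return min(centroids, key=lambda t: sq_dist(v, centroids[t]))
```

Because the Spanish words sit near their English counterparts in the shared space, centroids fit on English triggers alone transfer to Spanish input with no Spanish annotations, which is the zero-resource transfer the framework relies on.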