Parisa Kordjamshidi
2020
Cross-Modality Relevance for Reasoning on Language and Vision
Chen Zheng | Quan Guo | Parisa Kordjamshidi
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
This work addresses the challenge of learning and reasoning over language and vision data for downstream tasks such as visual question answering (VQA) and natural language for visual reasoning (NLVR). We design a novel cross-modality relevance module, used in an end-to-end framework, that learns the relevance representation between components of various input modalities under the supervision of a target task; this is more generalizable to unobserved data than merely reshaping the original representation space. In addition to modeling the relevance between textual entities and visual entities, we model the higher-order relevance between entity relations in the text and object relations in the image. Our proposed approach shows competitive performance on two different language-and-vision tasks using public benchmarks and improves on the state-of-the-art published results. The alignments of the input spaces and their relevance representations learned on the NLVR task also boost the training efficiency of the VQA task.
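To make the relevance idea concrete, here is a minimal, hypothetical PyTorch sketch of a layer that scores every pair of a text entity and a visual entity with a learned bilinear map and returns the relevance matrix as a representation for a downstream classifier. The class name `RelevanceLayer`, the dimensions, and the bilinear form are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a cross-modality relevance layer (PyTorch).
# Names and dimensions are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class RelevanceLayer(nn.Module):
    """Scores the relevance of every (text entity, visual entity) pair."""
    def __init__(self, dim: int):
        super().__init__()
        self.bilinear = nn.Parameter(torch.randn(dim, dim) * dim ** -0.5)

    def forward(self, text: torch.Tensor, vision: torch.Tensor) -> torch.Tensor:
        # text:   (batch, n_tokens, dim)  contextual token embeddings
        # vision: (batch, n_objects, dim) detected-object embeddings
        # Returns a (batch, n_tokens, n_objects) relevance matrix that a
        # downstream task head consumes instead of the raw embeddings.
        return torch.einsum('btd,de,boe->bto', text, self.bilinear, vision)

layer = RelevanceLayer(dim=768)
scores = layer(torch.randn(2, 12, 768), torch.randn(2, 36, 768))
print(scores.shape)  # torch.Size([2, 12, 36])
```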
From Spatial Relations to Spatial Configurations
Soham Dan | Parisa Kordjamshidi | Julia Bonn | Archna Bhatia | Zheng Cai | Martha Palmer | Dan Roth
Proceedings of The 12th Language Resources and Evaluation Conference
Spatial reasoning from language is essential for natural language understanding. Supporting it requires a representation scheme that can capture spatial phenomena encountered in language as well as in images and videos. Existing spatial representations are not sufficient for describing the spatial configurations used in complex tasks. This paper extends the capabilities of existing spatial representation languages and increases coverage of the semantic aspects needed to ground the spatial meaning of natural language text in the world. Our spatial relation language can represent a large, comprehensive set of spatial concepts crucial for reasoning and is designed to support the composition of static and dynamic spatial configurations. We integrate this language with the Abstract Meaning Representation (AMR) annotation schema and present a corpus annotated with this extended AMR. To demonstrate the applicability of our representation scheme, we annotate text taken from diverse datasets and show how we extend the capabilities of existing spatial representation languages with a fine-grained decomposition of semantics and blend it seamlessly with AMRs of sentences and with discourse representations as a whole.
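For a flavor of the base formalism being extended, the sketch below decodes a stock AMR graph that uses the existing be-located-at-91 spatial frame, via the `penman` library (pip install penman). The sentence and graph are illustrative only; the paper's extended spatial annotations are richer than this standard example.

```python
# Illustrative only: parsing a plain AMR with a spatial frame using the
# `penman` library. The paper's *extended* spatial annotations go beyond
# this stock be-located-at-91 example.
import penman

amr = '(b / be-located-at-91 :ARG1 (c / book) :ARG2 (t / table))'
graph = penman.decode(amr)
for source, role, target in graph.triples:
    print(source, role, target)
# b :instance be-located-at-91
# b :ARG1 c
# c :instance book
# ...
```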
Latent Alignment of Procedural Concepts in Multimodal Recipes
Hossein Rajaby Faghihi | Roshanak Mirzaee | Sudarshan Paliwal | Parisa Kordjamshidi
Proceedings of the First Workshop on Advances in Language and Vision Research
We propose a novel alignment mechanism for procedural reasoning on a newly released multimodal QA dataset, RecipeQA. Our model solves the textual cloze task, a reading comprehension task over a recipe containing images and instructions. We exploit the power of attention networks, cross-modal representations, and a latent alignment space between instructions and candidate answers to solve the problem. We introduce constrained max-pooling, which refines the max-pooling operation on the alignment matrix to impose disjointness constraints among the outputs of the model. Our evaluation results indicate a 19% improvement over the baselines.
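As a rough illustration of the disjointness idea, the following sketch implements a greedy variant of max pooling over an alignment matrix in which no two output slots may select the same candidate. The function name and the greedy strategy are assumptions for illustration, not necessarily the paper's exact formulation.

```python
# A minimal sketch of "max pooling with disjoint selections" over an
# alignment matrix, assuming a greedy variant; the paper's constrained
# max-pooling may differ in detail.
import torch

def constrained_max_pool(align: torch.Tensor) -> torch.Tensor:
    # align: (n_slots, n_candidates) alignment scores.
    # Each slot pools the max over candidates, but two slots may not
    # select the same candidate (the disjointness constraint).
    scores = align.clone()
    pooled = torch.empty(align.size(0))
    for slot in range(align.size(0)):
        value, idx = scores[slot].max(dim=0)
        pooled[slot] = value
        scores[:, idx] = float('-inf')  # block this candidate for later slots
    return pooled

print(constrained_max_pool(torch.tensor([[0.9, 0.8], [0.7, 0.1]])))
# tensor([0.9000, 0.1000]) -- the second slot cannot reuse candidate 0
```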