Soham Dan
2020
Understanding Spatial Relations through Multiple Modalities
Soham Dan | Hangfeng He | Dan Roth
Proceedings of The 12th Language Resources and Evaluation Conference
Recognizing spatial relations and reasoning about them is essential in multiple applications, including navigation, direction giving, and human-computer interaction in general. Spatial relations between objects can be either explicit – expressed as spatial prepositions – or implicit – expressed by spatial verbs such as moving, walking, shifting, etc. Both of these, but implicit relations in particular, require significant commonsense understanding. In this paper, we introduce the task of inferring implicit and explicit spatial relations between two entities in an image. We design a model that uses both textual and visual information to predict the spatial relations, making use of positional and size information of objects as well as image embeddings. We contrast our spatial model with powerful language models and show how our modeling complements their strengths, improving prediction accuracy and coverage and facilitating the handling of unseen subjects, objects, and relations.
From Spatial Relations to Spatial Configurations
Soham Dan | Parisa Kordjamshidi | Julia Bonn | Archna Bhatia | Zheng Cai | Martha Palmer | Dan Roth
Proceedings of The 12th Language Resources and Evaluation Conference
Spatial reasoning from language is essential for natural language understanding. Supporting it requires a representation scheme that can capture spatial phenomena encountered in language as well as in images and videos. Existing spatial representations are not sufficient for describing the spatial configurations used in complex tasks. This paper extends the capabilities of existing spatial representation languages and increases coverage of the semantic aspects that are needed to ground the spatial meaning of natural language text in the world. Our spatial relation language is able to represent a large, comprehensive set of spatial concepts crucial for reasoning and is designed to support composition of static and dynamic spatial configurations. We integrate this language with the Abstract Meaning Representation (AMR) annotation schema and present a corpus annotated with this extended AMR. To exhibit the applicability of our representation scheme, we annotate text taken from diverse datasets and show how we extend the capabilities of existing spatial representation languages with fine-grained decomposition of semantics and blend it seamlessly with AMRs of sentences and discourse representations as a whole.