Computational Linguistics Commons

Articles 1 - 4 of 4

Full-Text Articles in Computational Linguistics

Visual Salience And Reference Resolution In Situated Dialogues: A Corpus-Based Evaluation., Niels Schütte, John D. Kelleher, Brian Mac Namee Nov 2010

Conference papers

Dialogues between humans and robots are necessarily situated and so, often, a shared visual context is present. Exophoric references are very frequent in situated dialogues, and are particularly important in the presence of a shared visual context - for example when a human is verbally guiding a tele-operated mobile robot. We present an approach to automatically resolving exophoric referring expressions in a situated dialogue based on the visual salience of possible referents. We evaluate the effectiveness of this approach and a range of different salience metrics using data from the SCARE corpus which we have augmented with visual information. The …
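
To make the salience-based resolution strategy concrete, the minimal sketch below scores candidate referents with a hand-weighted combination of simple visual cues and picks the best match for the referring expression. The object fields, cues, and weights are illustrative assumptions, not the salience metrics evaluated in the paper or the SCARE annotations.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str          # object type mentioned in the dialogue, e.g. "button"
    area: float        # fraction of the frame the object covers (0..1)
    centrality: float  # 1.0 at the centre of the view, 0.0 at the edge
    recency: float     # 1.0 if it just entered the view, decaying over time

def visual_salience(obj, w_area=0.4, w_centre=0.4, w_recency=0.2):
    """Combine simple visual cues into one salience score (illustrative weights)."""
    return w_area * obj.area + w_centre * obj.centrality + w_recency * obj.recency

def resolve_exophoric(referring_type, scene):
    """Resolve e.g. 'the button' to the most visually salient matching object."""
    candidates = [o for o in scene if o.name == referring_type]
    return max(candidates, key=visual_salience) if candidates else None

scene = [
    SceneObject("button", area=0.02, centrality=0.9, recency=0.3),
    SceneObject("button", area=0.05, centrality=0.2, recency=0.1),
    SceneObject("door",   area=0.30, centrality=0.5, recency=0.0),
]
print(resolve_exophoric("button", scene))  # picks the centrally placed button
```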


Situating Spatial Templates For Human-Robot Interaction, John D. Kelleher, Robert J. Ross, Brian Mac Namee, Colm Sloan Nov 2010

Conference papers

People often refer to objects by describing the object's spatial location relative to another object. Due to their ubiquity in situated discourse, the ability to use 'locative expressions' is fundamental to human-robot dialogue systems. A key component of this ability is a computational model of spatial term semantics. These models bridge the grounding gap between spatial language and sensor data. Within the Artificial Intelligence and Robotics communities, spatial-template-based accounts, such as the Attentional Vector Sum model (Regier and Carlson, 2001), have found considerable application in mediating situated human-machine communication (Gorniak, 2004; Brenner et al., 2007; Kelleher and Costello, 2009). …
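
For readers unfamiliar with spatial-template accounts, the following is a much-simplified sketch of an AVS-style rating for a projective term such as 'above'. It omits the height component of Regier and Carlson's (2001) model, and the focus definition, decay constant, and angle-to-acceptability mapping are illustrative assumptions rather than the published parameterisation.

```python
import numpy as np

def avs_style_acceptability(landmark_points, trajector, direction=(0.0, 1.0), decay=1.0):
    """Simplified AVS-style rating of e.g. 'trajector above landmark'.

    Attention is centred on the landmark point nearest the trajector and decays
    with distance from that focus; vectors from landmark points to the trajector
    are summed with those weights, and the deviation of the summed vector from the
    canonical direction is mapped linearly onto a 0..1 acceptability score.
    """
    pts = np.asarray(landmark_points, dtype=float)
    traj = np.asarray(trajector, dtype=float)
    focus = pts[np.argmin(np.linalg.norm(pts - traj, axis=1))]   # attentional focus
    weights = np.exp(-np.linalg.norm(pts - focus, axis=1) / decay)
    summed = np.sum(weights[:, None] * (traj - pts), axis=0)     # attention-weighted vector sum
    cosang = np.dot(summed, direction) / (np.linalg.norm(summed) * np.linalg.norm(direction))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return max(0.0, 1.0 - angle / 90.0)                          # 1.0 aligned, 0.0 at 90 degrees

# Landmark is a small grid of points; the trajector sits above and slightly to the right.
box = [(x, y) for x in (0.0, 0.5, 1.0) for y in (0.0, 0.5, 1.0)]
print(avs_style_acceptability(box, trajector=(1.2, 2.5)))
```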


Applying Computational Models Of Spatial Prepositions To Visually Situated Dialog, John D. Kelleher, Fintan Costello Jun 2009

Articles

This article describes the application of computational models of spatial prepositions to visually situated dialog systems. In these dialogs, spatial prepositions are important because people often use them to refer to entities in the visual context of a dialog. We first describe a generic architecture for a visually situated dialog system and highlight the interactions between the spatial cognition module, which provides the interface to the models of prepositional semantics, and the other components in the architecture. Following this, we present two new computational models of topological and projective spatial prepositions. The main novelty within these models is the fact …
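
The sketch below is a hypothetical illustration of how a spatial cognition module might expose prepositional semantics to the reference-resolution component in such an architecture; the interface, method names, and signatures are assumptions for illustration, not the article's API.

```python
from typing import Protocol, Sequence, Tuple

Point = Tuple[float, float]

class SpatialCognitionModule(Protocol):
    """Hypothetical interface to the models of prepositional semantics."""

    def rate_topological(self, preposition: str, target: Point,
                         landmark: Point, scene: Sequence[Point]) -> float:
        """Applicability (0..1) of a topological term such as 'near' or 'at'."""

    def rate_projective(self, preposition: str, target: Point,
                        landmark: Point, viewer: Point) -> float:
        """Applicability (0..1) of a projective term such as 'to the left of'."""

def resolve_locative(module: SpatialCognitionModule, preposition: str,
                     landmark: Point, viewer: Point,
                     candidates: Sequence[Point], scene: Sequence[Point]) -> Point:
    """How a reference-resolution component might consult the module to pick the
    candidate that best satisfies e.g. 'the box to the left of the chair'."""
    projective = {"to the left of", "to the right of", "in front of", "behind"}
    if preposition in projective:
        def score(c: Point) -> float:
            return module.rate_projective(preposition, c, landmark, viewer)
    else:
        def score(c: Point) -> float:
            return module.rate_topological(preposition, c, landmark, scene)
    return max(candidates, key=score)
```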


Proximity In Context: An Empirically Grounded Computational Model Of Proximity For Processing Topological Spatial Expressions., John D. Kelleher, Geert-Jan Kruijff, Fintan Costello Jan 2006

Conference papers

The paper presents a new model for context-dependent interpretation of linguistic expressions about spatial proximity between objects in a natural scene. The paper discusses novel psycholinguistic experimental data that tests and verifies the model. The model has been implemented, and enables a conversational robot to identify objects in a scene through topological spatial relations (e.g., "X near Y"). The model can help motivate the choice between topological and projective prepositions.
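
As a rough illustration of context-dependent proximity, the sketch below discounts distance-based proximity to the landmark by the pull of competing objects in the scene, so the same distance reads as less proximal when a distractor sits nearby. The functional forms and the 0.5 discount weight are assumptions, not the empirically grounded model reported in the paper.

```python
import math

def absolute_proximity(point, landmark, scene_diag):
    """Distance-based proximity, normalised by the scene size (illustrative form)."""
    return max(0.0, 1.0 - math.dist(point, landmark) / scene_diag)

def relative_proximity(point, landmark, distractors, scene_diag):
    """Proximity to the landmark, discounted by the strongest competing proximity
    to any other object in the scene (an assumed context effect)."""
    prox_landmark = absolute_proximity(point, landmark, scene_diag)
    prox_other = max((absolute_proximity(point, o, scene_diag) for o in distractors),
                     default=0.0)
    return prox_landmark - 0.5 * prox_other

# "X near Y": the same X-Y distance scores lower once a distractor sits beside X.
scene_diag = 10.0
x, y, distractor = (2.0, 2.0), (0.0, 0.0), (2.5, 2.5)
print(relative_proximity(x, y, [], scene_diag))            # no competing context
print(relative_proximity(x, y, [distractor], scene_diag))  # attenuated by the distractor
```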