Open Access. Powered by Scholars. Published by Universities.®

Computational Linguistics Commons



Articles 181 - 210 of 234

Full-Text Articles in Computational Linguistics

Misheard Me Oronyminator: Using Oronyms To Validate The Correctness Of Frequency Dictionaries, Jennifer G. Hughes Jun 2013

Misheard Me Oronyminator: Using Oronyms To Validate The Correctness Of Frequency Dictionaries, Jennifer G. Hughes

Master's Theses

In the field of speech recognition, an algorithm must learn to tell the difference between "a nice rock" and "a gneiss rock". These identical-sounding phrases are called oronyms. Word frequency dictionaries are often used by speech recognition systems to help resolve phonetic sequences with more than one possible orthographic phrase interpretation, by looking up which oronym of the root phonetic sequence contains the most-common words.

Our paper demonstrates a technique for validating word frequency dictionary values. We chose to use frequency values from the UNISYN dictionary, which tallies each word on a per-occurrence basis, using a proprietary text corpus, …
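
As an aside for readers unfamiliar with the lookup step described above, here is a minimal sketch of ranking oronyms by aggregate word frequency. The frequency values and scoring function are illustrative assumptions, not figures from the UNISYN dictionary or the thesis.

```python
# Pick the orthographic interpretation whose words are most common overall.
# The frequency counts below are placeholders, not UNISYN values.
from math import log

word_freq = {"a": 1_000_000, "nice": 50_000, "rock": 40_000, "gneiss": 300}

def candidate_score(phrase):
    """Sum log-frequencies; unseen words get a floor count of 1 (log 1 = 0)."""
    return sum(log(word_freq.get(w, 1)) for w in phrase.lower().split())

def pick_oronym(candidates):
    """Return the candidate interpretation with the highest aggregate score."""
    return max(candidates, key=candidate_score)

print(pick_oronym(["a nice rock", "a gneiss rock"]))  # -> "a nice rock"
```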


Csc Senior Project: Nlpstats, Michael Mease Mar 2013

Csc Senior Project: Nlpstats, Michael Mease

Computer Science and Software Engineering

Natural Language Processing has recently increased in popularity. The field of authorship analysis, in particular, uses various characteristics of text quantified by markers. NLPStats is a tool designed to streamline marker extraction based on user needs. A flexible query system allows for custom marker requests, adjustment of result formatting, and preprocessing options. Furthermore, an efficiently designed structure ensures that users retrieve information quickly. As a whole, NLPStats enables anyone, regardless of NLP experience, to extract important information about the text of a document.
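
The abstract does not show the tool itself; the sketch below merely illustrates the kind of quantified markers such a system extracts (word count, type-token ratio, average sentence length), with made-up marker names rather than NLPStats' actual query interface.

```python
# Illustrative only: a few common authorship-analysis markers computed from raw text.
import re
from collections import Counter

def extract_markers(text):
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    return {
        "word_count": len(words),
        "type_token_ratio": len(counts) / len(words) if words else 0.0,
        "avg_sentence_length": len(words) / len(sentences) if sentences else 0.0,
        "hapax_ratio": sum(1 for c in counts.values() if c == 1) / len(words) if words else 0.0,
    }

print(extract_markers("The cat sat. The cat sat on the mat!"))
```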


Relation Between Harappan And Brahmi Scripts, Subhajit Kumar Ganguly Jan 2013

Relation Between Harappan And Brahmi Scripts, Subhajit Kumar Ganguly

Subhajit Kumar Ganguly

Around 45-odd signs, out of the total number of Harappan signs found, make up almost 100 percent of the inscriptions in some form or other, as said earlier. Out of these 45 signs, around 40 are readily distinguishable. These form an almost exclusive and unique set. The primary signs are seen to have many variants, as in Brahmi. Many of these provide us with quite a vivid picture of their evolution, depending upon the factors of time, place and usefulness. Even minor adjustments in such signs, depending upon these factors, are noteworthy. Many of the signs in this list …


Maximizing Classification Accuracy In Native Language Identification, Scott Jarvis, Yves Bestgen, Steve Pepper Jan 2013

Maximizing Classification Accuracy In Native Language Identification, Scott Jarvis, Yves Bestgen, Steve Pepper

Yves Bestgen

This paper reports our contribution to the 2013 NLI Shared Task. The purpose of the task was to train a machine-learning system to identify the native-language affiliations of 1,100 texts written in English by nonnative speakers as part of a high-stakes test of general academic English proficiency. We trained our system on the new TOEFL11 corpus, which includes 11,000 essays written by nonnative speakers from 11 native-language backgrounds. Our final system used an SVM classifier with over 400,000 unique features consisting of lexical and POS n-grams occurring in at least two texts in the training set. Our system …
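
For readers who want a concrete picture of this kind of pipeline, here is a minimal sketch: lexical n-gram counts kept only if they occur in at least two training texts (min_df=2), feeding a linear SVM. The tiny corpus, the labels, and the omission of POS n-grams are all simplifications; this is not the authors' system.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy essays with hypothetical L1 labels; the real system trained on TOEFL11.
train_texts = [
    "I am agree with the opinion of the author",
    "I am agree that the argument is strong",
    "I agree with the opinion of the author",
    "I agree that the argument is strong",
]
train_labels = ["L1_A", "L1_A", "L1_B", "L1_B"]

clf = make_pipeline(
    # keep uni- and bigram features only if they occur in at least two texts
    CountVectorizer(ngram_range=(1, 2), min_df=2),
    LinearSVC(),
)
clf.fit(train_texts, train_labels)
# expected to lean toward L1_A, since "am agree" is a class-A cue in this toy data
print(clf.predict(["I am agree with the conclusion"]))
```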


Aplicabilidad De La Tipología De Funciones Retóricas De Las Citas Al Género De La Memoria De Máster En Un Contexto Transcultural De Enseñanza Universitaria, David Sánchez-Jiménez Jan 2013

Aplicabilidad De La Tipología De Funciones Retóricas De Las Citas Al Género De La Memoria De Máster En Un Contexto Transcultural De Enseñanza Universitaria, David Sánchez-Jiménez

Publications and Research

The aim of this paper is to compare the rhetorical functions gathered from the citations of fourteen (14) master's theses written by seven Spanish and seven Philippine authors. A typology of nine categories was used in order to identify the cultural rhetorical differences that exist in the use of citations, by contrasting this element in the Philippine and Spanish cultures. The methodology used is a textual analysis of the linguistic context of these citations and their subsequent classification within the nine categories. The results show that there are quantitative and qualitative differences between the cultural conventions of citations …


Lexicalization And De-Lexicalization Processes In Sign Languages: Comparing Depicting Constructions And Viewpoint Gestures, Kearsy Cormier, David Quinto-Pozos, Zed Sehyr, Adam Schembri Nov 2012

Lexicalization And De-Lexicalization Processes In Sign Languages: Comparing Depicting Constructions And Viewpoint Gestures, Kearsy Cormier, David Quinto-Pozos, Zed Sehyr, Adam Schembri

Communication Sciences and Disorders Faculty Articles and Research

In this paper, we compare so-called “classifier” constructions in signed languages (which we refer to as “depicting constructions”) with comparable iconic gestures produced by non-signers. We show clear correspondences between entity constructions and observer viewpoint gestures on the one hand, and handling constructions and character viewpoint gestures on the other. Such correspondences help account for both lexicalisation and de-lexicalisation processes in signed languages and how these processes are influenced by viewpoint. Understanding these processes is crucial when coding and annotating natural sign language data.


What's In A Letter?, Aaron J. Schein Jan 2012

What's In A Letter?, Aaron J. Schein

Masters Theses 1911 - February 2014

Sentiment analysis is a burgeoning field in natural language processing used to extract and categorize opinion in evaluative documents. We look at recommendation letters, which pose unique challenges to standard sentiment analysis systems. Our dataset is eighteen letters from applications to UMass Worcester Memorial Medical Center’s residency program in Obstetrics and Gynecology. Given a small dataset, we develop a method intended for use by domain experts to systematically explore their intuitions about the topical make-up of documents on which they make critical decisions. By leveraging WordNet and the WordNet Propagation algorithm, the method allows a user to develop topic seed …
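
The WordNet Propagation algorithm used in the thesis is not reproduced here; the sketch below only illustrates the underlying idea of growing a topic lexicon from seed terms via WordNet relations. It assumes NLTK with the WordNet data installed (e.g. nltk.download("wordnet")), and the seed terms are hypothetical.

```python
from nltk.corpus import wordnet as wn

def expand_seeds(seeds):
    """Collect lemmas of each seed's synsets plus their direct hypernyms/hyponyms."""
    lexicon = set()
    for word in seeds:
        for syn in wn.synsets(word):
            lexicon.update(lemma.name() for lemma in syn.lemmas())
            for related in syn.hypernyms() + syn.hyponyms():
                lexicon.update(lemma.name() for lemma in related.lemmas())
    return lexicon

# Hypothetical seed terms for a "work ethic" topic in recommendation letters
print(sorted(expand_seeds(["diligence", "leadership"]))[:15])
```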


Evaluation Automatique De Textes Et Cohésion Lexicale, Yves Bestgen Jan 2012

Evaluation Automatique De Textes Et Cohésion Lexicale, Yves Bestgen

Yves Bestgen

(Article in French). Automatic essay grading is currently experiencing growing popularity because of its importance in the field of education and, particularly, in foreign language learning. While several efficient systems have been developed over the last fifteen years, almost none of them take the discourse level into account. Recently, a few studies proposed to fill this gap by means of automatic indexes of lexical cohesion obtained from Latent Semantic Analysis, but the results were disappointing. Based on a well-known model of writing expertise, the present study proposes a new index of cohesion derived from work on the thematic segmentation …
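
The abstract stops short of defining the index; as a rough illustration of what a lexical cohesion score can look like, the sketch below averages the similarity between adjacent sentence vectors. It substitutes TF-IDF for the Latent Semantic Analysis space used in the cited work, so it shows only the general shape, not the proposed index.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def cohesion_index(sentences):
    """Mean similarity between adjacent sentence vectors of an essay."""
    vectors = TfidfVectorizer().fit_transform(sentences)
    sims = [cosine_similarity(vectors[i], vectors[i + 1])[0, 0]
            for i in range(len(sentences) - 1)]
    return float(np.mean(sims))

essay = ["The climate is changing quickly.",
         "Rapid climate change affects crop yields.",
         "Farmers therefore adapt their crop choices."]
print(round(cohesion_index(essay), 3))
```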


Beefmoves: Dissemination, Diversity, And Dynamics Of English Borrowings In A German Hip Hop Forum, Matt Garley, Julia Hockenmaier Jan 2012

Beefmoves: Dissemination, Diversity, And Dynamics Of English Borrowings In A German Hip Hop Forum, Matt Garley, Julia Hockenmaier

Publications and Research

We investigate how novel English-derived words (anglicisms) are used in a German-language Internet hip hop forum, and what factors contribute to their uptake.


What's In A Letter?, Aaron J. Schein Dec 2011

What's In A Letter?, Aaron J. Schein

Aaron J Schein

Sentiment analysis is a burgeoning field in natural language processing used to extract and categorize opinion in evaluative documents. We look at recommendation letters, which pose unique challenges to standard sentiment analysis systems. Our dataset is eighteen letters from applications to UMass Worcester Memorial Medical Center’s residency program in Obstetrics and Gynecology. Given a small dataset, we develop a method intended for use by domain experts to systematically explore their intuitions about the topical make-up of documents on which they make critical decisions. By leveraging WordNet and the WordNet Propagation algorithm, the method allows a user to develop topic seed …


Using Textual Features To Predict Popular Content On Digg, Paul H. Miller May 2011

Using Textual Features To Predict Popular Content On Digg, Paul H. Miller

Paul H Miller

Over the past few years, collaborative rating sites, such as Netflix, Digg and Stumble, have become increasingly prevalent sites for users to find trending content. I used various data mining techniques to study Digg, a social news site, to examine the influence of content on popularity. What influence does content have on popularity, and what influence does content have on users’ decisions? Overwhelmingly, prior studies have consistently shown that predicting popularity based on content is difficult and perhaps even inherently impossible: the same submission can have multiple outcomes, and content determines neither popularity nor individual user decisions. My results show …


Using Textual Features To Predict Popular Content On Digg, Paul H. Miller Apr 2011

Using Textual Features To Predict Popular Content On Digg, Paul H. Miller

Department of English: Dissertations, Theses, and Student Research

Over the past few years, collaborative rating sites, such as Netflix, Digg and Stumble, have become increasingly prevalent sites for users to find trending content. I used various data mining techniques to study Digg, a social news site, to examine the influence of content on popularity. What influence does content have on popularity, and what influence does content have on users’ decisions? Overwhelmingly, prior studies have consistently shown that predicting popularity based on content is difficult and perhaps even inherently impossible: the same submission can have multiple outcomes, and content determines neither popularity nor individual user decisions. My results show …


Linguistics As Structure In Computer Animation: Toward A More Effective Synthesis Of Brow Motion In American Sign Language, Rosalee Wolfe, Peter Cook, John C. Mcdonald, Jerry Schnepp Jan 2011

Linguistics As Structure In Computer Animation: Toward A More Effective Synthesis Of Brow Motion In American Sign Language, Rosalee Wolfe, Peter Cook, John C. Mcdonald, Jerry Schnepp

Visual Communications and Technology Education Faculty Publications

Computer-generated three-dimensional animation holds great promise for synthesizing utterances in American Sign Language (ASL) that are not only grammatical, but well tolerated by members of the Deaf community. Unfortunately, animation poses several challenges stemming from the necessity of grappling with massive amounts of data. However, the linguistics of ASL can aid in surmounting the challenge by providing structure and rules for organizing animation data. An exploration of the linguistic and extralinguistic behavior of the brows from an animator’s viewpoint yields a new approach for synthesizing nonmanuals that differs from the conventional animation of anatomy and instead offers a different …


Prosodylab-Aligner: A Tool For Forced Alignment Of Laboratory Speech, Kyle Gorman, Jonathan Howell, Michael Wagner Jan 2011

Prosodylab-Aligner: A Tool For Forced Alignment Of Laboratory Speech, Kyle Gorman, Jonathan Howell, Michael Wagner

Department of Linguistics Faculty Scholarship and Creative Works

The Penn Forced Aligner automates the alignment process using the Hidden Markov Model Toolkit (HTK). The core of Prosodylab-Aligner is align.py, a script which performs acoustic model training and alignment. This script automates calls to HTK and SoX, an open-source command-line tool which is capable of resampling audio. The included README file provides instructions for installing HTK and SoX on Linux and Mac OS X; the aligner can also be run on Windows. During training, the model is initialized with flat-start monophones, which are then submitted to a single round of model estimation. Then, a tied-state 'small pause' model is inserted …
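
The description above implies some audio housekeeping before alignment; the following is a sketch, under an assumed file layout, of resampling a directory of recordings with SoX before handing them to align.py. The final command line is indicative only; consult the tool's README for the real arguments.

```python
import pathlib
import subprocess

WAV_DIR = pathlib.Path("recordings")   # assumed layout: one .wav (plus .lab transcript) per utterance
OUT_DIR = pathlib.Path("resampled")
OUT_DIR.mkdir(exist_ok=True)

# SoX: convert each recording to 16 kHz mono, a rate HTK-based setups commonly use
for wav in WAV_DIR.glob("*.wav"):
    subprocess.run(
        ["sox", str(wav), "-r", "16000", "-c", "1", str(OUT_DIR / wav.name)],
        check=True,
    )

# Indicative only: the actual align.py arguments depend on the version; see its README.
subprocess.run(["python", "align.py", str(OUT_DIR)], check=True)
```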


The Low Entropy Conjecture: The Challenges Of Modern Irish Nominal Declension, Robert Malouf, Farrell Ackerman Jan 2011

The Low Entropy Conjecture: The Challenges Of Modern Irish Nominal Declension, Robert Malouf, Farrell Ackerman

Robert Malouf

No abstract provided.


Computational Style Processing, Foaad Khosmood Dec 2010

Computational Style Processing, Foaad Khosmood

Foaad Khosmood

Our main thesis is that computational processing of natural language styles can be accomplished using corpus analysis methods and language transformation rules. We demonstrate this first by statistically modeling natural language styles, second by developing tools that carry out style processing, and finally by running experiments using the tools and evaluating the results. Specifically, we present a model for style in natural languages, and demonstrate style processing in three ways: our system analyzes styles in quantifiable terms according to our model (analysis), associates documents based on stylistic similarity to known corpora (classification) and manipulates texts to match a desired …


Prosodylab-Aligner: A Tool For Forced Alignment Of Laboratory Speech, Kyle Gorman, Jonathan Howell, Michael Wagner Dec 2010

Prosodylab-Aligner: A Tool For Forced Alignment Of Laboratory Speech, Kyle Gorman, Jonathan Howell, Michael Wagner

Jonathan Howell

The Penn Forced Aligner automates the alignment process using the Hidden Markov Model Toolkit (HTK). The core of Prosodylab-Aligner is align.py, a script which performs acoustic model training and alignment. This script automates calls to HTK and SoX, an open-source command-line tool which is capable of resampling audio. The included README file provides instructions for installing HTK and SoX on Linux and Mac OS X; the aligner can also be run on Windows. During training, the model is initialized with flat-start monophones, which are then submitted to a single round of model estimation. Then, a tied-state 'small pause' model is inserted …


Study Of Stemming Algorithms, Savitha Kodimala Dec 2010

Study Of Stemming Algorithms, Savitha Kodimala

UNLV Theses, Dissertations, Professional Papers, and Capstones

Automated stemming is the process of reducing words to their roots. The stemmed words are typically used to overcome the mismatch problems associated with text searching.

In this thesis, we report on the various methods developed for stemming. In particular, we show the effectiveness of n-gram stemming methods on a collection of documents.
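
Since the abstract highlights n-gram stemming, here is a minimal sketch of the classic character-bigram conflation idea: word forms whose bigram sets overlap strongly (by the Dice coefficient) are grouped together without language-specific suffix rules. The threshold and word list are illustrative, not taken from the thesis.

```python
def bigrams(word):
    """Set of character bigrams for a word."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice(a, b):
    """Dice coefficient over shared character bigrams."""
    ga, gb = bigrams(a), bigrams(b)
    return 2 * len(ga & gb) / (len(ga) + len(gb)) if ga and gb else 0.0

words = ["connect", "connected", "connection", "statistics", "statistical"]
THRESHOLD = 0.6   # arbitrary cut-off, for illustration only
for i, w in enumerate(words):
    for v in words[i + 1:]:
        if dice(w, v) >= THRESHOLD:
            print(f"conflate: {w} ~ {v} (Dice = {dice(w, v):.2f})")
```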


Visual Salience And Reference Resolution In Situated Dialogues: A Corpus-Based Evaluation., Niels Schütte, John D. Kelleher, Brian Mac Namee Nov 2010

Visual Salience And Reference Resolution In Situated Dialogues: A Corpus-Based Evaluation., Niels Schütte, John D. Kelleher, Brian Mac Namee

Conference papers

Dialogues between humans and robots are necessarily situated and so, often, a shared visual context is present. Exophoric references are very frequent in situated dialogues, and are particularly important in the presence of a shared visual context - for example, when a human is verbally guiding a tele-operated mobile robot. We present an approach to automatically resolving exophoric referring expressions in a situated dialogue based on the visual salience of possible referents. We evaluate the effectiveness of this approach and a range of different salience metrics using data from the SCARE corpus, which we have augmented with visual information. The …
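
As a rough illustration of salience-based resolution (not the metrics evaluated in the paper), the sketch below scores candidate referents with a weighted combination of hypothetical visual and discourse features and resolves a referring expression to the highest-scoring candidate of the matching type.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    obj_type: str
    centrality: float   # 1.0 = centre of the robot's current view
    size: float         # normalised apparent size
    recency: float      # 1.0 = mentioned in the previous utterance

def salience(obj, w_centre=0.5, w_size=0.3, w_recency=0.2):
    # Hypothetical linear salience metric; the paper compares several metrics.
    return w_centre * obj.centrality + w_size * obj.size + w_recency * obj.recency

def resolve(referent_type, scene):
    """Resolve 'the <type>' to the most salient object of that type."""
    candidates = [o for o in scene if o.obj_type == referent_type]
    return max(candidates, key=salience)

scene = [SceneObject("box_1", "box", 0.9, 0.6, 0.0),
         SceneObject("box_2", "box", 0.3, 0.8, 1.0),
         SceneObject("door_1", "door", 0.1, 0.9, 0.0)]
print(resolve("box", scene).name)   # box_1 under these example weights
```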


Situating Spatial Templates For Human-Robot Interaction, John D. Kelleher, Robert J. Ross, Brian Mac Namee, Colm Sloan Nov 2010

Situating Spatial Templates For Human-Robot Interaction, John D. Kelleher, Robert J. Ross, Brian Mac Namee, Colm Sloan

Conference papers

People often refer to objects by describing the object's spatial location relative to another object. Due to their ubiquity in situated discourse, the ability to use 'locative expressions' is fundamental to human-robot dialogue systems. A key component of this ability is a set of computational models of spatial term semantics. These models bridge the grounding gap between spatial language and sensor data. Within the Artificial Intelligence and Robotics communities, spatial-template-based accounts, such as the Attention Vector Sum model (Regier and Carlson, 2001), have found considerable application in mediating situated human-machine communication (Gorniak, 2004; Brenner et al., 2007; Kelleher and Costello, 2009). …
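
The Attention Vector Sum model itself sums attention-weighted vectors over the landmark's shape; the toy function below is a far simpler spatial template, scoring "above" by angular deviation from the vertical, just to show how such models map geometry onto a graded acceptability score.

```python
import math

def above_acceptability(landmark_xy, target_xy):
    """1.0 when the target is directly above the landmark, falling to 0.0
    once the direction deviates from vertical by 90 degrees or more."""
    dx = target_xy[0] - landmark_xy[0]
    dy = target_xy[1] - landmark_xy[1]          # +y is "up"
    deviation = abs(math.degrees(math.atan2(dx, dy)))
    return max(0.0, 1.0 - deviation / 90.0)

print(above_acceptability((0, 0), (0, 5)))   # directly above -> 1.0
print(above_acceptability((0, 0), (3, 3)))   # diagonally above -> 0.5
print(above_acceptability((0, 0), (5, 0)))   # beside -> 0.0
```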


New Trends In Automatic Assessment: Ontology Matching, Maria Mitina, Patricia Magee, John Cardiff Oct 2010

New Trends In Automatic Assessment: Ontology Matching, Maria Mitina, Patricia Magee, John Cardiff

Conference Papers

Instant individual feedback is a form of assessment output that allows for considerable improvements in both teaching and learning. In this paper we present the application of ontology matching techniques to the automatic correction of students’ answers in SQL tests, which provides teachers with instant feedback to facilitate manual correction and marking and which they can pass on to the students. Students experience many problems learning SQL due to the need to memorise database schemas, unclear feedback from the database engine on the execution of a query, etc. The program environment utilising the described approach is designed to solve the abovementioned problems …


Topology In Composite Spatial Terms, John D. Kelleher, Robert J. Ross Aug 2010

Topology In Composite Spatial Terms, John D. Kelleher, Robert J. Ross

Conference papers

People often refer to objects by describing the object's spatial location relative to another object, e.g. the book on the right of the table. This type of referring expression is called a spatial locative expression. Spatial locatives have three major components: (1) the target object that is being located (the book), (2) the landmark object relative to which the target is being located (the table), and (3) the description of the spatial relationship that exists between the target and the landmark (on the right of). In English, spatial relationships are often described using spatial prepositions. The set of English …
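
The three components listed above map naturally onto a small record type; the field names below are illustrative rather than taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class LocativeExpression:
    target: str     # the object being located
    landmark: str   # the object it is located relative to
    relation: str   # the spatial preposition linking them

expr = LocativeExpression(target="the book",
                          landmark="the table",
                          relation="on the right of")
print(f"{expr.target} is {expr.relation} {expr.landmark}")
```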


Proceedings Of The Sixth International Natural Language Generation Conference (Inlg 2010)., John D. Kelleher, Brian Mac Namee, Ielka Van Der Sluis Jul 2010

Proceedings Of The Sixth International Natural Language Generation Conference (Inlg 2010)., John D. Kelleher, Brian Mac Namee, Ielka Van Der Sluis

Conference papers

No abstract provided.


Personal Sense And Idiolect: Combining Authorship Attribution And Opinion Analysis, Polina Panicheva, John Cardiff, Paolo Rosso May 2010

Personal Sense And Idiolect: Combining Authorship Attribution And Opinion Analysis, Polina Panicheva, John Cardiff, Paolo Rosso

Conference Papers

Subjectivity analysis and authorship attribution are very popular areas of research. However, work in these two areas has been done separately. We believe that by combining information about subjectivity in texts and authorship, the performance of both tasks can be improved. In this paper a personalized approach to opinion mining is presented, in which the notions of personal sense and idiolect are introduced and used for the polarity classification task. The results of applying the personalized approach to opinion mining are presented, confirming that the approach increases the performance of the opinion mining task. Automatic authorship attribution is further applied to …


Enterprise Users And Web Search Behavior, April Ann Lewis May 2010

Enterprise Users And Web Search Behavior, April Ann Lewis

Masters Theses

This thesis describes an analysis of user web query behavior associated with Oak Ridge National Laboratory’s (ORNL) Enterprise Search System (hereafter, ORNL Intranet). The ORNL Intranet provides users a means to search all kinds of data stores for relevant business and research information using a single query. The Global Intranet Trends for 2010 Report suggests the biggest current obstacle for corporate intranets is “findability and Siloed content”. Intranets differ from internets in the way they create, control, and share content, which can make it often difficult and sometimes impossible for users to find information. Stenmark (2006) first noted studies of corporate …


Applying Computational Models Of Spatial Prepositions To Visually Situated Dialog, John D. Kelleher, Fintan Costello Jun 2009

Applying Computational Models Of Spatial Prepositions To Visually Situated Dialog, John D. Kelleher, Fintan Costello

Articles

This article describes the application of computational models of spatial prepositions to visually situated dialog systems. In these dialogs, spatial prepositions are important because people often use them to refer to entities in the visual context of a dialog. We first describe a generic architecture for a visually situated dialog system and highlight the interactions between the spatial cognition module, which provides the interface to the models of prepositional semantics, and the other components in the architecture. Following this, we present two new computational models of topological and projective spatial prepositions. The main novelty within these models is the fact …


Distribution Of Complexities In The Vai Script, Andrij Rovenchak, Ján Mačutek Dec 2008

Distribution Of Complexities In The Vai Script, Andrij Rovenchak, Ján Mačutek

Charles L. Riley

In the paper, we analyze the distribution of complexities in the Vai script, an indigenous syllabic writing system from Liberia. It is found that the uniformity hypothesis for complexities fails for this script. The models using Poisson distribution for the number of components and hyper-Poisson distribution for connections provide good fits in the case of the Vai script.
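
To make the modelling step concrete, here is a sketch of fitting a Poisson distribution to per-character component counts and comparing observed with expected frequencies. The counts are invented, not Vai data, and the paper's actual fits (including the hyper-Poisson model for connections) are more involved.

```python
import numpy as np
from scipy import stats

# Invented per-character component counts (not Vai data)
component_counts = np.array([1, 2, 2, 3, 1, 4, 2, 3, 2, 1, 5, 3, 2, 2, 4])
lam = component_counts.mean()   # maximum-likelihood estimate of the Poisson parameter

values, observed = np.unique(component_counts, return_counts=True)
expected = stats.poisson.pmf(values, lam) * len(component_counts)

for v, o, e in zip(values, observed, expected):
    print(f"{v} components: observed {o}, expected {e:.1f}")
```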


Computational Linguistics For Metadata Building: Aggregating Text Processing Technologies For Enhanced Image Access, Judith Klavans, Carolyn Sheffield, Eileen Abels, Joan E. Beaudoin, Laura Jenemann, Jimmy Lin, Tom Lippincott, Rebecca Passonneau, Tandeep Sidhu, Dagobert Soergel, Tae Yano Aug 2008

Computational Linguistics For Metadata Building: Aggregating Text Processing Technologies For Enhanced Image Access, Judith Klavans, Carolyn Sheffield, Eileen Abels, Joan E. Beaudoin, Laura Jenemann, Jimmy Lin, Tom Lippincott, Rebecca Passonneau, Tandeep Sidhu, Dagobert Soergel, Tae Yano

School of Information Sciences Faculty Research Publications

We present a system which applies text mining using computational linguistic techniques to automatically extract, categorize, disambiguate and filter metadata for image access. Candidate subject terms are identified through standard approaches; novel semantic categorization using machine learning and disambiguation using both WordNet and a domain-specific thesaurus are applied. The resulting metadata can be manually edited by image catalogers or filtered by semi-automatic rules. We describe the implementation of this workbench created for, and evaluated by, image catalogers. We discuss the system's current functionality, developed under the Computational Linguistics for Metadata Building (CLiMB) research project. The CLiMB Toolkit has been …
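
CLiMB's disambiguation combines WordNet with a domain-specific thesaurus; the fragment below shows only the WordNet half, using NLTK's Lesk implementation to choose a sense for a candidate subject term in its caption context. It assumes NLTK with the WordNet data installed (nltk.download("wordnet")), and the caption is invented.

```python
from nltk.wsd import lesk

caption = "A bronze bust of the composer stands on a marble plinth".split()
for term in ["bust", "plinth"]:
    sense = lesk(caption, term, pos="n")   # simplified Lesk over WordNet glosses
    if sense is not None:
        print(term, "->", sense.name(), ":", sense.definition())
```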


Automated Diagnostic Writing Tests: Why? How?, Elena Cotos, Nick Pendar Jan 2008

Automated Diagnostic Writing Tests: Why? How?, Elena Cotos, Nick Pendar

Elena Cotos

Diagnostic language assessment can greatly benefit from a collaborative union of computer-assisted language testing (CALT) and natural language processing (NLP). Currently, most CALT applications mainly allow for inferences about L2 proficiency based on learners’ recognition and comprehension of linguistic input and hardly concern language production (Holland, Maisano, Alderks, & Martin, 1993). NLP is now at a stage where it can be used or adapted for diagnostic testing of learner production skills. This paper explores the viability of NLP techniques for the diagnosis of L2 writing by analyzing the state of the art in current diagnostic language testing, reviewing the existing …


Automatic Identification Of Discourse Moves In Scientific Article Introductions, Elena Cotos, Nick Pendar Jan 2008

Automatic Identification Of Discourse Moves In Scientific Article Introductions, Elena Cotos, Nick Pendar

Elena Cotos

This paper reports on the first stage of building an educational tool for international graduate students to improve their academic writing skills. Taking a text-categorization approach, we experimented with several models to automatically classify sentences in research article introductions into one of three rhetorical moves. The paper begins by situating the project within the larger framework of intelligent computer-assisted language learning. It then presents the details of the study with very encouraging results. The paper then concludes by commenting on how the system may be improved and how the project is intended to be pursued and evaluated.
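
As a concrete picture of the text-categorization framing (though not the authors' features or model), the sketch below trains a toy classifier that assigns introduction sentences to one of three illustrative move labels.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy introduction sentences with illustrative move labels
sentences = [
    "Recent years have seen growing interest in neural parsing.",
    "Parsing has long been a central task in the field.",
    "However, previous work has largely ignored low-resource languages.",
    "Few studies have evaluated these models outside English.",
    "In this paper we present a parser for low-resource settings.",
    "We propose a new evaluation benchmark and release our code.",
]
moves = ["establish_territory", "establish_territory",
         "establish_niche", "establish_niche",
         "occupy_niche", "occupy_niche"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(sentences, moves)
# likely classified as occupying the niche, given the shared wording
print(model.predict(["In this study we present a new method."]))
```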