Open Access. Powered by Scholars. Published by Universities.®
- Discipline
- Applied Linguistics (6)
- Arts and Humanities (3)
- Computer Sciences (3)
- First and Second Language Acquisition (3)
- Physical Sciences and Mathematics (3)
- Artificial Intelligence and Robotics (2)
- Education (2)
- Educational Assessment, Evaluation, and Research (2)
- Phonetics and Phonology (2)
- Psychology (2)
- Semantics and Pragmatics (2)
- Typological Linguistics and Linguistic Diversity (2)
- African Languages and Societies (1)
- Anthropological Linguistics and Sociolinguistics (1)
- Art and Design (1)
- Communication (1)
- Comparative and Historical Linguistics (1)
- Computer Engineering (1)
- Digital Communications and Networking (1)
- Discourse and Text Linguistics (1)
- Educational Methods (1)
- Engineering (1)
- English Language and Literature (1)
- Film and Media Studies (1)
- Game Design (1)
- Interactive Arts (1)
- Journalism Studies (1)
- Keyword
- Computational linguistics: LSA, Second language assessment (4)
- Automated Writing Evaluation (2)
- Computational linguistics (2)
- Prosody (2)
- A study of the Indus Script (1)
- Acoustic prominence (1)
- Alternative Translation Approach – Part I: Labor division (1)
- American Sign Language (1)
- Applied linguistics (1)
- Association measures (1)
- Brahmi (1)
- Brows (1)
- CAT Tools (1)
- Cognate detection (1)
- CollGram (1)
- Collocation (1)
- Computer animation (1)
- Corpus (1)
- Corpus linguistics. (1)
- Corpus linguistics (1)
- Corpus-based research (1)
- Data mining (1)
- Deception Detection (NLP) (1)
- Deception detection (1)
- Devanagri (1)
- Digg (1)
- Digital humanities (1)
- Edit distance (1)
- Evolution of script (1)
- Harappa (1)
Articles 1 - 22 of 22
Full-Text Articles in Computational Linguistics
Phonologically Informed Edit Distance Algorithms For Word Alignment With Low-Resource Languages, Richard T. McCoy, Robert Frank
We present three methods for weighting edit distance algorithms based on linguistic information. These methods base their penalties on (i) phonological features, (ii) distributional character embeddings, or (iii) differences between cognate words. We also introduce a novel method for evaluating edit distance through the task of low-resource word alignment by using edit-distance neighbors in a high-resource pivot language to inform alignments from the low-resource language. At this task, the cognate-based scheme outperforms our other methods and the Levenshtein edit distance baseline, showing that NLP applications can benefit from information about cross-linguistic phonological patterns.
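The weighting idea described in the abstract above can be sketched minimally: a standard Levenshtein dynamic program whose substitution penalty is the fraction of disagreeing phonological features. The feature table and cost function below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a phonologically weighted edit distance. The feature values
# here are invented for illustration; a real system would use a full
# phonological feature inventory.
PHON_FEATURES = {
    "p": {"voiced": 0, "nasal": 0, "labial": 1},
    "b": {"voiced": 1, "nasal": 0, "labial": 1},
    "m": {"voiced": 1, "nasal": 1, "labial": 1},
    "t": {"voiced": 0, "nasal": 0, "labial": 0},
}

def sub_cost(a, b):
    """Substitution cost = share of phonological features that disagree."""
    if a == b:
        return 0.0
    fa, fb = PHON_FEATURES.get(a), PHON_FEATURES.get(b)
    if fa is None or fb is None:
        return 1.0  # fall back to the flat Levenshtein penalty
    return sum(fa[k] != fb[k] for k in fa) / len(fa)

def weighted_edit_distance(s, t):
    """Standard dynamic program, with sub_cost in place of a flat 1."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,  # deletion
                d[i][j - 1] + 1,  # insertion
                d[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]),  # substitution
            )
    return d[m][n]
```

Under this scheme, substituting "p" for "b" (one differing feature of three) costs only 1/3, while substituting for a segment with no feature entry costs the full Levenshtein penalty of 1.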
Jabberwocky Parsing: Dependency Parsing With Lexical Noise, Jungo Kasai, Robert Frank
Parsing models have long benefited from the use of lexical information, and indeed current state-of-the-art neural network models for dependency parsing achieve substantial improvements by benefiting from distributed representations of lexical information. At the same time, humans can easily parse sentences with unknown or even novel words, as in Lewis Carroll’s poem Jabberwocky. In this paper, we carry out jabberwocky parsing experiments, exploring how robust a state-of-the-art neural network parser is to the absence of lexical information. We find that current parsing models, at least under usual training regimens, are in fact overly dependent on lexical information, and perform …
Acoustic Classification Of Focus: On The Web And In The Lab, Jonathan Howell, Mats Rooth, Michael Wagner
General Analysis Of An Online Language Corpus, Kerwin A. Livingstone
Corpus-based research is rapidly gaining ground in the field of Applied Linguistics. More interesting is the evidence of many online language corpora which can be easily accessed, with just the click of the mouse. A quick navigation of the Web will produce different kinds of corpora in a vast number of language areas. Given the need to find new and exciting ways to improve the language learning and teaching process, corpus linguistics does have potential for generating significant learner experiences. Taking into consideration the above-mentioned, this paper deals with the general analysis of an online language corpus. The specific corpus …
Linguistics As Structure In Computer Animation: Toward A More Effective Synthesis Of Brow Motion In American Sign Language, Rosalee Wolfe, Peter Cook, John C. McDonald, Jerry Schnepp
Computer-generated three-dimensional animation holds great promise for synthesizing utterances in American Sign Language (ASL) that are not only grammatical, but well tolerated by members of the Deaf community. Unfortunately, animation poses several challenges stemming from the necessity of grappling with massive amounts of data. However, the linguistics of ASL can aid in surmounting the challenge by providing structure and rules for organizing animation data. An exploration of the linguistic and extralinguistic behavior of the brows from an animator’s viewpoint yields a new approach for synthesizing nonmanuals that differs from the conventional animation of anatomy and instead offers a different …
Towards News Verification: Deception Detection Methods For News Discourse, Victoria Rubin, Niall Conroy, Yimin Chen
News verification is a process of determining whether a particular news report is truthful or deceptive. Deliberately deceptive (fabricated) news creates false conclusions in the readers’ minds. Truthful (authentic) news matches the writer’s knowledge. How do you tell the difference between the two in an automated way? To investigate this question, we analyzed rhetorical structures, discourse constituent parts and their coherence relations in deceptive and truthful news samples from NPR’s “Bluff the Listener”. Subsequently, we applied a vector space model to cluster the news by discourse feature similarity, achieving 63% accuracy. Our predictive model is not significantly better than chance …
Predicting Survey Responses: How And Why Semantics Shape Survey Statistics On Organizational Behaviour, Ketil Arnulf, Kai R. Larsen, Øyvind Martinsen, Chih How Bong
Some disciplines in the social sciences rely heavily on collecting survey responses to detect empirical relationships among variables. We explored whether these relationships were a priori predictable from the semantic properties of the survey items, using language processing algorithms which are now available as new research methods. Language processing algorithms were used to calculate the semantic similarity among all items in state-of-the-art surveys from Organisational Behaviour research. These surveys covered areas such as transformational leadership, work motivation and work outcomes. This information was used to explain and predict the response patterns from real subjects. Semantic algorithms explained 60–86% of the …
Alternative Translation Approach – Part I: "Labor Division", Ludvig Glavati
No abstract provided.
Cecl: A New Baseline And A Non-Compositional Approach For The Sick Benchmark., Yves Bestgen
This paper describes the two procedures for determining the semantic similarities between sentences submitted for the SemEval 2014 Task 1. MeanMaxSim, an unsupervised procedure, is proposed as a new baseline to assess the efficiency gain provided by compositional models. It outperforms a number of other baselines by a wide margin. Compared to the word-overlap baseline, it has the advantage of taking into account the distributional similarity between words that are also involved in compositional models. The second procedure aims at building a predictive model using as predictors MeanMaxSim and (transformed) lexical features describing the differences between each sentence of a …
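One plausible reading of a mean-of-maximum-similarities measure like the MeanMaxSim baseline above can be sketched as follows. The symmetrisation and exact averaging are assumptions, and `vectors` stands in for whatever distributional word vectors the procedure actually uses.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def mean_max_sim(sent_a, sent_b, vectors):
    """Average, over the words of one sentence, of each word's best
    cosine match in the other sentence, symmetrised over both directions."""
    def one_way(src, tgt):
        return sum(max(cosine(vectors[w], vectors[v]) for v in tgt)
                   for w in src) / len(src)
    return (one_way(sent_a, sent_b) + one_way(sent_b, sent_a)) / 2
```

Unlike a word-overlap baseline, this credits near-synonyms (high cosine similarity) even when the surface forms differ, which is the advantage the abstract points to.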
Quantifying The Development Of Phraseological Competence In L2 English Writing: An Automated Approach, Yves Bestgen, Sylviane Granger
Based on the large body of research that shows phraseology to be pervasive in language, this study aims to assess the role played by phraseological competence in the development of L2 writing proficiency and text quality assessment. We propose to use CollGram, a technique that assigns to each pair of contiguous words (bigrams) in a learner text two association scores (mutual information and t-score) computed on the basis of a large reference corpus, the Corpus of Contemporary American English. Applied to the Michigan State University Corpus of second language writing, CollGram shows a longitudinal decrease in the use of collocations …
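The two association scores CollGram assigns to each bigram are the standard collocation measures; given observed counts (in the paper's setup, drawn from the Corpus of Contemporary American English), they can be computed as:

```python
import math

def collgram_scores(bigram_count, w1_count, w2_count, corpus_size):
    """Pointwise mutual information and t-score for a bigram, given its
    observed frequency and the unigram frequencies of its parts."""
    expected = (w1_count * w2_count) / corpus_size   # count expected by chance
    mi = math.log2(bigram_count / expected)          # mutual information
    t = (bigram_count - expected) / math.sqrt(bigram_count)  # t-score
    return mi, t
```

High mutual information favors rare but strongly associated pairs, while a high t-score picks out frequent collocations, which is why collocation studies typically report both.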
Relation Between Harappan And Brahmi Scripts, Subhajit Kumar Ganguly
Around 45-odd signs out of the total number of Harappan signs found make up almost 100 percent of the inscriptions, in some form or other, as said earlier. Out of these 45 signs, around 40 are readily distinguishable. These form an almost exclusive and unique set. The primary signs are seen to have many variants, as in Brahmi. Many of these provide us with quite a vivid picture of their evolution, depending upon the factors of time, place and usefulness. Even minor adjustments in such signs, depending upon these factors, are noteworthy. Many of the signs in this list …
Maximizing Classification Accuracy In Native Language Identification, Scott Jarvis, Yves Bestgen, Steve Pepper
This paper reports our contribution to the 2013 NLI Shared Task. The purpose of the task was to train a machine-learning system to identify the native-language affiliations of 1,100 texts written in English by nonnative speakers as part of a high-stakes test of general academic English proficiency. We trained our system on the new TOEFL11 corpus, which includes 11,000 essays written by nonnative speakers from 11 native-language backgrounds. Our final system used an SVM classifier with over 400,000 unique features consisting of lexical and POS n-grams occurring in at least two texts in the training set. Our system …
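The feature-selection criterion described above (keeping n-grams that occur in at least two training texts) can be sketched as follows; tokenisation and the POS n-gram side are omitted, and the function names are invented for illustration.

```python
from collections import Counter
from itertools import chain

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined as strings."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_feature_set(texts, max_n=2, min_df=2):
    """Keep word n-grams occurring in at least `min_df` distinct texts,
    mirroring the selection criterion in the abstract. POS n-grams would
    be produced the same way from tagged text."""
    df = Counter()
    for tokens in texts:
        feats = set(chain.from_iterable(
            ngrams(tokens, n) for n in range(1, max_n + 1)))
        df.update(feats)  # document frequency: one count per text
    return {f for f, c in df.items() if c >= min_df}
```

Counting document frequency (one count per text, via the `set`) rather than raw frequency is what prunes one-off n-grams that an SVM could otherwise memorise.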
Evaluation Automatique De Textes Et Cohésion Lexicale, Yves Bestgen
(Article in French). Automatic essay grading is currently experiencing a growing popularity because of its importance in the field of education and, particularly, in foreign language learning. While several efficient systems have been developed over the last fifteen years, almost none of them take the discourse level into account. Recently, a few studies proposed to fill this gap by means of automatic indexes of lexical cohesion obtained from Latent Semantic Analysis, but the results were disappointing. Based on a well-known model of writing expertise, the present study proposes a new index of cohesion derived from work on the thematic segmentation …
What's In A Letter?, Aaron J. Schein
Sentiment analysis is a burgeoning field in natural language processing used to extract and categorize opinion in evaluative documents. We look at recommendation letters, which pose unique challenges to standard sentiment analysis systems. Our dataset is eighteen letters from applications to UMass Worcester Memorial Medical Center’s residency program in Obstetrics and Gynecology. Given a small dataset, we develop a method intended for use by domain experts to systematically explore their intuitions about the topical make-up of documents on which they make critical decisions. By leveraging WordNet and the WordNet Propagation algorithm, the method allows a user to develop topic seed …
Using Textual Features To Predict Popular Content On Digg, Paul H. Miller
Over the past few years, collaborative rating sites, such as Netflix, Digg and Stumble, have become increasingly prevalent sites for users to find trending content. I used various data mining techniques to study Digg, a social news site, to examine the influence of content on popularity. What influence does content have on popularity, and what influence does content have on users’ decisions? Overwhelmingly, prior studies have consistently shown that predicting popularity based on content is difficult and maybe even inherently impossible. The same submission can have multiple outcomes and content neither determines popularity, nor individual user decisions. My results show …
The Low Entropy Conjecture: The Challenges Of Modern Irish Nominal Declension, Robert Malouf, Farrell Ackerman
No abstract provided.
Computational Style Processing, Foaad Khosmood
Our main thesis is that computational processing of natural language styles can be accomplished using corpus analysis methods and language transformation rules. We demonstrate this first by statistically modeling natural language styles, and second by developing tools that carry out style processing, and finally by running experiments using the tools and evaluating the results. Specifically, we present a model for style in natural languages, and demonstrate style processing in three ways: Our system analyzes styles in quantifiable terms according to our model (analysis), associates documents based on stylistic similarity to known corpora (classification) and manipulates texts to match a desired …
Prosodylab-Aligner: A Tool For Forced Alignment Of Laboratory Speech, Kyle Gorman, Jonathan Howell, Michael Wagner
Distribution Of Complexities In The Vai Script, Andrij Rovenchak, Ján Mačutek
Charles L. Riley
Automated Diagnostic Writing Tests: Why? How?, Elena Cotos, Nick Pendar
Diagnostic language assessment can greatly benefit from a collaborative union of computer-assisted language testing (CALT) and natural language processing (NLP). Currently, most CALT applications mainly allow for inferences about L2 proficiency based on learners’ recognition and comprehension of linguistic input and hardly concern language production (Holland, Maisano, Alderks, & Martin, 1993). NLP is now at a stage where it can be used or adapted for diagnostic testing of learner production skills. This paper explores the viability of NLP techniques for the diagnosis of L2 writing by analyzing the state of the art in current diagnostic language testing, reviewing the existing …
Automatic Identification Of Discourse Moves In Scientific Article Introductions, Elena Cotos, Nick Pendar
This paper reports on the first stage of building an educational tool for international graduate students to improve their academic writing skills. Taking a text-categorization approach, we experimented with several models to automatically classify sentences in research article introductions into one of three rhetorical moves. The paper begins by situating the project within the larger framework of intelligent computer-assisted language learning. It then presents the details of the study with very encouraging results. The paper then concludes by commenting on how the system may be improved and how the project is intended to be pursued and evaluated.
The Variable Elision Of Unstressed Vowels In European Portuguese: A Case Study, David James Silva