Open Access. Powered by Scholars. Published by Universities.®

Digital Commons Network

Linguistics · City University of New York (CUNY) · Theses/Dissertations · Machine learning

Full-Text Articles in Entire DC Network (Articles 1 - 3 of 3)

A Machine Learning Approach To Text-Based Sarcasm Detection, Lara I. Novic Jun 2022

Dissertations, Theses, and Capstone Projects

Sarcasm and indirect language are commonplace for humans to produce and recognize but difficult for machines to detect. While artificial intelligence can accurately analyze sentiment and emotion in speech and text, it may struggle with insincere and sardonic content, although it is possible to train a machine to identify uttered and written sarcasm. This paper aims to detect sarcasm using logistic regression and a support vector machine (SVM) and compare their results to a baseline.

The models are trained on a Kaggle dataset of headlines from the satirical news website The Onion and the serious news website HuffPost (formerly …
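The abstract describes a standard text-classification setup. As a point of reference only, here is a minimal sketch of that kind of pipeline in scikit-learn, assuming a CSV of headlines with a binary sarcasm label; the file and column names are hypothetical, and this is not the author's code.

```python
# Minimal sketch (assumptions: scikit-learn, a CSV with hypothetical
# "headline" and "is_sarcastic" columns); not the thesis's actual code.
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

df = pd.read_csv("headlines.csv")  # hypothetical file name
X_train, X_test, y_train, y_test = train_test_split(
    df["headline"], df["is_sarcastic"], test_size=0.2, random_state=0
)

# Majority-class baseline versus the two classifiers named in the abstract.
models = {
    "baseline": make_pipeline(TfidfVectorizer(), DummyClassifier(strategy="most_frequent")),
    "logistic regression": make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000)),
    "linear SVM": make_pipeline(TfidfVectorizer(), LinearSVC()),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))  # accuracy on held-out headlines
```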


Label Imputation For Homograph Disambiguation: Theoretical And Practical Approaches, Jennifer M. Seale Sep 2021

Dissertations, Theses, and Capstone Projects

This dissertation presents the first implementation of label imputation for the task of homograph disambiguation using 1) transcribed audio and 2) parallel, or translated, corpora. For label imputation from parallel corpora, a hypothesis of interlingual alignment between homograph pronunciations and text word forms is developed and formalized. Both the audio-based and the parallel-corpus label imputation techniques are tested empirically in experiments that compare homograph disambiguation model performance using 1) hand-labeled training data and 2) hand-labeled training data augmented with label-imputed data. Regularized multinomial logistic regression and pre-trained ALBERT, BERT, and XLNet language models fine-tuned as token classifiers are developed for homograph …
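The last sentence describes fine-tuning pre-trained language models as token classifiers. A minimal sketch of that setup with the Hugging Face transformers library follows; the homograph, its pronunciation labels, and the token index are illustrative rather than taken from the dissertation, and a real model would first be fine-tuned on the hand-labeled and label-imputed data.

```python
# Minimal sketch (assumption: Hugging Face transformers; labels, sentence,
# and homograph position are illustrative, not from the dissertation).
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

labels = ["bass_fish", "bass_music"]  # hypothetical pronunciation classes
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForTokenClassification.from_pretrained(
    "albert-base-v2", num_labels=len(labels)
)  # the classification head would be fine-tuned before real use

sentence = "She played the bass line softly."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, num_labels)

# Score only the token position of the homograph "bass".
target_idx = 4  # hypothetical subword index of "bass" after tokenization
print(labels[logits[0, target_idx].argmax().item()])
```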


Does The Word "Chien" Bark? Representation Learning In Neural Machine Translation Encoders, Emily Campbell Sep 2020

Dissertations, Theses, and Capstone Projects

This thesis presents experiments that use representation learning to explore how neural networks learn. Neural networks that take text as input build internal representations of that text during training. Recent work has found that these representations can be used to perform other downstream linguistic tasks, such as part-of-speech (POS) tagging, which demonstrates that the networks are learning linguistic information and storing it in the representations. We focus on the representations created by neural machine translation (NMT) models and whether they can be used for POS tagging. We train five NMT models, including an auto-encoder. We extract the …
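A hedged sketch of the probing idea the abstract describes: take per-token hidden states from an NMT encoder and train a simple classifier to predict POS tags from them. Here a pretrained Marian English-to-French encoder stands in for the five models trained in the thesis, and the sentence and tags are illustrative.

```python
# Minimal sketch of representation probing (assumption: a pretrained Marian
# NMT encoder stands in for the thesis's own models; tags are illustrative).
import torch
from sklearn.linear_model import LogisticRegression
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-fr"
tokenizer = MarianTokenizer.from_pretrained(name)
encoder = MarianMTModel.from_pretrained(name).get_encoder()

sentence = "the dog barks"
pos_tags = ["DET", "NOUN", "VERB"]  # illustrative gold POS labels

inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state[0]  # (seq_len, d_model)

# Probe: a linear classifier maps each word's encoder vector to its POS tag.
# (A real probe would align subword pieces to words and use many sentences.)
X = hidden[: len(pos_tags)].numpy()
clf = LogisticRegression(max_iter=1000).fit(X, pos_tags)
print(clf.predict(X))
```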