American Sign Language Commons

Articles 1 - 13 of 13

Full-Text Articles in American Sign Language

Exploring Strategies For Modeling Sign Language Phonology, Lee Kezar, Riley Carlin, Tejas Srinivasan, Zed Sehyr, Naomi Caselli, Jesse Thomason Oct 2023

Communication Sciences and Disorders Faculty Articles and Research

Like speech, signs are composed of discrete, recombinable features called phonemes. Prior work shows that models that can recognize phonemes are better at sign recognition, motivating deeper exploration into strategies for modeling sign language phonemes. In this work, we train graph convolutional networks to recognize the sixteen phoneme “types” found in ASL-LEX 2.0. Specifically, we explore how learning strategies like multi-task and curriculum learning can leverage mutually useful information between phoneme types to facilitate better modeling of sign language phonemes. Results on the Sem-Lex Benchmark show that curriculum learning yields an average accuracy of 87% across all phoneme types, outperforming …
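
The abstract describes multi-task learning over the ASL-LEX phoneme types. A minimal sketch of that setup follows (curriculum learning would instead introduce these tasks in stages over training); the plain MLP encoder, the phoneme-type names, and the class counts are illustrative stand-ins, not the authors' code.

```python
# Sketch: multi-task prediction of phoneme types from pooled pose features.
# A shared encoder feeds one classification head per phoneme type, and the
# per-type cross-entropy losses are summed.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical subset of the sixteen ASL-LEX 2.0 phoneme types (name: classes).
PHONEME_TYPES = {"handshape": 49, "major_location": 5, "movement": 10}

class MultiTaskPhonemeModel(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 256):
        super().__init__()
        # Plain MLP stand-in for the paper's graph convolution encoder.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n) for name, n in PHONEME_TYPES.items()}
        )

    def forward(self, x):
        h = self.encoder(x)
        return {name: head(h) for name, head in self.heads.items()}

model = MultiTaskPhonemeModel(in_dim=128)
x = torch.randn(8, 128)  # batch of pooled pose features (placeholder)
targets = {n: torch.randint(0, c, (8,)) for n, c in PHONEME_TYPES.items()}
logits = model(x)
loss = sum(F.cross_entropy(logits[n], targets[n]) for n in PHONEME_TYPES)
loss.backward()
```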


Asymmetric Event-Related Potential Priming Effects Between English Letters And American Sign Language Fingerspelling Fonts, Zed Sevcikova Sehyr, Katherine J. Midgley, Karen Emmorey, Phillip J. Holcomb Jun 2023

Communication Sciences and Disorders Faculty Articles and Research

Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL–English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remain unexplored. We examined the temporal dynamics of processing single English letters and ASL fingerspelling fonts in an unmasked priming paradigm in which centrally presented targets appeared for 200 ms, preceded by 100 ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined …


Improving Sign Recognition With Phonology, Lee Kezar, Jesse Thomason, Zed Sevcikova Sehyr May 2023

Communication Sciences and Disorders Faculty Articles and Research

We use insights from research on American Sign Language (ASL) phonology to train models for isolated sign language recognition (ISLR), a step towards automatic sign language understanding. Our key insight is to explicitly recognize the role of phonology in sign production, achieving more accurate ISLR than existing work that does not consider sign language phonology. We train ISLR models that take in pose estimations of a signer producing a single sign and predict not only the sign but also its phonological characteristics, such as the handshape. These auxiliary predictions lead to a nearly 9% absolute gain in sign recognition …
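
The auxiliary-prediction objective can be pictured as a weighted sum of a sign-classification loss and phonological-feature losses. A minimal sketch, assuming a single handshape head; the label counts, weight, and names are placeholders rather than the paper's specification.

```python
# Sketch: ISLR loss with an auxiliary phonological prediction (handshape).
import torch
import torch.nn.functional as F

def islr_loss(sign_logits, handshape_logits, sign_y, handshape_y, aux_weight=0.5):
    """Cross-entropy on the sign gloss plus a weighted phonological term."""
    return (F.cross_entropy(sign_logits, sign_y)
            + aux_weight * F.cross_entropy(handshape_logits, handshape_y))

# Dummy batch; 2,731 signs and 49 handshapes are ASL-LEX-scale placeholders.
loss = islr_loss(torch.randn(8, 2731), torch.randn(8, 49),
                 torch.randint(0, 2731, (8,)), torch.randint(0, 49, (8,)))
```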


An Interactive Visual Database For American Sign Language Reveals How Signs Are Organized In The Mind, Zed Sevcikova Sehyr, Ariel Goldberg, Karen Emmorey, Naomi Caselli Apr 2021

Communication Sciences and Disorders Faculty Articles and Research

"We are four researchers who study psycholinguistics, linguistics, neuroscience and deaf education. Our team of deaf and hearing scientists worked with a group of software engineers to create the ASL-LEX database that anyone can use for free. We cataloged information on nearly 3,000 signs and built a visual, searchable and interactive database that allows scientists and linguists to work with ASL in entirely new ways."


The ASL-LEX 2.0 Project: A Database Of Lexical And Phonological Properties For 2,723 Signs In American Sign Language, Zed Sevcikova Sehyr, Naomi Caselli, Ariel M. Cohen-Goldberg, Karen Emmorey Feb 2021

Communication Sciences and Disorders Faculty Articles and Research

ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0) that contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency (“guessability”) ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among lexical and phonological properties of signs. Correlation …
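
Because ASL-LEX 2.0 is released as tabular data, relationships among sign properties can be explored directly. A hypothetical sketch with pandas; the file name and column names are assumptions, so consult the released codebook for the actual fields.

```python
# Hypothetical exploration of ASL-LEX 2.0 sign properties with pandas.
import pandas as pd

lex = pd.read_csv("asl_lex_2.csv")  # placeholder path for the database export

# Correlate frequency, iconicity, and neighborhood density (assumed names).
cols = ["SignFrequency", "Iconicity", "NeighborhoodDensity"]
print(lex[cols].corr(method="spearman"))
```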


Cross-Linguistic Metaphor Priming In ASL-English Bilinguals: Effects Of The Double Mapping Constraint, Franziska Schaller, Brittany Lee, Zed Sevcikova Sehyr, Lucinda O'Grady Farnady, Karen Emmorey Oct 2020

Communication Sciences and Disorders Faculty Articles and Research

Meir’s (2010) Double Mapping Constraint (DMC) states that the use of iconic signs in metaphors is restricted to signs that preserve the structural correspondence between the articulators and the concrete source domain and between the concrete and metaphorical domains. We investigated ASL signers’ comprehension of English metaphors whose translations complied with the DMC (“Communication collapsed during the meeting”) or violated the DMC (“The acid ate the metal”). Metaphors were preceded by the ASL translation of the English verb, an unrelated sign, or a still video. Participants made sensibility judgments. Response times (RTs) were faster for DMC-Compliant sentences …
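
The RT contrast reported here is a paired comparison across conditions. Purely for illustration, a sketch with scipy on made-up per-participant mean RTs:

```python
# Illustrative paired comparison of response times (ms) for DMC-compliant
# vs. DMC-violating metaphors; the values are placeholders, not results.
from scipy import stats

rt_compliant = [812, 845, 790, 880, 901]
rt_violating = [876, 910, 842, 951, 960]
t, p = stats.ttest_rel(rt_compliant, rt_violating)
print(f"t = {t:.2f}, p = {p:.3f}")
```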


Unique N170 Signatures To Words And Faces In Deaf ASL Signers Reflect Experience-Specific Adaptations During Early Visual Processing, Zed Sevcikova Sehyr, Katherine J. Midgley, Phillip J. Holcomb, Karen Emmorey, David C. Plaut, Marlene Behrmann Mar 2020

Communication Sciences and Disorders Faculty Articles and Research

Previous studies with deaf adults reported reduced N170 waveform asymmetry to visual words, a finding attributed to reduced phonological mapping in left-hemisphere temporal regions compared to hearing adults. An open question is whether this pattern indeed results from reduced phonological processing or from general neurobiological adaptations in the visual processing of deaf individuals. Deaf ASL signers and hearing nonsigners performed a same-different discrimination task with visually presented words, faces, or cars, while scalp EEG time-locked to the onset of the first item in each pair was recorded. For word recognition, the typical left-lateralized N170 in hearing participants and reduced left-sided asymmetry …


A Data-Driven Approach To The Semantics Of Iconicity In American Sign Language And English, Bill Thompson, Marcus Perlman, Gary Lupyan, Zed Sevcikova Sehyr, Karen Emmorey Mar 2020

Communication Sciences and Disorders Faculty Articles and Research

A growing body of research shows that both signed and spoken languages display regular patterns of iconicity in their vocabularies. We compared iconicity in the lexicons of American Sign Language (ASL) and English by combining previously collected ratings of ASL signs (Caselli, Sevcikova Sehyr, Cohen-Goldberg, & Emmorey, 2017) and English words (Winter, Perlman, Perry, & Lupyan, 2017) with data-driven semantic vectors derived from English. Our analyses show that models of spoken language lexical semantics drawn from large text corpora can be useful for predicting the iconicity of signs as well as words. Compared to English, ASL has …
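
The general recipe, predicting human iconicity ratings from distributional semantic vectors, can be sketched as a cross-validated regression; the input files, dimensionality, and model choice below are assumptions, not the authors' pipeline.

```python
# Sketch: predict iconicity ratings from word embeddings with ridge regression.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X = np.load("embeddings.npy")  # (n_words, 300) semantic vectors (placeholder)
y = np.load("iconicity.npy")   # (n_words,) mean human ratings (placeholder)

scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=10, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```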


The Perceived Mapping Between Form And Meaning In American Sign Language Depends On Linguistic Knowledge And Task: Evidence From Iconicity And Transparency Judgments, Zed Sevcikova Sehyr, Karen Emmorey Jul 2019

Communication Sciences and Disorders Faculty Articles and Research

Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning based on the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs and the relationship between iconicity, transparency (correctly guessed signs), ‘perceived transparency’ (transparency ratings of the guesses), and ‘semantic potential’ (the diversity (H index) of guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing non-signers for 991 signs from the ASL-LEX database. Signers and non-signers’ ratings were highly correlated; however, …
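
The “semantic potential” measure is the Shannon diversity (H) of the guesses a sign elicits. As a worked example of that formula:

```python
# Shannon diversity (H) over the distribution of guesses for one sign:
# H = -sum(p_i * log2(p_i)); higher H means more varied guesses.
from collections import Counter
from math import log2

def shannon_h(guesses):
    counts = Counter(guesses)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(shannon_h(["bird", "bird", "duck", "chicken"]))  # 1.5 bits
```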


Second Language Acquisition Of American Sign Language Influences Co-Speech Gesture Production, Jill Weisberg, Shannon Casey, Zed Sevcikova Sehyr, Karen Emmorey May 2019

Communication Sciences and Disorders Faculty Articles and Research

Previous work indicates that 1) adults with native sign language experience produce more manual co-speech gestures than monolingual non-signers, and 2) one year of ASL instruction increases gesture production in adults, but not enough to differentiate them from non-signers. To elucidate these effects, we asked early ASL–English bilinguals, fluent late second language (L2) signers (≥ 10 years of experience signing), and monolingual non-signers to retell a story depicted in cartoon clips to a monolingual partner. Early and L2 signers produced manual gestures at higher rates compared to non-signers, particularly iconic gestures, and used a greater variety of handshapes. These results …


Referring Strategies In American Sign Language And English (With Co-Speech Gesture): The Role Of Modality In Referring To Non-Nameable Objects, Zed Sevcikova Sehyr, Brenda Nicodemus, Jennifer Petrich, Karen Emmorey Apr 2018

Communication Sciences and Disorders Faculty Articles and Research

American Sign Language (ASL) and English differ in the linguistic resources available to express visual–spatial information. In a referential communication task, we examined the effect of language modality on the creation and mutual acceptance of reference to non-nameable figures. In both languages, description times decreased over iterations, and references to the figures’ geometric properties (“shape-based reference”) declined over time in favor of expressions describing the figures’ resemblance to nameable objects (“analogy-based reference”). ASL signers maintained a preference for shape-based reference until the final (sixth) round, while English speakers transitioned toward analogy-based reference by Round 3. Analogy-based references were more time efficient …


The N170 ERP Component Differs In Laterality, Distribution, And Association With Continuous Reading Measures For Deaf And Hearing Readers, Karen Emmorey, Katherine J. Midgley, Casey B. Kohen, Zed Sevcikova Sehyr, Phillip J. Holcomb Oct 2017

Communication Sciences and Disorders Faculty Articles and Research

The temporo-occipitally distributed N170 ERP component is hypothesized to reflect print-tuning in skilled readers. This study investigated whether skilled deaf and hearing readers (matched on reading ability, but not phonological awareness) exhibit similar N170 patterns, given their distinct experiences learning to read. Thirty-two deaf and 32 hearing adults viewed words and symbol strings in a familiarity judgment task. In the N170 epoch (120–240 ms), hearing readers produced greater negativity for words than symbols at left hemisphere (LH) temporo-parietal and occipital sites, while deaf readers showed this asymmetry only at occipital sites. Linear mixed effects regression was used to examine the …
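
A linear mixed-effects model of this kind is commonly fit with statsmodels, with subjects as random intercepts; the data file and column names here are placeholders for however the N170 amplitudes were summarized.

```python
# Sketch: mixed-effects model of N170 amplitude by stimulus type and group,
# with a random intercept per subject. Column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

erp = pd.read_csv("n170_amplitudes.csv")  # one row per subject x condition x site
model = smf.mixedlm("amplitude ~ stimulus * group", erp, groups=erp["subject"])
print(model.fit().summary())
```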


Implicit Co-Activation Of American Sign Language In Deaf Readers: An ERP Study, Gabriela Meade, Katherine J. Midgley, Zed Sevcikova Sehyr, Phillip J. Holcomb, Karen Emmorey Apr 2017

Communication Sciences and Disorders Faculty Articles and Research

In an implicit phonological priming paradigm, deaf bimodal bilinguals made semantic relatedness decisions for pairs of English words. Half of the semantically unrelated pairs had phonologically related translations in American Sign Language (ASL). As in previous studies with unimodal bilinguals, targets in pairs with phonologically related translations elicited smaller negativities than targets in pairs with phonologically unrelated translations within the N400 window. This suggests that the same lexicosemantic mechanism underlies implicit co-activation of a non-target language, irrespective of language modality. In contrast to unimodal bilingual studies that find no behavioral effects, we observed phonological interference, indicating that bimodal bilinguals may …
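
N400 priming effects like these are typically quantified as the mean amplitude in a window around 300–500 ms post-target; a sketch on placeholder epoch arrays (the sampling rate, window, and shapes are assumptions):

```python
# Sketch: mean amplitude in the N400 window for two priming conditions.
import numpy as np

sfreq = 500                              # Hz (assumed sampling rate)
times = np.arange(-0.1, 0.8, 1 / sfreq)  # epoch time axis in seconds
win = (times >= 0.3) & (times <= 0.5)    # N400 window

related = np.random.randn(40, times.size)    # trials x samples (placeholder)
unrelated = np.random.randn(40, times.size)
print(related[:, win].mean(), unrelated[:, win].mean())
```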