Open Access. Powered by Scholars. Published by Universities.®

Digital Commons Network



Automation--Human factors

Articles 1 - 3 of 3

Full-Text Articles in Entire DC Network

Trust In Automation And The Consequences Of Reliance On Monitoring Checks, Jenna E. Cotter Jan 2023


Theses

As automation becomes increasingly prevalent across society, it is crucial to better understand the benefits and consequences of the solutions implemented in automated systems to support user interaction and overall system design. While previous literature has contributed valuable knowledge to the understanding of automation and trust, there is still uncertainty about how individuals perform and respond to automated systems that provide monitoring checks to maintain users’ attention. To address this gap, this study assessed the effects of varied reliability and monitoring checks on individuals interacting with an automated system. This study is among the first to examine the consequences …


The Effectiveness Of System-Wide Trust Repair Strategies, J. Andrew Atchley Jan 2023


Theses

This study considered whether contagion effects for trust extend to an interactive environment comprising tasks that are visually incongruent and perform dissimilar functions, and whether trust repair strategies operate on a system-wide basis. Participants supervised three automated tasks over three blocks. The animal task decreased in reliability in block 2, and a trust repair strategy was issued prior to block 3. The findings indicated a decrease in trust for the animal task only. The trust repair strategies were also ineffective for restoring trust. This study supported the idea that contagion effects do not extend to …


Learning To Calibrate Trust Through Explainability, Amber F. Chesser Jan 2023


Theses

Automation has been implemented in a range of machinery. Providing supplementary information about system processes (i.e., explainability) could mitigate over-reliance and enhance operator awareness of potential anomalies. Trust plays a critical role in human-automation collaboration: over-trust can lead to misuse or over-reliance, while under-trust can result in disuse, or the failure to engage automation when it could enhance performance. Dynamic learned trust and situational trust fluctuate during an operator's interaction with a system and can be influenced by the rate of system failures and by workload, respectively. Design features like explainability can impact perceived usefulness and help users identify …