Open Access. Powered by Scholars. Published by Universities.®

Research Collection School Of Computing and Information Systems

2023

Explainable AI

Articles 1 - 2 of 2

Full-Text Articles in Physical Sciences and Mathematics

Understanding The Effect Of Counterfactual Explanations On Trust And Reliance On Ai For Human-Ai Collaborative Clinical Decision Making, Min Hun Lee, Chong Jun Chew Oct 2023

Artificial intelligence (AI) is increasingly being considered to assist human decision-making in high-stakes domains (e.g., health). However, researchers have noted that humans can over-rely on incorrect suggestions from an AI model instead of achieving complementary human-AI performance. In this work, we utilized salient feature explanations along with what-if, counterfactual explanations to encourage humans to review AI suggestions more analytically and reduce overreliance on AI, and we explored the effect of these explanations on trust and reliance on AI during clinical decision-making. We conducted an experiment with seven therapists and ten laypersons on the task of assessing post-stroke survivors' quality …
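The what-if, counterfactual explanations mentioned above can be illustrated with a minimal sketch. The model, features, and step size below are hypothetical stand-ins, not the paper's actual clinical model: a greedy search nudges one feature at a time until the model's prediction flips, and the accumulated nudges form the counterfactual ("what would have to change for the AI to decide differently").

```python
import numpy as np

# Hypothetical stand-in for a trained clinical model: it flags a
# positive outcome when a weighted feature score exceeds 0.5.
WEIGHTS = np.array([0.6, 0.3, 0.1])

def predict(x):
    return int(x @ WEIGHTS > 0.5)

def counterfactual(x, step=0.05, max_iter=500):
    """Greedy what-if search: repeatedly apply the single-feature
    nudge that moves the score closest to the decision boundary on
    the target side, stopping as soon as the label flips."""
    target = 1 - predict(x)
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if predict(cf) == target:
            return cf
        candidates = []
        for i in range(len(cf)):
            for d in (-step, step):
                trial = cf.copy()
                trial[i] += d
                score = trial @ WEIGHTS
                # signed gap: larger is closer to (or past) the
                # boundary on the side of the target label
                gap = (score - 0.5) if target == 1 else (0.5 - score)
                candidates.append((gap, trial))
        cf = max(candidates, key=lambda c: c[0])[1]
    return None  # no counterfactual found within the budget

x = np.array([0.4, 0.5, 0.2])  # model score 0.41 -> predicted class 0
cf = counterfactual(x)         # minimally nudged input that flips to 1
```

The greedy search concentrates its nudges on the most heavily weighted feature, which is exactly the property that makes such explanations reviewable: the clinician sees which few inputs, changed slightly, would alter the AI's suggestion.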


Exploring A Gradient-Based Explainable Ai Technique For Time-Series Data: A Case Study Of Assessing Stroke Rehabilitation Exercises, Min Hun Lee, Yi Jing Choy May 2023

Explainable artificial intelligence (AI) techniques are increasingly being explored to provide insights into why AI and machine learning (ML) models produce a certain outcome in various applications. However, there has been limited exploration of explainable AI techniques on time-series data, especially in the healthcare context. In this paper, we describe a threshold-based method that utilizes a weakly supervised model and a gradient-based explainable AI technique (i.e., a saliency map) and explore its feasibility for identifying salient frames of time-series data. Using the dataset from 15 post-stroke survivors performing three upper-limb exercises and labels on whether a compensatory motion is observed or …
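The gradient-based, threshold-based approach the abstract outlines can be sketched roughly as follows. The logistic model, random weights, and quantile threshold here are illustrative assumptions, not the paper's weakly supervised model: per-frame saliency is the aggregated absolute gradient of the model output with respect to each frame of the time-series, and frames whose saliency exceeds a threshold are flagged as salient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained model: a logistic score over a
# (T, F) time-series of T frames with F motion features per frame.
T, F = 50, 3
W = rng.normal(size=(T, F)) * 0.1

def forward(x):
    """x: (T, F) time-series -> scalar probability."""
    z = np.sum(x * W)
    return 1.0 / (1.0 + np.exp(-z))

def saliency(x):
    """Gradient of the output w.r.t. every input value; for this
    logistic model the gradient is p * (1 - p) * W. Per-frame
    saliency aggregates the absolute feature gradients."""
    p = forward(x)
    grad = p * (1.0 - p) * W          # (T, F) analytic gradient
    return np.abs(grad).sum(axis=1)   # (T,) one score per frame

def salient_frames(x, quantile=0.9):
    """Threshold-based selection: keep frames whose saliency is at
    or above the chosen quantile of the per-frame scores."""
    s = saliency(x)
    return np.where(s >= np.quantile(s, quantile))[0]

x = rng.normal(size=(T, F))  # a synthetic exercise recording
frames = salient_frames(x)   # indices of the most salient frames
```

In practice the gradient would come from automatic differentiation of the trained network rather than a closed form, but the pipeline is the same: forward pass, gradient w.r.t. the input, per-frame aggregation, then a threshold to surface the frames (e.g., moments of compensatory motion) the model relied on.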