
Physical Sciences and Mathematics Commons


Articles 1 - 13 of 13

Full-Text Articles in Physical Sciences and Mathematics

Explaining Deep Learning Models For Tabular Data Using Layer-Wise Relevance Propagation, Ihsan Ullah, Andre Rios, Vaibhav Gala, Susan McKeever Dec 2021


Articles

Trust and credibility in machine learning models are bolstered by the ability of a model to explain its decisions. While explainability of deep learning models is a well-known challenge, a further challenge is clarity of the explanation itself for relevant stakeholders of the model. Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images. We present the novel application of LRP with tabular datasets containing mixed data (categorical and numerical) using a deep neural network (1D-CNN), for Credit Card Fraud detection and Telecom Customer Churn prediction use …
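
To make the propagation step concrete, the sketch below implements the epsilon rule commonly used in LRP for a single dense layer. It is a minimal NumPy illustration with assumed shapes, not the authors' 1D-CNN implementation.

    # Epsilon-rule LRP for one dense layer (illustrative sketch).
    import numpy as np

    def lrp_epsilon(a, w, relevance_out, eps=1e-6):
        """Redistribute relevance from a layer's outputs to its inputs.

        a: (n_in,) activations entering the layer
        w: (n_in, n_out) layer weights
        relevance_out: (n_out,) relevance assigned to the layer's outputs
        """
        z = a @ w                          # pre-activations, shape (n_out,)
        z = z + eps * np.sign(z)           # stabiliser: avoids division by ~0
        s = relevance_out / z              # relevance per unit of pre-activation
        return a * (w @ s)                 # relevance per input feature

    # Toy usage: 4 input features, 3 output units.
    rng = np.random.default_rng(0)
    a = rng.random(4)
    w = rng.standard_normal((4, 3))
    r_out = np.array([0.2, 0.5, 0.3])
    print(lrp_epsilon(a, w, r_out))        # per-feature relevance scores

Applied layer by layer back to the input, per-feature scores of this kind are what a tabular "heat map" visualises.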


Notions Of Explainability And Evaluation Approaches For Explainable Artificial Intelligence, Giulia Vilone, Luca Longo Dec 2021


Articles

Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested, coupled with several studies attempting to define the concept of explainability and its evaluation. This systematic review contributes to the body of knowledge by clustering all the scientific studies via a hierarchical system that classifies theories and notions related to the concept of explainability …


A Quantitative Evaluation Of Global, Rule-Based Explanations Of Post-Hoc, Model Agnostic Methods, Giulia Vilone, Luca Longo Nov 2021


Articles

Understanding the inferences of data-driven, machine-learned models can be seen as a process that discloses the relationships between their input and output. These relationships can be represented as a set of inference rules. However, the models usually do not make these rules explicit to their end-users, who consequently perceive them as black boxes and might not trust their predictions. Therefore, scholars have proposed several methods for extracting rules from data-driven machine-learned models to explain their logic. However, limited work exists on the evaluation and comparison of these methods. This study proposes a novel comparative approach to evaluate and compare the …
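
For intuition, two standard measures in this literature are coverage (how many inputs the rule set fires on) and fidelity (how often the rules agree with the model they were extracted from). The sketch below computes both for a toy rule set against a stand-in model; it is illustrative, not the paper's evaluation pipeline.

    # Coverage and fidelity of an extracted rule set (illustrative sketch).
    import numpy as np

    def evaluate_rules(rules, model_predict, X):
        """rules: list of (condition, label), where condition maps a row to bool."""
        y_model = model_predict(X)
        covered, agree = 0, 0
        for x, y in zip(X, y_model):
            fired = [label for cond, label in rules if cond(x)]
            if fired:
                covered += 1
                agree += int(fired[0] == y)    # first matching rule wins
        coverage = covered / len(X)
        fidelity = agree / covered if covered else 0.0
        return coverage, fidelity

    # Toy usage: one rule on a 2-feature dataset, against a threshold "model".
    X = np.array([[0.2, 1.0], [0.8, 0.1], [0.9, 0.9]])
    model = lambda X: (X[:, 0] > 0.5).astype(int)
    rules = [(lambda x: x[0] > 0.6, 1)]
    print(evaluate_rules(rules, model, X))     # -> (0.666..., 1.0)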


Exploring The Personality Of Virtual Tutors In Conversational Foreign Language Practice, Johanna Dobbriner, Cathy Ennis, Robert J. Ross Sep 2021


Conference papers

Fluid interaction between virtual agents and humans requires the understanding of many issues of conversational pragmatics. One such issue is the interaction between communication strategy and personality. As a step towards developing models of personality driven pragmatics policies, in this paper, we present our initial experiment to explore differences in user interaction with two contrasting avatar personalities. Each user saw a single personality in a video-call setting and gave feedback on the interaction. Our expectations, that a more extroverted outgoing positive personality would be a more successful tutor, were only partially confirmed. While this personality did induce longer conversations in …


Classification Of Explainable Artificial Intelligence Methods Through Their Output Formats, Giulia Vilone, Luca Longo Aug 2021


Articles

Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension—the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”; “explainable machine learning”; and “interpretable machine learning”. …


Multi-Modal Self-Supervised Representation Learning For Earth Observation, Pallavi Jain, Bianca Schoen Phelan, Robert J. Ross Jul 2021


Conference papers

Self-Supervised Learning (SSL) has reduced the performance gap between supervised and unsupervised learning, due to its ability to learn invariant representations. This is a boon to domains like Earth Observation (EO), where labelled data is scarce but unlabelled data is freely available. While transfer learning from generic RGB pre-trained models is still commonplace in EO, we argue that a good EO domain-specific pre-trained model is essential for downstream tasks with limited labelled data. Hence, we explored the applicability of SSL with multi-modal satellite imagery for downstream tasks. For this we utilised …
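
As a rough illustration of a contrastive SSL objective for paired modalities, the sketch below computes an InfoNCE-style loss that pulls embeddings of the same scene together across two modalities. The modality names, encoder outputs and temperature are assumptions, not the paper's exact setup.

    # InfoNCE-style contrastive loss between two modality embeddings (sketch).
    import torch
    import torch.nn.functional as F

    def info_nce(z_a, z_b, temperature=0.1):
        """z_a, z_b: (batch, dim) embeddings of the same scenes in two modalities."""
        z_a = F.normalize(z_a, dim=1)
        z_b = F.normalize(z_b, dim=1)
        logits = z_a @ z_b.t() / temperature   # pairwise cosine similarities
        targets = torch.arange(z_a.size(0))    # positive pairs lie on the diagonal
        return F.cross_entropy(logits, targets)

    # Toy usage with random stand-ins for, e.g., optical and radar embeddings.
    z_opt, z_sar = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce(z_opt, z_sar).item())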


Flying Free: A Research Overview Of Deep Learning In Drone Navigation Autonomy, Thomas Lee, Susan McKeever, Jane Courtney Jun 2021


Articles

With the rise of Deep Learning approaches in computer vision applications, significant strides have been made towards vehicular autonomy. Research activity in autonomous drone navigation has increased rapidly in the past five years, and drones are moving fast towards the ultimate goal of near-complete autonomy. However, while much work in the area focuses on specific tasks in drone navigation, the contribution to the overall goal of autonomy is often not assessed, and a comprehensive overview is needed. In this work, a taxonomy of drone navigation autonomy is established by mapping the definitions of vehicular autonomy levels, as defined by the …


An Analysis Of The Interpretability Of Neural Networks Trained On Magnetic Resonance Imaging For Stroke Outcome Prediction, Esra Zihni, John D. Kelleher, Bryony McGarry Apr 2021


Conference papers

Applying deep learning models to MRI scans of acute stroke patients to extract features that are indicative of short-term outcome could assist a clinician’s treatment decisions. Deep learning models are usually accurate but are not easily interpretable. Here, we trained a convolutional neural network on ADC maps from hyperacute ischaemic stroke patients for prediction of short-term functional outcome and used an interpretability technique to highlight regions in the ADC maps that were most important in the prediction of a bad outcome. Although highly accurate, the model’s predictions were not based on aspects of the ADC maps related to stroke pathophysiology.
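
As an illustration of the general family of techniques involved, gradient-based saliency highlights the input pixels whose change most affects a class score. The sketch below uses an untrained stand-in CNN; it is not the trained model or the specific interpretability method from the paper.

    # Gradient-based saliency for a CNN classifier (illustrative sketch).
    import torch

    def saliency_map(model, image, target_class):
        """Return |d score / d pixel| for a single input image tensor."""
        image = image.clone().requires_grad_(True)
        score = model(image.unsqueeze(0))[0, target_class]
        score.backward()
        return image.grad.abs()                # high values = influential pixels

    # Toy usage with an untrained CNN standing in for the outcome model.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(1, 4, 3), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(4, 2),                 # two classes: good/bad outcome
    )
    adc_slice = torch.randn(1, 64, 64)         # stand-in for an ADC map slice
    print(saliency_map(model, adc_slice, target_class=1).shape)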


Wider Vision: Enriching Convolutional Neural Networks Via Alignment To External Knowledge Bases, Xuehao Liu, Sarah Jane Delany, Susan McKeever Mar 2021


Conference papers

Deep learning models suffer from opaqueness. For Convolutional Neural Networks (CNNs), current research strategies for explaining models focus on the target classes within the associated training dataset. As a result, the understanding of hidden feature map activations is limited by the discriminative knowledge gleaned during training. The aim of our work is to explain and expand CNN models via the mirroring or alignment of the network to an external knowledge base. This will allow us to give a semantic context or label for each visual feature. Using the resultant aligned embedding space, we can match CNN feature activations to nodes …
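
The matching step can be pictured as a nearest-neighbour search in the shared space: each feature activation vector is scored against knowledge-base node embeddings by cosine similarity. The sketch below is illustrative; the names, dimensions and embeddings are invented.

    # Match CNN feature vectors to knowledge-base nodes by cosine similarity.
    import numpy as np

    def nearest_kb_nodes(feature_vecs, node_embs, node_names, top_k=3):
        """feature_vecs: (n_feat, dim); node_embs: (n_nodes, dim), same space."""
        f = feature_vecs / np.linalg.norm(feature_vecs, axis=1, keepdims=True)
        n = node_embs / np.linalg.norm(node_embs, axis=1, keepdims=True)
        sims = f @ n.T                              # cosine similarity matrix
        idx = np.argsort(-sims, axis=1)[:, :top_k]  # best-matching nodes per feature
        return [[node_names[j] for j in row] for row in idx]

    # Toy usage: 2 feature maps, 4 knowledge-base concepts in a 16-d space.
    rng = np.random.default_rng(1)
    feats, nodes = rng.random((2, 16)), rng.random((4, 16))
    print(nearest_kb_nodes(feats, nodes, ["wheel", "fur", "window", "leaf"]))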


Fairer Evaluation Of Zero Shot Action Recognition In Videos, Kaiqiang Huang, Sarah Jane Delany, Susan McKeever Jan 2021


Conference papers

Zero-shot learning (ZSL) for human action recognition (HAR) aims to recognise video action classes that have never been seen during model training. This is achieved by building mappings between visual and semantic embeddings. These visual embeddings are typically provided via a pre-trained deep neural network (DNN). The premise of ZSL is that the training and testing classes should be disjoint. In the parallel domain of ZSL for image input, the widespread poor evaluation protocol of pre-training on ZSL test classes has been highlighted. This is akin to providing a sneak preview of the evaluation classes. In this work, we investigate …
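
The premise reduces to an invariant that an evaluation protocol can check: the classes used to pre-train the visual embedding must be disjoint from the ZSL test classes. A toy check follows, with invented class names and matching by label string only (real protocols must also catch semantic overlap):

    # Check the ZSL disjointness premise between pre-training and test classes.
    pretrain_classes = {"running", "jumping", "walking"}
    zsl_test_classes = {"fencing", "surfing", "jumping"}

    leaked = pretrain_classes & zsl_test_classes
    if leaked:
        # Overlap means the "unseen" classes were previewed during
        # pre-training, inflating reported zero-shot accuracy.
        print(f"Evaluation protocol violated; leaked classes: {sorted(leaked)}")
    else:
        print("Pre-training and test class sets are disjoint; ZSL premise holds.")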


Virtual Tutor Personality In Computer Assisted Language Learning, Johanna Dobbriner, Cathy Ennis, Robert J. Ross Jan 2021


Conference papers

The use of intelligent virtual agents in language learning has increased in recent years. Studies into several aspects of personalisation aiming to increase user engagement are an ongoing research topic with avatar personality being one such aspect. As a step towards our development of intelligent virtual avatars, we present two of our initial experiments to explore differences in user interaction with two contrasting avatar personalities -- P1: open-minded, friendly and sociable and P2: closed-off, curt and distant. Each user interacted with a single personality in a video-call setting and gave feedback on the interaction. Our expectations, that P1 would be …


K-Nearest Neighbour Classifiers - A Tutorial, Padraig Cunningham, Sarah Jane Delany Jan 2021


Conference papers

Perhaps the most straightforward classifier in the arsenal of Machine Learning techniques is the Nearest Neighbour Classifier – classification is achieved by identifying the nearest neighbours to a query example and using those neighbours to determine the class of the query. This approach to classification is of particular importance because issues of poor run-time performance are not such a problem these days with the computational power that is available. This paper presents an overview of techniques for Nearest Neighbour classification focusing on: mechanisms for assessing similarity (distance), computational issues in identifying nearest neighbours and mechanisms for reducing the dimension of …
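
A minimal NumPy version of the mechanism just described, using Euclidean distance as the dissimilarity measure and a majority vote over the k neighbours (an illustrative sketch, not code from the tutorial):

    # k-nearest-neighbour classification by distance sort and majority vote.
    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, query, k=3):
        dists = np.linalg.norm(X_train - query, axis=1)   # Euclidean distances
        nearest = np.argsort(dists)[:k]                   # indices of k neighbours
        votes = Counter(y_train[i] for i in nearest)
        return votes.most_common(1)[0][0]                 # majority class

    # Toy usage: two clusters in 2-D.
    X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
    y = np.array([0, 0, 0, 1, 1, 1])
    print(knn_predict(X, y, np.array([4.5, 5.2]), k=3))   # -> 1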


Zero-Shot Action Recognition With Knowledge Enhanced Generative Adversarial Networks, Kaiqiang Huang, Luis Miralles-Pechuán, Susan McKeever Jan 2021


Conference papers

Zero-Shot Action Recognition (ZSAR) aims to recognise action classes in videos that have never been seen during model training. In some approaches, ZSAR has been achieved by generating visual features for unseen classes based on the semantic information of the unseen class labels using generative adversarial networks (GANs). Therefore, the problem is converted to standard supervised learning since the unseen visual features are accessible. This approach alleviates the lack of labelled samples of unseen classes. In addition, objects appearing in the action instances could be used to create enriched semantics of action classes and therefore, increase the accuracy of ZSAR. …
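
The generation step can be sketched as a conditional generator mapping a class's semantic embedding plus noise to a synthetic visual feature vector; once such features exist for unseen classes, a standard classifier can be trained on them. The dimensions, layers and names below are illustrative assumptions, not the paper's architecture.

    # Conditional generator of visual features from class semantics (sketch).
    import torch
    import torch.nn as nn

    class FeatureGenerator(nn.Module):
        def __init__(self, sem_dim=300, noise_dim=64, vis_dim=2048):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(sem_dim + noise_dim, 1024), nn.ReLU(),
                nn.Linear(1024, vis_dim),
            )

        def forward(self, sem_emb, noise):
            # Concatenate class semantics with noise to sample diverse features.
            return self.net(torch.cat([sem_emb, noise], dim=1))

    # Toy usage: synthesise 5 visual features for one unseen action class.
    gen = FeatureGenerator()
    sem = torch.randn(1, 300).repeat(5, 1)     # stand-in label embedding
    fake_visual = gen(sem, torch.randn(5, 64)) # usable as labelled training data
    print(fake_visual.shape)                   # torch.Size([5, 2048])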