Open Access. Powered by Scholars. Published by Universities.®

Engineering Commons

Articles 1 - 11 of 11

Full-Text Articles in Engineering

Attention In The Faithful Self-Explanatory Nlp Models, Mostafa Rafaiejokandan Dec 2022

Department of Computer Science and Engineering: Dissertations, Theses, and Student Research

Deep neural networks (DNNs) can perform impressively in many natural language processing (NLP) tasks, but their black-box nature makes them inherently challenging to explain or interpret. Self-explanatory models are a new approach to overcoming this challenge: they generate explanations in human-readable language alongside task objectives such as answering questions. The main focus of this thesis is the explainability of NLP tasks, as well as how attention methods can help enhance performance. Three attention modules are proposed: SimpleAttention, CrossSelfAttention, and CrossModality. The thesis also introduces a new dataset transformation method, called Two-Documents, that converts every dataset into the two separate documents required by the …
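The abstract names the modules but not their internals, so the following is only a rough sketch of the scaled dot-product cross-attention that such modules typically build on; the function names, shapes, and toy data are assumptions, not the thesis's SimpleAttention, CrossSelfAttention, or CrossModality implementations.

```python
# Rough sketch only: generic scaled dot-product cross-attention of the kind the
# thesis's attention modules presumably build on. Names, shapes, and toy data
# below are assumptions, not the actual implementations.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)    # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """queries: (n_q, d); keys, values: (n_k, d). Returns outputs and attention weights."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)         # attention distribution over the keys
    return weights @ values, weights           # weighted sum of values, plus the weights

# Toy usage: tokens from one document attend over tokens from a second document.
rng = np.random.default_rng(0)
doc_a = rng.normal(size=(3, 8))                # 3 token vectors of dimension 8
doc_b = rng.normal(size=(5, 8))                # 5 token vectors of dimension 8
out, w = cross_attention(doc_a, doc_b)
print(out.shape, w.shape)                      # (3, 8) (3, 5)
```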


Development Of An Explainability Scale To Evaluate Explainable Artificial Intelligence (Xai) Methods, Stephen Mccarthy Jan 2022

Dissertations

Explainable Artificial Intelligence (XAI) is an area of research that develops methods and techniques to make the results of artificial intelligence understandable to humans. In recent years, demand for XAI methods has increased as model architectures have grown more complicated and government regulations have begun to require transparency in machine learning models. With this increased demand has come an increased need for instruments to evaluate XAI methods. However, there are few, if any, valid and reliable instruments that take human opinion into account and cover all aspects of explainability. Therefore, this study developed an objective, human-centred questionnaire …


Proknow: Process Knowledge For Safety Constrained And Explainable Question Generation For Mental Health Diagnostic Assistance, Kaushik Roy, Manas Gaur, Misagh Soltani, Vipula Rawte, Ashwin Allen, Amit Sheth Jan 2022

Publications

Virtual Mental Health Assistants (VMHAs) are utilized in health care to provide patient services such as counseling and suggestive care. They are not used for patient diagnostic assistance because they cannot adhere to safety constraints and the specialized clinical process knowledge (ProKnow) used to obtain clinical diagnoses. In this work, we define ProKnow as an ordered set of information that maps to evidence-based guidelines or categories of conceptual understanding of experts in a domain. We also introduce a new dataset of diagnostic conversations guided by safety constraints and the ProKnow that healthcare professionals use (ProKnow-data). We develop a method for natural language …
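The abstract does not describe how the safety constraints and ordered process knowledge are enforced, so the snippet below is only a hedged illustration of one simple way such constraints could gate candidate questions; the topic order, safety lexicon, and function are hypothetical and not the authors' ProKnow method.

```python
# Hypothetical illustration, not the authors' method: one simple way an ordered
# process-knowledge (ProKnow) constraint and a safety filter could gate which
# candidate follow-up question an assistant is allowed to ask next.
PROKNOW_ORDER = ["sleep", "appetite", "mood", "self-harm risk"]  # placeholder guideline order
UNSAFE_FRAGMENTS = {"exact method", "lethal dose"}               # placeholder safety lexicon

def next_safe_question(candidates, topics_covered):
    """Return the first candidate that targets the next uncovered topic and passes the safety filter."""
    next_topic = next((t for t in PROKNOW_ORDER if t not in topics_covered), None)
    for question, topic in candidates:
        unsafe = any(fragment in question.lower() for fragment in UNSAFE_FRAGMENTS)
        if topic == next_topic and not unsafe:
            return question
    return None  # no admissible question; a real system would fall back or escalate

print(next_safe_question(
    [("How has your appetite been lately?", "appetite"),
     ("How many hours do you sleep on average?", "sleep")],
    topics_covered={"sleep"},
))  # -> "How has your appetite been lately?"
```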


Designing Ai For Explainability And Verifiability: A Value Sensitive Design Approach To Avoid Artificial Stupidity In Autonomous Vehicles, Steven Umbrello, Roman V. Yampolskiy May 2021

Faculty Scholarship

One of the primary, if not the most critical, difficulties in the design and implementation of autonomous systems is the black-box nature of their decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove harmful, if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by the opaque structures of intelligent agents. This paper explores how decision matrix algorithms, via the belief-desire-intention model for autonomous vehicles, can be designed to minimize the risks of opaque architectures. Primarily through an explicit orientation towards designing for …
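As a hedged illustration of what a value-sensitive decision matrix might look like (the paper's actual belief-desire-intention formulation is not given in the abstract), the sketch below scores candidate vehicle actions against explicitly weighted stakeholder values so the winning action remains traceable; the actions, values, weights, and scores are all invented for illustration.

```python
# Invented illustration, not the paper's algorithm: a decision matrix that scores
# candidate vehicle actions against explicitly weighted stakeholder values, so the
# chosen action can be traced back to the weights and scores that produced it.
VALUE_WEIGHTS = {"safety": 0.6, "legality": 0.25, "comfort": 0.15}

ACTIONS = {
    "brake_hard":  {"safety": 0.9, "legality": 1.0, "comfort": 0.2},
    "change_lane": {"safety": 0.6, "legality": 0.8, "comfort": 0.8},
    "keep_course": {"safety": 0.2, "legality": 1.0, "comfort": 1.0},
}

def choose_action(actions, weights):
    """Return the best action plus every action's score, keeping the decision inspectable."""
    scored = {name: sum(weights[v] * profile[v] for v in weights)
              for name, profile in actions.items()}
    return max(scored, key=scored.get), scored

best, scored = choose_action(ACTIONS, VALUE_WEIGHTS)
print(best, scored)  # the per-value weights make it possible to explain why this action won
```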


Teachability And Interpretability In Reinforcement Learning, Jeevan Rajagopal May 2021

Department of Computer Science and Engineering: Dissertations, Theses, and Student Research

There have been many recent advancements in the field of reinforcement learning, from the Deep Q-Network playing various Atari 2600 games to Google DeepMind's AlphaStar playing competitively in the game StarCraft. However, as the field takes on more complex environments, the current methods of training models and understanding their decision-making become less effective. Currently, the problem is partially dealt with by simply adding more resources, but the need for a better solution remains.

This thesis proposes a reinforcement learning framework where a teacher or entity with domain knowledge of the task to complete can assist …
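The abstract is cut off before it describes how the teacher assists the learner, so the following is only a sketch of one common pattern for teacher-advised tabular Q-learning, under the assumption of a minimal, hypothetical environment interface (reset(), step(), and an actions list); it is not the thesis's framework.

```python
# Sketch of one common teacher-advice pattern for tabular Q-learning, not the
# thesis's framework. It assumes a minimal, hypothetical environment interface:
# env.reset() -> state, env.step(action) -> (next_state, reward, done), env.actions.
import random
from collections import defaultdict

def q_learning_with_teacher(env, teacher_policy, episodes=100, advice_budget=50,
                            alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)                                   # Q[(state, action)] -> value
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if advice_budget > 0:
                action = teacher_policy(state)               # follow the teacher while advice lasts
                advice_budget -= 1
            elif random.random() < epsilon:
                action = random.choice(env.actions)          # explore
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])  # exploit
            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```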


A Brief Bibliometric Survey Of Explainable Ai In Medical Field, Nilkanth Mukund Deshpande, Shilpa Shailesh Gite Apr 2021

Library Philosophy and Practice (e-journal)

Background: This study aims to analyze the work done on explainability in artificial intelligence, especially in the medical field, from 2004 onwards using bibliometric methods.

Methods: Articles on the topic of leukemia detection were retrieved using one of the most popular databases, Scopus. Articles from 2004 onwards are considered. The Scopus analyzer is used for different types of analysis, including documents by year, source, country, and so on. Other analysis tools, such as VOSviewer version 1.6.15, are used for the analysis of units such as co-authorship, co-occurrence, citation analysis …


Semantics Of The Black-Box: Can Knowledge Graphs Help Make Deep Learning Systems More Interpretable And Explainable?, Manas Gaur, Keyur Faldu, Amit Sheth Jan 2021

Publications

The recent series of innovations in deep learning (DL) has shown enormous potential to impact individuals and society, both positively and negatively. DL models utilizing massive computing power and enormous datasets have significantly outperformed prior historical benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of DL models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the interpretability and explainability of the system. Furthermore, DLs have not yet been proven in their ability to …


Seer: An Explainable Deep Learning Midi-Based Hybrid Song Recommender System, Khalil Damak, Olfa Nasraoui Dec 2019

Faculty Scholarship

State-of-the-art music recommender systems mainly rely on either matrix factorization-based collaborative filtering approaches or deep learning architectures. Deep learning models usually use metadata for content-based filtering or predict the next user interaction by learning from temporal sequences of user actions. Despite advances in deep learning for song recommendation, none has taken advantage of the sequential nature of songs by learning sequence models that are based on content. Aside from prediction accuracy, other aspects are important, such as explainability and solving the cold-start problem. In this work, we propose a hybrid deep learning …


Transparency And Algorithmic Governance, Cary Coglianese, David Lehr Jan 2019

All Faculty Scholarship

Machine-learning algorithms are improving and automating important functions in medicine, transportation, and business. Government officials have also started to take notice of the accuracy and speed that such algorithms provide, increasingly relying on them to aid with consequential public-sector functions, including tax administration, regulatory oversight, and benefits administration. Despite machine-learning algorithms’ superior predictive power over conventional analytic tools, algorithmic forecasts are difficult to understand and explain. Machine learning’s “black-box” nature has thus raised concern: Can algorithmic governance be squared with legal principles of governmental transparency? We analyze this question and conclude that machine-learning algorithms’ relative inscrutability does not pose a …


An Explainable Autoencoder For Collaborative Filtering Recommendation, Pegah Sagheb Haghighi, Olurotimi Seton, Olfa Nasraoui Jan 2019

Faculty Scholarship

Autoencoders are a common building block of deep learning architectures, where they are mainly used for representation learning. They have also been successfully used in Collaborative Filtering (CF) recommender systems to predict missing ratings. Unfortunately, like all black-box machine learning models, they are unable to explain their outputs. Hence, while predictions from an Autoencoder-based recommender system might be accurate, it might not be clear to the user why a recommendation was generated. In this work, we design an explainable recommendation system using an Autoencoder model whose predictions can be explained using the neighborhood-based explanation style. Our preliminary work …
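As a hedged sketch of the general idea (not the paper's exact architecture or training procedure), the code below reconstructs user rating vectors with a toy autoencoder and attaches a neighborhood-style explanation to a recommendation; the data, dimensions, and untrained random weights are assumptions for illustration only.

```python
# Illustrative sketch only (untrained weights, made-up data), not the paper's model:
# an autoencoder reconstructs each user's rating vector to fill in missing ratings,
# and a neighborhood-style explanation reports similar users who rated the item.
import numpy as np

rng = np.random.default_rng(1)
R = rng.integers(1, 6, size=(20, 12)).astype(float)    # 20 users x 12 items, ratings 1-5
R[0, :4] = 0.0                                          # pretend user 0 has not rated items 0-3

# One-hidden-layer autoencoder; training is omitted, so the weights stay random here.
W_enc = rng.normal(scale=0.1, size=(12, 4))
W_dec = rng.normal(scale=0.1, size=(4, 12))
predicted = np.maximum(R @ W_enc, 0.0) @ W_dec          # reconstructed ratings for every user

def explain(user, item, ratings, k=3):
    """Neighborhood-based explanation: the k most similar users who actually rated this item."""
    norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user]) + 1e-9
    sims = ratings @ ratings[user] / norms               # cosine similarity to the target user
    sims[user] = -np.inf                                  # exclude the user themself
    raters = [u for u in np.argsort(-sims) if ratings[u, item] > 0][:k]
    return [(int(u), float(ratings[u, item])) for u in raters]

candidate_scores = np.where(R[0] == 0, predicted[0], -np.inf)   # only consider unrated items
item = int(np.argmax(candidate_scores))
print("recommend item", item, "because similar users rated it:", explain(0, item, R))
```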