Open Access. Powered by Scholars. Published by Universities.®


Articles

Explainable artificial intelligence


Articles 1 - 5 of 5

Full-Text Articles in Physical Sciences and Mathematics

Notions Of Explainability And Evaluation Approaches For Explainable Artificial Intelligence, Giulia Vilone, Luca Longo Dec 2021


Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested, coupled with several studies attempting to define the concept of explainability and its evaluation. This systematic review contributes to the body of knowledge by clustering all the scientific studies via a hierarchical system that classifies theories and notions related to the concept of explainability …


A Quantitative Evaluation Of Global, Rule-Based Explanations Of Post-Hoc, Model Agnostic Methods, Giulia Vilone, Luca Longo Nov 2021


Understanding the inferences of data-driven, machine-learned models can be seen as a process that discloses the relationships between their input and output. These relationships can be represented as a set of inference rules. However, the models usually do not make these rules explicit to their end-users, who consequently perceive them as black boxes and might not trust their predictions. Therefore, scholars have proposed several methods for extracting rules from data-driven, machine-learned models to explain their logic. However, limited work exists on the evaluation and comparison of these methods. This study proposes a novel comparative approach to evaluate and compare the …
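The core idea of post-hoc, model-agnostic rule extraction described above can be illustrated with a minimal sketch: treat the model purely as a black box, probe it on sample inputs, and search for a human-readable if-then rule that best reproduces its predictions. The names `black_box` and `extract_threshold_rule` are hypothetical illustrations, not methods from the paper, and real extractors (and the ones the study evaluates) handle far richer rule languages than a single threshold.

```python
# Hedged sketch of post-hoc, model-agnostic rule extraction: the extractor
# only queries the model's predictions, never its internals. Names here
# (black_box, extract_threshold_rule) are illustrative, not from the paper.

def black_box(x):
    """Stand-in for an opaque, machine-learned classifier."""
    return "approve" if 3 * x + 1 > 10 else "reject"

def extract_threshold_rule(model, samples):
    """Fit the best single-threshold surrogate rule 'IF x > t THEN a ELSE b'
    by exhaustive search over candidate thresholds (the probed samples).
    Returns the rule and its fidelity (agreement with the black box)."""
    labels = [model(x) for x in samples]
    best = None
    for t in samples:
        for hi, lo in [("approve", "reject"), ("reject", "approve")]:
            pred = [hi if x > t else lo for x in samples]
            acc = sum(p == l for p, l in zip(pred, labels)) / len(samples)
            if best is None or acc > best[0]:
                best = (acc, t, hi, lo)
    acc, t, hi, lo = best
    return f"IF x > {t} THEN {hi} ELSE {lo}", acc

samples = list(range(0, 11))
rule, fidelity = extract_threshold_rule(black_box, samples)
print(rule, f"(fidelity={fidelity:.2f})")  # IF x > 3 THEN approve ELSE reject (fidelity=1.00)
```

Because the extractor sees only input-output pairs, the same procedure would work unchanged against a neural network or any other opaque model, which is what "model-agnostic" means in this context.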


Classification Of Explainable Artificial Intelligence Methods Through Their Output Formats, Giulia Vilone, Luca Longo Aug 2021


Machine and deep learning have proven their utility to generate data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”, “explainable machine learning”, and “interpretable machine learning”. …


A Comparative Analysis Of Rule-Based, Model-Agnostic Methods For Explainable Artificial Intelligence, Giulia Vilone, Lucas Rizzo, Luca Longo Dec 2020


The ultimate goal of Explainable Artificial Intelligence is to build models that possess both high accuracy and a high degree of explainability. Understanding the inferences of such models can be seen as a process that discloses the relationships between their input and output. These relationships can be represented as a set of inference rules which are usually not explicit within a model. Scholars have proposed several methods for extracting rules from data-driven machine-learned models. However, limited work exists on their comparison. This study proposes a novel comparative approach to evaluate and compare the rulesets produced by four post-hoc rule extractors by employing …
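Comparing rulesets, as this abstract proposes, requires quantitative metrics. A common pair in the XAI literature is fidelity (how often the ruleset's verdicts agree with the black-box model's) and coverage (what fraction of inputs any rule fires on). The sketch below assumes this metric pair for illustration; the paper's actual evaluation criteria may differ.

```python
# Hedged sketch: scoring a ruleset against a black-box model using two
# metrics common in XAI work, fidelity and coverage. Metric definitions
# here are illustrative assumptions, not necessarily the paper's.

def evaluate_ruleset(rules, model, samples):
    """rules: ordered list of (predicate, label) pairs; first match wins.
    Returns coverage (share of inputs matched by some rule) and fidelity
    (agreement with the model on the inputs that were matched)."""
    covered = agree = 0
    for x in samples:
        for pred, label in rules:
            if pred(x):
                covered += 1
                agree += (label == model(x))
                break
    n = len(samples)
    return {"coverage": covered / n,
            "fidelity": agree / covered if covered else 0.0}

model = lambda x: "pos" if x >= 5 else "neg"
ruleset_a = [(lambda x: x >= 5, "pos"), (lambda x: True, "neg")]  # exhaustive
ruleset_b = [(lambda x: x >= 7, "pos")]                           # partial
samples = list(range(10))
result_a = evaluate_ruleset(ruleset_a, model, samples)  # full coverage
result_b = evaluate_ruleset(ruleset_b, model, samples)  # covers only x >= 7
print(result_a, result_b)
```

The two rulesets here score identically on fidelity but differ on coverage, which shows why a single metric is not enough when comparing rule extractors.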


An Empirical Evaluation Of The Inferential Capacity Of Defeasible Argumentation, Non-Monotonic Fuzzy Reasoning And Expert Systems, Lucas Rizzo, Luca Longo Jan 2020


Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and also employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists on comparing distinct techniques and, in particular, on examining their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning …
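The defining property of the non-monotonic formalisms compared in this abstract is that adding information can retract a previously drawn conclusion. A toy sketch of that behaviour, in the spirit of defeasible reasoning but far simpler than any of the three formalisms the paper evaluates, uses prioritised default rules where a more specific rule overrides a general one:

```python
# Toy sketch of non-monotonic inference (the classic "birds fly" pattern).
# This is an illustrative simplification, not the paper's formalism:
# defeasible rules carry priorities, and a more specific rule defeats a
# general one, so conclusions are withdrawn when new facts arrive.

def conclude(facts, rules):
    """rules: list of (premises, (claim, value), priority). Among the
    rules whose premises hold, the highest-priority one per claim wins."""
    best = {}
    for premises, (claim, value), priority in rules:
        if premises <= facts:
            if claim not in best or priority > best[claim][1]:
                best[claim] = (value, priority)
    return {claim: value for claim, (value, _) in best.items()}

rules = [
    (frozenset({"bird"}), ("flies", True), 1),      # birds normally fly
    (frozenset({"penguin"}), ("flies", False), 2),  # penguins are an exception
]
print(conclude({"bird"}, rules))             # {'flies': True}
print(conclude({"bird", "penguin"}, rules))  # {'flies': False}
```

Learning the fact "penguin" reverses the earlier conclusion, which a monotonic system could never do; the paper's interest is in how expert systems, fuzzy reasoning and defeasible argumentation each realise this kind of retraction for an ill-defined construct such as mental workload.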