Open Access. Powered by Scholars. Published by Universities.®

Computer Sciences

Technological University Dublin

Series

Explainable artificial intelligence

Articles 1 - 7 of 7

Full-Text Articles in Physical Sciences and Mathematics

Xai Analysis Of Online Activism To Capture Integration In Irish Society Through Twitter, Arjumand Younus, Muhammad Atif Qureshi, Mingyeong Jeon, Arefeh Kazemi, Simon Caton Oct 2022

Books/Book Chapters

Online activism over Twitter has assumed a multidimensional nature, especially in societies with abundant multicultural identities. In this paper, we pursue a case study of Ireland’s Twitter landscape, and specifically of migrant and native activists on this platform. We aim to capture the extent to which immigrants are integrated into Irish society, and we study the similarities and differences between their characteristic patterns by delving into the features that play a significant role in classifying a Twitterer as a migrant or a native. A study such as ours can provide a window into the level of integration and harmony in society.


Notions Of Explainability And Evaluation Approaches For Explainable Artificial Intelligence, Giulia Vilone, Luca Longo Dec 2021

Articles

Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested, coupled with several studies attempting to define the concept of explainability and its evaluation. This systematic review contributes to the body of knowledge by clustering all the scientific studies via a hierarchical system that classifies theories and notions related to the concept of explainability …


A Quantitative Evaluation Of Global, Rule-Based Explanations Of Post-Hoc, Model Agnostic Methods, Giulia Vilone, Luca Longo Nov 2021

Articles

Understanding the inferences of data-driven, machine-learned models can be seen as a process that discloses the relationships between their input and output. These relationships can be represented as a set of inference rules. However, the models usually do not make these rules explicit to their end-users, who consequently perceive them as black boxes and might not trust their predictions. Therefore, scholars have proposed several methods for extracting rules from data-driven machine-learned models to explain their logic. However, limited work exists on the evaluation and comparison of these methods. This study proposes a novel comparative approach to evaluate and compare the …
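The idea of post-hoc, model-agnostic rule extraction described above can be illustrated with a minimal sketch: a global surrogate (here, a single decision stump) is fitted to the predictions of a black-box model, and its split is read off as an IF-THEN rule. The black-box function, the grid of probe inputs, and the feature names are illustrative assumptions, not taken from the paper.

```python
def black_box(x):
    """Stand-in for an opaque learned model (e.g. a neural network)."""
    return 1 if 0.6 * x[0] + 0.4 * x[1] > 0.5 else 0

def extract_stump_rule(samples, labels):
    """Find the single-feature threshold rule that best mimics the labels."""
    best = None  # (accuracy, feature, threshold, class below threshold)
    n_features = len(samples[0])
    for f in range(n_features):
        for t in sorted({s[f] for s in samples}):
            for below_class in (0, 1):
                preds = [below_class if s[f] <= t else 1 - below_class
                         for s in samples]
                acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
                if best is None or acc > best[0]:
                    best = (acc, f, t, below_class)
    acc, f, t, c = best
    rule = f"IF x{f} <= {t:.2f} THEN class {c} ELSE class {1 - c}"
    return rule, acc

# Probe the black box on a grid of inputs, then fit the surrogate rule
# to the black box's own predictions (fidelity = agreement with them).
grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
labels = [black_box(x) for x in grid]
rule, fidelity = extract_stump_rule(grid, labels)
print(rule)
print(f"fidelity to black box: {fidelity:.2f}")
```

The single rule cannot reproduce the black box exactly (the true boundary depends on both features), which is precisely why fidelity is reported alongside the extracted rule.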


Classification Of Explainable Artificial Intelligence Methods Through Their Output Formats, Giulia Vilone, Luca Longo Aug 2021

Articles

Machine and deep learning have proven useful for generating data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, many scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension—the output formats. The reviewed scientific papers were retrieved by conducting an initial search on Google Scholar with the keywords “explainable artificial intelligence”; “explainable machine learning”; and “interpretable machine learning”. …


A Comparative Analysis Of Rule-Based, Model-Agnostic Methods For Explainable Artificial Intelligence, Giulia Vilone, Lucas Rizzo, Luca Longo Dec 2020

Articles

The ultimate goal of Explainable Artificial Intelligence is to build models that possess both high accuracy and a high degree of explainability. Understanding the inferences of such models can be seen as a process that discloses the relationships between their input and output. These relationships can be represented as a set of inference rules, which are usually not explicit within a model. Scholars have proposed several methods for extracting rules from data-driven machine-learned models. However, limited work exists on their comparison. This study proposes a novel comparative approach to evaluate and compare the rulesets produced by four post-hoc rule extractors by employing …
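Comparing rulesets, as this abstract describes, requires quantitative metrics. A minimal sketch under simple assumptions: rules are (predicate, predicted class, number of conditions) triples applied first-match-wins to inputs labelled by the black-box model. The metric names (fidelity, coverage, average rule length) follow common usage in the rule-extraction literature; the exact definitions used in the paper may differ, and the toy ruleset below is illustrative.

```python
def evaluate_ruleset(rules, samples, black_box_labels):
    """Score a ruleset against a black-box model's labels.

    rules: list of (predicate, predicted_class, n_conditions) triples,
           applied in order; the first rule whose predicate fires decides.
    """
    covered, agree = 0, 0
    for x, y in zip(samples, black_box_labels):
        fired = [cls for pred, cls, _ in rules if pred(x)]
        if fired:
            covered += 1
            if fired[0] == y:          # first matching rule wins
                agree += 1
    coverage = covered / len(samples)              # share of inputs any rule covers
    fidelity = agree / covered if covered else 0.0 # agreement with the black box
    avg_len = sum(n for _, _, n in rules) / len(rules)
    return {"coverage": coverage, "fidelity": fidelity, "avg_rule_length": avg_len}

# Toy black box over one numeric feature, and a two-rule ruleset.
samples = [(v,) for v in range(10)]
bb = [1 if v >= 5 else 0 for v in range(10)]
rules = [
    (lambda x: x[0] >= 5, 1, 1),   # IF x0 >= 5 THEN class 1
    (lambda x: x[0] < 5, 0, 1),    # IF x0 < 5 THEN class 0
]
print(evaluate_ruleset(rules, samples, bb))
```

Reporting all three numbers together matters: a ruleset can trivially reach perfect fidelity on the inputs it covers while covering almost nothing, so fidelity alone is not comparable across extractors.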


An Empirical Evaluation Of The Inferential Capacity Of Defeasible Argumentation, Non-Monotonic Fuzzy Reasoning And Expert Systems, Lucas Rizzo, Luca Longo Jan 2020

Articles

Several non-monotonic formalisms exist in the field of Artificial Intelligence for reasoning under uncertainty. Many of these are deductive and knowledge-driven, and also employ procedural and semi-declarative techniques for inferential purposes. Nonetheless, limited work exists on comparing distinct techniques, and in particular on examining their inferential capacity. Thus, this paper focuses on a comparison of three knowledge-driven approaches employed for non-monotonic reasoning, namely expert systems, fuzzy reasoning and defeasible argumentation. A knowledge-representation and reasoning problem has been selected: modelling and assessing mental workload. This is an ill-defined construct, and its formalisation can be seen as a reasoning …
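Of the three formalisms compared, defeasible argumentation can be sketched compactly by reducing it to Dung-style abstract argumentation under grounded semantics: arguments attack one another, and the grounded extension is computed by iterating the usual fixpoint (accept every argument all of whose attackers are already defeated). The workload-style argument names and attack relation below are illustrative, not the knowledge base used in the paper.

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of a finite argumentation framework.

    arguments: set of argument names.
    attacks: set of (attacker, attacked) pairs.
    """
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for x, y in attacks if y == a}
            if attackers <= defeated:   # every attacker is defeated -> accept a
                accepted.add(a)
                # anything attacked by an accepted argument is defeated
                defeated |= {y for x, y in attacks if x == a}
                changed = True
    return accepted

args = {"high_workload", "low_workload", "unreliable_sensor"}
# 'unreliable_sensor' undercuts 'low_workload', which in turn
# attacks 'high_workload' -- so 'high_workload' is reinstated.
attacks = {("unreliable_sensor", "low_workload"),
           ("low_workload", "high_workload")}
print(sorted(grounded_extension(args, attacks)))
```

The non-monotonicity is visible in the example: adding the attacker `unreliable_sensor` reverses the status of `high_workload`, a retraction a classical rule-based expert system cannot express directly.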


Explainable Artificial Intelligence: Concepts, Applications, Research Challenges And Visions, Luca Longo, Randy Goebel, Freddy Lecue, Peter Kieseberg, Andreas Holzinger Jan 2020

Conference papers

The development of theory, frameworks and tools for Explainable AI (XAI) is currently a very active area of research, and articulating any kind of coherence on a vision and challenges is itself a challenge. At least two sometimes complementary, sometimes colliding threads have emerged. The first focuses on the development of pragmatic tools for increasing the transparency of automatically learned prediction models, such as those produced by deep or reinforcement learning. The second is aimed at anticipating the negative impact of opaque models, with the desire to regulate or control impactful consequences of incorrect predictions, especially in sensitive areas like …