Open Access. Powered by Scholars. Published by Universities.®

Computer Engineering Commons

Electrical and Computer Engineering

University of South Carolina

Interpretability

Full-Text Articles in Computer Engineering

Knowledge Infused Policy Gradients For Adaptive Pandemic Control, Kaushik Roy, Qi Zhang, Manas Gaur, Amit P. Sheth Mar 2021

Publications

COVID-19 has impacted nations differently based on their policy implementations. Effective policy requires taking public information into account and adapting to new knowledge. Epidemiological models built to understand COVID-19 seldom provide policymakers with the capability for adaptive pandemic control (APC). The core challenges to be overcome include (a) the inability to handle a high degree of non-homogeneity in the different contributing features across the pandemic timeline, (b) the lack of an approach that enables adaptive incorporation of public health expert knowledge, and (c) the lack of transparent models that make the decision-making process behind suggested policy understandable. In this work, we take …


Semantics Of The Black-Box: Can Knowledge Graphs Help Make Deep Learning Systems More Interpretable And Explainable?, Manas Gaur, Keyur Faldu, Amit Sheth Jan 2021

Publications

The recent series of innovations in deep learning (DL) has shown enormous potential to impact individuals and society, both positively and negatively. DL models, leveraging massive computing power and enormous datasets, have significantly outperformed prior benchmarks on increasingly difficult, well-defined research tasks across domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the Black-Box nature of DL models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for the interpretability and explainability of these systems. Furthermore, DL has not yet been proven in its ability to …