Open Access. Powered by Scholars. Published by Universities.®

Engineering Commons

Articles 1 - 5 of 5

Full-Text Articles in Engineering

Seer: An Explainable Deep Learning Midi-Based Hybrid Song Recommender System, Khalil Damak, Olfa Nasraoui Dec 2019

Faculty Scholarship

State-of-the-art music recommender systems mainly rely on either matrix factorization-based collaborative filtering approaches or deep learning architectures. Deep learning models usually use metadata for content-based filtering or predict the next user interaction by learning from temporal sequences of user actions. Despite advances in deep learning for song recommendation, none has taken advantage of the sequential nature of songs by learning sequence models that are based on content. Beyond prediction accuracy, other aspects are significant, such as explainability and solving the cold start problem. In this work, we propose a hybrid deep learning ...
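The abstract's key idea, exploiting a song's sequential content rather than only metadata, can be illustrated with a deliberately simple sketch: treat each song as a sequence of MIDI note numbers and use a smoothed bigram (Markov) model of one song to score others as a toy content-based similarity. The paper's Seer model uses deep sequence learning; everything below (songs, note values, the bigram approach itself) is an illustrative assumption, not the paper's method.

```python
# Toy illustration of sequential song content: a bigram model over MIDI
# note numbers, used to score how "close" another note sequence is.
# Illustrative only; the paper's Seer model is a deep hybrid recommender.
from collections import Counter, defaultdict
import math

def bigram_model(notes):
    """Count note-to-note transitions in one song."""
    counts = defaultdict(Counter)
    for a, b in zip(notes, notes[1:]):
        counts[a][b] += 1
    return counts

def log_likelihood(model, notes, vocab=128, alpha=1.0):
    """Add-one smoothed log-probability of a note sequence under the model."""
    ll = 0.0
    for a, b in zip(notes, notes[1:]):
        total = sum(model[a].values())
        ll += math.log((model[a][b] + alpha) / (total + alpha * vocab))
    return ll

song_a = [60, 62, 64, 60, 62, 64, 65, 64]   # hypothetical MIDI note sequences
song_b = [60, 62, 64, 60, 62, 64]           # shares song_a's motif
song_c = [30, 45, 31, 46, 30, 45]           # different content
m = bigram_model(song_a)
# song_b scores higher (closer to zero) than song_c under song_a's model,
# i.e., sequential content alone separates the similar song from the other.
```

A real content-based sequence model would replace the bigram counts with a learned neural sequence encoder, but the input representation (ordered note events) is the same.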


Mining Semantic Knowledge Graphs To Add Explainability To Black Box Recommender Systems, Mohammed Alshammari, Olfa Nasraoui, Scott Sanders Aug 2019

Faculty Scholarship

Recommender systems are being increasingly used to predict the preferences of users on online platforms and recommend relevant options that help them cope with information overload. In particular, modern model-based collaborative filtering algorithms, such as latent factor models, are considered state-of-the-art in recommendation systems. Unfortunately, these black box systems lack transparency, as they provide little information about the reasoning behind their predictions. White box systems, in contrast, can, by nature, easily generate explanations. However, their predictions are less accurate than sophisticated black box models. Recent research has demonstrated that explanations are an essential component in bringing the powerful predictions of ...
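The "black box" latent factor models the abstract refers to can be sketched in a few lines: matrix factorization learns user and item factor vectors whose dot product predicts a rating, and that dot product carries no human-readable reasoning. This is a generic minimal sketch (all hyperparameters and data are illustrative), not the paper's system.

```python
# Minimal latent factor model (matrix factorization) trained with SGD.
# Illustrative sketch; hyperparameters and data are arbitrary.
import numpy as np

def factorize(ratings, n_factors=2, lr=0.01, reg=0.02, epochs=1000, seed=0):
    """Learn user/item latent factors from (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    n_users = max(u for u, _, _ in ratings) + 1
    n_items = max(i for _, i, _ in ratings) + 1
    P = rng.normal(scale=0.1, size=(n_users, n_factors))  # user factors
    Q = rng.normal(scale=0.1, size=(n_items, n_factors))  # item factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]                 # prediction error
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (1, 2, 2.0), (2, 1, 5.0)]
P, Q = factorize(ratings)
# A prediction for user 0 on unseen item 2 is simply P[0] @ Q[2]: a dot
# product of learned latent vectors, with no inherent explanation -- the
# opacity that motivates mining knowledge graphs for explanations.
```

The point of the sketch is the last comment: the score is accurate but opaque, which is exactly the gap the paper fills with knowledge-graph-derived explanations.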


Debiasing The Human-Recommender System Feedback Loop In Collaborative Filtering, Wenlong Sun, Sami Khenissi, Olfa Nasraoui, Patrick Shafto May 2019

Faculty Scholarship

Recommender Systems (RSs) are widely used to help online users discover products, books, news, music, movies, courses, restaurants, etc. Because a traditional recommendation strategy always shows the most relevant items (i.e., those with the highest predicted ratings), traditional RSs are expected to make popular items even more popular and non-popular items even less popular, which in turn further divides the haves (popular) from the have-nots (unpopular). Therefore, a major problem with RSs is that they may introduce biases affecting the exposure of items, thus creating a popularity divide of items during the feedback loop that occurs with users, and ...
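The feedback loop the abstract describes can be made concrete with a toy simulation: if the system always recommends the item with the most prior interactions and users accept, an initially uniform catalog collapses into one runaway "popular" item. This simulation is purely illustrative and is not the paper's model of the human-recommender loop.

```python
# Toy popularity feedback loop: each round, every user is recommended the
# item with the highest interaction count (a stand-in for "most relevant")
# and accepts it, so that item's count grows further.
# Illustrative only; not the paper's formal feedback-loop model.

def simulate(n_users=50, n_items=10, rounds=20):
    counts = [1] * n_items  # start with uniform exposure
    for _ in range(rounds):
        for _ in range(n_users):
            # greedy recommendation: the currently most-interacted item
            item = max(range(n_items), key=lambda i: counts[i])
            counts[item] += 1
    return counts

counts = simulate()
# One item absorbs every interaction while the rest stay at their initial
# count -- the "haves vs. have-nots" divide in its most extreme form.
```

Real systems rank by predicted rating rather than raw counts, but since training data comes from past recommendations, the same rich-get-richer dynamic emerges, which is what debiasing the loop aims to counteract.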


Personal Universes: A Solution To The Multi-Agent Value Alignment Problem, Roman V. Yampolskiy Jan 2019

Faculty Scholarship

AI Safety researchers attempting to align values of highly capable intelligent systems with those of humanity face a number of challenges including personal value extraction, multi-agent value merger and finally in-silico encoding. State-of-the-art research in value alignment shows difficulties in every stage in this process, but merger of incompatible preferences is a particularly difficult challenge to overcome. In this paper we assume that the value extraction problem will be solved and propose a possible way to implement an AI solution which optimally aligns with individual preferences of each user. We conclude by analyzing benefits and limitations of the proposed approach.


An Explainable Autoencoder For Collaborative Filtering Recommendation, Pegah Sagheb Haghighi, Olurotimi Seton, Olfa Nasraoui Jan 2019

Faculty Scholarship

Autoencoders are a common building block of Deep Learning architectures, where they are mainly used for representation learning. They have also been successfully used in Collaborative Filtering (CF) recommender systems to predict missing ratings. Unfortunately, like all black box machine learning models, they are unable to explain their outputs. Hence, while predictions from an Autoencoder-based recommender system might be accurate, it might not be clear to the user why a recommendation was generated. In this work, we design an explainable recommendation system using an Autoencoder model whose predictions can be explained using the neighborhood-based explanation style. Our preliminary work ...
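The "neighborhood-based explanation style" mentioned in the abstract can be sketched independently of the Autoencoder itself: to justify recommending an item, report the ratings that the target user's most similar users gave that item ("users like you rated this highly"). The rating matrix, function, and similarity choice below are illustrative assumptions; the paper attaches this explanation style to an Autoencoder's predictions.

```python
# Sketch of a neighborhood-style explanation: find the k users most
# similar (cosine) to the target user who rated the recommended item,
# and surface their ratings as the explanation.
# Illustrative data and code; not the paper's implementation.
import numpy as np

def neighborhood_explanation(R, user, item, k=2):
    """Return (neighbor, rating) pairs supporting a recommendation."""
    raters = [u for u in range(R.shape[0]) if u != user and R[u, item] > 0]
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    sims = sorted(raters, key=lambda u: cos(R[user], R[u]), reverse=True)
    return [(u, float(R[u, item])) for u in sims[:k]]

# Hypothetical user-item rating matrix (0 = unrated).
R = np.array([[5, 4, 0],
              [5, 5, 4],
              [1, 0, 5],
              [4, 4, 5]], dtype=float)
explanation = neighborhood_explanation(R, user=0, item=2)
# "Item 2 is recommended because similar users rated it 4 and 5."
```

In the explainable setting, the (black box) model produces the predicted rating while an explanation like the one above is generated alongside it in human-interpretable terms.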