Abstract:
Recommender systems (RSs) are evolving rapidly, becoming increasingly personalized to meet new constraints and improve performance on digital platforms. A significant issue remains, however: the lack of transparency in their decision-making, particularly in black-box approaches. Integrating logical reasoning and symbolic methods offers a promising path toward interpretability, yet these methods remain underused. This thesis proposes a novel RS model that improves interpretability for end users. Our architecture combines a logical layer, which generates rules from user and item attributes, with a graph convolutional network for collaborative filtering. Together, these components produce recommendation scores that are more transparent and interpretable.
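The hybrid scoring idea described above can be sketched minimally as follows. This is an illustrative assumption, not the thesis's actual implementation: the function names (`rule_score`, `cf_score`, `recommend_score`), the rule representation, and the blending weight `alpha` are all hypothetical, and the embeddings stand in for vectors a graph convolutional network would learn.

```python
def rule_score(user_attrs, item_attrs, rules):
    """Interpretable component: fraction of logical rules fired.

    Each rule is a pair (user_attr, item_attr) that fires when the user
    has user_attr and the item has item_attr. (Illustrative rule format.)
    """
    if not rules:
        return 0.0
    fired = sum(1 for ua, ia in rules if ua in user_attrs and ia in item_attrs)
    return fired / len(rules)


def cf_score(user_emb, item_emb):
    """Collaborative-filtering component: dot product of embeddings,
    e.g. as produced by a graph convolutional network."""
    return sum(u * i for u, i in zip(user_emb, item_emb))


def recommend_score(user_attrs, item_attrs, rules, user_emb, item_emb, alpha=0.5):
    """Blend the rule score with the CF score into one recommendation score.

    The fired rules can be shown to the end user as an explanation,
    which is the source of the model's interpretability.
    """
    symbolic = rule_score(user_attrs, item_attrs, rules)
    collaborative = cf_score(user_emb, item_emb)
    return alpha * symbolic + (1 - alpha) * collaborative
```

In this sketch the rule component is directly inspectable (one can list which rules fired for a given user-item pair), while the embedding component captures collaborative signal; the weight `alpha` trades interpretability against raw predictive power.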