- Understanding Deep RL Agent Decisions: a Novel Interpretable Approach with Trainable Prototypes. Caterina Borzillo, Alessio Ragno, Roberto Capobianco. 1-14 [doi]
- Unboxing the Black-Box of Deep Learning Based Reconstruction of Undersampled MRIs. Soumick Chatterjee, Arnab Das, Rupali Khatun, Andreas Nürnberger. 15-28 [doi]
- Rationale Trees: Towards a Formalization of Human Knowledge for Explainable Natural Language Processing. Andrea Tocchetti, Jie Yang, Marco Brambilla. 29-46 [doi]
- Investigating Human-Centered Perspectives in Explainable Artificial Intelligence. Muhammad Suffian Nizami, Ilia Stepin, Jose Maria Alonso-Moral, Alessandro Bogliolo. 47-66 [doi]
- Irrelevant Explanations: a Logical Formalization and a Case Study. Simona Colucci, Tommaso Di Noia, Francesco M. Donini, Claudio Pomo, Eugenio Di Sciascio. 67-75 [doi]
- SHAP-based Explanations to Improve Classification Systems. Andrea Apicella, Salvatore Giugliano, Francesco Isgrò, Roberto Prevete. 76-86 [doi]
- A Flexible Metric-Based Approach to Assess Neural Network Interpretability in Image Classification. Andrea Colombo, Laura Fiorenza, Sofia Mongardi. 87-98 [doi]