- Preface. Luca Longo. [doi]
- Why Industry 5.0 Needs XAI 2.0? Szymon Bobek, Slawomir Nowaczyk, João Gama, Sepideh Pashami, Rita P. Ribeiro, Zahra Taghiyarrenani, Bruno Veloso, Lala H. Rajaoarisoa, Maciej Szelazek, Grzegorz J. Nalepa. 1-6 [doi]
- Trustworthy Enough? Evaluation of an AI Decision Support System for Healthcare Professionals. Kristýna Sirka Kacafírková, Sara Polak, Myriam Sillevis Smitt, Shirley A. Elprama, An Jacobs. 7-11 [doi]
- Interpreting Forecasted Vital Signs Using N-BEATS in Sepsis Patients. Anubhav Bhatti, Naveen Thangavelu, Marium Hassan, Choongmin Kim, San Lee, YongHwan Kim, Jang Yong Kim. 12-17 [doi]
- AutoXplain: Towards Automated Interpretable Model Selection. Tessel Haagen, Heysem Kaya, Joop Snijder, Melchior Nierman. 18-23 [doi]
- Explaining ANN-modeled fMRI Data with Path-Weights and Layer-Wise Relevance Propagation. José Diogo Marques dos Santos, José Paulo Marques dos Santos. 24-29 [doi]
- Examining the Nexus between Explainability of AI Systems and User's Trust: A Preliminary Scoping Review. Sofia Morandini, Federico Fraboni, Gabriele Puzzo, Davide Giusino, Lucia Volpi, Hannah Brendel, Enzo Balatti, Marco de Angelis, Andrea De Cesarei, Luca Pietrantoni. 30-35 [doi]
- A Prototype of an Interactive Clinical Decision Support System with Counterfactual Explanations. Felix Liedeker, Philipp Cimiano. 36-41 [doi]
- Is the Common Approach used to Identify Social Biases in Artificial Intelligence also Biased? Ana Bucchi, Gabriel M. Fonseca. 42-46 [doi]
- Local Interpretable Model-Agnostic Explanations for Multitarget Image Regression. Kira Vinogradova, Gene Myers. 47-52 [doi]
- An Examination of the Effect of the Inconsistency Budget in Weighted Argumentation Frameworks and their Impact on the Interpretation of Deep Neural Networks. Giulia Vilone, Luca Longo. 53-58 [doi]
- Machine Learning Explanations by Surrogate Causal Models (MaLESCaMo). Alberto Termine, Alessandro Antonucci, Alessandro Facchini. 59-64 [doi]
- Latent Space Interpretation and Visualisation for Understanding the Decisions of Convolutional Variational Autoencoders Trained with EEG Topographic Maps. Taufique Ahmed, Luca Longo. 65-70 [doi]
- Explaining Deep Learning Time Series Classification Models using a Decision Tree-Based Post-Hoc XAI Method. Ephrem Tibebe Mekonnen, Pierpaolo Dondio, Luca Longo. 71-76 [doi]
- Evaluation of Explainable AI methods for Classification Tasks in Visual Inspection. Björn Forcher, Patrick Menold, Moritz Weixler, Jörg Schmitt, Samuel Wagner. 77-82 [doi]
- User-Driven Counterfactual Generator: A Human Centered Exploration. Isacco Beretta, Eleonora Cappuccio, Marta Marchiori Manerba. 83-88 [doi]
- Optimizing Deep Q-Learning Experience Replay with SHAP Explanations: Exploring Minimum Experience Replay Buffer Sizes in Reinforcement Learning. Robert S. Sullivan, Luca Longo. 89-94 [doi]
- Uncovering Decision-making Process of Cost-sensitive Tree-based Classifiers using the Adaptation of TreeSHAP. Marija Kopanja, Sanja Brdar, Stefan Hacko. 95-100 [doi]
- Explaining the Transfer Learning Ability of a Deep Neural Networks by Means of Representations. German Magai, Artem Soroka. 101-106 [doi]
- Investigating Poor Performance Regions of Black Boxes: LIME-based Exploration in Sepsis Detection. Mozhgan Salimiparsa, Surajsinh Parmar, San Lee, Choongmin Kim, YongHwan Kim, Jang Yong Kim. 107-111 [doi]
- An Explainable AI User Interface for Facilitating Collaboration between Domain Experts and AI Researchers. Meng Shi, Celal Savur, Elizabeth Watkins, Ramesh Manuvinakurike, Gesem Gudino Mejia, Richard Beckwith, Giuseppe Raffa. 112-116 [doi]
- The Metric-aware Kernel-width Choice for LIME. Aurelio Barrera-Vicent, Eduardo Paluzo-Hidalgo, Miguel A. Gutiérrez-Naranjo. 117-122 [doi]
- Integration of Explainable Deep Neural Network with Blockchain Technology: Medical Indemnity Insurance. Swati Sachan, Jericho Muwanga. 123-128 [doi]
- When Attention Turn To Be Explanation. A Case Study in Recommender Systems. Ricardo Anibal Matamoros Aragon, Italo Zoppis, Sara Manzoni. 129-134 [doi]
- Low-Impact Feature Reduction Regularization Term: How to Improve Artificial Intelligence with Explainability. Iván Sevillano-García, Julián Luengo, Francisco Herrera. 135-139 [doi]
- Revitalize the Potential of Radiomics: Interpretation and Feature Stability in Medical Imaging Analyses through Groupwise Feature Importance. Anna Theresa Stüber, Stefan Coors, Michael Ingrisch. 140-145 [doi]
- eXplego: An interactive Tool that Helps you Select Appropriate XAI-methods for your Explainability Needs. Martin Jullum, Jacob Sjødin, Robindra Prabhu, Anders Løland. 146-151 [doi]
- FCAS Ethical AI Demonstrator. Florian Osswald, Roman Bartolosch, Torsten Fiolka, Engelbert Hartmann, Bernhard Krach, Jan Feil, Martin Lederer. 152-157 [doi]
- Argumentation-based Explainable Machine Learning ArgEML: α-Version Technical Details. Nicoletta Prentzas. 158-163 [doi]
- Federated Learning of Explainable Artificial Intelligence Models: A Proof-of-Concept for Video-streaming Quality Forecasting in B5G/6G networks. José Luis Corcuera Bárcena, Mattia Daole, Pietro Ducange, Francesco Marcelloni, Giovanni Nardini, Alessandro Renda, Giovanni Stea. 164-168 [doi]
- Probabilistic Modelling for Design and Verification of Trustworthy Autonomous Systems. Franca Corradini. 169-176 [doi]
- Deep Clustering as a Unified Method for Representation Learning and Clustering of EEG Data for Microstate Theory. Arjun Vinayak Chikkankod. 177-184 [doi]
- Real-Time Explainable Plausibility Verification for DNN-based Automotive Perception. Mert Keser. 185-192 [doi]
- Extending Merlin-Arthur Classifiers for Improved Interpretability. Berkant Turan. 193-200 [doi]
- Designing an Evaluation Framework for eXplainable AI in the Healthcare Domain. Ivania Donoso-Guzmán. 201-208 [doi]
- Personalized Human-Robot Interaction in Companion Social Robots. Bahram Salamat Ravandi. 209-216 [doi]
- Accelerating Implementation of Artificial Intelligence in Radiotherapy through Explainability. Luca Heising. 217-224 [doi]
- Lung images Classification with Textural Characteristics and Hybrid Classification Schemes. Oleksandr Davydko. 225-232 [doi]
- Explain and Interpret Few-Shot Learning. Andrea Fedele. 233-240 [doi]
- Fairness Auditing, Explanation and Debiasing in Linguistic Data and Language Models. Marta Marchiori Manerba. 241-248 [doi]
- Post Hoc Explanations for RNNs using State Transition Representations for Time Series Data. Gargi Gupta. 249-255 [doi]