- Towards Explainable Artificial Intelligence. Wojciech Samek, Klaus-Robert Müller. 5-22 [doi]
- Transparency: Motivations and Challenges. Adrian Weller. 23-40 [doi]
- Interpretability in Intelligent Systems - A New Concept? Lars Kai Hansen, Laura Rieger. 41-49 [doi]
- Understanding Neural Networks via Feature Visualization: A Survey. Anh Nguyen, Jason Yosinski, Jeff Clune. 55-76 [doi]
- Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation. Seunghoon Hong, Dingdong Yang, Jongwook Choi, Honglak Lee. 77-95 [doi]
- Unsupervised Discrete Representation Learning. Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto, Masashi Sugiyama. 97-119 [doi]
- Towards Reverse-Engineering Black-Box Neural Networks. Seong Joon Oh, Bernt Schiele, Mario Fritz. 121-144 [doi]
- Explanations for Attributing Deep Neural Network Predictions. Ruth Fong, Andrea Vedaldi. 149-167 [doi]
- Gradient-Based Attribution Methods. Marco Ancona, Enea Ceolini, Cengiz Öztireli, Markus H. Gross. 169-191 [doi]
- Layer-Wise Relevance Propagation: An Overview. Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller. 193-209 [doi]
- Explaining and Interpreting LSTMs. Leila Arras, Jose A. Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller, Sepp Hochreiter, Wojciech Samek. 211-238 [doi]
- Comparing the Interpretability of Deep Networks via Network Dissection. Bolei Zhou, David Bau, Aude Oliva, Antonio Torralba. 243-252 [doi]
- Gradient-Based Vs. Propagation-Based Explanations: An Axiomatic Comparison. Grégoire Montavon. 253-265 [doi]
- The (Un)reliability of Saliency Methods. Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim. 267-280 [doi]
- Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation. Markus Hofmarcher, Thomas Unterthiner, Jose A. Arjona-Medina, Günter Klambauer, Sepp Hochreiter, Bernhard Nessler. 285-296 [doi]
- Understanding Patch-Based Learning of Video Data by Explaining Predictions. Christopher J. Anders, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller. 297-309 [doi]
- Quantum-Chemical Insights from Interpretable Atomistic Neural Networks. Kristof T. Schütt, Michael Gastegger, Alexandre Tkatchenko, Klaus-Robert Müller. 311-330 [doi]
- Interpretable Deep Learning in Drug Discovery. Kristina Preuer, Günter Klambauer, Friedrich Rippmann, Sepp Hochreiter, Thomas Unterthiner. 331-345 [doi]
- NeuralHydrology - Interpreting LSTMs in Hydrology. Frederik Kratzert, Mathew Herrnegger, Daniel Klotz, Sepp Hochreiter, Günter Klambauer. 347-362 [doi]
- Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI. Pamela K. Douglas, Ariana E. Anderson. 363-378 [doi]
- Current Advances in Neural Decoding. Marcel A. J. van Gerven, Katja Seeliger, Umut Güçlü, Yagmur Güçlütürk. 379-394 [doi]
- Software and Application Patterns for Explanation Methods. Maximilian Alber. 399-433 [doi]