- Bio-Inspired Adversarial Attack Against Deep Neural Networks. Bowei Xi, Yujie Chen, Fei Fan, Zhan Tu, Xinyan Deng. 1-5 [doi]
- Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems. Kazuya Kakizaki, Kosuke Yoshida. 6-13 [doi]
- Hazard Contribution Modes of Machine Learning Components. Ewen Denney, Ganesh Pai, Colin Smith. 14-22 [doi]
- Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems. Chiara Picardi, Colin Paterson, Richard Hawkins, Radu Calinescu, Ibrahim Habli. 23-30 [doi]
- Founding The Domain of AI Forensics. Vahid Behzadan, Ibrahim M. Baggili. 31-35 [doi]
- Exploring AI Safety in Degrees: Generality, Capability and Control. John Burden, José Hernández-Orallo. 36-40 [doi]
- Fair Enough: Improving Fairness in Budget-Constrained Decision Making Using Confidence Thresholds. Michiel A. Bakker, Humberto Riverón Valdés, Duy Patrick Tu, Krishna P. Gummadi, Kush R. Varshney, Adrian Weller, Alex Pentland. 41-53 [doi]
- A Study on Multimodal and Interactive Explanations for Visual Question Answering. Kamran Alipour, Jürgen P. Schulze, Yi Yao, Avi Ziskind, Giedrius Burachas. 54-62 [doi]
- You Shouldn't Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods. Botty Dimanov, Umang Bhatt, Mateja Jamnik, Adrian Weller. 63-73 [doi]
- A High Probability Safety Guarantee for Shifted Neural Network Surrogates. Melanie Ducoffe, Sébastien Gerchinovitz, Jayant Sen Gupta. 74-82 [doi]
- Benchmarking Uncertainty Estimation Methods for Deep Learning With Safety-Related Metrics. Maximilian Henne, Adrian Schwaiger, Karsten Roscher, Gereon Weiss. 83-90 [doi]
- PURSS: Towards Perceptual Uncertainty Aware Responsibility Sensitive Safety with ML. Rick Salay, Krzysztof Czarnecki, Maria Soledad Elli, Ignacio J. Alvarez, Sean Sedwards, Jack Weast. 91-95 [doi]
- Simple Continual Learning Strategies for Safer Classifiers. Ashish Gaurav, Sachin Vernekar, Jaeyoung Lee, Vahdat Abdelzad, Krzysztof Czarnecki, Sean Sedwards. 96-104 [doi]
- Fair Representation for Safe Artificial Intelligence via Adversarial Learning of Unbiased Information Bottleneck. Jin Young Kim, Sung-Bae Cho. 105-112 [doi]
- Out-of-Distribution Detection with Likelihoods Assigned by Deep Generative Models Using Multimodal Prior Distributions. Ryo Kamoi, Kei Kobayashi. 113-116 [doi]
- SafeLife 1.0: Exploring Side Effects in Complex Environments. Carroll L. Wainwright, Peter Eckersley. 117-127 [doi]
- (When) Is Truth-telling Favored in AI Debate? Vojtech Kovarík, Ryan Carey. 128-137 [doi]
- NewsBag: A Benchmark Multimodal Dataset for Fake News Detection. Sarthak Jindal, Raghav Sood, Richa Singh, Mayank Vatsa, Tanmoy Chakraborty. 138-145 [doi]
- Algorithmic Discrimination: Formulation and Exploration in Deep Learning-based Face Biometrics. Ignacio Serna, Aythami Morales, Julian Fiérrez, Manuel Cebrián, Nick Obradovich, Iyad Rahwan. 146-152 [doi]
- Guiding Safe Reinforcement Learning Policies Using Structured Language Constraints. Bharat Prakash, Nicholas R. Waytowich, Ashwinkumar Ganesan, Tim Oates, Tinoosh Mohsenin. 153-161 [doi]
- Practical Solutions for Machine Learning Safety in Autonomous Vehicles. Sina Mohseni, Mandar Pitale, Vasu Singh, Zhangyang Wang. 162-169 [doi]
- Continuous Safe Learning Based on First Principles and Constraints for Autonomous Driving. Lifeng Liu, Yingxuan Zhu, Tim Yuan, Jian Li. 170-177 [doi]
- Recurrent Neural Network Properties and their Verification with Monte Carlo Techniques. Dmitry Vengertsev, Elena Sherman. 178-185 [doi]
- Toward Operational Safety Verification Via Hybrid Automata Mining Using I/O Traces of AI-Enabled CPS. Imane Lamrani, Ayan Banerjee, Sandeep K. S. Gupta. 186-194 [doi]