Why Rectified Power (RePU) Activation Functions are Efficient in Deep Learning: A Theoretical Explanation

Laxman Bokati, Vladik Kreinovich, Joseph Baca, Natasha Rovelli. Why Rectified Power (RePU) Activation Functions are Efficient in Deep Learning: A Theoretical Explanation. In Martine Ceberio, Vladik Kreinovich, editors, Uncertainty, Constraints, and Decision Making. Volume 484 of Studies in Systems, Decision and Control, pages 7-13, Springer, 2023.
