Multi-Layer Random Perturbation Training for Improving Model Generalization Efficiently

Lis Kanashiro Pereira, Yuki Taya, Ichiro Kobayashi. Multi-Layer Random Perturbation Training for Improving Model Generalization Efficiently. In Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad, editors, Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2021, Punta Cana, Dominican Republic, November 11, 2021, pages 303-310. Association for Computational Linguistics, 2021.
