Multi-Layer Random Perturbation Training for Improving Model Generalization Efficiently

Lis Kanashiro Pereira, Yuki Taya, Ichiro Kobayashi. Multi-Layer Random Perturbation Training for Improving Model Generalization Efficiently. In Jasmijn Bastings, Yonatan Belinkov, Emmanuel Dupoux, Mario Giulianelli, Dieuwke Hupkes, Yuval Pinter, Hassan Sajjad, editors, Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, BlackboxNLP@EMNLP 2021, Punta Cana, Dominican Republic, November 11, 2021, pages 303-310. Association for Computational Linguistics, 2021.

Authors

Lis Kanashiro Pereira
Yuki Taya
Ichiro Kobayashi