You are required to read and agree to the following before accessing a full-text version of an article in the IDE article repository.

The full-text document you are about to access is subject to national and international copyright laws. In most cases (but not necessarily all), personal use is allowed provided that the copyright owner is duly acknowledged and respected. All other use typically requires explicit permission (often in writing) from the copyright owner.

For the reports in this repository, we specifically note that:

  • the use of articles under IEEE copyright is governed by the IEEE copyright policy (available at http://www.ieee.org/web/publications/rights/copyrightpolicy.html)
  • the use of articles under ACM copyright is governed by the ACM copyright policy (available at http://www.acm.org/pubs/copyright_policy/)
  • technical reports and other articles issued by Mälardalen University are free for personal use. For any other use, the explicit consent of the authors is required
  • in other cases, please contact the copyright owner for detailed information

By accepting, I agree to acknowledge and respect the rights of the copyright owner of the document I am about to access.

If you are in doubt, feel free to contact webmaster@ide.mdh.se

Analysing robustness of tiny deep neural networks

Publication Type:

Conference/Workshop Paper

Venue:

Advances in Databases and Information Systems


Abstract

Safety-critical, resource-constrained real-world applications require compact Deep Neural Networks (DNNs) that are robust against adversarial data perturbations. MobileNet-tiny has been introduced as a compact DNN that reduces network size for deployment on edge devices. To make DNNs more robust against adversarial data, adversarial training methods have been proposed. However, recent research has investigated the robustness of large-scale DNNs (such as WideResNet), while the robustness of tiny DNNs has not been analysed. In this paper, we analyse how the width of the blocks in MobileNet-tiny affects the robustness of the network against adversarial data perturbations. Specifically, we evaluate natural accuracy, robust accuracy, and perturbation instability metrics on MobileNet-tiny with inverted bottleneck blocks in different configurations. We generate the configurations of the inverted bottleneck blocks using different width-multiplier and expand-ratio hyper-parameters. We find that expanding the width of the blocks in MobileNet-tiny can improve natural and robust accuracy but also increases perturbation instability. Moreover, beyond a certain threshold, increasing the width of the network yields no significant gains in robust accuracy while perturbation instability continues to grow. We also analyse, both theoretically and empirically, the relationship between the width-multiplier and expand-ratio hyper-parameters and the Lipschitz constant, which shows that wider inverted bottleneck blocks tend to exhibit significantly higher perturbation instability. These architectural insights can be useful in developing adversarially robust tiny DNNs for edge devices.
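To make the configuration sweep described in the abstract concrete, below is a minimal sketch (not the authors' implementation, and not the actual MobileNet-tiny code) of a MobileNetV2-style inverted bottleneck block whose width is controlled by the two hyper-parameters the paper varies: a width-multiplier that scales the block's input/output channel counts, and an expand-ratio that sets the hidden (expanded) width inside the block. Class and parameter names are illustrative assumptions.

```python
import torch
import torch.nn as nn


class InvertedBottleneck(nn.Module):
    """Illustrative MobileNetV2-style inverted bottleneck block.

    width_mult scales the input/output channel plan; expand_ratio sets the
    width of the expanded (hidden) pointwise/depthwise layers.
    """

    def __init__(self, in_ch, out_ch, stride=1, expand_ratio=6, width_mult=1.0):
        super().__init__()
        in_ch = max(1, int(in_ch * width_mult))    # scaled input width
        out_ch = max(1, int(out_ch * width_mult))  # scaled output width
        hidden = int(in_ch * expand_ratio)         # expanded hidden width
        self.use_residual = stride == 1 and in_ch == out_ch

        layers = []
        if expand_ratio != 1:
            # 1x1 pointwise expansion
            layers += [nn.Conv2d(in_ch, hidden, 1, bias=False),
                       nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True)]
        # 3x3 depthwise convolution
        layers += [nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
                   nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True)]
        # 1x1 pointwise linear projection
        layers += [nn.Conv2d(hidden, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out


if __name__ == "__main__":
    # Two example configurations: a narrower and a wider block. In a full
    # network the width-multiplier is applied consistently to the whole
    # channel plan, so the input tensors here already use the scaled widths.
    narrow = InvertedBottleneck(32, 32, expand_ratio=3, width_mult=0.5)  # 16 channels in/out
    wide = InvertedBottleneck(32, 32, expand_ratio=6, width_mult=1.0)    # 32 channels in/out
    print(narrow(torch.randn(1, 16, 56, 56)).shape)
    print(wide(torch.randn(1, 32, 56, 56)).shape)
```

Sweeping width_mult and expand_ratio over a grid of values is one plausible way to generate the block configurations evaluated in the paper; the specific values and training setup are not given on this page.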

Bibtex

@inproceedings{Mousavi6711,
author = {Seyedhamidreza Mousavi and Ali Zoljodi and Masoud Daneshtalab},
title = {Analysing robustness of tiny deep neural networks},
booktitle = {Advances in Databases and Information Systems},
url = {http://www.es.mdu.se/publications/6711-}
}