A Relevance-Based CNN Trimming Method for Low-Resources Embedded Vision

  • Conference paper
AIxIA 2021 – Advances in Artificial Intelligence (AIxIA 2021)

Abstract

A significant amount of Deep Learning research deals with the reduction of network complexity. In most scenarios the preservation of very high performance has priority over size reduction. However, when dealing with embedded systems, the limited amount of resources forces a switch in perspective. In fact, being able to dramatically reduce complexity can be a stronger requisite for overall feasibility than excellent performance. In this paper we propose a simple-to-implement yet effective method to largely reduce the size of Convolutional Neural Networks with minimal impact on their performance. The key idea is to assess the relevance of each kernel with respect to a representative dataset by computing the output of its activation function, and to trim kernels accordingly. The resulting network becomes small enough to be deployed on embedded hardware, such as smart cameras or lightweight edge processing units. In order to assess the capability of our method in real-world scenarios, we applied it to shrink two different pre-trained networks to be hosted on general-purpose, low-end FPGA hardware of the kind found in embedded cameras. Our experiments demonstrated both the overall feasibility of the method and its superior performance when compared with similar size-reducing techniques introduced in recent literature.
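The core idea described in the abstract — scoring each convolutional filter by its average activation output over a representative dataset, then discarding the least relevant filters — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`filter_relevance`, `trim_filters`), the use of mean post-ReLU activation as the relevance score, and the fixed `keep_ratio` are all assumptions made for the sake of the example.

```python
import numpy as np

def filter_relevance(activations):
    """Mean post-ReLU activation per filter.

    activations: array of shape (num_samples, num_filters, H, W),
    the outputs of one convolutional layer over a representative dataset.
    Returns one relevance score per filter.
    """
    return np.maximum(activations, 0).mean(axis=(0, 2, 3))

def trim_filters(weights, activations, keep_ratio=0.5):
    """Keep the `keep_ratio` fraction of filters with the highest relevance.

    weights: (num_filters, in_channels, kH, kW) kernel tensor of the layer.
    Returns the trimmed weight tensor and the indices of the kept filters.
    """
    scores = filter_relevance(activations)
    num_keep = max(1, int(round(keep_ratio * len(scores))))
    # Indices of the top-scoring filters, kept in their original order.
    keep = np.sort(np.argsort(scores)[::-1][:num_keep])
    return weights[keep], keep

# Toy usage: 8 filters, where the even-indexed ones fire weakly.
rng = np.random.default_rng(0)
acts = rng.normal(size=(16, 8, 4, 4))
acts[:, ::2] -= 2.0
w = rng.normal(size=(8, 3, 3, 3))
w_trim, kept = trim_filters(w, acts, keep_ratio=0.5)
print(w_trim.shape, kept)  # (4, 3, 3, 3) [1 3 5 7]
```

In a real pipeline the trimmed layer would also require dropping the corresponding input channels of the following layer, and a short fine-tuning pass to recover accuracy — both standard steps in structured pruning.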


Notes

  1. This is actually the case for 1D convolutional layers, but the extension to 2D layers is straightforward.



Author information

Corresponding author

Correspondence to Dalila Ressi.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ressi, D., Pistellato, M., Albarelli, A., Bergamasco, F. (2022). A Relevance-Based CNN Trimming Method for Low-Resources Embedded Vision. In: Bandini, S., Gasparini, F., Mascardi, V., Palmonari, M., Vizzari, G. (eds) AIxIA 2021 – Advances in Artificial Intelligence. AIxIA 2021. Lecture Notes in Computer Science, vol 13196. Springer, Cham. https://doi.org/10.1007/978-3-031-08421-8_20


  • DOI: https://doi.org/10.1007/978-3-031-08421-8_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08420-1

  • Online ISBN: 978-3-031-08421-8

  • eBook Packages: Computer Science, Computer Science (R0)
