Energy-Efficient Hardware Accelerator for Embedded Deep Learning





In this joint project, we aim to decrease the power consumption and computation load of the current image-processing platform by employing the concept of computation reuse. Computation reuse temporarily stores the result of a recent arithmetic operation and reuses it for anticipated subsequent operations with the same operands. Our proposal is motivated by the high degree of redundancy we observed in the arithmetic operations of neural networks: we show that approximate computation reuse can eliminate up to 94% of the arithmetic operations of simple neural networks. This yields up to an 80% reduction in power consumption, which translates directly into a considerable increase in battery lifetime. In two UT-MDH joint works, we further presented a mechanism for building large neural networks by connecting basic units.
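The idea of approximate computation reuse can be illustrated in software: operands are quantized, and the result of a recent multiplication is cached and returned for later operations whose quantized operands match. The sketch below is a minimal illustration under assumed parameters (quantization by rounding to two decimal places, an unbounded cache); the class and its names are illustrative, not part of the project's hardware design.

```python
# Minimal sketch of approximate computation reuse for multiplications.
# Assumption (not from the project): operands are quantized by rounding
# to a fixed number of decimal places before the cache lookup.

class ReuseMultiplier:
    def __init__(self, precision=2):
        self.precision = precision  # quantization level for approximate matching
        self.cache = {}             # (a_q, b_q) -> cached product
        self.hits = 0               # operations served from the cache
        self.total = 0              # total multiply requests

    def mul(self, a, b):
        self.total += 1
        key = (round(a, self.precision), round(b, self.precision))
        if key in self.cache:
            self.hits += 1          # reuse: the arithmetic unit is skipped
            return self.cache[key]
        result = key[0] * key[1]    # approximate product of quantized operands
        self.cache[key] = result
        return result

# Redundant operand streams, as when neural-network layers share weights:
m = ReuseMultiplier()
weights = [0.251, 0.249, 0.75]      # 0.251 and 0.249 both quantize to 0.25
for x in [1.0, 2.0, 1.0, 2.0]:
    for w in weights:
        m.mul(w, x)

print(f"reuse rate: {m.hits}/{m.total}")  # 8 of 12 multiplications eliminated
```

Even on this tiny stream, two-thirds of the multiplications hit the cache, which is the effect the project exploits: in hardware, each hit replaces a full arithmetic operation with a much cheaper table lookup.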


Computation reuse-aware accelerator for neural networks (May 2020)
Hoda Mahdiani, Alireza Khadem, Ali Yasoubi, Azam Ghanbari, Mehdi Modarressi, Masoud Daneshtalab
Institution of Engineering and Technology (IET)

Hardware Acceleration for Recurrent Neural Networks (May 2020)
Sima Sinaei, Masoud Daneshtalab
Institution of Engineering and Technology (IET)

Feedforward Neural Networks on Massively Parallel Architectures (May 2020)
Reza Hojabr, Ahmad Khonsari, Mehdi Modarressi, Masoud Daneshtalab
Institution of Engineering and Technology (IET)

Hardware Accelerators for Deep Learning (May 2020)
Masoud Daneshtalab, Mehdi Modarressi
Institution of Engineering and Technology (IET)

ΔNN: Power-efficient Neural Network Acceleration using Differential Weights (Dec 2019)
Hoda Mahdiani, Alireza Khadem, Azam Ghanbari, Mehdi Modarressi, Farima Fattahi-bayat, Masoud Daneshtalab

Multiobjectivism in Dark Silicon Age (Apr 2018)
Amin Rezaei, Masoud Daneshtalab, Hai Zhou
Elsevier Advances in Computers (Computers)

Masoud Daneshtalab, Associate Professor, Docent

Phone: +4621103111