Publication Type : Journal
Publisher : Wiley Online Library
Source : Computational Intelligence and Neuroscience
Url : https://onlinelibrary.wiley.com/doi/full/10.1155/2022/2213273
Campus : Coimbatore
School : School of Artificial Intelligence
Center : Center for Computational Engineering and Networking
Year : 2022
Abstract : The emergence of powerful deep learning architectures has led to breakthrough innovations in several fields, such as healthcare, precision farming, banking, and education. Despite these advances, deploying deep learning models on resource-constrained devices remains difficult because of their large memory footprint. This research work reports a hybrid compression pipeline for neural networks that exploits the untapped potential of the z-score in weight pruning, followed by quantization using DBSCAN clustering and Huffman encoding. The proposed model was evaluated on LeNet deep neural network architectures using the standard MNIST and CIFAR datasets. Experimental results show that DeepCompNet achieves a 26x compression ratio without compromising accuracy. The synergistic combination of compression algorithms in the proposed model supports the deployment of deep learning applications on memory-constrained devices.
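The three stages described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the threshold and eps values are arbitrary assumptions, and `density_cluster_1d` is a simplified one-dimensional gap-based grouping standing in for DBSCAN (it behaves like DBSCAN with `min_samples=1` on sorted scalars).

```python
import heapq
from collections import Counter
import numpy as np

def zscore_prune(w, threshold=0.5):
    """Zero out weights whose |z-score| falls below the threshold (assumed value)."""
    z = (w - w.mean()) / w.std()
    mask = np.abs(z) >= threshold
    return w * mask, mask

def density_cluster_1d(vals, eps=0.05):
    """Simplified 1-D stand-in for DBSCAN: values whose sorted gaps
    are <= eps share a cluster label."""
    order = np.argsort(vals)
    labels = np.empty(len(vals), dtype=int)
    lab = 0
    labels[order[0]] = 0
    for prev, cur in zip(order[:-1], order[1:]):
        if vals[cur] - vals[prev] > eps:
            lab += 1
        labels[cur] = lab
    return labels

def quantize(w, mask, eps=0.05):
    """Weight sharing: replace each surviving weight by its cluster centroid."""
    vals = w[mask]
    labels = density_cluster_1d(vals, eps)
    shared = vals.copy()
    for lab in np.unique(labels):
        shared[labels == lab] = vals[labels == lab].mean()
    q = w.copy()
    q[mask] = shared
    return q, labels

def huffman_code(symbols):
    """Map each symbol to a prefix-free bit string; frequent symbols get shorter codes."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    heap = [[n, i, [s, ""]] for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak] + lo[2:] + hi[2:])
        tiebreak += 1
    return {s: code for s, code in heap[0][2:]}

# Demo of the full pipeline on a toy weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned, mask = zscore_prune(w)          # stage 1: z-score pruning
q, labels = quantize(pruned, mask)      # stage 2: cluster-based quantization
codes = huffman_code(labels.tolist())   # stage 3: entropy-code the cluster indices
```

The compression gain comes from three compounding effects: pruning removes a fraction of the weights outright, weight sharing shrinks the set of distinct values to one centroid per cluster, and Huffman coding stores the cluster indices in fewer bits than a fixed-width encoding.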
Cite this Research Publication : Mary Shanthi Rani, M., P. Chitra, S. Lakshmanan, M. Kalpana Devi, R. Sangeetha, and S. Nithya. "[Retracted] DeepCompNet: A Novel Neural Net Model Compression Architecture." Computational Intelligence and Neuroscience 2022, no. 1 (2022): 2213273.