
FPGA (ZCU104) Based Energy Efficient Accelerator for MobileNet-V1

Publication Type : Conference Paper

Publisher : 2024 IEEE International Colloquium on Signal Processing & Its Applications (CSPA 2024)

Source : 2024 IEEE International Colloquium on Signal Processing & Its Applications (CSPA 2024).

Url : https://ieeexplore.ieee.org/document/10525375/

Campus : Coimbatore

School : School of Artificial Intelligence

Year : 2024

Abstract : Convolutional neural networks (CNNs) are widely used in modern AI systems. These models contain millions of layer connections and are both memory- and computation-intensive. Deploying them in embedded mobile applications demands considerable power and bandwidth for fetching data from off-chip DRAM, so reducing data transfers between on-chip memory and off-chip DRAM is key to achieving high throughput and energy efficiency. The proposed FPGA-based hardware accelerator meets these goals through a patch-wise design that requires less on-chip memory, performing parallel computations on the patches stored in on-chip memory. The architecture is verified with a synthetic aperture radar (SAR) dataset, which is crucial in military surveillance, disaster management, and marine vigilance applications. The accelerator eliminates the need for a cloud-based server and was implemented on the Zynq UltraScale+ ZCU104 FPGA board. It consumed 81% less power than a GPU and 44% less than state-of-the-art works, underscoring its energy efficiency, and achieved a throughput of 24 GOP/s.
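To illustrate the patch-wise idea described in the abstract, the sketch below emulates, in plain Python/NumPy, how a MobileNet-V1 style depthwise-separable layer can be computed one tile at a time so that only a small patch (plus a halo for the 3x3 window) ever needs to reside in on-chip memory. This is a conceptual sketch only, not the paper's RTL or HLS implementation; the function names, the 32-pixel tile size, and the single-layer scope are illustrative assumptions.

```python
import numpy as np

def depthwise_separable_patch(patch, dw_kernels, pw_kernels):
    """3x3 depthwise conv (valid, stride 1) followed by 1x1 pointwise conv
    on one on-chip patch.
      patch:      (H, W, C_in)
      dw_kernels: (3, 3, C_in)
      pw_kernels: (C_in, C_out)
    Returns (H-2, W-2, C_out)."""
    H, W, C_in = patch.shape
    dw_out = np.zeros((H - 2, W - 2, C_in))
    for c in range(C_in):                       # each channel filtered independently
        for i in range(H - 2):
            for j in range(W - 2):
                dw_out[i, j, c] = np.sum(patch[i:i+3, j:j+3, c] * dw_kernels[:, :, c])
    # 1x1 pointwise convolution mixes channels
    return dw_out @ pw_kernels

def run_patchwise(fmap, dw_kernels, pw_kernels, tile=32):
    """Process the feature map tile by tile, emulating limited on-chip memory.
    A 1-pixel halo is fetched with each tile so the 3x3 depthwise window
    never needs data from neighbouring tiles."""
    H, W, _ = fmap.shape
    C_out = pw_kernels.shape[1]
    out = np.zeros((H - 2, W - 2, C_out))
    for y in range(0, H - 2, tile):
        for x in range(0, W - 2, tile):
            h = min(tile, H - 2 - y)
            w = min(tile, W - 2 - x)
            patch = fmap[y:y + h + 2, x:x + w + 2, :]   # tile + halo from "off-chip"
            out[y:y + h, x:x + w, :] = depthwise_separable_patch(patch, dw_kernels, pw_kernels)
    return out
```

In hardware, the inner per-patch loops correspond to the parallel compute units operating on the buffered patch, while the outer tile loop corresponds to the DRAM fetch schedule; larger tiles reduce halo re-fetch overhead at the cost of more on-chip buffer space.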

Cite this Research Publication : Rama Muni Reddy Yanamala, Muralidhar Pullakandam, Satyanarayana G.N.V, and Jagan Dumpala, "FPGA (ZCU104) Based Energy Efficient Accelerator for MobileNet-V1," 2024 IEEE International Colloquium on Signal Processing & Its Applications (CSPA 2024). Accepted and presented in Malaysia.
