Publication Type : Conference Paper
Publisher : IEEE
Source : 2023 7th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 2023, pp. 187-192
Url : https://ieeexplore.ieee.org/document/10395848
Campus : Chennai
School : School of Engineering
Department : Electronics and Communication
Year : 2023
Abstract : Infrared-Visible image fusion is a method of integrating information from images captured in separate spectra, namely infrared (IR) and visible light. This technique aims to create a single composite image that exploits the strengths of the original images while minimizing their inherent limitations. This study introduces a Neural Style Transfer based non-End-to-End framework for seamlessly merging infrared and visible images. The proposed approach entails an optimization process in which the fused features interact with an initial composite image. First, the vital features are extracted from the input images using the first four layers of the ResNet50 network; these features are subsequently combined through an appropriate fusion rule. The original images are blended using the average rule to form the initial composite image. Through backpropagation, the initial composite image is fine-tuned with the fused features to yield the final synthesized image. In this study, the efficacy of the proposed fusion framework is validated through experiments on the TNO Image Fusion dataset. The outcomes of these experiments clearly demonstrate that our approach outperforms current approaches, as evident from improvements in both subjective and objective assessments.
Cite this Research Publication : Lokesh Kumar M, Aishwarya N and Ashwin B, “Neural Style Transfer Based Infrared-Visible Fusion”, 2023 7th International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, India, 2023, pp. 187-192
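The abstract outlines a non-end-to-end pipeline: extract features from both inputs with early ResNet50 layers, fuse them, form an initial composite by averaging the source images, and refine that composite by backpropagation until its features match the fused ones. The following is a minimal illustrative sketch of such a loop, not the authors' implementation; the choice of ResNet50 layers (conv1 through layer1 here), the element-wise maximum fusion rule, the MSE feature loss, and the Adam optimizer settings are all assumptions made for illustration.

```python
# Minimal sketch of a neural-style-transfer style IR/visible fusion loop.
# Assumptions (not taken from the paper): torchvision ResNet50 early layers,
# element-wise maximum as the fusion rule, MSE feature loss, Adam optimizer.
import torch
import torch.nn as nn
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"

# Feature extractor: early layers of a pretrained ResNet50, frozen.
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
feature_net = nn.Sequential(
    resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1
)
for p in feature_net.parameters():
    p.requires_grad_(False)

def fuse_images(ir, vis, steps=200, lr=0.01):
    """ir, vis: (1, 3, H, W) tensors in [0, 1] on the same device."""
    # Initial composite image: average rule over the two source images.
    fused = ((ir + vis) / 2).clone().requires_grad_(True)

    # Target features: fuse the source features (assumed rule: element-wise max).
    with torch.no_grad():
        target = torch.maximum(feature_net(ir), feature_net(vis))

    optimizer = torch.optim.Adam([fused], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(feature_net(fused), target)
        loss.backward()          # backpropagate into the composite image itself
        optimizer.step()
        fused.data.clamp_(0, 1)  # keep pixel values in a valid range
    return fused.detach()
```

In this sketch the network weights stay fixed and only the composite image is optimized, which mirrors the non-end-to-end character described in the abstract.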