Journal of Real-Time Image Processing, vol. 21, no. 6, 2024 (SCI-Expanded)
Taking photographs under low ambient light can be challenging because camera sensors cannot gather sufficient light, resulting in dark images with increased noise and reduced image quality. Standard photography techniques and traditional enhancement methods often fail to provide satisfactory results for images captured under extremely low ambient light. To address this problem, data-driven methods have been proposed to model the complex non-linear relationship between extremely dark and long-exposure images. Recently, burst photography has attracted interest as a way to improve single-image low-light enhancement by providing more information about the scene. In this study, we propose a novel unified fusion and enhancement model inspired by recent advances in learning-based burst image processing. Our model processes a burst set of raw input images across multiple scales to fuse complementary information and predict enhancements over the fused representation, thereby producing images with longer effective exposure. Additionally, we introduce a new data augmentation technique for training, the amplification ratio scaling multiplier, to further improve generalization. Experimental results demonstrate that our model achieves state-of-the-art performance on the perceptual metric LPIPS while maintaining highly competitive scores on the distortion metrics PSNR and SSIM compared to existing low-light burst image enhancement techniques.
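The amplification-ratio augmentation mentioned above can be illustrated with a minimal sketch: short-exposure raw frames are typically brightened by the long/short exposure-time ratio, and jittering that ratio with a random multiplier during training exposes the model to imperfect amplification. The function name, scale range, and synthetic data below are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np

def amplify_with_ratio_scaling(raw, exposure_ratio, rng, scale_range=(0.8, 1.2)):
    """Amplify a normalized short-exposure raw frame with a jittered ratio.

    `exposure_ratio` is the long/short exposure-time ratio; a random
    multiplier drawn from `scale_range` perturbs it so the network sees
    a spread of amplification levels (assumed augmentation scheme).
    """
    multiplier = rng.uniform(*scale_range)
    amplified = raw * exposure_ratio * multiplier
    # Clip to the valid normalized intensity range after amplification.
    return np.clip(amplified, 0.0, 1.0)

rng = np.random.default_rng(0)
# A toy burst of 4 dark raw frames (4 Bayer channels, 64x64 pixels).
burst = [rng.random((4, 64, 64), dtype=np.float32) * 0.01 for _ in range(4)]
augmented = [amplify_with_ratio_scaling(frame, exposure_ratio=100.0, rng=rng)
             for frame in burst]
```

At test time the multiplier would simply be fixed to 1 so the nominal exposure ratio is applied unchanged.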