Abstract:
Low-light image (LLI) enhancement is an important image processing task that aims to improve the illumination of images captured under low-light conditions. Recently, remarkable progress has been made in applying deep learning (DL) approaches to LLI enhancement. In this thesis, we present a concise yet comprehensive review and comparative study of the most recent DL models for LLI enhancement. We address LLI enhancement in two ways: i) standalone, as a separate task, and ii) end-to-end, as a pre-processing stage embedded within a high-level computer vision task, namely object detection and classification. We also conduct a feature analysis of DL feature maps extracted from normal-light, low-light, and enhanced images, and perform occlusion experiments to better understand the effect of enhancement on object detection and classification.

We then address a common shortcoming of these models: they are designed as standalone solutions, without considering the impact of enhancement on high-level computer vision tasks such as object classification. Our review and empirical evaluations show that improving LLI visual quality does not necessarily correlate with improved object detection and classification performance, and may even degrade it, especially when the enhanced images contain severe artifacts. To address this problem, we propose a new LLI enhancement model that performs image-to-frequency filter learning and is designed for seamless integration into classification models. Through this integration, the classification model acquires an internal enhancement capability and is jointly trained to optimize both enhancement and classification performance.

We conduct a large battery of experiments, involving 76 testers, to evaluate the LLI enhancement quality of our approach. When evaluated as a standalone enhancement model, our solution consistently ranks first or second among five state-of-the-art enhancement techniques, both quantitatively and qualitatively. When embedded within a classification model, our solution achieves an average improvement of 5.5% in classification accuracy compared with the traditional pipeline of separate enhancement followed by classification. The results also demonstrate robust classification performance on both low-light and normal-light images.