TY - JOUR
T1 - Enlightening Low-Light Images with Dynamic Guidance for Context Enrichment
AU - Zhu, Lingyu
AU - Yang, Wenhan
AU - Chen, Baoliang
AU - Lu, Fangbo
AU - Wang, Shiqi
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/8/1
Y1 - 2022/8/1
AB - Images acquired in low-light conditions suffer from a series of visual quality degradations, e.g., low visibility, degraded contrast, and intense noise. These complicated, context-dependent degradations (e.g., noise in smooth regions, over-exposure in well-exposed regions, and low contrast around edges) pose major challenges to low-light image enhancement. Herein, we propose a new methodology that imposes a learnable guidance map derived from signal and deep priors, enabling the deep neural network to enhance low-light images adaptively in a region-dependent manner. The enhancement capability of the learnable guidance map is further exploited through multi-scale dilated context collaboration, yielding contextually enriched feature representations extracted by the model under various receptive fields. By assimilating the intrinsic perceptual information from the learned guidance map, the network generates richer and more realistic textures. Extensive experiments on real low-light images demonstrate the effectiveness of our method, which delivers superior results both quantitatively and qualitatively. The code is available at https://github.com/lingyzhu0101/GEMSC to facilitate future research.
KW - contextual feature
KW - guidance map
KW - Low-light image enhancement
UR - https://www.scopus.com/pages/publications/85123692097
U2 - 10.1109/TCSVT.2022.3146731
DO - 10.1109/TCSVT.2022.3146731
M3 - Article
AN - SCOPUS:85123692097
SN - 1051-8215
VL - 32
SP - 5068
EP - 5079
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 8
ER -