Enlightening Low-Light Images with Dynamic Guidance for Context Enrichment

  • Lingyu Zhu
  • Wenhan Yang
  • Baoliang Chen
  • Fangbo Lu
  • Shiqi Wang*
  • *Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Images acquired in low-light conditions suffer from a series of visual quality degradations, e.g., low visibility, degraded contrast, and intense noise. These compounded degradations vary with context (e.g., noise in smooth regions, over-exposure in well-exposed regions, and low contrast around edges) and pose major challenges to low-light image enhancement. Herein, we propose a new methodology that imposes a learnable guidance map derived from signal and deep priors, enabling the deep neural network to adaptively enhance low-light images in a region-dependent manner. The enhancement capability of the learnable guidance map is further exploited through multi-scale dilated context collaboration, yielding contextually enriched feature representations extracted by the model with various receptive fields. By assimilating the intrinsic perceptual information from the learned guidance map, richer and more realistic textures are generated. Extensive experiments on real low-light images demonstrate the effectiveness of our method, which delivers superior results quantitatively and qualitatively. The code is available at https://github.com/lingyzhu0101/GEMSC to facilitate future research.
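The abstract does not give implementation details, so the following is only a toy NumPy sketch of the two ideas it names: a guidance map derived from a signal prior that modulates enhancement region-dependently, and fusion of contexts gathered at several dilation rates. All function names, the inverted-illumination prior, and the five-tap "dilated" averaging operator are illustrative assumptions, not the authors' actual network.

```python
import numpy as np

def signal_prior_guidance(img):
    """Hypothetical signal-prior guidance map: the inverted illumination
    estimate (max over RGB), so darker regions receive stronger guidance."""
    illum = img.max(axis=-1, keepdims=True)  # H x W x 1 in [0, 1]
    return 1.0 - illum

def dilated_context(feat, dilation):
    """Toy dilated-context operator: average each pixel with its four
    neighbours at the given dilation, padding by edge replication.
    Larger dilations see a wider receptive field at no extra cost."""
    d = dilation
    p = np.pad(feat, ((d, d), (d, d), (0, 0)), mode="edge")
    h, w, _ = feat.shape
    centre = p[d:d + h, d:d + w]
    up     = p[0:h,       d:d + w]
    down   = p[2 * d:2 * d + h, d:d + w]
    left   = p[d:d + h,   0:w]
    right  = p[d:d + h,   2 * d:2 * d + w]
    return (centre + up + down + left + right) / 5.0

def enhance(img, dilations=(1, 2, 4)):
    """Region-dependent enhancement sketch: fuse contexts from several
    dilation rates, then add them back weighted by the guidance map,
    so dark regions are lifted while well-exposed regions are preserved."""
    g = signal_prior_guidance(img)                                  # H x W x 1
    ctx = np.mean([dilated_context(img, d) for d in dilations], axis=0)
    return np.clip(img + g * ctx, 0.0, 1.0)
```

In this sketch a uniformly dark input (e.g., intensity 0.2) is brightened, while an already well-exposed input has guidance near zero and passes through almost unchanged, mirroring the region-dependent behaviour the abstract describes.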

Original language: English
Pages (from-to): 5068-5079
Number of pages: 12
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 32
Issue number: 8
State: Published - 1 Aug 2022
Externally published: Yes

Keywords

  • contextual feature
  • guidance map
  • low-light image enhancement

