TY - JOUR
T1 - RCNet
T2 - Deep Recurrent Collaborative Network for Multi-View Low-Light Image Enhancement
AU - Luo, Hao
AU - Chen, Baoliang
AU - Zhu, Lingyu
AU - Chen, Peilin
AU - Wang, Shiqi
N1 - Publisher Copyright:
© 1999-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Scene observation from multiple perspectives brings a more comprehensive visual experience. However, acquiring multiple views in the dark causes highly correlated views to become alienated, making it challenging to improve scene understanding with auxiliary views. Recent single-image enhancement methods may not deliver consistently desirable restoration across all views because they ignore potential feature correspondences among views. To alleviate this issue, we make the first attempt to investigate multi-view low-light image enhancement. First, we construct a new dataset called Multi-View Low-light Triplets (MVLT), comprising 1,860 pairs of triple images with large illumination ranges and wide noise distributions. Each triplet captures the same scene from three viewpoints. Second, we propose a multi-view enhancement framework based on a Recurrent Collaborative Network (RCNet). To exploit similar texture correspondences across views, we design a recurrent feature enhancement, alignment, and fusion (ReEAF) module, in which intra-view feature enhancement (Intra-view EN) followed by inter-view feature alignment and fusion (Inter-view AF) models intra-view and inter-view feature propagation via multi-view collaboration. Additionally, two modules, enhancement-to-alignment (E2A) and alignment-to-enhancement (A2E), are developed to enable interaction between Intra-view EN and Inter-view AF, utilizing attentive feature weighting and sampling for enhancement and alignment, respectively. Experimental results demonstrate that our RCNet significantly outperforms state-of-the-art methods.
AB - Scene observation from multiple perspectives brings a more comprehensive visual experience. However, acquiring multiple views in the dark causes highly correlated views to become alienated, making it challenging to improve scene understanding with auxiliary views. Recent single-image enhancement methods may not deliver consistently desirable restoration across all views because they ignore potential feature correspondences among views. To alleviate this issue, we make the first attempt to investigate multi-view low-light image enhancement. First, we construct a new dataset called Multi-View Low-light Triplets (MVLT), comprising 1,860 pairs of triple images with large illumination ranges and wide noise distributions. Each triplet captures the same scene from three viewpoints. Second, we propose a multi-view enhancement framework based on a Recurrent Collaborative Network (RCNet). To exploit similar texture correspondences across views, we design a recurrent feature enhancement, alignment, and fusion (ReEAF) module, in which intra-view feature enhancement (Intra-view EN) followed by inter-view feature alignment and fusion (Inter-view AF) models intra-view and inter-view feature propagation via multi-view collaboration. Additionally, two modules, enhancement-to-alignment (E2A) and alignment-to-enhancement (A2E), are developed to enable interaction between Intra-view EN and Inter-view AF, utilizing attentive feature weighting and sampling for enhancement and alignment, respectively. Experimental results demonstrate that our RCNet significantly outperforms state-of-the-art methods.
KW - collaborative network
KW - inter-view alignment & fusion
KW - intra-view enhancement
KW - Multi-view low-light enhancement
UR - https://www.scopus.com/pages/publications/105002267639
U2 - 10.1109/TMM.2024.3521760
DO - 10.1109/TMM.2024.3521760
M3 - Article
AN - SCOPUS:105002267639
SN - 1520-9210
VL - 27
SP - 2001
EP - 2014
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
ER -