TY - GEN
T1 - Exploiting Long and Short Temporal Dependence for Low-Light Video Enhancement
AU - Luo, Hao
AU - Zhu, Lingyu
AU - Mao, Yudong
AU - Li, Yixuan
AU - Zhong, Zhiwei
AU - Wang, Shanshe
AU - Wang, Shiqi
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
AB - Existing learning-based methods for low-light video enhancement often lack temporal coherence because they rarely consider intrinsic temporal dependence. To address this issue, we propose the Long-Short Temporal Filtering Network (TFNet), which learns the mapping from low-light videos to normal-light ones through a carefully designed data-centric strategy and a refined architecture. On the data side, TFNet incorporates both long-range and short-range temporal dependence, effectively capturing temporal information. On the model side, TFNet introduces the Temporal-aware Attentional Filtering (TAF) module, which estimates and adaptively combines filtering kernels for guided filtering of the middle frame's features. To further refine the filtered features, cascaded Grouped Attention (GA) blocks are applied in a grouped attention strategy. Experimental results on benchmark datasets demonstrate the superiority of TFNet over state-of-the-art methods in terms of video frame quality and brightness consistency.
KW - Grouped Attention
KW - Low-Light Video Enhancement
KW - Temporal Dependence
KW - Temporal-Aware Filtering
UR - https://www.scopus.com/pages/publications/105022609327
U2 - 10.1109/ICME59968.2025.11209427
DO - 10.1109/ICME59968.2025.11209427
M3 - Conference contribution
AN - SCOPUS:105022609327
T3 - Proceedings - IEEE International Conference on Multimedia and Expo
BT - 2025 IEEE International Conference on Multimedia and Expo
PB - IEEE Computer Society
T2 - 2025 IEEE International Conference on Multimedia and Expo, ICME 2025
Y2 - 30 June 2025 through 4 July 2025
ER -