Enhanced Context Mining and Filtering for Learned Video Compression

  • Haifeng Guo
  • Sam Kwong*
  • Dongjie Ye
  • Shiqi Wang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The Deep Contextual Video Compression (DCVC) framework follows a conditional coding paradigm, in which a context is extracted and used as the condition for both the contextual encoder-decoder and the entropy model. In this paper, we propose enhanced context mining and filtering to improve the compression efficiency of DCVC. First, since the context in DCVC is generated without supervision and redundancy may exist among its channels, we propose an enhanced context mining model that mitigates cross-channel redundancy to obtain superior context features. Second, we introduce a transformer-based enhancement network as a filtering module that captures long-distance dependencies and further improves compression efficiency. This enhancement network adopts a full-resolution pipeline and computes self-attention across the channel dimension. By combining the local modeling ability of the enhanced context mining model with the non-local modeling ability of the transformer-based enhancement network, our model outperforms the low-delay P (LDP) configuration of Versatile Video Coding (VVC), achieving an average bit saving of 6.7% in terms of MS-SSIM.
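The channel-wise self-attention mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation: the projection weights, the L2 normalization, and the function names below are illustrative assumptions. The key point it demonstrates is that attending over channels yields a (C, C) attention map whose cost grows linearly with the spatial size, which is what makes a full-resolution pipeline tractable.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_self_attention(x, wq, wk, wv):
    """Sketch of self-attention across the channel dimension (assumed form).

    x: feature map of shape (C, H, W); wq/wk/wv: (C, C) projection weights
    (playing the role of 1x1 convolutions). The attention map is (C, C),
    costing O(C^2 * HW) instead of the O((HW)^2 * C) of spatial attention.
    """
    c, h, w = x.shape
    tokens = x.reshape(c, h * w)                  # each channel is one token
    q, k, v = wq @ tokens, wk @ tokens, wv @ tokens
    # L2-normalize so dot products behave like cosine similarities
    # (an assumption here, used to keep the logits well scaled).
    q /= np.linalg.norm(q, axis=1, keepdims=True) + 1e-6
    k /= np.linalg.norm(k, axis=1, keepdims=True) + 1e-6
    attn = softmax(q @ k.T)                       # (C, C) channel-to-channel weights
    return (attn @ v).reshape(c, h, w)            # back to full resolution

# Tiny example on a random feature map.
rng = np.random.default_rng(0)
c, h, w = 4, 8, 8
out = channel_self_attention(rng.normal(size=(c, h, w)),
                             rng.normal(size=(c, c)),
                             rng.normal(size=(c, c)),
                             rng.normal(size=(c, c)))
print(out.shape)  # (4, 8, 8)
```

Because the spatial dimension is never attended over, the same code runs unchanged on any H and W, consistent with a full-resolution filtering module.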

Original language: English
Pages (from-to): 3814-3826
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Volume: 26
DOIs
State: Published - 2024
Externally published: Yes

Keywords

  • end-to-end training approach
  • enhanced context mining
  • in-loop filtering
  • learned video compression
