TY - GEN
T1 - PUGCQ
T2 - 29th ACM International Conference on Multimedia, MM 2021
AU - Li, Guo
AU - Chen, Baoliang
AU - Zhu, Lingyu
AU - He, Qinwen
AU - Fan, Hongfei
AU - Wang, Shiqi
N1 - Publisher Copyright:
© 2021 ACM.
PY - 2021/10/17
Y1 - 2021/10/17
N2 - Recent years have witnessed a surge of professional user-generated content (PUGC) based video services, coinciding with the accelerated proliferation of video acquisition devices such as mobile phones, wearable cameras, and unmanned aerial vehicles. Different from traditional UGC videos shot impromptu, PUGC videos produced by professional users tend to be carefully designed and edited, receiving high popularity with relatively satisfactory play counts. In this paper, we conduct a systematic and comprehensive study on the perceptual quality of PUGC videos and introduce a database consisting of 10,000 PUGC videos with subjective ratings. In particular, during the subjective testing, we collect human opinions based upon not only the mean opinion score (MOS) but also the attributes that could potentially influence visual quality, including face, noise, blur, brightness, and color. We analyze the large-scale PUGC database with a series of video quality assessment (VQA) algorithms, and a dedicated baseline model based on a pretrained deep neural network is further presented. The cross-dataset experiments reveal a large domain gap between PUGC and traditional user-generated videos, which is critical in learning-based VQA. These results shed light on developing next-generation PUGC quality assessment algorithms with desired properties, including promising generalization capability, high accuracy, and effectiveness in perceptual optimization. The dataset and the codes are released at https://github.com/wlkdb/pugcq_create.
AB - Recent years have witnessed a surge of professional user-generated content (PUGC) based video services, coinciding with the accelerated proliferation of video acquisition devices such as mobile phones, wearable cameras, and unmanned aerial vehicles. Different from traditional UGC videos shot impromptu, PUGC videos produced by professional users tend to be carefully designed and edited, receiving high popularity with relatively satisfactory play counts. In this paper, we conduct a systematic and comprehensive study on the perceptual quality of PUGC videos and introduce a database consisting of 10,000 PUGC videos with subjective ratings. In particular, during the subjective testing, we collect human opinions based upon not only the mean opinion score (MOS) but also the attributes that could potentially influence visual quality, including face, noise, blur, brightness, and color. We analyze the large-scale PUGC database with a series of video quality assessment (VQA) algorithms, and a dedicated baseline model based on a pretrained deep neural network is further presented. The cross-dataset experiments reveal a large domain gap between PUGC and traditional user-generated videos, which is critical in learning-based VQA. These results shed light on developing next-generation PUGC quality assessment algorithms with desired properties, including promising generalization capability, high accuracy, and effectiveness in perceptual optimization. The dataset and the codes are released at https://github.com/wlkdb/pugcq_create.
KW - no-reference video quality assessment
KW - professional user-generated content
KW - video quality assessment
UR - https://www.scopus.com/pages/publications/85119329853
U2 - 10.1145/3474085.3475183
DO - 10.1145/3474085.3475183
M3 - Conference contribution
AN - SCOPUS:85119329853
T3 - MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia
SP - 3728
EP - 3736
BT - MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 20 October 2021 through 24 October 2021
ER -