Open Source AI Project


DVQA is a state-of-the-art deep learning-based full-reference video quality assessment algorithm designed by Tencent's Multimedia Lab.


DVQA represents a cutting-edge approach to video quality assessment, developed by Tencent's Multimedia Lab to tackle the complexity of evaluating video quality across the entire video chain. This deep learning-based full-reference algorithm focuses on quantifying the video watching experience for users, an aspect that has long been difficult to measure accurately. Traditional methods of video quality assessment fall into two categories: objective evaluation, based on predefined criteria, and subjective evaluation, based on human perception. These methods often fall short because subjective studies are time-consuming, costly, and susceptible to human bias, while simple objective criteria correlate poorly with perception; DVQA aims to offer a more effective and efficient alternative.

The innovation behind DVQA lies in its foundation: a vast subjective quality database curated through an online platform designed for subjective quality evaluation. This database serves as the training ground for deep learning-based objective quality assessment algorithms, which are fine-tuned with the subjective data collected. Such a methodology ensures that DVQA not only speeds up the process of video quality assessment but also enhances its accuracy, marrying the efficiency of objective analysis with the reliability of subjective feedback.

DVQA is specifically optimized for Professionally Generated Content (PGC) videos, showcasing its relevance in today’s content-rich digital environment. Developed using Python and leveraging PyTorch for its deep learning components, DVQA is marked by its modular design. This design philosophy ensures ease of integration with emerging deep learning technologies, offering the flexibility to customize models according to specific needs and the capability to train and test with novel datasets.

At the heart of DVQA’s technical prowess is the C3DVQA network structure, which employs two-dimensional convolutions to extract spatial features from individual frames, followed by four layers of three-dimensional convolutions dedicated to learning spatio-temporal features. This structure is adept at simulating the human eye’s perception of video residuals, culminating in a pooling layer and a fully connected layer designed to learn the nonlinear regression relationship between perceived quality and the target quality score range.
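The structure described above can be sketched in PyTorch. This is a minimal illustration, not the official C3DVQA implementation: the layer counts for the 2D stage, all channel widths, and the input shapes are assumptions chosen to keep the example small, while the overall flow (per-frame 2D convolutions on the residual, four 3D convolution layers for spatio-temporal features, then pooling and a fully connected regression head) follows the description.

```python
# Minimal sketch of a C3DVQA-style network (NOT the official implementation).
# Channel widths, kernel sizes, and input shapes are illustrative assumptions.
import torch
import torch.nn as nn


class C3DVQASketch(nn.Module):
    def __init__(self):
        super().__init__()
        # 2D convolutions extract spatial features from each frame's residual
        self.spatial = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # four 3D convolution layers learn spatio-temporal features
        self.temporal = nn.Sequential(
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv3d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)  # global spatio-temporal pooling
        self.head = nn.Linear(64, 1)         # regression to a quality score

    def forward(self, distorted, reference):
        # residual between reference and distorted clips, shape (B, T, 1, H, W)
        residual = reference - distorted
        b, t, c, h, w = residual.shape
        feat = self.spatial(residual.reshape(b * t, c, h, w))  # per-frame 2D features
        # restack frames into a (B, C, T, H, W) volume for the 3D convolutions
        feat = feat.reshape(b, t, *feat.shape[1:]).permute(0, 2, 1, 3, 4)
        feat = self.temporal(feat)
        return self.head(self.pool(feat).flatten(1)).squeeze(1)


# toy usage: a batch of 2 clips, each 8 grayscale 64x64 frames
distorted = torch.rand(2, 8, 1, 64, 64)
reference = torch.rand(2, 8, 1, 64, 64)
scores = C3DVQASketch()(distorted, reference)  # one score per clip
```

The key design point mirrored here is the split of labor: 2D convolutions are cheap and capture per-frame detail, while the 3D stage models how residual errors evolve over time, which is what the human eye reacts to.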

The efficacy of DVQA has been rigorously validated against prominent video quality datasets such as LIVE and CSIQ, where it has demonstrated superior performance over well-known full-reference quality assessment algorithms like PSNR, MOVIE, ST-MAD, VMAF, and DeepVQA. Its successful deployment in various Tencent products, including Tencent Meeting, underscores its practical value. In these applications, DVQA plays a pivotal role in monitoring and enhancing the user experience quality, ensuring that video content meets comprehensive quality standards. This blend of innovative technology, practical application, and proven effectiveness positions DVQA as a leading solution in the quest for optimal video quality assessment.
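Comparisons like the one above are conventionally made by correlating a metric's predicted scores with the subjective mean opinion scores (MOS) of a dataset, typically via SROCC (rank order) and PLCC (linear correlation). The sketch below shows this standard evaluation step; the score values are invented purely for illustration, not taken from any DVQA result.

```python
# Hedged sketch of the standard VQA validation step: correlate predicted
# scores with subjective MOS. All numbers below are made up for illustration.
from scipy.stats import pearsonr, spearmanr

mos = [85.0, 72.0, 64.0, 40.0, 25.0]        # hypothetical subjective scores
predicted = [0.92, 0.81, 0.70, 0.45, 0.30]  # hypothetical metric outputs

srocc, _ = spearmanr(mos, predicted)  # monotonic (rank) agreement
plcc, _ = pearsonr(mos, predicted)    # linear agreement
print(f"SROCC={srocc:.3f}  PLCC={plcc:.3f}")
```

A higher SROCC/PLCC against datasets such as LIVE and CSIQ is what "superior performance over PSNR, MOVIE, ST-MAD, VMAF, and DeepVQA" means in practice.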
