One-for-all: An Efficient Variable Convolution Neural Network for In-loop Filter of VVC

Zhijie Huang, Jun Sun, Xiaopeng Guo and Mingyu Shang

Abstract:

Recently, many convolutional neural network (CNN) based in-loop filters have been proposed to improve coding efficiency. However, most existing CNN-based filters train and deploy multiple networks for different quantization parameters (QPs) and frame types (FTs), which drastically increases the resources required to train these models and the memory burden on the video codec. In this paper, we propose a novel variable CNN (VCNN) based in-loop filter for VVC that can effectively handle compressed videos with different QPs and FTs via a single model. Specifically, an efficient and flexible attention module is developed to recalibrate features according to QPs or FTs. We then embed this module into the residual block so that the recalibrated features can be continuously exploited during residual learning. To minimize information loss across the entire network, we utilize a residual feature aggregation (RFA) module for more efficient feature extraction. On this basis, we design an efficient network architecture, VCNN, that not only effectively reduces compression artifacts but also adapts to various QPs and FTs. To address the imbalance of training data across QPs and FTs and to improve the robustness of the model, a focal mean square error loss function is employed to train the proposed network. We then integrate the VCNN into VVC as an additional in-loop filtering tool after the deblocking filter. Extensive experimental results show that our VCNN achieves on average 3.63%, 4.36%, 4.23%, and 3.56% BD-rate reduction under the all-intra, low-delay P, low-delay B, and random access configurations, respectively, which even outperforms QP-separate models.
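To make the conditioning idea concrete, the following is a minimal PyTorch sketch of how a QP/FT-conditioned attention module might be embedded in a residual block, in the spirit the abstract describes. The layer sizes, the scalar QP encoding, and all module names are our assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class QPAttention(nn.Module):
    """Predicts channel-wise recalibration weights from the (normalized)
    QP or a frame-type code. Hypothetical layer sizes."""
    def __init__(self, channels, cond_dim=1):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(cond_dim, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, x, cond):
        # cond: (N, cond_dim), e.g. QP scaled to [0, 1]
        w = self.fc(cond).unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
        return x * w  # channel-wise feature recalibration

class ResidualAttnBlock(nn.Module):
    """Residual block with the attention module embedded, so the
    QP/FT-conditioned features propagate through residual learning."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)
        self.attn = QPAttention(channels)

    def forward(self, x, cond):
        y = self.conv2(self.relu(self.conv1(x)))
        y = self.attn(y, cond)  # recalibrate before the skip connection
        return x + y

# Usage: one model handles any QP by changing only the conditioning input.
block = ResidualAttnBlock(64)
feat = torch.randn(2, 64, 32, 32)
qp = torch.full((2, 1), 32.0 / 63.0)  # QP 32 normalized by VVC's max QP 63
out = block(feat, qp)
```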
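The focal mean square error loss can be sketched in the same spirit: per-element squared errors are re-weighted by their relative magnitude so that hard samples (e.g., those from under-represented QPs or FTs) contribute more to the gradient. The normalization scheme and the focusing exponent `gamma` below are assumptions, not the paper's exact formulation.

```python
import torch

def focal_mse(pred, target, gamma=1.0, eps=1e-8):
    """Focal MSE sketch: larger errors receive larger weights,
    analogous to focal loss for classification."""
    se = (pred - target) ** 2
    # Normalize by the batch maximum so weights lie in [0, 1];
    # detach() keeps the weighting out of the gradient path.
    w = (se / (se.max().detach() + eps)) ** gamma
    return (w * se).mean()
```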
Note: this paper has been accepted by IEEE TCSVT.