Jointly learning perceptually heterogeneous features for blind 3D video quality assessment

Publication Type:
Journal Article
Citation:
Neurocomputing, 2019, 332, pp. 298–304
Issue Date:
2019-03-07
© 2018 Elsevier B.V. 3D video quality assessment (3D-VQA) is essential to various 3D video processing applications. However, how to exploit perceptual multi-channel video information to improve 3D-VQA under different distortion categories and degrees, especially asymmetric distortions, has not been well investigated. In this paper, we propose a new blind 3D-VQA metric that jointly learns perceptually heterogeneous features. First, a binocular spatio-temporal internal generative mechanism (BST-IGM) is proposed to decompose the views of a 3D video into multi-channel videos. Then, perceptually heterogeneous features that characterize the 3D video information are extracted with the proposed multi-channel natural video statistics (MNVS) model. Finally, a robust AdaBoosting Radial Basis Function (RBF) neural network maps the features to an overall 3D video quality score. Extensive evaluations on two benchmark databases demonstrate that the proposed algorithm significantly outperforms several state-of-the-art quality metrics in terms of prediction accuracy and robustness.
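The abstract does not detail the regression stage, so the sketch below shows one plausible realization of an AdaBoost-style ensemble of RBF network regressors mapping MNVS feature vectors to quality scores. The class and function names, the k-means center selection, the linear-loss AdaBoost.R2 loop, and the weighted-mean combination are all illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (assumptions labeled) of an AdaBoost-style ensemble of
# RBF network regressors for mapping feature vectors to quality scores.
import numpy as np
from sklearn.cluster import KMeans


class RBFRegressor:
    """Single RBF network: k-means centers, Gaussian hidden layer,
    output weights solved by ordinary least squares."""

    def __init__(self, n_centers=20, gamma=1.0):
        self.n_centers = n_centers
        self.gamma = gamma

    def _design(self, X):
        # Gaussian activations between samples and learned centers.
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers_ = KMeans(self.n_centers, n_init=10).fit(X).cluster_centers_
        Phi = self._design(X)
        # Output-layer weights by least squares.
        self.w_, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def predict(self, X):
        return self._design(X) @ self.w_


def adaboost_r2(X, y, n_rounds=10, **rbf_kw):
    """Simplified AdaBoost.R2 loop over RBF regressors (a sketch;
    the combination rule here is a weighted mean, not the classic
    weighted median). X: (n, d) feature array, y: (n,) labels."""
    n = len(X)
    w = np.full(n, 1.0 / n)
    models, betas = [], []
    rng = np.random.default_rng(0)
    for _ in range(n_rounds):
        idx = rng.choice(n, n, p=w)              # weighted bootstrap resample
        m = RBFRegressor(**rbf_kw).fit(X[idx], y[idx])
        err = np.abs(m.predict(X) - y)
        L = err / (err.max() + 1e-12)            # linear loss in [0, 1]
        Lbar = (w * L).sum()
        if Lbar >= 0.5:                          # stop if the learner is too weak
            break
        beta = Lbar / (1.0 - Lbar)
        w *= beta ** (1.0 - L)                   # down-weight well-fit samples
        w /= w.sum()
        models.append(m)
        betas.append(beta)

    def predict(Xq):
        preds = np.array([m.predict(Xq) for m in models])
        coef = np.log(1.0 / np.array(betas))     # more accurate learners weigh more
        return (coef[:, None] * preds).sum(0) / coef.sum()

    return predict


# Hypothetical usage: X_train holds one MNVS feature vector per 3D video,
# y_train the corresponding subjective quality (MOS) labels.
# predict = adaboost_r2(X_train, y_train, n_rounds=20, n_centers=30, gamma=0.5)
# scores = predict(X_test)
```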