
Patent application title: Objective assessment method for stereoscopic video quality based on wavelet transform

Inventors: Gangyi Jiang (Ningbo, CN); Yang Song (Ningbo, CN); Zongju Peng (Ningbo, CN); Fen Chen (Ningbo, CN); Kaihui Zheng (Ningbo, CN); Shanshan Liu (Ningbo, CN)
Assignees: NINGBO UNIVERSITY
IPC8 Class: AG06T700FI
USPC Class: 1/1
Publication date: 2016-10-13
Patent application number: 20160300339



Abstract:

An objective assessment method for stereoscopic video quality based on the wavelet transform fuses the brightness values of the pixels in the left viewpoint image and the right viewpoint image of a stereoscopic image, in a manner of binocular brightness information fusion, into a binocular fusion brightness image of the stereoscopic image. This binocular brightness information fusion overcomes, to some extent, the difficulty of assessing stereoscopic perception quality in stereoscopic video quality assessment and effectively increases the accuracy of the objective quality assessment of stereoscopic video. When weighting the quality of each frame group in the binocular fusion brightness image video corresponding to the distorted stereoscopic video, the method fully considers the sensitivity of human visual characteristics to the various types of information in the video, and determines the weight of each frame group based on the motion intensity and the brightness difference.

Claims:

1. An objective assessment method for a stereoscopic video quality based on a wavelet transform, comprising steps of:

① representing an original undistorted stereoscopic video by $V_{org}$, and representing a distorted stereoscopic video to-be-assessed by $V_{dis}$;

② calculating a binocular fusion brightness of each pixel in each frame of a stereoscopic image of the $V_{org}$; denoting the binocular fusion brightness of a first pixel having coordinates of $(u,v)$ in an $f$th frame of the stereoscopic image of the $V_{org}$ as $B_{org}^{f}(u,v)$,

$$B_{org}^{f}(u,v)=\sqrt{\left(I_{org}^{R,f}(u,v)\right)^{2}+\left(I_{org}^{L,f}(u,v)\right)^{2}+2\left(I_{org}^{R,f}(u,v)\times I_{org}^{L,f}(u,v)\times\cos\partial\right)}\times\lambda;$$

then according to the respective binocular fusion brightnesses of all the pixels in each frame of the stereoscopic image of the $V_{org}$, obtaining a binocular fusion brightness image of each frame of the stereoscopic image in the $V_{org}$; denoting the binocular fusion brightness image of the $f$th frame of the stereoscopic image in the $V_{org}$ as $B_{org}^{f}$, wherein a second pixel having the coordinates of $(u,v)$ in the $B_{org}^{f}$ has a pixel value of the $B_{org}^{f}(u,v)$; according to the respective binocular fusion brightness images of all the stereoscopic images in the $V_{org}$, obtaining a binocular fusion brightness image video corresponding to the $V_{org}$, denoted as $B_{org}$, wherein an $f$th frame of the binocular fusion brightness image in the $B_{org}$ is the $B_{org}^{f}$; and

calculating a binocular fusion brightness of each pixel in each frame of a stereoscopic image of the $V_{dis}$; denoting the binocular fusion brightness of a third pixel having the coordinates of $(u,v)$ in an $f$th frame of the stereoscopic image of the $V_{dis}$ as $B_{dis}^{f}(u,v)$,

$$B_{dis}^{f}(u,v)=\sqrt{\left(I_{dis}^{R,f}(u,v)\right)^{2}+\left(I_{dis}^{L,f}(u,v)\right)^{2}+2\left(I_{dis}^{R,f}(u,v)\times I_{dis}^{L,f}(u,v)\times\cos\partial\right)}\times\lambda;$$

then according to the respective binocular fusion brightnesses of all the pixels in each frame of the stereoscopic image of the $V_{dis}$, obtaining a binocular fusion brightness image of each frame of the stereoscopic image in the $V_{dis}$; denoting the binocular fusion brightness image of the $f$th frame of the stereoscopic image in the $V_{dis}$ as $B_{dis}^{f}$, wherein a fourth pixel having the coordinates of $(u,v)$ in the $B_{dis}^{f}$ has a pixel value of the $B_{dis}^{f}(u,v)$; according to the respective binocular fusion brightness images of all the stereoscopic images in the $V_{dis}$, obtaining a binocular fusion brightness image video corresponding to the $V_{dis}$, denoted as $B_{dis}$, wherein an $f$th frame of the binocular fusion brightness image in the $B_{dis}$ is the $B_{dis}^{f}$; wherein:

$1\le f\le N_{f}$, wherein the $f$ has an initial value of 1; the $N_{f}$ represents a total frame number of the stereoscopic images respectively in the $V_{org}$ and the $V_{dis}$; $1\le u\le U$, $1\le v\le V$, wherein the $U$ represents a width of the stereoscopic image respectively in the $V_{org}$ and the $V_{dis}$, and the $V$ represents a height of the stereoscopic image respectively in the $V_{org}$ and the $V_{dis}$; the $I_{org}^{R,f}(u,v)$ represents a brightness value of a fifth pixel having the coordinates of $(u,v)$ in a right viewpoint image of the $f$th frame of the stereoscopic image of the $V_{org}$; the $I_{org}^{L,f}(u,v)$ represents a brightness value of a sixth pixel having the coordinates of $(u,v)$ in a left viewpoint image of the $f$th frame of the stereoscopic image of the $V_{org}$; the $I_{dis}^{R,f}(u,v)$ represents a brightness value of a seventh pixel having the coordinates of $(u,v)$ in a right viewpoint image of the $f$th frame of the stereoscopic image of the $V_{dis}$; the $I_{dis}^{L,f}(u,v)$ represents a brightness value of an eighth pixel having the coordinates of $(u,v)$ in a left viewpoint image of the $f$th frame of the stereoscopic image of the $V_{dis}$; the $\partial$ represents a fusion angle; and the $\lambda$ represents a brightness parameter of a display;

③ adopting $2^{n}$ frames of the binocular fusion brightness images as a frame group; respectively dividing the $B_{org}$ and the $B_{dis}$ into $n_{GoF}$ frame groups; denoting an $i$th frame group in the $B_{org}$ as $G_{org}^{i}$; and denoting an $i$th frame group in the $B_{dis}$ as $G_{dis}^{i}$; wherein: the $n$ is an integer in a range of $[3,5]$; $n_{GoF}=\left\lfloor\frac{N_{f}}{2^{n}}\right\rfloor$, wherein the $\lfloor\ \rfloor$ is a round-down symbol; and $1\le i\le n_{GoF}$;

④ processing each frame group in the $B_{org}$ with a one-level three-dimensional wavelet transform, and obtaining eight groups of first sub-band sequences corresponding to each frame group in the $B_{org}$, wherein: the eight groups of the first sub-band sequences comprise four groups of first time-domain high-frequency sub-band sequences and four groups of first time-domain low-frequency sub-band sequences; and each group of the first sub-band sequence comprises $\frac{2^{n}}{2}$ first wavelet coefficient matrixes; and processing each frame group in the $B_{dis}$ with the one-level three-dimensional wavelet transform, and obtaining eight groups of second sub-band sequences corresponding to each frame group in the $B_{dis}$, wherein: the eight groups of the second sub-band sequences comprise four groups of second time-domain high-frequency sub-band sequences and four groups of second time-domain low-frequency sub-band sequences; and each group of the second sub-band sequence comprises $\frac{2^{n}}{2}$ second wavelet coefficient matrixes;

⑤ calculating respective qualities of two groups among the eight groups of the second sub-band sequences corresponding to each frame group in the $B_{dis}$; and denoting a quality of a $j$th group of the second sub-band sequence corresponding to the $G_{dis}^{i}$ as $Q^{i,j}$,

$$Q^{i,j}=\frac{\sum_{k=1}^{K}\mathrm{SSIM}\left(VI_{org}^{i,j,k},VI_{dis}^{i,j,k}\right)}{K},$$

wherein: $j=1,5$; $1\le k\le K$; the $K$ represents a total number of the wavelet coefficient matrixes respectively in each group of the first sub-band sequence corresponding to each frame group in the $B_{org}$ and each group of the second sub-band sequence corresponding to each frame group in the $B_{dis}$, and $K=\frac{2^{n}}{2}$; the $VI_{org}^{i,j,k}$ represents a $k$th first wavelet coefficient matrix of a $j$th group of the first sub-band sequence corresponding to the $G_{org}^{i}$; the $VI_{dis}^{i,j,k}$ represents a $k$th second wavelet coefficient matrix of the $j$th group of the second sub-band sequence corresponding to the $G_{dis}^{i}$; and $\mathrm{SSIM}(\ )$ is a structural similarity calculation function;

⑥ according to the respective qualities of the two groups among the eight groups of the second sub-band sequences corresponding to each frame group in the $B_{dis}$, calculating a quality of each frame group in the $B_{dis}$; and denoting the quality of the $G_{dis}^{i}$ as $Q_{GoF}^{i}$, $Q_{GoF}^{i}=w_{G}\times Q^{i,1}+(1-w_{G})\times Q^{i,5}$, wherein: the $w_{G}$ is a weight of the $Q^{i,1}$; the $Q^{i,1}$ represents the quality of a first group of the second sub-band sequence corresponding to the $G_{dis}^{i}$; and the $Q^{i,5}$ represents the quality of a fifth group of the second sub-band sequence corresponding to the $G_{dis}^{i}$; and

⑦ according to the quality of each frame group in the $B_{dis}$, calculating an objective assessment quality of the $V_{dis}$ and denoting the objective assessment quality of the $V_{dis}$ as $Q_{v}$,

$$Q_{v}=\frac{\sum_{i=1}^{n_{GoF}}w^{i}\times Q_{GoF}^{i}}{\sum_{i=1}^{n_{GoF}}w^{i}},$$

wherein the $w^{i}$ is a weight of the $Q_{GoF}^{i}$.

2. The objective assessment method for the stereoscopic video quality based on the wavelet transform, as recited in claim 1, wherein the $w^{i}$ in the step ⑦ is obtained through steps of:

⑦-1, calculating a motion vector of each pixel in each frame of the binocular fusion brightness image of the $G_{dis}^{i}$ except a first frame of the binocular fusion brightness image, with a reference to the previous frame of the binocular fusion brightness image of each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame of the binocular fusion brightness image;

⑦-2, according to the motion vector of each pixel in each frame of the binocular fusion brightness image of the $G_{dis}^{i}$ except the first frame of the binocular fusion brightness image, calculating a motion intensity of each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame of the binocular fusion brightness image; and denoting the motion intensity of an $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ as $MA^{f'}$,

$$MA^{f'}=\frac{1}{U\times V}\sum_{s=1}^{U}\sum_{t=1}^{V}\sqrt{\left(mv_{x}(s,t)\right)^{2}+\left(mv_{y}(s,t)\right)^{2}};$$

wherein: $2\le f'\le 2^{n}$; the $f'$ has an initial value of 2; $1\le s\le U$, $1\le t\le V$; the $mv_{x}(s,t)$ represents a horizontal component of the motion vector of a pixel having coordinates of $(s,t)$ in the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$, and the $mv_{y}(s,t)$ represents a vertical component of the motion vector of the pixel having the coordinates of $(s,t)$ in the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$;

⑦-3, calculating a motion intensity of the $G_{dis}^{i}$, denoted as $MAavg^{i}$,

$$MAavg^{i}=\frac{\sum_{f'=2}^{2^{n}}MA^{f'}}{2^{n}-1};$$

⑦-4, calculating a background brightness image of each frame of the binocular fusion brightness image in the $G_{dis}^{i}$; denoting the background brightness image of an $f''$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ as $BL_{dis}^{i,f''}$; and denoting a pixel value of a first pixel having coordinates of $(p,q)$ in the $BL_{dis}^{i,f''}$ as $BL_{dis}^{i,f''}(p,q)$,

$$BL_{dis}^{i,f''}(p,q)=\frac{1}{32}\sum_{bi=-2}^{2}\sum_{bj=-2}^{2}I_{dis}^{i,f''}(p+bi,q+bj)\times BO(bi+3,bj+3),$$

wherein: $1\le f''\le 2^{n}$; $3\le p\le U-2$, $3\le q\le V-2$; $-2\le bi\le 2$, $-2\le bj\le 2$; the $I_{dis}^{i,f''}(p+bi,q+bj)$ represents a pixel value of a pixel having coordinates of $(p+bi,q+bj)$ in the $f''$th frame of the binocular fusion brightness image of the $G_{dis}^{i}$; and the $BO(bi+3,bj+3)$ represents an element at a subscript of $(bi+3,bj+3)$ in a $5\times5$ background brightness operator;

⑦-5, calculating a brightness difference image between each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame and its previous frame; denoting the brightness difference image between the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ and the $(f'-1)$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ as $LD_{dis}^{i,f'}$; and denoting a pixel value of a second pixel having the coordinates of $(p,q)$ in the $LD_{dis}^{i,f'}$ as $LD_{dis}^{i,f'}(p,q)$,

$$LD_{dis}^{i,f'}(p,q)=\left(I_{dis}^{i,f'}(p,q)-I_{dis}^{i,f'-1}(p,q)+BL_{dis}^{i,f'}(p,q)-BL_{dis}^{i,f'-1}(p,q)\right)/2,$$

wherein: $2\le f'\le 2^{n}$; $3\le p\le U-2$, $3\le q\le V-2$; the $I_{dis}^{i,f'}(p,q)$ represents a pixel value of a third pixel having the coordinates of $(p,q)$ in the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$; the $I_{dis}^{i,f'-1}(p,q)$ represents a pixel value of a fourth pixel having the coordinates of $(p,q)$ in the $(f'-1)$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$; the $BL_{dis}^{i,f'}(p,q)$ represents a pixel value of a fifth pixel having the coordinates of $(p,q)$ in the background brightness image $BL_{dis}^{i,f'}$ of the $f'$th frame of the binocular fusion brightness image of the $G_{dis}^{i}$; and the $BL_{dis}^{i,f'-1}(p,q)$ represents a pixel value of a sixth pixel having the coordinates of $(p,q)$ in the background brightness image $BL_{dis}^{i,f'-1}$ of the $(f'-1)$th frame of the binocular fusion brightness image of the $G_{dis}^{i}$;

⑦-6, calculating a mean value of the pixel values of all the pixels in the brightness difference image between each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame and its previous frame; denoting the mean value of the pixel values of all the pixels in the $LD_{dis}^{i,f'}$ as $LD^{i,f'}$; calculating a brightness difference value of the $G_{dis}^{i}$ and denoting the brightness difference value of the $G_{dis}^{i}$ as $LDavg^{i}$,

$$LDavg^{i}=\frac{\sum_{f'=2}^{2^{n}}LD^{i,f'}}{2^{n}-1};$$

⑦-7, obtaining a motion intensity vector of the $B_{dis}$ from the respective motion intensities of all the frame groups in the $B_{dis}$ in order, and denoting the motion intensity vector of the $B_{dis}$ as $V_{MAavg}$, $V_{MAavg}=\left[MAavg^{1},MAavg^{2},\ldots,MAavg^{i},\ldots,MAavg^{n_{GoF}}\right]$; obtaining a brightness difference vector of the $B_{dis}$ from the respective brightness difference values of all the frame groups in the $B_{dis}$ in order, and denoting the brightness difference vector of the $B_{dis}$ as $V_{LDavg}$, $V_{LDavg}=\left[LDavg^{1},LDavg^{2},\ldots,LDavg^{i},\ldots,LDavg^{n_{GoF}}\right]$; wherein: the $MAavg^{1}$, the $MAavg^{2}$, and the $MAavg^{n_{GoF}}$ respectively represent the motion intensities of a first frame group, a second frame group and an $n_{GoF}$th frame group in the $B_{dis}$; the $LDavg^{1}$, the $LDavg^{2}$, and the $LDavg^{n_{GoF}}$ respectively represent the brightness difference values of the first frame group, the second frame group and the $n_{GoF}$th frame group in the $B_{dis}$;

⑦-8, processing the $MAavg^{i}$ with a normalization calculation, and obtaining a normalized motion intensity of the $G_{dis}^{i}$, denoted as $v_{MAavg}^{norm,i}$,

$$v_{MAavg}^{norm,i}=\frac{MAavg^{i}-\max\left(V_{MAavg}\right)}{\max\left(V_{MAavg}\right)-\min\left(V_{MAavg}\right)};$$

processing the $LDavg^{i}$ with the normalization calculation, and obtaining a normalized brightness difference value of the $G_{dis}^{i}$, denoted as $v_{LDavg}^{norm,i}$,

$$v_{LDavg}^{norm,i}=\frac{LDavg^{i}-\max\left(V_{LDavg}\right)}{\max\left(V_{LDavg}\right)-\min\left(V_{LDavg}\right)};$$

wherein the $\max(\ )$ is a maximum function and the $\min(\ )$ is a minimum function; and

⑦-9, according to the $v_{MAavg}^{norm,i}$ and the $v_{LDavg}^{norm,i}$, calculating the weight $w^{i}$ of the $Q_{GoF}^{i}$, $w^{i}=\left(1-v_{MAavg}^{norm,i}\right)\times v_{LDavg}^{norm,i}$.

3. The objective assessment method for the stereoscopic video quality based on the wavelet transform, as recited in claim 1, wherein: in the step ⑥, $w_{G}=0.8$.

4. The objective assessment method for the stereoscopic video quality based on the wavelet transform, as recited in claim 2, wherein: in the step ⑥, $w_{G}=0.8$.

Description:

CROSS REFERENCE OF RELATED APPLICATION

[0001] The present application claims priority under 35 U.S.C. 119(a-d) to CN 201510164528.7, filed Apr. 8, 2015.

BACKGROUND OF THE PRESENT INVENTION

[0002] 1. Field of Invention

[0003] The present invention relates to a stereoscopic video quality assessment method, and more particularly to an objective assessment method for a stereoscopic video quality based on a wavelet transform.

[0004] 2. Description of Related Arts

[0005] With the rapid development of video coding and display technology, various types of video systems have been applied increasingly widely, have gained attention, and have gradually become a research focus in the information processing field. Owing to its excellent viewing experience, stereoscopic video has become more and more popular, and the related technologies have been greatly integrated into current social life, such as stereoscopic television, stereoscopic film and naked-eye 3D. However, during the capturing, compression, coding, transmission and displaying of a stereoscopic video, distortions of different degrees and kinds are inevitably introduced by a series of uncontrollable factors. Thus, accurately and effectively measuring video quality plays an important role in promoting the development of the various types of video systems. Stereoscopic video quality assessment is divided into subjective assessment and objective assessment; the key issue in the current stereoscopic video quality assessment field is how to establish an accurate and effective objective assessment model to assess the objective quality of a stereoscopic video. Conventionally, most objective assessment methods for stereoscopic video quality merely apply a plane video quality assessment method separately to the left viewpoint and the right viewpoint; such methods neither handle the relationship between the viewpoints well nor consider the influence of depth perception on the stereoscopic video quality, resulting in poor accuracy. Although some conventional methods consider the relationship between the two eyes, the weighting between the left viewpoint and the right viewpoint is unreasonable and fails to accurately describe the perception characteristics of the human eyes for stereoscopic video. Moreover, most conventional time-domain weightings in stereoscopic video quality assessment are merely a simple average weighting, while the actual time-domain perception of the human eyes is not. Thus, the conventional objective assessment methods for stereoscopic video quality fail to accurately reflect the perception characteristics of the human eyes and yield inaccurate objective assessment results.

SUMMARY OF THE PRESENT INVENTION

[0006] An object of the present invention is to provide an objective assessment method for a stereoscopic video quality based on a wavelet transform, the method being able to effectively increase a correlation between an objective assessment result and a subjective perception.

[0007] Technical solutions of the present invention are described as follows.

[0008] An objective assessment method for a stereoscopic video quality based on a wavelet transform comprises steps of:

[0009] ① representing an original undistorted stereoscopic video by $V_{org}$, and representing a distorted stereoscopic video to-be-assessed by $V_{dis}$;

[0010] ② calculating a binocular fusion brightness of each pixel in each frame of a stereoscopic image of the $V_{org}$; denoting the binocular fusion brightness of a first pixel having coordinates of $(u,v)$ in an $f$th frame of the stereoscopic image of the $V_{org}$ as $B_{org}^{f}(u,v)$,

$$B_{org}^{f}(u,v)=\sqrt{\left(I_{org}^{R,f}(u,v)\right)^{2}+\left(I_{org}^{L,f}(u,v)\right)^{2}+2\left(I_{org}^{R,f}(u,v)\times I_{org}^{L,f}(u,v)\times\cos\partial\right)}\times\lambda;$$

then according to the respective binocular fusion brightnesses of all the pixels in each frame of the stereoscopic image of the $V_{org}$, obtaining a binocular fusion brightness image of each frame of the stereoscopic image in the $V_{org}$; denoting the binocular fusion brightness image of the $f$th frame of the stereoscopic image in the $V_{org}$ as $B_{org}^{f}$, wherein a second pixel having the coordinates of $(u,v)$ in the $B_{org}^{f}$ has a pixel value of the $B_{org}^{f}(u,v)$; according to the respective binocular fusion brightness images of all the stereoscopic images in the $V_{org}$, obtaining a binocular fusion brightness image video corresponding to the $V_{org}$, denoted as $B_{org}$, wherein an $f$th frame of the binocular fusion brightness image in the $B_{org}$ is the $B_{org}^{f}$; and

[0011] calculating a binocular fusion brightness of each pixel in each frame of a stereoscopic image of the $V_{dis}$; denoting the binocular fusion brightness of a third pixel having the coordinates of $(u,v)$ in an $f$th frame of the stereoscopic image of the $V_{dis}$ as $B_{dis}^{f}(u,v)$,

$$B_{dis}^{f}(u,v)=\sqrt{\left(I_{dis}^{R,f}(u,v)\right)^{2}+\left(I_{dis}^{L,f}(u,v)\right)^{2}+2\left(I_{dis}^{R,f}(u,v)\times I_{dis}^{L,f}(u,v)\times\cos\partial\right)}\times\lambda;$$

then according to the respective binocular fusion brightnesses of all the pixels in each frame of the stereoscopic image of the $V_{dis}$, obtaining a binocular fusion brightness image of each frame of the stereoscopic image in the $V_{dis}$; denoting the binocular fusion brightness image of the $f$th frame of the stereoscopic image in the $V_{dis}$ as $B_{dis}^{f}$, wherein a fourth pixel having the coordinates of $(u,v)$ in the $B_{dis}^{f}$ has a pixel value of the $B_{dis}^{f}(u,v)$; according to the respective binocular fusion brightness images of all the stereoscopic images in the $V_{dis}$, obtaining a binocular fusion brightness image video corresponding to the $V_{dis}$, denoted as $B_{dis}$, wherein an $f$th frame of the binocular fusion brightness image in the $B_{dis}$ is the $B_{dis}^{f}$; wherein:

[0012] $1\le f\le N_{f}$, wherein the $f$ has an initial value of 1; the $N_{f}$ represents a total frame number of the stereoscopic images respectively in the $V_{org}$ and the $V_{dis}$; $1\le u\le U$, $1\le v\le V$, wherein the $U$ represents a width of the stereoscopic image respectively in the $V_{org}$ and the $V_{dis}$, and the $V$ represents a height of the stereoscopic image respectively in the $V_{org}$ and the $V_{dis}$; the $I_{org}^{R,f}(u,v)$ represents a brightness value of a fifth pixel having the coordinates of $(u,v)$ in a right viewpoint image of the $f$th frame of the stereoscopic image of the $V_{org}$; the $I_{org}^{L,f}(u,v)$ represents a brightness value of a sixth pixel having the coordinates of $(u,v)$ in a left viewpoint image of the $f$th frame of the stereoscopic image of the $V_{org}$; the $I_{dis}^{R,f}(u,v)$ represents a brightness value of a seventh pixel having the coordinates of $(u,v)$ in a right viewpoint image of the $f$th frame of the stereoscopic image of the $V_{dis}$; the $I_{dis}^{L,f}(u,v)$ represents a brightness value of an eighth pixel having the coordinates of $(u,v)$ in a left viewpoint image of the $f$th frame of the stereoscopic image of the $V_{dis}$; the $\partial$ represents a fusion angle, and the $\lambda$ represents a brightness parameter of a display;

[0013] ③ adopting $2^{n}$ frames of the binocular fusion brightness images as a frame group; respectively dividing the $B_{org}$ and the $B_{dis}$ into $n_{GoF}$ frame groups; denoting an $i$th frame group in the $B_{org}$ as $G_{org}^{i}$; and denoting an $i$th frame group in the $B_{dis}$ as $G_{dis}^{i}$; wherein: the $n$ is an integer in a range of $[3,5]$; $n_{GoF}=\left\lfloor\frac{N_{f}}{2^{n}}\right\rfloor$, wherein the $\lfloor\ \rfloor$ is a round-down symbol; and $1\le i\le n_{GoF}$;

[0014] ④ processing each frame group in the $B_{org}$ with a one-level three-dimensional wavelet transform, and obtaining eight groups of first sub-band sequences corresponding to each frame group in the $B_{org}$, wherein: the eight groups of the first sub-band sequences comprise four groups of first time-domain high-frequency sub-band sequences and four groups of first time-domain low-frequency sub-band sequences; and each group of the first sub-band sequence comprises $\frac{2^{n}}{2}$ first wavelet coefficient matrixes; and

[0015] processing each frame group in the $B_{dis}$ with the one-level three-dimensional wavelet transform, and obtaining eight groups of second sub-band sequences corresponding to each frame group in the $B_{dis}$, wherein: the eight groups of the second sub-band sequences comprise four groups of second time-domain high-frequency sub-band sequences and four groups of second time-domain low-frequency sub-band sequences; and each group of the second sub-band sequence comprises $\frac{2^{n}}{2}$ second wavelet coefficient matrixes;

[0016] ⑤ calculating respective qualities of two groups among the eight groups of the second sub-band sequences corresponding to each frame group in the $B_{dis}$; and denoting a quality of a $j$th group of the second sub-band sequence corresponding to the $G_{dis}^{i}$ as $Q^{i,j}$,

$$Q^{i,j}=\frac{\sum_{k=1}^{K}\mathrm{SSIM}\left(VI_{org}^{i,j,k},VI_{dis}^{i,j,k}\right)}{K},$$

wherein: $j=1,5$; $1\le k\le K$; the $K$ represents a total number of the wavelet coefficient matrixes respectively in each group of the first sub-band sequence corresponding to each frame group in the $B_{org}$ and each group of the second sub-band sequence corresponding to each frame group in the $B_{dis}$, and $K=\frac{2^{n}}{2}$; the $VI_{org}^{i,j,k}$ represents a $k$th first wavelet coefficient matrix of a $j$th group of the first sub-band sequence corresponding to the $G_{org}^{i}$; the $VI_{dis}^{i,j,k}$ represents a $k$th second wavelet coefficient matrix of the $j$th group of the second sub-band sequence corresponding to the $G_{dis}^{i}$; and the $\mathrm{SSIM}(\ )$ is a structural similarity calculation function;

[0017] ⑥ according to the respective qualities of the two groups among the eight groups of the second sub-band sequences corresponding to each frame group in the $B_{dis}$, calculating a quality of each frame group in the $B_{dis}$; and denoting the quality of the $G_{dis}^{i}$ as $Q_{GoF}^{i}$, $Q_{GoF}^{i}=w_{G}\times Q^{i,1}+(1-w_{G})\times Q^{i,5}$, wherein: the $w_{G}$ is a weight of the $Q^{i,1}$; the $Q^{i,1}$ represents the quality of a first group of the second sub-band sequence corresponding to the $G_{dis}^{i}$; and the $Q^{i,5}$ represents the quality of a fifth group of the second sub-band sequence corresponding to the $G_{dis}^{i}$; and

[0018] ⑦ according to the quality of each frame group in the $B_{dis}$, calculating an objective assessment quality of the $V_{dis}$ and denoting the objective assessment quality of the $V_{dis}$ as $Q_{v}$,

$$Q_{v}=\frac{\sum_{i=1}^{n_{GoF}}w^{i}\times Q_{GoF}^{i}}{\sum_{i=1}^{n_{GoF}}w^{i}},$$

wherein the $w^{i}$ is a weight of the $Q_{GoF}^{i}$.

[0019] The $w^{i}$ in the step ⑦ is obtained through the following steps:

[0020] ⑦-1, calculating a motion vector of each pixel in each frame of the binocular fusion brightness image of the $G_{dis}^{i}$ except a first frame of the binocular fusion brightness image, with a reference to the previous frame of the binocular fusion brightness image of each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame of the binocular fusion brightness image;

[0021] ⑦-2, according to the motion vector of each pixel in each frame of the binocular fusion brightness image of the $G_{dis}^{i}$ except the first frame of the binocular fusion brightness image, calculating a motion intensity of each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame of the binocular fusion brightness image; and denoting the motion intensity of an $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ as $MA^{f'}$,

$$MA^{f'}=\frac{1}{U\times V}\sum_{s=1}^{U}\sum_{t=1}^{V}\sqrt{\left(mv_{x}(s,t)\right)^{2}+\left(mv_{y}(s,t)\right)^{2}},$$

wherein: $2\le f'\le 2^{n}$; the $f'$ has an initial value of 2; $1\le s\le U$, $1\le t\le V$; the $mv_{x}(s,t)$ represents a horizontal component of the motion vector of a pixel having coordinates of $(s,t)$ in the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$, and the $mv_{y}(s,t)$ represents a vertical component of the motion vector of the pixel having the coordinates of $(s,t)$ in the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$;

[0022] ⑦-3, calculating a motion intensity of the $G_{dis}^{i}$, denoted as $MAavg^{i}$,

$$MAavg^{i}=\frac{\sum_{f'=2}^{2^{n}}MA^{f'}}{2^{n}-1};$$

[0023] ⑦-4, calculating a background brightness image of each frame of the binocular fusion brightness image in the $G_{dis}^{i}$; denoting the background brightness image of an $f''$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ as $BL_{dis}^{i,f''}$; and denoting a pixel value of a first pixel having coordinates of $(p,q)$ in the $BL_{dis}^{i,f''}$ as $BL_{dis}^{i,f''}(p,q)$,

$$BL_{dis}^{i,f''}(p,q)=\frac{1}{32}\sum_{bi=-2}^{2}\sum_{bj=-2}^{2}I_{dis}^{i,f''}(p+bi,q+bj)\times BO(bi+3,bj+3),$$

wherein: $1\le f''\le 2^{n}$; $3\le p\le U-2$, $3\le q\le V-2$; $-2\le bi\le 2$, $-2\le bj\le 2$; the $I_{dis}^{i,f''}(p+bi,q+bj)$ represents a pixel value of a pixel having coordinates of $(p+bi,q+bj)$ in the $f''$th frame of the binocular fusion brightness image of the $G_{dis}^{i}$; and the $BO(bi+3,bj+3)$ represents an element at a subscript of $(bi+3,bj+3)$ in a $5\times5$ background brightness operator;

[0024] ⑦-5, calculating a brightness difference image between each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame and its previous frame; denoting the brightness difference image between the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ and the $(f'-1)$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ as $LD_{dis}^{i,f'}$; and denoting a pixel value of a second pixel having the coordinates of $(p,q)$ in the $LD_{dis}^{i,f'}$ as $LD_{dis}^{i,f'}(p,q)$,

$$LD_{dis}^{i,f'}(p,q)=\left(I_{dis}^{i,f'}(p,q)-I_{dis}^{i,f'-1}(p,q)+BL_{dis}^{i,f'}(p,q)-BL_{dis}^{i,f'-1}(p,q)\right)/2,$$

[0025] wherein: $2\le f'\le 2^{n}$; $3\le p\le U-2$, $3\le q\le V-2$; the $I_{dis}^{i,f'}(p,q)$ represents a pixel value of a third pixel having the coordinates of $(p,q)$ in the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$; the $I_{dis}^{i,f'-1}(p,q)$ represents a pixel value of a fourth pixel having the coordinates of $(p,q)$ in the $(f'-1)$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$; the $BL_{dis}^{i,f'}(p,q)$ represents a pixel value of a fifth pixel having the coordinates of $(p,q)$ in the background brightness image $BL_{dis}^{i,f'}$ of the $f'$th frame of the binocular fusion brightness image of the $G_{dis}^{i}$; and the $BL_{dis}^{i,f'-1}(p,q)$ represents a pixel value of a sixth pixel having the coordinates of $(p,q)$ in the background brightness image $BL_{dis}^{i,f'-1}$ of the $(f'-1)$th frame of the binocular fusion brightness image of the $G_{dis}^{i}$;

[0026] ⑦-6, calculating a mean value of the pixel values of all the pixels in the brightness difference image between each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame and its previous frame; denoting the mean value of the pixel values of all the pixels in the $LD_{dis}^{i,f'}$ as $LD^{i,f'}$; calculating a brightness difference value of the $G_{dis}^{i}$ and denoting the brightness difference value of the $G_{dis}^{i}$ as $LDavg^{i}$,

$$LDavg^{i}=\frac{\sum_{f'=2}^{2^{n}}LD^{i,f'}}{2^{n}-1};$$

[0027] ⑦-7, obtaining a motion intensity vector of the $B_{dis}$ from the respective motion intensities of all the frame groups in the $B_{dis}$ in order, and denoting the motion intensity vector of the $B_{dis}$ as $V_{MAavg}$,

$$V_{MAavg}=\left[MAavg^{1},MAavg^{2},\ldots,MAavg^{i},\ldots,MAavg^{n_{GoF}}\right];$$

[0028] obtaining a brightness difference vector of the $B_{dis}$ from the respective brightness difference values of all the frame groups in the $B_{dis}$ in order, and denoting the brightness difference vector of the $B_{dis}$ as $V_{LDavg}$, $V_{LDavg}=\left[LDavg^{1},LDavg^{2},\ldots,LDavg^{i},\ldots,LDavg^{n_{GoF}}\right]$; wherein:

[0029] the $MAavg^{1}$, the $MAavg^{2}$, and the $MAavg^{n_{GoF}}$ respectively represent the motion intensities of a first frame group, a second frame group and an $n_{GoF}$th frame group in the $B_{dis}$; the $LDavg^{1}$, the $LDavg^{2}$, and the $LDavg^{n_{GoF}}$ respectively represent the brightness difference values of the first frame group, the second frame group and the $n_{GoF}$th frame group in the $B_{dis}$;

[0030] ⑦-8, processing the $MAavg^{i}$ with a normalization calculation, and obtaining a normalized motion intensity of the $G_{dis}^{i}$, denoted as $v_{MAavg}^{norm,i}$,

$$v_{MAavg}^{norm,i}=\frac{MAavg^{i}-\max\left(V_{MAavg}\right)}{\max\left(V_{MAavg}\right)-\min\left(V_{MAavg}\right)};$$

[0031] processing the $LDavg^{i}$ with the normalization calculation, and obtaining a normalized brightness difference value of the $G_{dis}^{i}$, denoted as $v_{LDavg}^{norm,i}$,

$$v_{LDavg}^{norm,i}=\frac{LDavg^{i}-\max\left(V_{LDavg}\right)}{\max\left(V_{LDavg}\right)-\min\left(V_{LDavg}\right)};$$

[0032] wherein the $\max(\ )$ is a function to find a maximum and the $\min(\ )$ is a function to find a minimum; and

[0033] ⑦-9, according to the $v_{MAavg}^{norm,i}$ and the $v_{LDavg}^{norm,i}$, calculating the weight $w^{i}$ of the $Q_{GoF}^{i}$, $w^{i}=\left(1-v_{MAavg}^{norm,i}\right)\times v_{LDavg}^{norm,i}$.

[0034] Preferably, in the step ⑥, $w_{G}=0.8$.

[0035] Compared with the conventional technology, the present invention has the following advantages.

[0036] Firstly, the present invention fuses the brightness values of the pixels in the left viewpoint image with those in the right viewpoint image of the stereoscopic image, in a manner of binocular brightness information fusion, and obtains the binocular fusion brightness image of the stereoscopic image. This binocular brightness information fusion overcomes, to some extent, the difficulty of assessing stereoscopic perception quality in stereoscopic video quality assessment, and effectively increases the accuracy of the objective quality assessment of stereoscopic video.

[0037] Secondly, the present invention applies the three-dimensional wavelet transform to stereoscopic video quality assessment. Each frame group in the binocular fusion brightness image video is processed with a one-level three-dimensional wavelet transform, and the video time-domain information is described through the wavelet-domain decomposition, which solves the difficulty of describing video time-domain information to some extent and effectively increases the accuracy of the objective quality assessment of stereoscopic video.

[0038] Thirdly, when weighting the quality of each frame group in the binocular fusion brightness image video corresponding to the distorted stereoscopic video, the method provided by the present invention fully considers the sensitivity of human visual characteristics to the various kinds of information in the video, and determines the weight of each frame group based on the motion intensity and the brightness difference. Thus, the stereoscopic video quality assessment method provided by the present invention conforms better to the subjective perception characteristics of the human eyes.

[0039] These and other objectives, features, and advantages of the present invention will become apparent from the following detailed description, the accompanying drawings, and the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0040] The FIGURE is an implementation block diagram of an objective assessment method for a stereoscopic video quality based on a wavelet transform according to a preferred embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0041] The present invention is further described with reference to the accompanying drawing and a preferred embodiment of the present invention.

[0042] According to a preferred embodiment, the present invention provides an objective assessment method for a stereoscopic video quality based on a wavelet transform, wherein an implementation block diagram thereof is shown in the FIGURE; the method comprises steps of:

[0043] ① representing an original undistorted stereoscopic video by $V_{org}$, and representing a distorted stereoscopic video to-be-assessed by $V_{dis}$;

[0044] ② calculating a binocular fusion brightness of each pixel in each frame of a stereoscopic image of the $V_{org}$; denoting the binocular fusion brightness of a first pixel having coordinates of $(u,v)$ in an $f$th frame of the stereoscopic image of the $V_{org}$ as $B_{org}^{f}(u,v)$,

$$B_{org}^{f}(u,v)=\sqrt{\left(I_{org}^{R,f}(u,v)\right)^{2}+\left(I_{org}^{L,f}(u,v)\right)^{2}+2\left(I_{org}^{R,f}(u,v)\times I_{org}^{L,f}(u,v)\times\cos\partial\right)}\times\lambda;$$

then according to the respective binocular fusion brightnesses of all the pixels in each frame of the stereoscopic image of the $V_{org}$, obtaining a binocular fusion brightness image of each frame of the stereoscopic image in the $V_{org}$; denoting the binocular fusion brightness image of the $f$th frame of the stereoscopic image in the $V_{org}$ as $B_{org}^{f}$, wherein a second pixel having the coordinates of $(u,v)$ in the $B_{org}^{f}$ has a pixel value of the $B_{org}^{f}(u,v)$; according to the respective binocular fusion brightness images of all the stereoscopic images in the $V_{org}$, obtaining a binocular fusion brightness image video corresponding to the $V_{org}$, denoted as $B_{org}$, wherein an $f$th frame of the binocular fusion brightness image in the $B_{org}$ is the $B_{org}^{f}$; and

[0045] calculating a binocular fusion brightness of each pixel in each frame of a stereoscopic image of the $V_{dis}$; denoting the binocular fusion brightness of a third pixel having the coordinates of $(u,v)$ in an $f$th frame of the stereoscopic image of the $V_{dis}$ as $B_{dis}^{f}(u,v)$,

$$B_{dis}^{f}(u,v)=\sqrt{\left(I_{dis}^{R,f}(u,v)\right)^{2}+\left(I_{dis}^{L,f}(u,v)\right)^{2}+2\left(I_{dis}^{R,f}(u,v)\times I_{dis}^{L,f}(u,v)\times\cos\partial\right)}\times\lambda;$$

then according to the respective binocular fusion brightnesses of all the pixels in each frame of the stereoscopic image of the $V_{dis}$, obtaining a binocular fusion brightness image of each frame of the stereoscopic image in the $V_{dis}$; denoting the binocular fusion brightness image of the $f$th frame of the stereoscopic image in the $V_{dis}$ as $B_{dis}^{f}$, wherein a fourth pixel having the coordinates of $(u,v)$ in the $B_{dis}^{f}$ has a pixel value of the $B_{dis}^{f}(u,v)$; according to the respective binocular fusion brightness images of all the stereoscopic images in the $V_{dis}$, obtaining a binocular fusion brightness image video corresponding to the $V_{dis}$, denoted as $B_{dis}$, wherein an $f$th frame of the binocular fusion brightness image in the $B_{dis}$ is the $B_{dis}^{f}$; wherein:

[0046] $1\le f\le N_{f}$, wherein the $f$ has an initial value of 1; the $N_{f}$ represents a total frame number of the stereoscopic images respectively in the $V_{org}$ and the $V_{dis}$; $1\le u\le U$, $1\le v\le V$, wherein the $U$ represents a width of the stereoscopic image respectively in the $V_{org}$ and the $V_{dis}$, and the $V$ represents a height of the stereoscopic image respectively in the $V_{org}$ and the $V_{dis}$; the $I_{org}^{R,f}(u,v)$ represents a brightness value of a fifth pixel having the coordinates of $(u,v)$ in a right viewpoint image of the $f$th frame of the stereoscopic image of the $V_{org}$; the $I_{org}^{L,f}(u,v)$ represents a brightness value of a sixth pixel having the coordinates of $(u,v)$ in a left viewpoint image of the $f$th frame of the stereoscopic image of the $V_{org}$; the $I_{dis}^{R,f}(u,v)$ represents a brightness value of a seventh pixel having the coordinates of $(u,v)$ in a right viewpoint image of the $f$th frame of the stereoscopic image of the $V_{dis}$; the $I_{dis}^{L,f}(u,v)$ represents a brightness value of an eighth pixel having the coordinates of $(u,v)$ in a left viewpoint image of the $f$th frame of the stereoscopic image of the $V_{dis}$; the $\partial$ represents a fusion angle, wherein it is embodied that $\partial=120°$ herein; and the $\lambda$ represents a brightness parameter of a display, wherein it is embodied that $\lambda=1$ herein;
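To make step ② concrete, the following minimal Python sketch computes the binocular fusion brightness image of one frame from its left and right viewpoint brightness images, under the embodied values $\partial=120°$ and $\lambda=1$. It is an illustration under these assumptions, not the authors' reference implementation, and the array names are hypothetical. Note that with $\cos 120°=-0.5$ the radicand stays non-negative, since $I_R^2+I_L^2-I_R I_L=(I_R-I_L)^2+I_R I_L\ge 0$, and two equal inputs fuse to their common value, keeping the fusion image on the same scale as the inputs.

```python
import numpy as np

def binocular_fusion_brightness(I_L, I_R, fusion_angle_deg=120.0, lam=1.0):
    """Fuse left/right viewpoint brightness images into one binocular
    fusion brightness image: B = sqrt(I_R^2 + I_L^2
    + 2*I_R*I_L*cos(angle)) * lambda (step 2 of the method)."""
    I_L = I_L.astype(np.float64)
    I_R = I_R.astype(np.float64)
    cos_a = np.cos(np.deg2rad(fusion_angle_deg))  # cos(120 deg) = -0.5
    return np.sqrt(I_R**2 + I_L**2 + 2.0 * I_R * I_L * cos_a) * lam

# Example: fuse one frame of a hypothetical 1080p stereoscopic video.
left = np.random.randint(0, 256, (1080, 1920))
right = np.random.randint(0, 256, (1080, 1920))
B = binocular_fusion_brightness(left, right)
```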

[0047] ③ adopting $2^{n}$ frames of the binocular fusion brightness images as a frame group; respectively dividing the $B_{org}$ and the $B_{dis}$ into $n_{GoF}$ frame groups; denoting an $i$th frame group in the $B_{org}$ as $G_{org}^{i}$; and denoting an $i$th frame group in the $B_{dis}$ as $G_{dis}^{i}$; wherein: the $n$ is an integer in a range of $[3,5]$, and it is embodied that $n=4$ herein, namely adopting sixteen frames of the binocular fusion brightness images as the frame group; during a practical implementation, if the frame number of the binocular fusion brightness images respectively in the $B_{org}$ and the $B_{dis}$ is not a positive integral multiple of $2^{n}$, the redundant frames of the binocular fusion brightness images are not processed after the binocular fusion brightness images are orderly divided into the frame groups; $n_{GoF}=\left\lfloor\frac{N_{f}}{2^{n}}\right\rfloor$, wherein the $\lfloor\ \rfloor$ is a round-down symbol; and $1\le i\le n_{GoF}$;

[0048] ④ processing each frame group in the $B_{org}$ with a one-level three-dimensional wavelet transform, and obtaining eight groups of first sub-band sequences corresponding to each frame group in the $B_{org}$, wherein: the eight groups of the first sub-band sequences comprise four groups of first time-domain high-frequency sub-band sequences and four groups of first time-domain low-frequency sub-band sequences; each group of the first sub-band sequence comprises $\frac{2^{n}}{2}$ first wavelet coefficient matrixes; herein, the four groups of the first time-domain high-frequency sub-band sequences corresponding to each frame group in the $B_{org}$ are respectively an original time-domain high-frequency approximate sequence $HLL_{org}$, an original time-domain high-frequency horizontal detail sequence $HLH_{org}$, an original time-domain high-frequency vertical detail sequence $HHL_{org}$, and an original time-domain high-frequency diagonal detail sequence $HHH_{org}$; and the four groups of the first time-domain low-frequency sub-band sequences corresponding to each frame group in the $B_{org}$ are respectively an original time-domain low-frequency approximate sequence $LLL_{org}$, an original time-domain low-frequency horizontal detail sequence $LLH_{org}$, an original time-domain low-frequency vertical detail sequence $LHL_{org}$, and an original time-domain low-frequency diagonal detail sequence $LHH_{org}$; and

[0049] processing each frame group in the $B_{dis}$ with the one-level three-dimensional wavelet transform, and obtaining eight groups of second sub-band sequences corresponding to each frame group in the $B_{dis}$, wherein: the eight groups of the second sub-band sequences comprise four groups of second time-domain high-frequency sub-band sequences and four groups of second time-domain low-frequency sub-band sequences; each group of the second sub-band sequence comprises $\frac{2^{n}}{2}$ second wavelet coefficient matrixes; herein, the four groups of the second time-domain high-frequency sub-band sequences corresponding to each frame group in the $B_{dis}$ are respectively a distorted time-domain high-frequency approximate sequence $HLL_{dis}$, a distorted time-domain high-frequency horizontal detail sequence $HLH_{dis}$, a distorted time-domain high-frequency vertical detail sequence $HHL_{dis}$, and a distorted time-domain high-frequency diagonal detail sequence $HHH_{dis}$; and the four groups of the second time-domain low-frequency sub-band sequences corresponding to each frame group in the $B_{dis}$ are respectively a distorted time-domain low-frequency approximate sequence $LLL_{dis}$, a distorted time-domain low-frequency horizontal detail sequence $LLH_{dis}$, a distorted time-domain low-frequency vertical detail sequence $LHL_{dis}$, and a distorted time-domain low-frequency diagonal detail sequence $LHH_{dis}$; wherein:

[0050] in the present invention, the binocular fusion brightness image videos are processed with a time-domain decomposition through the three-dimensional wavelet transform, and the video time-domain information is described based on frequency components; finishing the processing of the time-domain information in the wavelet domain solves, to some extent, the difficulty of time-domain quality assessment in video quality assessment and increases the accuracy of the assessment method;
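As a sketch of steps ③ and ④, the snippet below groups a binocular fusion brightness image video into $2^n$-frame groups and applies a one-level 3-D discrete wavelet transform to each group with PyWavelets; the eight returned sub-bands each hold $2^n/2$ coefficient matrixes. The Haar basis is an assumption for illustration, since the embodiment does not name the wavelet filter.

```python
import numpy as np
import pywt

def frame_groups(B, n=4):
    """Split a brightness-image video (N_f, V, U) into groups of
    2**n frames; redundant trailing frames are not processed."""
    g = 2 ** n
    n_gof = B.shape[0] // g          # floor(N_f / 2^n)
    return [B[i * g:(i + 1) * g] for i in range(n_gof)]

def one_level_3d_dwt(group, wavelet="haar"):
    """One-level 3-D DWT of one frame group (2^n, V, U); returns the
    eight sub-band sequences keyed 'aaa'..'ddd' over (t, y, x)."""
    return pywt.dwtn(group, wavelet, axes=(0, 1, 2))

video = np.random.rand(64, 288, 352)     # hypothetical B_org or B_dis
for group in frame_groups(video, n=4):
    subbands = one_level_3d_dwt(group)
    # each sub-band holds 2^n / 2 = 8 wavelet coefficient matrixes
    assert subbands["aaa"].shape[0] == 8
```

With the temporal axis first, keys beginning with 'd' ('daa', 'dad', 'dda', 'ddd') correspond to the four time-domain high-frequency sub-band sequences, and keys beginning with 'a' to the four time-domain low-frequency ones.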

[0051] ⑤ calculating respective qualities of two groups among the eight groups of the second sub-band sequences corresponding to each frame group in the $B_{dis}$; and denoting a quality of a $j$th group of the second sub-band sequence corresponding to the $G_{dis}^{i}$ as $Q^{i,j}$,

$$Q^{i,j}=\frac{\sum_{k=1}^{K}\mathrm{SSIM}\left(VI_{org}^{i,j,k},VI_{dis}^{i,j,k}\right)}{K},$$

wherein:

[0052] $j=1,5$, wherein: a first group of the second sub-band sequence corresponding to the $G_{dis}^{i}$ is a first group of the second time-domain high-frequency sub-band sequence corresponding to the $G_{dis}^{i}$ when $j=1$; and a fifth group of the second sub-band sequence corresponding to the $G_{dis}^{i}$ is a first group of the second time-domain low-frequency sub-band sequence corresponding to the $G_{dis}^{i}$ when $j=5$;

[0053] $1\le k\le K$, wherein: the $K$ represents a total number of the wavelet coefficient matrixes respectively in each group of the first sub-band sequence corresponding to each frame group in the $B_{org}$ and each group of the second sub-band sequence corresponding to each frame group in the $B_{dis}$; and $K=\frac{2^{n}}{2}$;

[0054] the $VI_{org}^{i,j,k}$ represents a $k$th first wavelet coefficient matrix of a $j$th group of the first sub-band sequence corresponding to the $G_{org}^{i}$;

[0055] the $VI_{dis}^{i,j,k}$ represents a $k$th second wavelet coefficient matrix of the $j$th group of the second sub-band sequence corresponding to the $G_{dis}^{i}$; and

[0056] $\mathrm{SSIM}(\ )$ is a structural similarity calculation function,

$$\mathrm{SSIM}\left(VI_{org}^{i,j,k},VI_{dis}^{i,j,k}\right)=\frac{\left(2\mu_{org}\mu_{dis}+c_{1}\right)\left(2\sigma_{org\text{-}dis}+c_{2}\right)}{\left(\mu_{org}^{2}+\mu_{dis}^{2}+c_{1}\right)\left(\sigma_{org}^{2}+\sigma_{dis}^{2}+c_{2}\right)},$$

wherein: the $\mu_{org}$ represents a mean value of the values of all the elements in the $VI_{org}^{i,j,k}$; the $\mu_{dis}$ represents a mean value of the values of all the elements in the $VI_{dis}^{i,j,k}$; the $\sigma_{org}^{2}$ represents a variance of the $VI_{org}^{i,j,k}$; the $\sigma_{dis}^{2}$ represents a variance of the $VI_{dis}^{i,j,k}$; the $\sigma_{org\text{-}dis}$ represents a covariance between the $VI_{org}^{i,j,k}$ and the $VI_{dis}^{i,j,k}$; both the $c_{1}$ and the $c_{2}$ are constants which prevent the denominator from being 0; and it is embodied that $c_{1}=0.05$ and $c_{2}=0.05$ herein;
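A sketch of the sub-band quality of step ⑤ follows, applying the single-window SSIM formula of paragraph [0056] (with $c_1=c_2=0.05$ as embodied) over whole wavelet coefficient matrixes; reading $\sigma_{org}^{2}$ and $\sigma_{dis}^{2}$ as variances is an interpretation of the notation above, and the function names are illustrative.

```python
import numpy as np

def ssim_matrix(VI_org, VI_dis, c1=0.05, c2=0.05):
    """Single-window SSIM between two wavelet coefficient matrixes,
    per the formula in paragraph [0056]."""
    mu_o, mu_d = VI_org.mean(), VI_dis.mean()
    var_o, var_d = VI_org.var(), VI_dis.var()          # sigma_org^2, sigma_dis^2
    cov = ((VI_org - mu_o) * (VI_dis - mu_d)).mean()   # sigma_org-dis
    return ((2 * mu_o * mu_d + c1) * (2 * cov + c2)) / (
        (mu_o ** 2 + mu_d ** 2 + c1) * (var_o + var_d + c2))

def subband_quality(org_seq, dis_seq):
    """Q^{i,j}: mean SSIM over the K coefficient matrixes of one
    sub-band sequence (evaluated for j = 1 and j = 5 only)."""
    K = len(org_seq)
    return sum(ssim_matrix(o, d) for o, d in zip(org_seq, dis_seq)) / K
```

The frame-group quality of step ⑥ then follows as $Q_{GoF}^{i}=0.8\times Q^{i,1}+0.2\times Q^{i,5}$ with the embodied $w_G$.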

[0057] ⑥ according to the respective qualities of the two groups among the eight groups of the second sub-band sequences corresponding to each frame group in the $B_{dis}$, calculating a quality of each frame group in the $B_{dis}$; and denoting the quality of the $G_{dis}^{i}$ as $Q_{GoF}^{i}$, $Q_{GoF}^{i}=w_{G}\times Q^{i,1}+(1-w_{G})\times Q^{i,5}$, wherein: the $w_{G}$ is a weight of the $Q^{i,1}$, and it is embodied that $w_{G}=0.8$ herein; the $Q^{i,1}$ represents the quality of the first group of the second sub-band sequence corresponding to the $G_{dis}^{i}$, namely the quality of the first group of the second time-domain high-frequency sub-band sequence corresponding to the $G_{dis}^{i}$; the $Q^{i,5}$ represents the quality of the fifth group of the second sub-band sequence corresponding to the $G_{dis}^{i}$, namely the quality of the first group of the second time-domain low-frequency sub-band sequence corresponding to the $G_{dis}^{i}$; and

[0058] ⑦ according to the quality of each frame group in the $B_{dis}$, calculating an objective assessment quality of the $V_{dis}$ and denoting the objective assessment quality of the $V_{dis}$ as $Q_{v}$,

$$Q_{v}=\frac{\sum_{i=1}^{n_{GoF}}w^{i}\times Q_{GoF}^{i}}{\sum_{i=1}^{n_{GoF}}w^{i}},$$

wherein: the $w^{i}$ is a weight of the $Q_{GoF}^{i}$; and it is embodied that the $w^{i}$ is obtained through the following steps (a code sketch of the whole weighting appears after step ⑦-9):

[0059] ⑦-1, calculating a motion vector of each pixel in each frame of the binocular fusion brightness image of the $G_{dis}^{i}$ except a first frame of the binocular fusion brightness image, with a reference to the previous frame of the binocular fusion brightness image of each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame of the binocular fusion brightness image;

[0060] ⑦-2, according to the motion vector of each pixel in each frame of the binocular fusion brightness image of the $G_{dis}^{i}$ except the first frame of the binocular fusion brightness image, calculating a motion intensity of each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame of the binocular fusion brightness image; and denoting the motion intensity of an $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ as $MA^{f'}$,

$$MA^{f'}=\frac{1}{U\times V}\sum_{s=1}^{U}\sum_{t=1}^{V}\sqrt{\left(mv_{x}(s,t)\right)^{2}+\left(mv_{y}(s,t)\right)^{2}},$$

wherein: $2\le f'\le 2^{n}$; the $f'$ has an initial value of 2; $1\le s\le U$, $1\le t\le V$; the $mv_{x}(s,t)$ represents a horizontal component of the motion vector of a pixel having coordinates of $(s,t)$ in the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$, and the $mv_{y}(s,t)$ represents a vertical component of the motion vector of the pixel having the coordinates of $(s,t)$ in the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$;

[0061] ⑦-3, calculating a motion intensity of the $G_{dis}^{i}$, denoted as $MAavg^{i}$,

$$MAavg^{i}=\frac{\sum_{f'=2}^{2^{n}}MA^{f'}}{2^{n}-1};$$

[0062] ⑦-4, calculating a background brightness image of each frame of the binocular fusion brightness image in the $G_{dis}^{i}$; denoting the background brightness image of an $f''$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ as $BL_{dis}^{i,f''}$; and denoting a pixel value of a first pixel having coordinates of $(p,q)$ in the $BL_{dis}^{i,f''}$ as $BL_{dis}^{i,f''}(p,q)$,

$$BL_{dis}^{i,f''}(p,q)=\frac{1}{32}\sum_{bi=-2}^{2}\sum_{bj=-2}^{2}I_{dis}^{i,f''}(p+bi,q+bj)\times BO(bi+3,bj+3),$$

wherein: $1\le f''\le 2^{n}$; $3\le p\le U-2$, $3\le q\le V-2$; $-2\le bi\le 2$, $-2\le bj\le 2$; the $I_{dis}^{i,f''}(p+bi,q+bj)$ represents a pixel value of a pixel having coordinates of $(p+bi,q+bj)$ in the $f''$th frame of the binocular fusion brightness image of the $G_{dis}^{i}$; and the $BO(bi+3,bj+3)$ represents an element at a subscript of $(bi+3,bj+3)$ in a $5\times5$ background brightness operator, wherein it is embodied that the $5\times5$ background brightness operator herein is

$$BO=\begin{bmatrix}1&1&1&1&1\\1&2&2&2&1\\1&2&0&2&1\\1&2&2&2&1\\1&1&1&1&1\end{bmatrix},$$

whose elements sum to 32, which the factor $\frac{1}{32}$ normalizes;

[0063] ⑦-5, calculating a brightness difference image between each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame and its previous frame; denoting the brightness difference image between the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ and the $(f'-1)$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$ as $LD_{dis}^{i,f'}$; and denoting a pixel value of a second pixel having the coordinates of $(p,q)$ in the $LD_{dis}^{i,f'}$ as $LD_{dis}^{i,f'}(p,q)$,

$$LD_{dis}^{i,f'}(p,q)=\left(I_{dis}^{i,f'}(p,q)-I_{dis}^{i,f'-1}(p,q)+BL_{dis}^{i,f'}(p,q)-BL_{dis}^{i,f'-1}(p,q)\right)/2,$$

[0064] wherein: $2\le f'\le 2^{n}$; $3\le p\le U-2$, $3\le q\le V-2$; the $I_{dis}^{i,f'}(p,q)$ represents a pixel value of a third pixel having the coordinates of $(p,q)$ in the $f'$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$; the $I_{dis}^{i,f'-1}(p,q)$ represents a pixel value of a fourth pixel having the coordinates of $(p,q)$ in the $(f'-1)$th frame of the binocular fusion brightness image in the $G_{dis}^{i}$; the $BL_{dis}^{i,f'}(p,q)$ represents a pixel value of a fifth pixel having the coordinates of $(p,q)$ in the background brightness image $BL_{dis}^{i,f'}$ of the $f'$th frame of the binocular fusion brightness image of the $G_{dis}^{i}$; and the $BL_{dis}^{i,f'-1}(p,q)$ represents a pixel value of a sixth pixel having the coordinates of $(p,q)$ in the background brightness image $BL_{dis}^{i,f'-1}$ of the $(f'-1)$th frame of the binocular fusion brightness image of the $G_{dis}^{i}$;

[0065] ⑦-6, calculating a mean value of the pixel values of all the pixels in the brightness difference image between each frame of the binocular fusion brightness image in the $G_{dis}^{i}$ except the first frame and its previous frame; denoting the mean value of the pixel values of all the pixels in the $LD_{dis}^{i,f'}$ as $LD^{i,f'}$; calculating a brightness difference value of the $G_{dis}^{i}$ and denoting the brightness difference value of the $G_{dis}^{i}$ as $LDavg^{i}$,

$$LDavg^{i}=\frac{\sum_{f'=2}^{2^{n}}LD^{i,f'}}{2^{n}-1};$$

{circle around (7)}-7, obtaining a motion intensity vector of the B.sub.dis from the respective motion intensities of all the frame groups in the B.sub.dis in order, and denoting the motion intensity vector of the B.sub.dis as V.sub.MAavg,

V_{MAavg} = [MAavg^{1}, MAavg^{2}, \ldots, MAavg^{i}, \ldots, MAavg^{n_{GoF}}];

[0066] obtaining a brightness difference vector of the B.sub.dis from the respective brightness difference values of all the frame groups in the B.sub.dis in order, and denoting the brightness difference vector of the B.sub.dis as V.sub.LDavg,

V_{LDavg} = [LDavg^{1}, LDavg^{2}, \ldots, LDavg^{i}, \ldots, LDavg^{n_{GoF}}]; wherein:

[0067] the MAavg.sup.1, the MAavg.sup.2 and the MAavg.sup.n.sup.GoF respectively represent the motion intensities of the first frame group, the second frame group and the n.sub.GoFth frame group in the B.sub.dis; the LDavg.sup.1, the LDavg.sup.2 and the LDavg.sup.n.sup.GoF respectively represent the brightness difference values of the first frame group, the second frame group and the n.sub.GoFth frame group in the B.sub.dis;

[0068] {circle around (7)}-8, processing the MAavg.sup.i with a normalization calculation, and obtaining a normalized motion intensity of the G.sub.dis.sup.i, denoted as v.sub.MAavg.sup.norm,i,

v_{MAavg}^{norm,i} = \frac{MAavg^{i} - \max(V_{MAavg})}{\max(V_{MAavg}) - \min(V_{MAavg})};

[0069] processing the LDavg.sup.i with the normalization calculation, and obtaining a normalized brightness difference value of the G.sub.dis.sup.i, denoted as v.sub.LDavg.sup.norm,i,

v_{LDavg}^{norm,i} = \frac{LDavg^{i} - \max(V_{LDavg})}{\max(V_{LDavg}) - \min(V_{LDavg})};

[0070] wherein the max( ) is a function returning the maximum element of a vector, and the min( ) is a function returning the minimum element of a vector; and

[0071] {circle around (7)}-9, according to the v.sub.MAavg.sup.norm,i and the v.sub.LDavg.sup.norm,i, calculating the weight w.sup.i of the Q.sub.GoF.sup.i as w^{i} = (1 - v_{MAavg}^{norm,i}) \times v_{LDavg}^{norm,i}.
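(Steps {circle around (7)}-7 to {circle around (7)}-9 may be sketched together as follows; ma_avg and ld_avg are hypothetical sequences holding the MAavg^i and LDavg^i values of all frame groups.)

def group_weights(ma_avg, ld_avg):
    # Steps 7-7 to 7-9: weights w^i of all frame groups.
    V_MA = np.asarray(ma_avg, dtype=np.float64)   # V_MAavg
    V_LD = np.asarray(ld_avg, dtype=np.float64)   # V_LDavg
    # Normalization exactly as written in step 7-8; dividing by
    # (max - min) assumes the values are not all identical.
    v_ma = (V_MA - V_MA.max()) / (V_MA.max() - V_MA.min())
    v_ld = (V_LD - V_LD.max()) / (V_LD.max() - V_LD.min())
    return (1.0 - v_ma) * v_ld                    # w^i of step 7-9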

[0072] In order to illustrate the effectiveness and feasibility of the method provided by the present invention, the NAMA3DS1-CoSpaD1 stereoscopic video database (NAMA3D video database for short), provided by the French research institution IRCCyN, is adopted for a verification test analyzing the correlation between the objective assessment results of the method provided by the present invention and the difference mean opinion scores (DMOS). The NAMA3D video database comprises 10 original high-definition stereoscopic videos showing different scenes. Each original high-definition stereoscopic video is processed with an H.264 coding compression distortion or a JPEG2000 coding compression distortion. The H.264 coding compression distortion has 3 different distortion degrees, yielding 30 distorted stereoscopic videos in total; the JPEG2000 coding compression distortion has 4 different distortion degrees, yielding 40 distorted stereoscopic videos in total. Through the steps {circle around (1)}-{circle around (7)} of the method provided by the present invention, the above 70 distorted stereoscopic videos are processed in the same manner to obtain an objective assessment quality of each distorted stereoscopic video relative to the corresponding undistorted stereoscopic video; the objective assessment quality of each distorted stereoscopic video is then fitted to the DMOS through a four-parameter Logistic function non-linear fitting; and finally, performance index values between the objective assessment results and the subjective perception are obtained. Herein, three objective parameters commonly used for assessing video quality assessment methods serve as assessment indexes, namely the Correlation Coefficient (CC), the Spearman Rank Order Correlation Coefficient (SROCC) and the Root Mean Squared Error (RMSE). The value range of the CC and the SROCC is [0, 1]; the nearer a value approximates 1, the more accurate the objective assessment method. The smaller the RMSE, the more accurate the prediction of the objective assessment method and the better its performance. The assessment indexes CC, SROCC and RMSE representing the performance of the method provided by the present invention are listed in Table 1. According to the data listed in Table 1, the objective assessment quality of the distorted stereoscopic videos obtained through the method provided by the present invention has a good correlation with the DMOS. For the H.264 coding compression distorted videos, the CC reaches 0.8712, the SROCC reaches 0.8532, and the RMSE is as low as 5.7212. For the JPEG2000 coding compression distorted videos, the CC reaches 0.9419, the SROCC reaches 0.9196, and the RMSE is as low as 4.1915. For the overall set comprising both the H.264 and the JPEG2000 coding compression distorted videos, the CC reaches 0.9201, the SROCC reaches 0.8910, and the RMSE is as low as 5.0523. Thus, the objective assessment results of the method provided by the present invention are well consistent with the human eye subjective perception results, which fully proves the effectiveness of the method provided by the present invention.

TABLE 1  Correlation between objective assessment quality of distorted stereoscopic videos calculated through the method provided by the present invention and DMOS

                                                       CC      SROCC   RMSE
  30 H.264 coding compression stereoscopic videos      0.8712  0.8532  5.7212
  40 JPEG2000 coding compression stereoscopic videos   0.9419  0.9196  4.1915
  Totally 70 distorted stereoscopic videos             0.9201  0.8910  5.0523
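(The four-parameter Logistic fitting and the three performance indexes may be reproduced with standard tools, e.g. SciPy, as sketched below; the function names and the initial parameter guess are assumptions, since the text above does not specify the exact fitting procedure. q_obj and dmos are assumed to be NumPy arrays of the objective qualities and the DMOS values.)

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic4(q, b1, b2, b3, b4):
    # Four-parameter Logistic function mapping objective qualities to DMOS.
    return b2 + (b1 - b2) / (1.0 + np.exp(-(q - b3) / b4))

def performance_indexes(q_obj, dmos):
    # Fit the objective qualities to the DMOS, then report CC, SROCC, RMSE.
    p0 = [dmos.max(), dmos.min(), np.median(q_obj), 1.0]  # rough initial guess
    params, _ = curve_fit(logistic4, q_obj, dmos, p0=p0, maxfev=10000)
    pred = logistic4(q_obj, *params)
    cc = pearsonr(pred, dmos)[0]          # Correlation Coefficient
    srocc = spearmanr(q_obj, dmos)[0]     # rank order is unaffected by the fit
    rmse = float(np.sqrt(np.mean((pred - dmos) ** 2)))
    return cc, srocc, rmse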

[0073] One skilled in the art will understand that the embodiment of the present invention as shown in the drawings and described above is exemplary only and not intended to be limiting.

[0074] It will thus be seen that the objects of the present invention have been fully and effectively accomplished. Its embodiments have been shown and described for the purposes of illustrating the functional and structural principles of the present invention, and are subject to change without departure from such principles. Therefore, this invention includes all modifications encompassed within the spirit and scope of the following claims.


