
Literature Translation

Multi-resolution Motion Estimation and Compensation based on Adjacent Prediction of Frame Difference in Wavelet Domain

March 26, 2011

College (Department): School of Information Engineering
Major: Computer Science and Technology
Student name:
Advisor:

Appendix: English Original

Multi-resolution Motion Estimation and Compensation based on Adjacent Prediction of Frame Difference in Wavelet Domain

Tang Guowei

Abstract: Aiming at the high bit-rate occupation of motion vector encoding and the heavy time load of full-search strategies, a multi-resolution motion estimation and compensation algorithm based on adjacent prediction of frame difference is proposed. Differential motion detection is applied to the image sequence and a proper threshold is adopted to identify the connected region. The motion region is then extracted, and motion estimation and motion compensation are carried out on it. The experiment results show that the algorithm improves the encoding efficiency of the motion vectors, reduces the complexity of motion estimation and gives better reconstruction quality at the same bit-rate, so the multi-resolution motion estimation (MRME) algorithm is improved.

Key words: Motion estimation; Motion compensation; Multi-resolution analysis; Video coding

I. Introduction

For its excellent time-frequency localization properties, wavelet analysis is widely used in the field of image/video coding. For image sequences, motion estimation and motion compensation can effectively reduce the temporal redundancy and improve the encoding efficiency. But traditional motion-compensated wavelet coding, which takes the structure of motion estimation plus intra-frame still-image encoding, cannot make full use of the inherent multi-resolution characteristics of the wavelet transform. In 1992, Yaqin Zhang and S. Zafar proposed the variable-block-size multi-resolution motion estimation (MRME) video compression algorithm, which laid the foundation of motion estimation and motion compensation in the wavelet domain.

By using a comparatively small searching window and matching block, this method can reduce the amount of computation effectively, get rid of blocking artifacts, and easily achieve scalable video encoding suited to the human visual system and progressive transmission. But the MRME algorithm suffers from discontinuous motion vectors and from the mismatch between real object borders and block borders, which leads to an increase of high-frequency components in the transform coefficients and affects the encoding of the displaced frame difference (DFD).

Zan Jinwen proposed multi-resolution motion estimation with median filtering, which produces smoother motion fields and results in better estimation performance, but median filtering has a quite negative effect on the unsmooth motion of the high-frequency sub-bands at high resolution. Y. C. Su made a theoretical deduction of and a deep study on the interpolation of wavelet coefficients and proposed half-pixel multi-resolution motion compensation, which improves the accuracy of motion estimation. To overcome the shift-variant property of the discrete wavelet transform, Zhang proposed wavelet-domain motion estimation based on two-channel filtering and adaptive center-search-point prediction in the sub-bands; this method has a fairly low computational complexity, but the coder performance also drops in terms of PSNR. Cagnazzo studied theoretically optimal criteria for wavelet-based video coding and proposed an optimal motion estimation and compensation method, but at the cost of extended complexity.

This paper presents a frame-difference adjacent-prediction MRME algorithm (FDMRME), which adopts differential motion detection for the image sequence and extracts the motion region to carry out motion estimation and compensation on it. This method reduces the complexity of motion estimation, improves the encoding efficiency of the motion vectors and raises the quality of the reconstructed image at the same bit-rate as MRME.

II. Motion Detection Based on Frame Difference

1. Three-frame difference method

Motion detection methods include the optical flow algorithm, the background elimination algorithm, the adjacent-frame difference algorithm and the three-frame difference algorithm. By differencing three consecutive images and applying an AND operation to the difference results, the three-frame difference algorithm can quickly detect the motion region in an image sequence. The detection procedure is shown in Fig.1.

Fig.1 The procedure of motion object detection using the three-frame difference method

g1(x, y) is the motion variation image of the first two frames, and g2(x, y) is that of the latter two frames. The motion information is included in both g1(x, y) and g2(x, y). Binarize the two motion variation images and apply the AND operation to them to obtain the motion objects.
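As a concrete illustration of this procedure, the sketch below applies three-frame differencing with NumPy. It is a minimal sketch under stated assumptions: the threshold arguments are placeholders for the adaptive thresholds derived in the next subsection, and the function name is mine.

```python
import numpy as np

def three_frame_difference(f1, f2, f3, t1, t2):
    """Detect moving pixels from three consecutive grayscale frames.

    f1, f2, f3: 2-D arrays holding consecutive frames.
    t1, t2:     binarization thresholds for the two difference images
                (the paper derives them adaptively; placeholders here).
    """
    g1 = np.abs(f2.astype(np.int32) - f1.astype(np.int32))   # difference of the first two frames
    g2 = np.abs(f3.astype(np.int32) - f2.astype(np.int32))   # difference of the latter two frames
    b1 = g1 > t1                       # binarized motion variation image g1
    b2 = g2 > t2                       # binarized motion variation image g2
    return np.logical_and(b1, b2)      # AND keeps pixels that move in both differences
```

A call such as three_frame_difference(prev, cur, nxt, 20, 20) returns a boolean motion mask for the middle frame cur.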

2. Threshold selection of difference images

In order to extract the motion objects, it is necessary to select a proper threshold T for the frame difference images g1(x, y) and g2(x, y) by using a gray-feature-based approach, and then to binarize the frame difference images with T. The threshold selection process consists of 4 steps:

(1) Among the three frames, divide the first and the second frames into 2×2 blocks. Add the 4 pixels of each block in the second frame to get a sum ai, and add the 4 pixels of the corresponding block in the first frame to get bi. Here m and n are respectively the length and the width of the image, and k is the number of 2×2 blocks.

(2) Binarize the frame difference image of the second and the first frames by using the threshold T = 1.1S to get the binarization image. In this step a rough threshold is obtained, from which a finer threshold can then be derived.

(3) Compute the mean value M of the pixels less than the threshold T in the frame difference image. Here q is the number of pixels less than the threshold T in the frame difference image. Take M as the threshold of the frame difference image of the second and the first frames and then binarize that frame difference image.

(4) Binarize the frame difference image of the third and the second frames by the threshold M, and then compute the mean value of the pixels less than M in that frame difference image. Take this mean value as the threshold of the frame difference image of the third and the second frames and then binarize it.
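The defining equations for S and M do not survive in this copy, so the sketch below only assumes a natural reading of the surrounding text: S is taken as the mean absolute difference between corresponding 2×2 block sums, and the refined threshold is the mean of the difference pixels that fall below the rough threshold T = 1.1S. Both assumptions are mine, not the paper's formulas.

```python
import numpy as np

def block_sums(frame):
    """Sum each non-overlapping 2x2 block of a grayscale frame."""
    h, w = frame.shape
    f = frame[:h - h % 2, :w - w % 2].astype(np.float64)
    return f[0::2, 0::2] + f[0::2, 1::2] + f[1::2, 0::2] + f[1::2, 1::2]

def adaptive_threshold(f1, f2):
    """Rough-then-refined threshold for the difference image of f2 and f1.

    Assumed: S = mean |ai - bi| over the 2x2 block sums, T = 1.1 * S,
    and the refined threshold M = mean of difference pixels below T.
    """
    a = block_sums(f2)                          # block sums ai of the second frame
    b = block_sums(f1)                          # block sums bi of the first frame
    s = np.mean(np.abs(a - b))                  # assumed definition of S
    t = 1.1 * s                                 # rough threshold from step (2)
    diff = np.abs(f2.astype(np.float64) - f1.astype(np.float64))
    below = diff[diff < t]                      # pixels below the rough threshold
    return below.mean() if below.size else t    # refined threshold M from step (3)
```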

3. Identification of the connected region and extraction of the coordinates of the motion region

The AND-operation image is labeled to obtain the coordinates of the motion region by an object clustering approach, which includes 2 steps:

(1) Label each object pixel

Scan the AND-operation image from left to right and from top to bottom. When a pixel belonging to the object region (gray level 1) is met, examine the 8 neighbors of this pixel. If none of them is labeled, label the pixel with a new label number (starting from 1 and increasing by 1 each time a new label is created); otherwise label the current pixel with the smallest label number among its 8 neighboring pixels.

(2) Cluster each object region

Scan the motion object image from top to bottom line by line (from left to right and then from right to left). When an object pixel is met, examine its 8 neighbors. If the pixel's label number is greater than the smallest label number among them, it is replaced by that smallest label number. When the whole image has been scanned, scan it once more from bottom to top in the same way, until none of the object pixel labels changes any more.

For head-shoulder images there may be some irregular regions after the AND difference image has been processed by the object clustering procedure. These irregular regions are denoised and then saved as the motion regions of the image.
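A compact sketch of this label-and-cluster procedure is given below. It follows the forward/backward sweeping idea described above rather than a union-find implementation, and the function name is mine.

```python
import numpy as np

def label_motion_regions(mask):
    """Label 8-connected regions of a binary motion mask (True = object pixel)."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 1
    # Step (1): provisional labels, taken from already labeled neighbors when possible.
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                continue
            neigh = labels[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            prev = neigh[neigh > 0]
            if prev.size == 0:
                labels[y, x] = next_label      # start a new label
                next_label += 1
            else:
                labels[y, x] = prev.min()      # reuse the smallest neighboring label
    # Step (2): sweep down and up, propagating the smallest label until stable.
    changed = True
    while changed:
        changed = False
        for rows in (range(h), range(h - 1, -1, -1)):
            for y in rows:
                for x in range(w):
                    if not mask[y, x]:
                        continue
                    neigh = labels[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
                    smallest = neigh[neigh > 0].min()
                    if smallest < labels[y, x]:
                        labels[y, x] = smallest
                        changed = True
    return labels
```

Bounding boxes of the labeled regions then give the motion-region coordinates used in the next section.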

III. Difference Adjacent Block Prediction Motion Estimation and Compensation

Step 1: Perform a 3-level wavelet decomposition of the image sequence and conduct motion detection on the lowest-frequency sub-band LL3 using the three-frame difference method. Extract the motion regions and divide them into 2×2 blocks. Denote the motion vector of each block by V3(x, y); the motion vectors of the other 3 sub-bands at this level are indexed by (x, y) as well. Define a reliability flag R. If not all the pixels in a block share the same state, it can be determined that the block lies on the border of the moving object, so its motion estimate is not reliable.

Step 2: Check the corresponding flags of the pixels of the current block in the low-frequency sub-band. If they all belong to the still region, then R = 1 and the motion vector is 0, which does not need to be estimated; the motion vectors of the corresponding positions in the other sub-bands are also 0.
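For Step 1, the 3-level decomposition and the LL3 sub-band can be obtained, for example, with PyWavelets, as sketched below. The wavelet name is only a placeholder and not a value taken from the paper.

```python
import numpy as np
import pywt

def lowest_subband(frame, wavelet="bior2.2", levels=3):
    """Return the coarsest approximation (LL3) of a 3-level 2-D wavelet decomposition.

    pywt.wavedec2 returns [LL_n, detail triplet at level n, ..., detail triplet at level 1],
    so the first element is the LL3 sub-band used for motion detection.
    """
    coeffs = pywt.wavedec2(frame.astype(np.float64), wavelet, level=levels)
    return coeffs[0]
```

The LL3 sub-bands of three consecutive frames can then be passed to the three-frame difference detector sketched in Section II.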

Step 3: If all the pixels of the current block in LL3 belong to the motion region, then R = 1 and the prediction is made from the motion of the adjacent blocks. The relation between the block at (x, y) and its adjacent blocks is shown in Fig.2. The motion estimation prediction is formed from the adjacent blocks, where the value of each weight a1 is 0 or 1: if the corresponding adjacent block is located in the same region as the current block, the value is 1, otherwise it is 0. Check whether the prediction value exceeds the search bound; if not, start searching with the prediction value as the center. In order to promote consistency, the Mean Absolute Difference (MAD) is taken as the matching criterion.

Fig.2 The sketch of the current block (R = 1) and its adjacent blocks
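A minimal sketch of the Step 3 search is given below: the vector predicted from the adjacent blocks is used as the search center and MAD is the matching cost. The block size and search range are illustrative parameters, not values taken from the paper.

```python
import numpy as np

def mad(block_a, block_b):
    """Mean Absolute Difference between two equally sized blocks."""
    return np.mean(np.abs(block_a.astype(np.float64) - block_b.astype(np.float64)))

def block_match(cur, ref, top, left, pred_v, block=2, search=2):
    """Search a small window centered on the predicted vector pred_v.

    cur, ref : current and reference sub-band images
    top, left: top-left corner of the current block in cur
    pred_v   : (dy, dx) prediction obtained from the adjacent blocks
    """
    h, w = ref.shape
    cur_blk = cur[top:top + block, left:left + block]
    best_v, best_cost = pred_v, np.inf
    for dy in range(pred_v[0] - search, pred_v[0] + search + 1):
        for dx in range(pred_v[1] - search, pred_v[1] + search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > h or x + block > w:
                continue                              # candidate falls outside the reference image
            cost = mad(cur_blk, ref[y:y + block, x:x + block])
            if cost < best_cost:
                best_v, best_cost = (dy, dx), cost
    return best_v
```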

Step 4: If there are both motion pixels and still pixels in the current block, it can be inferred that the current block is located at the border of the motion region. Its motion is less reliable, so it cannot be processed as a reliable block. In order to improve the reliability of the prediction, more information needs to be obtained, so such blocks are processed later. When the first scan is finished, the motion vectors of all reliable blocks have been obtained, and the vectors of the reliable blocks adjacent to an unreliable block are then used to form its prediction; here a1 has the same meaning as above. The positional relation between a current block with R = 0 and its adjacent blocks is shown in Fig.3.

Fig.3 The sketch of the current block (R = 0) and its adjacent blocks
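The exact prediction formula for the R = 0 case does not survive with the equation in this copy; the sketch below assumes the natural reading, namely an average of the motion vectors of those adjacent blocks that are both reliable and in the same region (a1 = 1). The helper name and the zero-motion fallback are mine.

```python
import numpy as np

def predict_border_vector(neighbor_vectors, same_region, reliable):
    """Predict the vector of a border (R = 0) block from its adjacent blocks.

    neighbor_vectors: list of (dy, dx) vectors of the adjacent blocks
    same_region     : list of 0/1 flags a1 (1 if the neighbor shares the current region)
    reliable        : list of 0/1 flags (1 if the neighbor's vector came from a reliable block)
    """
    picked = [v for v, a, r in zip(neighbor_vectors, same_region, reliable) if a and r]
    if not picked:
        return (0, 0)                               # no usable neighbor: fall back to zero motion
    mean_v = np.mean(np.array(picked, dtype=np.float64), axis=0)
    return tuple(np.rint(mean_v).astype(int))
```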

Step 5: Estimate the corresponding blocks in the other sub-bands by using the block motion vectors of the LL3 sub-band of the reference frame. For the level-m (m < 3) sub-images, the initial value of the motion vector of the block at (x, y) is derived from the level-3 vector. Repeating Steps 3 and 4 then yields the prediction value of the motion estimation, from which the prediction value of the motion compensation is obtained.

Step 6: The final motion vector is obtained for every block at (x, y) in each sub-image. The motion compensation prediction of any pixel (x, y) in a block is therefore decided not only by the motion vector of this block but also by the motion vectors of the adjacent blocks. At a fixed bit-rate, more bits can be allocated to the residual information to improve the quality of the reconstructed images. Meanwhile, as the estimation is conducted only for the motion region, the time consumed in estimation is much less: it is proportional to the size of the motion region rather than to the whole image.
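The formula for the initial vector at level m is also missing in this copy. The sketch below assumes the usual multi-resolution convention that a coarse-level vector is doubled for each finer level, which is consistent with Step 5's idea of seeding the finer sub-bands from LL3; the exact scaling used by the paper may differ.

```python
def initial_vector(v3, m):
    """Initial motion vector at decomposition level m (m < 3), seeded from level 3.

    Assumption: coordinates double at each finer level, so the level-3 vector
    v3 = (dy, dx) is scaled by 2 ** (3 - m) before refinement by Steps 3 and 4.
    """
    scale = 2 ** (3 - m)
    return (v3[0] * scale, v3[1] * scale)
```

For example, initial_vector((1, -1), 1) gives (4, -4) as the starting point at the finest level.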

IV. Analysis of the Experiment Results

The experiment conditions are shown in Tab.1; 100 frames each of the Claire and Miss America sequences are tested.

Tab.2 shows the bytes needed to encode the motion vectors (motion vector, Bytes/frame) and the total time (total time, seconds) consumed in motion estimation and compensation by the MRME/MRMC approach and the FDMRME approach. By making use of the sub-band orientation selectivity of the wavelet decomposition, the FDMRME algorithm improves the accuracy of the base motion vectors in the low-frequency sub-band, and the error of the motion vectors of each sub-band is decreased, which confirms the efficiency of the motion vector encoding. The time consumed for Miss America in CIF format is even less than that for Claire in QCIF format: by using motion detection, the video frame is divided into motion regions and still regions, and the encoder performs no motion estimation and compensation for the still regions, so the time consumed is determined by the motion region rather than by the whole image, which improves the efficiency of the system.

Tab.1 Testing condition

Tab.2 Contrast of motion estimation performance

Tab.3 shows the encoding results of the FDMRME method on the test sequences. Here PSNR represents the quality of the reconstructed image, motion vector has the same meaning as above, ER (Bytes/frame) represents the number of bytes for transmitting the error image after motion compensation, and TOTAL (Bytes/frame) represents the total number of bytes of ER and motion vector. From Tab.3, at a given bit-rate, the bits spent on encoding the motion vectors in the FDMRME algorithm are much fewer, so more bits can be allocated to the residual after motion compensation to improve the quality of the reconstructed image.

Tab.3 Contrast of encoding results
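For reference, the PSNR reported in Tab.3 is the standard peak signal-to-noise ratio between an original frame and its reconstruction; a small helper for 8-bit images is sketched below.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two 8-bit grayscale frames."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```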

V. Conclusion

Variable-block multi-resolution motion estimation and compensation is an important approach to high-efficiency video encoding in the wavelet domain. By analyzing the problems of the MRME method, this paper proposes segmenting the motion region through motion detection and using it to guide the process of motion estimation. The consistency and the accuracy of the motion vectors are promoted, the encoding efficiency is improved, and meanwhile the complexity of motion estimation is reduced.

From: Tang Guowei, Multi-resolution Motion Estimation and Compensation based on Adjacent Prediction of Frame Difference in Wavelet Domain, Journal of Electronics, May 2009.
