Automatic detection method of jitter of video

A technology for automatic video jitter detection, applied in the field of digital video. It addresses the current lack of detection methods and standards for digital video jitter, and achieves high accuracy and improved efficiency.

Active Publication Date: 2014-11-05
SHANGHAI JIAO TONG UNIV
Cites: 8 | Cited by: 27

AI-Extracted Technical Summary

Problems solved by technology

[0004] At present, there is no complete detection method and standard for the jitter detection and evalua...

Abstract

The invention discloses an automatic method for detecting jitter in a video. The method comprises the following steps: first, selecting feature points in the video frame; second, tracking and matching the feature points across adjacent frames; third, removing abnormal points from the feature points to obtain inter-frame motion vectors; fourth, extracting video jitter frequency and video jitter amplitude features; fifth, extracting histogram features of the inter-frame optical flow vectors; and sixth, judging the degree of video jitter with a classifier. The method requires no prior knowledge when judging whether a video jitters and achieves a high accuracy rate.

Examples

  • Experimental program (1)

Example Embodiment

[0022] The present invention will be described in detail below with reference to specific embodiments. The following examples will help those skilled in the art to further understand the present invention, but do not limit it in any form. It should be noted that those skilled in the art can make several modifications and improvements without departing from the concept of the present invention; all of these fall within the protection scope of the present invention.
[0023] Figure 1 shows an overall flow chart of an embodiment of the present invention, which specifically includes:
[0024] The first step is to select feature points in the current video frame;
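
A minimal sketch of this first step using OpenCV's Shi-Tomasi corner detector (the patent does not name a specific detector, so goodFeaturesToTrack is only one plausible choice, and all parameter values here are illustrative):

```python
import cv2

def select_feature_points(frame_gray, max_points=200):
    """Pick corner-like feature points spread evenly over the frame."""
    # minDistance keeps detected points spatially spread out, matching
    # the requirement that points cover the whole frame evenly.
    return cv2.goodFeaturesToTrack(
        frame_gray,
        maxCorners=max_points,
        qualityLevel=0.01,
        minDistance=frame_gray.shape[1] // 20,
    )  # (N, 1, 2) float32 array, or None if no corners are found
```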
[0025] The second step is to track and match feature points in adjacent frames to obtain inter-frame optical flow vectors;
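
One way to realize this tracking step is pyramidal Lucas-Kanade, sketched below (an assumption: the patent only says "track and match" without naming an algorithm):

```python
import cv2
import numpy as np

def track_points(prev_gray, next_gray, prev_points):
    """Track feature points into the next frame and return flow vectors."""
    next_points, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_points, None)
    ok = status.ravel() == 1  # keep only successfully tracked points
    flow = (next_points - prev_points).reshape(-1, 2)[ok]
    return flow               # one (dx, dy) optical flow vector per point
```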
[0026] The third step is to remove abnormal points from the inter-frame optical flow vectors obtained in the second step. Optical flow vectors whose phase or amplitude differs greatly from the majority of optical flow vectors are regarded as abnormal and are removed.
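
A sketch of this outlier rejection under one concrete reading: vectors whose magnitude or direction deviates too far from the median are dropped (the thresholds, and the use of the median as the "majority" reference, are assumptions):

```python
import numpy as np

def remove_outliers(flow, mag_tol=3.0, ang_tol=np.pi / 4):
    """Drop flow vectors whose amplitude or phase differs greatly from the rest."""
    mag = np.hypot(flow[:, 0], flow[:, 1])
    ang = np.arctan2(flow[:, 1], flow[:, 0])
    d_ang = np.angle(np.exp(1j * (ang - np.median(ang))))  # wrap to [-pi, pi]
    keep = (np.abs(mag - np.median(mag)) < mag_tol) & (np.abs(d_ang) < ang_tol)
    return flow[keep]
```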
[0027] In the fourth step, the inter-frame motion model is estimated from the inter-frame optical flow vectors obtained in the third step, and two features, the video jitter frequency and the video jitter amplitude, are then extracted from this model. The inter-frame motion model is a translation model, estimated as the average of the inter-frame optical flow vectors.
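
Since the model is purely translational, the estimate reduces to a per-frame-pair average, as in this sketch:

```python
import numpy as np

def estimate_translation(flow):
    """Estimate the inter-frame translation model P = (P(x), P(y))
    as the mean of the inlier optical flow vectors."""
    return np.asarray(flow).mean(axis=0)  # (mean dx, mean dy)
```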
[0028] In the fifth step, the optical flow vector histogram is computed from the inter-frame optical flow vectors obtained in the third step and used as a video jitter feature. The histogram is built only from the optical flow vectors that remain after outlier removal, and it aggregates all of the normal inter-frame optical flow vectors.
[0029] In the sixth step, the video jitter frequency, the video jitter amplitude, and the optical flow vector histogram extracted in the fourth and fifth steps are input as jitter features to a trained classifier, which outputs the degree of jitter of the video. The training set of the jitter-level classifier consists of stabilized video shots and artificially synthesized jittery videos with different jitter levels; the labels of the test-set videos are assigned by subjective evaluation of video jitter. A support vector machine is used as the video jitter-level classifier.
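
A minimal training sketch with scikit-learn's SVC standing in for the support vector machine (the RBF kernel and the 82-dimensional feature layout, 1 frequency value + 1 amplitude value + 40 X-direction bins + 40 Y-direction bins, are assumptions consistent with the features described above):

```python
from sklearn.svm import SVC

def train_jitter_classifier(features, labels):
    """features: (n_videos, 82) array of jitter features;
    labels: jitter level per video, from stabilized shots plus
    synthetically shaken copies at several jitter levels."""
    clf = SVC(kernel="rbf")
    return clf.fit(features, labels)
```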
[0030] Based on the above steps, the specific implementation details of a video jitter automatic detection method are as follows:
[0031] Figure 2 is a schematic diagram of feature point extraction in the present invention. The selected feature points should be distributed as evenly as possible over the entire video frame; their positions are marked with small circles.
[0032] After feature point tracking and matching successfully yields the inter-frame optical flow vectors, the video image is considered to jitter in the X direction if formula (1) holds, and in the Y direction if formula (2) holds.
[0033] $[P(x)|_{k+1} - P(x)|_{k}] \cdot [P(x)|_{k} - P(x)|_{k-1}] < 0$   (1)
[0034] $[P(y)|_{k+1} - P(y)|_{k}] \cdot [P(y)|_{k} - P(y)|_{k-1}] < 0$   (2)
[0035] Here, the first factor on the left-hand side of equation (1) is the estimated inter-frame translation from frame k to frame k+1 in the X direction, and the second factor is the estimated inter-frame translation from frame k-1 to frame k in the X direction; the two factors of equation (2) are the corresponding estimates in the Y direction. Traversing the whole video sequence in this way yields the video jitter frequency.
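
A sketch of this traversal: with t[k] denoting the estimated translation from frame k to k+1, equations (1) and (2) reduce to sign changes between consecutive translations (normalizing the count by the number of frames is an assumption; the patent does not state how the frequency is scaled):

```python
import numpy as np

def jitter_frequency(translations, num_frames):
    """Count sign flips of consecutive inter-frame translations, per axis."""
    t = np.asarray(translations)                # shape (K, 2): per-pair (dx, dy)
    flips_x = np.sum(t[1:, 0] * t[:-1, 0] < 0)  # equation (1) events
    flips_y = np.sum(t[1:, 1] * t[:-1, 1] < 0)  # equation (2) events
    return (flips_x + flips_y) / num_frames
```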
[0036] The video jitter amplitude is the maximum distance that the inter-frame motion model moves in any single direction (positive X, negative X, positive Y, or negative Y). The amplitude is normalized so that it adapts to video images of different sizes.
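
Under one reading of this definition, the amplitude feature can be computed as below (taking the per-axis maximum and normalizing X by width and Y by height is an interpretation, not the patent's literal wording):

```python
import numpy as np

def jitter_amplitude(translations, width, height):
    """Largest inter-frame translation along any single axis, size-normalized."""
    t = np.asarray(translations)
    return max(np.abs(t[:, 0]).max() / width,   # X amplitude, fraction of width
               np.abs(t[:, 1]).max() / height)  # Y amplitude, fraction of height
```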
[0037] Figure 3 shows the optical flow vector histogram features in an embodiment of the present invention. When computing the optical flow vector histogram, each optical flow vector is projected onto the X and Y directions and the two projections are counted separately. Taking the X direction as an example: the X histogram distinguishes the positive X direction (positive values) from the negative X direction (negative values), and its step size is 1% of the video width. Since inter-frame optical flow displacements are rarely greater than 20% of the video width, the histogram has 40 bins, 20 each for the positive and negative X directions: displacements up to 19% of the video width fall into 1%-wide bins, and any displacement beyond 19% is collected in the outermost bin. From left to right, the bin ranges are (-∞, -19%], (-19%, -18%], …, (-1%, 0], (0, 1%], (1%, 2%], …, (18%, 19%], (19%, +∞), 40 bins in total. The Y-direction histogram is built the same way, except that its step size is 1% of the video height rather than the video width. Finally, the histogram is normalized, so that each bin's height is the fraction of all optical flow vectors whose displacement falls in that interval.
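
A sketch of the 40-bin X-direction histogram described above (note that np.histogram uses bins half-open on the right rather than the patent's right-closed intervals, a boundary detail that does not matter in practice):

```python
import numpy as np

def flow_histogram_x(flow_dx, width):
    """40-bin histogram of X displacements as fractions of the video width."""
    d = np.asarray(flow_dx, dtype=float) / width
    # 41 edges -> 40 bins: (-inf, -19%], ..., (-1%, 0], (0, 1%], ..., (19%, +inf)
    edges = np.concatenate(([-np.inf], np.linspace(-0.19, 0.19, 39), [np.inf]))
    hist, _ = np.histogram(d, bins=edges)
    return hist / max(len(d), 1)  # normalized: fraction of vectors per bin
```

The Y-direction histogram is identical, with the video height in place of the width.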
[0038] Figure 3 plots the optical flow vector histograms of three videos: panel (a) is the X-direction histogram of a stable video, panel (b) the Y-direction histogram of the stable video, panel (c) the X-direction histogram of a slightly shaking video, panel (d) the Y-direction histogram of the slightly shaking video, panel (e) the X-direction histogram of a severely shaking video, and panel (f) the Y-direction histogram of the severely shaking video.
[0039] Figure 4 is the recall-precision curve of the video jitter detection classifier of the present invention.
[0040] Figure 5 is a schematic diagram of an application case of the present invention; it shows the contribution of the present invention to the video stabilization process and how it refines the overall video stabilization procedure.
[0041] It can be seen from the above embodiments that the present invention requires no prior knowledge when judging whether a video jitters, and achieves high accuracy.
[0042] Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above-mentioned specific embodiments, and those skilled in the art can make various variations or modifications within the scope of the claims, which do not affect the essential content of the present invention.