
Movement perception model extraction method based on time-space domain

A motion perception and motion modeling technology, applied to color TV components, TV system components, image data processing, and related fields. It addresses problems such as the complexity of video object content, the lack of a general and effective segmentation method, and the inability of current computers to observe, recognize, and understand images, achieving the effect of an improved motion perception model.

Publication date: 2010-10-06 (status: Inactive)
SHANGHAI UNIV
Cites: 0; Cited by: 11

AI Technical Summary

Problems solved by the technology

Owing to the complexity of video content and the current state of artificial intelligence technology, computers still lack the human ability to observe, recognize, and understand images.
There is as yet no general and effective segmentation method.



Examples


Embodiment 1

[0078] Embodiment 1: The time-space domain motion perception model extraction method of the present invention was implemented, following the program block diagram shown in Figure 1, on a PC test platform with an Athlon X2 2.0 GHz CPU and 1024 MB of memory. Figure 6 shows the motion perception model of one frame obtained by running the Mother-Daughter sequence through the JM10.2 verification model.

[0079] Referring to Figure 1, the time-space domain motion perception model extraction method of the present invention extracts an initial motion model by analyzing the motion vectors generated during the encoding process. At the same time, a video image segmentation model is obtained using luminance information in the spatial domain. On the basis of these two models, the final motion perception model is obtained by applying an edge-judgment principle. The motion perception model obtained by this method combines the characteristics of the spatial domain and the time domain...
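The overall flow can be pictured with the short sketch below. It is only an illustration of how the three stages compose; the thresholding rules are simple stand-ins, not the patent's actual motion model, segmentation model, or edge-judgment criterion:

```python
import numpy as np

def initial_motion_model(mv):
    """Temporal cue: flag blocks whose motion-vector magnitude is
    significant (a stand-in for the patent's motion model)."""
    mag = np.hypot(mv[..., 0], mv[..., 1])
    return mag > mag.mean()

def spatial_segmentation(luma):
    """Spatial cue: a crude luminance-gradient segmentation (a stand-in
    for the patent's brightness-based segmentation model)."""
    gy, gx = np.gradient(luma.astype(float))
    grad = np.hypot(gx, gy)
    return grad > grad.mean() + grad.std()

def motion_perception_model(mv, luma):
    """Fuse the two cues; the patent's edge-judgment principle is
    approximated here by simple agreement of the two masks."""
    return initial_motion_model(mv) & spatial_segmentation(luma)

# Toy usage: random arrays standing in for one decoded frame's
# per-block motion vectors and its luminance plane.
rng = np.random.default_rng(0)
mv = rng.normal(size=(36, 44, 2))
luma = rng.integers(0, 256, size=(36, 44)).astype(float)
model = motion_perception_model(mv, luma)  # boolean perception mask
```

In a real pipeline the motion vectors and luminance plane would come from the video decoder rather than random data.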

Embodiment 2

[0086] Embodiment 2: This embodiment is basically the same as Embodiment 1; its distinguishing features are as follows. The motion model building process of step (2) above is as follows:

[0087] ① Apply 3×3-mask mean filtering to the motion vectors generated during the encoding process (a code sketch of steps ①–③ follows this list);

[0088] ② Assume the motion vector of macroblock (i, j) in the n-th frame is denoted PV(i, j) = (x_{n,i,j}, y_{n,i,j}); the motion vector of every pixel in the macroblock is PV(i, j), and the motion direction of the vector is θ_{n,i,j} = arctan(y_{n,i,j} / x_{n,i,j});

[0089] ③ Calculate the probability histogram distribution function of the motion-vector directions of the current pixel and its eight surrounding neighbors, where SH(·) is the histogram formed by the direction values θ_{n,i,j} of the motion vectors of the current pixel and its eight neighbors, m is the bin count (space size) of the histogram, and w represents the N×N search window...
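A minimal sketch of steps ① through ③, assuming the per-macroblock motion vectors of one frame are available as a NumPy array. The bin count m = 8, the 3×3 neighborhood standing in for the N×N window w, and the use of arctan2 in place of arctan are all assumptions, since this page does not reproduce the original formula:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def direction_histogram_model(mv, m=8):
    """Sketch of steps 1-3 of Embodiment 2 (not the patent's exact formula).

    mv : (H, W, 2) array; mv[i, j] = (x_{n,i,j}, y_{n,i,j}) is the
         motion vector of macroblock (i, j) in frame n.
    m  : number of direction bins in the histogram SH() (assumed value).
    """
    # Step 1: 3x3-mask mean filtering of each motion-vector component.
    mv = np.stack([uniform_filter(mv[..., k].astype(float), size=3)
                   for k in range(2)], axis=-1)

    # Step 2: motion direction theta_{n,i,j}; arctan2 keeps the full
    # [-pi, pi] range, whereas the text writes arctan(y / x).
    theta = np.arctan2(mv[..., 1], mv[..., 0])

    # Step 3: per-block probability histogram of the directions of the
    # block and its eight neighbours (3x3 window standing in for N x N).
    bins = np.floor((theta + np.pi) / (2 * np.pi) * m).astype(int) % m
    H, W = bins.shape
    hist = np.zeros((H, W, m))
    padded = np.pad(bins, 1, mode='edge')
    rows, cols = np.indices((H, W))
    for di in range(3):
        for dj in range(3):
            neighbour = padded[di:di + H, dj:dj + W]
            hist[rows, cols, neighbour] += 1
    return hist / 9.0  # probability distribution over the m bins
```

On real data, mv would be dumped from the encoder's motion estimation stage (for example, from the JM reference software used in Embodiment 1).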


PUM

No PUM

Abstract

The invention relates to a movement perception model extraction method based on the time-space domain. The method comprises the following specific steps: inputting a video coding frame; establishing a movement model; scanning movement perception objects; establishing a time-space domain segmentation model; and finally obtaining the final time-space domain movement perception model by combining an edge determination method with the movement model and the time-space domain segmentation model. By taking into account the regional consistency of moving objects and combining time-space video image segmentation, the invention improves the extraction of video moving objects and establishes a movement perception model based on the time-space domain.

Description

Technical Field

[0001] The invention relates to a motion perception model extraction method based on the time and space domains. It integrates several data processing methods to extract the video moving objects that human eyes attend to; in particular, it integrates video spatial-domain image segmentation with the analysis of motion vectors, greatly improving the motion perception model.

Background Art

[0002] The establishment of motion perception models has become a research hotspot in video processing technology. A video is a sequence of images over continuous time, and the motion produced by consecutive images gives the extraction of video moving objects practical significance. The moving object in a video is the part viewers attend to most, so establishing a good motion perception model is the focus of many researchers.

[0003] The detection and segmentation of video objects is the premise and basis for...


Application Information

IPC(8): G06T7/20; H04N5/14
Inventors: 石旭利, 潘琤雯, 张兆扬, 魏小文
Owner: SHANGHAI UNIV