
Feature fusion motion video key frame extraction method based on human body posture recognition

A technology combining video key frames and fused features, applied in the field of fused-feature motion video key frame extraction. It addresses the problem that frame-by-frame processing of a large amount of image information places a heavy burden on information transmission and computation, and achieves the effect of reducing the amount of calculation while avoiding missed detections and false detections.

Pending Publication Date: 2022-07-08
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

[0003] With the development of networks, multimedia information retrieval has an ever-increasing impact on many fields of society. Traditional video retrieval applies image retrieval methods frame by frame; this requires processing a large amount of image information, which places a heavy burden on information transmission and computation.

Method used



Examples


Specific Embodiment

[0040] The present invention provides a method for extracting key frames from motion video using fused features. As shown in Figure 1, the method fuses static features extracted by a lightweight human pose recognition algorithm with motion features extracted by spatial graph convolution, improving the accuracy and completeness of key frame detection. A specific embodiment is as follows:

[0041] (1) The target video segment is split frame by frame, dividing the video into a series of video frames.

[0042] (2) To better preserve the original information in the input image and reduce loss, the residual network ResNet50 is used for static feature extraction, and the data dimension is reduced to 256. The static features of the video frames are denoted Ss = [Ss1, Ss2, …, SsT].
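The dimensionality-reduction step above can be sketched as follows. This is a minimal illustration only: the 2048-dim ResNet50 pooled features are simulated with random vectors, and the projection matrix stands in for a learned linear layer; the frame count T = 30 is also an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for ResNet50 pooled features: T frames x 2048 dims.
T, resnet_dim, target_dim = 30, 2048, 256
resnet_features = rng.standard_normal((T, resnet_dim))

# A learned linear projection reduces each frame's feature to 256 dims;
# the weight matrix here is random, for illustration only.
W_proj = rng.standard_normal((resnet_dim, target_dim)) / np.sqrt(resnet_dim)
S_s = resnet_features @ W_proj  # static features Ss = [Ss1, ..., SsT]

print(S_s.shape)  # one 256-dim static feature vector per frame
```

In a real pipeline the projection would be trained jointly with the rest of the network rather than fixed at random.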

[0043] (3) The skeleton data of the human body is abstracted in three-dimensional space, using the lightweight HRNe...
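The motion-feature branch applies spatial graph convolution over the skeleton joints. A single such layer can be sketched as below, under stated assumptions: a toy 5-joint chain skeleton (real pose estimators output more joints, e.g. COCO-style keypoints), random layer weights, and the common normalized-adjacency formulation H = D⁻¹(A + I)XW.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical skeleton: 5 joints connected in a simple chain.
J = 5
A = np.zeros((J, J))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

# One spatial graph-convolution layer: H = D^-1 (A + I) X W
A_hat = A + np.eye(J)                      # adjacency with self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # degree normalization
X = rng.standard_normal((J, 3))            # per-joint 3-D coordinates
W = rng.standard_normal((3, 8))            # random weights, illustration only
H = D_inv @ A_hat @ X @ W                  # neighborhood-aggregated features

print(H.shape)  # one 8-dim feature per joint
```

Each joint's output mixes its own coordinates with those of its skeletal neighbors, which is what lets the network pick up motion patterns across the body.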



Abstract

The invention discloses a fused-feature motion video key frame extraction method based on human body posture recognition. The method comprises the following steps: S1, segmenting the target video frame by frame; S2, extracting static features with a residual network and reducing their dimensionality to obtain the static features of the video frames; S3, abstracting the skeleton data of the human body in three-dimensional space and extracting the motion features Sd of the video frames; S4, linearly weighting the extracted static and motion features according to their weights; S5, extracting global features from the fused features through a self-attention mechanism, calculating the importance of each video frame, extracting key frames of the corresponding actions through a Bernoulli function, and optimizing the result set through reinforcement learning. The method effectively alleviates the problems of moving-target feature loss and missed key frame detection.
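Steps S4 and S5 above can be sketched in one pass. Everything here is an assumption for illustration: the fusion weights (0.6/0.4), the feature dimension, the random attention projections, and the stand-in features; the reinforcement-learning refinement of the result set is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, d = 30, 256                      # frame count and feature dim (assumed)
S_s = rng.standard_normal((T, d))   # static features (stand-in)
S_d = rng.standard_normal((T, d))   # motion features (stand-in)

# S4: linear weighting of static and motion features (weights assumed).
w_s, w_d = 0.6, 0.4
F = w_s * S_s + w_d * S_d

# S5: one self-attention pass for global context (random projections,
# illustration only), then a per-frame importance score in (0, 1).
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
Q, K, V = F @ Wq, F @ Wk, F @ Wv
G = softmax(Q @ K.T / np.sqrt(d)) @ V                # global features
importance = 1.0 / (1.0 + np.exp(-G.mean(axis=1)))   # sigmoid score

# Key frames are sampled from a Bernoulli distribution over the scores;
# the paper then optimizes this set with reinforcement learning (omitted).
key_mask = rng.random(T) < importance
print(int(key_mask.sum()), "candidate key frames selected")
```

Bernoulli sampling (rather than a hard threshold) keeps the selection stochastic, which is what makes the downstream reinforcement-learning optimization possible.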

Description

Technical Field

[0001] The present invention belongs to the field of video processing, and particularly relates to a method for extracting key frames from motion video using fused features.

Background Technique

[0002] A video is an image sequence whose content is much richer than that of a single image, with strong expressiveness and a large amount of information. Video analysis is generally carried out after decomposing the video into frames, but video frames usually contain a great deal of redundancy. Extracting key frames and analyzing only those can effectively reduce computing time.

[0003] With the development of networks, multimedia information retrieval has an ever-increasing impact on many fields of society. Traditional video retrieval applies image retrieval methods frame by frame; this requires processing a large amount of image information, which places a heavy burden on information transmission and computation. In addition, with today's popular home camera equipment, we often need to save a piec...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06V20/40; G06V40/20; G06K9/62; G06N3/04; G06N3/08; G06V10/80; G06V10/82
CPC: G06N3/08; G06N3/045; G06F18/253
Inventor 郑艳伟江文李博韬于东晓
Owner SHANDONG UNIV