
Gesture image key frame extraction method based on deep learning

A deep-learning-based extraction method, classified under neural learning methods, biological neural network models, and related instruments. It addresses problems such as the limited expressive power of raw image features, and achieves the effects of enhanced feature expressiveness and reduced computational complexity.

Active Publication Date: 2019-08-09
康旭科技有限公司
Cites: 15 · Cited by: 12

AI Technical Summary

Problems solved by technology

[0004] Addressing the influence of the background region on the determination of video key frames and the limited expressive power of raw image features, the present invention proposes a gesture image key frame extraction method based on deep learning, a video key frame extraction method suited to sign language videos in which the range of motion is small.


Image

  • Gesture image key frame extraction method based on deep learning

Examples


Embodiment Construction

[0044] The present invention will be further described below in conjunction with the drawings and embodiments.

[0045] The present invention mainly addresses key frame extraction in gesture videos. Since the recognition objects are self-defined gesture actions, a dynamic gesture video database was built for this embodiment. Part of the data set used is shown in Figure 2: the figure shows some of the gesture video frame images converted from one of the gesture videos. The images are saved in .jpg format, with a final size of 1280×720.

[0046] As shown in Figure 1, the method of the present invention first converts the gesture video into gesture video frame images, detects the gesture target region with the Mobilenet-SSD target detection model, and segments the marked gesture target box to obtain the hand image. It then extracts the abstract features of the hand region through the VGG16 model.
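The detect-then-crop step above can be sketched as follows. This is an illustrative sketch, not the patent's code: `detect_hand` is a hypothetical stand-in for the Mobilenet-SSD detector, returning a fixed bounding box in pixel coordinates so the example is self-contained.

```python
import numpy as np

def detect_hand(frame: np.ndarray) -> tuple[int, int, int, int]:
    """Hypothetical stand-in for the Mobilenet-SSD detector.

    A real detector would return the highest-confidence hand box for
    this frame; here we fake a fixed box so the sketch runs on its own.
    """
    return (400, 200, 720, 520)  # (x1, y1, x2, y2)

def segment_hand(frame: np.ndarray) -> np.ndarray:
    """Crop the detected hand region, discarding background-area noise."""
    x1, y1, x2, y2 = detect_hand(frame)
    return frame[y1:y2, x1:x2]

# A dummy 1280x720 video frame (H, W, 3), matching the embodiment's frame size.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
hand = segment_hand(frame)
print(hand.shape)  # (320, 320, 3)
```

Cropping before feature extraction is what removes the background region's influence on the key-frame decision; everything downstream sees only the hand image.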



Abstract

The invention discloses a gesture image key frame extraction method based on deep learning. The method comprises the following steps: reading an input gesture video and converting it into video frame images; detecting the gesture in each video frame image with a Mobilenet-SSD target detection model and segmenting the detected gesture; and feeding the gesture segmentation images to a trained VGG16 model to obtain the corresponding abstract features, calculating their spatial gradients, and setting an appropriate threshold on the gradient difference between two adjacent frames to judge key frames. The Mobilenet-SSD model detects and segments the hand area, removing background-area noise; VGG-16 accurately extracts abstract hand features, greatly enhancing the expressive power of each picture while reducing the parameter count and model complexity. The method is therefore suitable for key frame extraction from videos with small-amplitude changes.
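The abstract's key-frame rule can be sketched with NumPy. This is a minimal sketch under stated assumptions: the VGG16 feature maps are replaced by toy 2-D arrays, and the particular gradient measure (`np.gradient` magnitude averaged over the map) and the fixed `threshold` value are illustrative choices, since the patent only specifies "an appropriate threshold" on the gradient difference between adjacent frames.

```python
import numpy as np

def spatial_gradient_energy(feat: np.ndarray) -> float:
    """Mean magnitude of the spatial gradient of a 2-D feature map."""
    gy, gx = np.gradient(feat.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

def select_key_frames(features: list[np.ndarray], threshold: float) -> list[int]:
    """Mark frame i+1 as a key frame when the gradient energy of
    adjacent frames' feature maps differs by more than `threshold`."""
    energies = [spatial_gradient_energy(f) for f in features]
    return [i + 1 for i in range(len(energies) - 1)
            if abs(energies[i + 1] - energies[i]) > threshold]

# Toy "feature maps": a flat map, a ramp (strong spatial gradient), then flat again.
flat = np.zeros((7, 7))
ramp = np.tile(np.arange(7.0), (7, 1))
keys = select_key_frames([flat, ramp, flat], threshold=0.5)
print(keys)  # [1, 2] -- both transitions exceed the gradient-difference threshold
```

The design point carried over from the source is that the decision is made on abstract features of the segmented hand, not on raw pixels, which is what makes small-amplitude gesture changes detectable.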

Description

Technical field

[0001] The invention belongs to the field of key frame extraction methods, and in particular relates to a gesture image key frame extraction method based on deep learning.

Background technique

[0002] Gesture video key frame extraction is a key step in dynamic gesture recognition. Extracting key frames from a gesture video reduces data complexity and improves the real-time performance of sign language recognition algorithms, and a good key frame extraction result is an important condition for accurate sign language recognition. Determining the action key frames in sign language videos has always been difficult, mainly because the range of gesture changes is relatively small, so key frames are hard to identify and redundant key frames are easily produced during extraction. At present, common key frame extraction techniques include the first/last/middle frame method, methods based on color, texture and shape features, the me...
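The first/last/middle frame baseline mentioned above can be sketched in a few lines. This is my reading of that classical content-agnostic method, not the patent's approach, shown here only for contrast with the gradient-based rule:

```python
def first_middle_last(num_frames: int) -> list[int]:
    """Classical baseline: take the first, middle and last frame
    indices as the key frames, ignoring frame content entirely."""
    if num_frames <= 0:
        return []
    # A set collapses duplicates for very short videos (e.g. 1 or 2 frames).
    idx = {0, num_frames // 2, num_frames - 1}
    return sorted(idx)

print(first_middle_last(9))  # [0, 4, 8]
print(first_middle_last(1))  # [0]
```

Because this rule never looks at the frames, it cannot adapt to the small-amplitude motion the background section describes, which motivates the feature-gradient approach of the invention.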

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/28; G06N3/045
Inventors: 田秋红, 杨慧敏, 李霖烨, 包嘉欣
Owner: 康旭科技有限公司