
A key frame extraction method of gesture images based on deep learning

A deep-learning-based extraction method, applicable to neural learning methods, instruments, biological neural network models, and the like. It addresses the limitations of existing approaches, achieves good robustness, and reduces both model complexity and the number of parameters.

Active Publication Date: 2021-05-04
康旭科技有限公司

AI Technical Summary

Problems solved by technology

[0004] To address the influence of the background area on video key frame determination and the limited expressive power of raw image features, the present invention proposes a deep-learning-based method for extracting key frames of gesture images: a key frame extraction method for sign language videos in which the range of motion is small.




Embodiment Construction

[0044] The present invention will be further described below in conjunction with the drawings and embodiments.

[0045] The present invention mainly addresses key frame extraction in gesture videos. Since the recognition objects of the present invention are self-defined gesture actions, a dynamic gesture video database was built for this specific implementation. As shown in Figure 2, part of the dataset consists of gesture video frame images converted from one of the gesture videos; the images are saved in .jpg format at a final size of 1280×720.

[0046] As shown in Figure 1, the method of the present invention first converts the gesture video into video frame images, detects the gesture target area with the Mobilenet-SSD target detection model, and segments the marked gesture target frame to obtain the hand image. It then extracts the abstract features of the hand area through ...
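The segmentation step above can be sketched as a simple crop, assuming the detector has already returned a gesture bounding box (the Mobilenet-SSD model itself is not reproduced here, and the `(x, y, w, h)` box format is an assumption for illustration):

```python
# Sketch: cut the detected gesture region out of a video frame array.
import numpy as np

def crop_hand_region(frame, box):
    """Return the sub-image of `frame` covered by box = (x, y, w, h)."""
    x, y, w, h = box
    return frame[y:y + h, x:x + w]

# Usage on a dummy 720x1280 RGB frame with a hypothetical detection box:
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
hand = crop_hand_region(frame, (400, 200, 224, 224))
print(hand.shape)  # → (224, 224, 3)
```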



Abstract

The invention discloses a deep-learning-based method for extracting key frames of gesture images. The input gesture video is first read and converted into video frame images. The Mobilenet-SSD target detection model then detects the gesture in each video frame, and the detected gesture region is segmented out. A trained VGG16 model extracts abstract features from the segmented gesture images, and the spatial gradient of those features is computed; an appropriate threshold on the gradient difference between adjacent frames determines the key frames. By using Mobilenet-SSD to detect and segment the hand area, the method removes noise from the background region, and by using VGG-16 to accurately extract abstract hand features it greatly enhances the expressive power of each picture while reducing the parameter count and model complexity, making it well suited to key frame extraction from videos with small motion changes.
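The key-frame decision described in the abstract can be sketched as follows: per-frame features, spatial gradient magnitude, difference between adjacent frames, and a threshold. The feature maps here are synthetic stand-ins for VGG16 features, and the threshold value is an assumption for illustration, not a figure from the patent.

```python
# Sketch: select key frames where the spatial-gradient energy of the
# frame's feature map changes sharply relative to the previous frame.
import numpy as np

def spatial_gradient_energy(feat):
    """Mean gradient magnitude of a 2-D feature map."""
    gy, gx = np.gradient(feat.astype(np.float64))
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

def select_key_frames(feature_maps, threshold):
    """Keep frame 0; mark frame i as key when its gradient energy
    differs from frame i-1 by more than `threshold`."""
    energies = [spatial_gradient_energy(f) for f in feature_maps]
    keys = [0]
    for i in range(1, len(energies)):
        if abs(energies[i] - energies[i - 1]) > threshold:
            keys.append(i)
    return keys

# Usage on synthetic 14x14 "feature maps" whose gradient energy jumps
# at frame 3, mimicking a sharp gesture change:
feats = [v * np.outer(np.arange(14.0), np.ones(14)) for v in [1, 1, 1, 5, 5]]
print(select_key_frames(feats, threshold=2.0))  # → [0, 3]
```

The per-frame energies for this toy input are [1, 1, 1, 5, 5], so only the jump between frames 2 and 3 exceeds the threshold.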

Description

Technical field

[0001] The invention belongs to key frame extraction methods, and in particular relates to a deep-learning-based method for extracting key frames of gesture images.

Background technique

[0002] Gesture video key frame extraction is a key step in dynamic gesture recognition. Extracting key frames from a gesture video reduces data complexity and improves the real-time performance of sign language recognition algorithms, and a good key frame extraction result is an important condition for accurate sign language recognition. Determining the action key frames in a sign language video has always been difficult, mainly because the range of gesture change is relatively small: key frames are hard to identify, and redundant key frames are easily produced during extraction. At present, common gesture segmentation techniques include the first/last-and-middle-frame method, methods based on color, texture, and shape features, the me...

Claims


Application Information

Patent Type & Authority: Patents (China)
IPC(8): G06K9/00, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/28, G06N3/045
Inventors: 田秋红, 杨慧敏, 李霖烨, 包嘉欣
Owner: 康旭科技有限公司