
A method for semantic recognition and retrieval of actions in videos

A technology relating to action semantics and video, applied in character and pattern recognition, instruments, computing, etc.; it addresses problems such as a large amount of computation and achieves the effect of improving accuracy while reducing the amount of computation.

Active Publication Date: 2022-03-18
JIANGSU AUSTIN OPTRONICS TECH


Problems solved by technology

If a time span greater than the maximum action duration, that is, more than 60 frames, is chosen to smooth the calculation and interpret all actions, the amount of computation becomes very large.
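The scale of the problem can be illustrated with a back-of-envelope calculation (the numbers below are assumptions for illustration, not figures from the patent):

```python
# Illustrative cost of sliding a 60-frame smoothing window over a
# long surveillance video. Assumed: a 25 fps camera running 24 hours.
fps = 25
frames_per_day = fps * 60 * 60 * 24        # 2,160,000 frames

# Evaluating a 60-frame window at every frame position means each
# window evaluation reads 60 frames:
window = 60
window_evaluations = frames_per_day        # one window per frame position
frames_read = window_evaluations * window  # ~130 million frame reads

print(frames_per_day, frames_read)
```

Restricting computation to pre-segmented motion clips, as the method proposes, avoids paying this cost over the entire recording.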




Embodiment Construction

[0048] On the basis of the original SlowFast algorithm, the present invention proposes to select the input image of the slow channel according to an image stability index, improving the detection accuracy of the slow module, and to determine the input video segment of the fast channel by fast detection of skeletal motion, reducing the amount of computation of the fast-channel hybrid algorithm.
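The two input-selection rules above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the exact stability index is not specified in the excerpt, so a mean inter-frame difference is assumed as a proxy, and the skeletal-motion detection is represented by a precomputed boolean mask.

```python
import numpy as np

def stability_index(frames: np.ndarray) -> np.ndarray:
    """Per-frame instability score: mean absolute inter-frame difference.

    `frames` has shape (F, H, W); lower score = more stable frame.
    This metric is an assumption standing in for the patent's index.
    """
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2))
    # Pad so every frame gets a score; frame 0 reuses the first diff.
    return np.concatenate([diffs[:1], diffs])

def select_slowfast_inputs(frames, motion_mask, n_slow=4):
    """Slow channel: the n_slow most stable frames.
    Fast channel: only the frames flagged as containing skeletal motion."""
    idx_stable = np.argsort(stability_index(frames))[:n_slow]
    slow_input = frames[np.sort(idx_stable)]   # keep temporal order
    fast_input = frames[motion_mask]           # moving segments only
    return slow_input, fast_input
```

Feeding the fast pathway only motion-bearing segments is what reduces its computation, while the stability-selected frames sharpen the slow pathway.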

[0049] In the method for action semantic recognition and retrieval in video of the present invention, the video is denoted V = {Im(f_i)}, where Im is an image and f_i is the frame index running from 1 to F_imax, with F_imax the maximum number of frames of video V; that is, Im(f_i) denotes the f_i-th image in V. As shown in Figure 1, the method for semantic recognition and retrieval of actions in a video includes the following steps:
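The notation maps directly onto code: a video is an ordered sequence of images indexed from 1 to F_imax. A toy illustration (note Python indexes from 0, so frame f_i corresponds to list index f_i - 1):

```python
# Toy video V = {Im(f_i)}, f_i in 1..F_imax, here with F_imax = 5.
V = [f"image_{i}" for i in range(1, 6)]
F_imax = len(V)

def Im(f_i: int) -> str:
    """Return the f_i-th image of V using the patent's 1-based index."""
    assert 1 <= f_i <= F_imax, "frame index out of range"
    return V[f_i - 1]

print(Im(3))  # "image_3"
```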

[0050] Step 1: use the OpenPose toolbox to extract the key points of the human skeleton in the video images to obtain the three-dimensional coordinates...
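A sketch of this step's data flow follows. The OpenPose call itself is replaced by a hypothetical stand-in function, since the exact toolbox invocation is not given in the excerpt; the assumed output shape follows OpenPose's common BODY_25 convention of 25 keypoints per person, each a (x, y, score) triple.

```python
import numpy as np

def extract_skeleton_keypoints(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the OpenPose toolbox call.

    Returns an array of shape (n_people, 25, 3): 25 body keypoints
    per detected person, each as (x, y, score). A real pipeline would
    invoke OpenPose here; this stub returns zeros for one person.
    """
    n_people = 1
    return np.zeros((n_people, 25, 3), dtype=np.float32)

def keypoints_for_video(frames):
    """Step 1 of the method: skeleton keypoints for every frame of V."""
    return [extract_skeleton_keypoints(f) for f in frames]
```

Per-frame keypoint trajectories are what later allows fast detection of skeletal motion for segmenting the video.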



Abstract

The invention discloses a method for semantic recognition and retrieval of actions in video, comprising the following steps: cutting the video into segmented motion videos that contain motion and finding stable frames within them; performing SlowFast adaptive cross-frame action recognition, with the extracted segmented motion video as the input of the Fast module and the stable frames as the input of the Slow module; applying the SlowFast algorithm for action semantic recognition to obtain the corresponding recognition result Out1; and establishing a video retrieval library so that, given an action-semantic query, the corresponding segmented motion video is extracted for the user. Through this preprocessing, the accuracy of the SlowFast algorithm can be improved while greatly reducing the amount of computation.
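The retrieval library at the end of the pipeline can be as simple as an index from recognized action labels to the segments they came from. A minimal sketch (all names and segment identifiers below are illustrative, not from the patent):

```python
from collections import defaultdict

# Map each recognized action label (the result Out1) to the
# segmented motion videos it was recognized in.
retrieval_library = defaultdict(list)

def index_segment(action_label: str, segment_id: str) -> None:
    """Store a recognized segment under its action-semantic label."""
    retrieval_library[action_label].append(segment_id)

def query(action_label: str):
    """Return all stored segments matching an action-semantic query."""
    return retrieval_library.get(action_label, [])

index_segment("fall", "cam3_00:12:05-00:12:09")
index_segment("fall", "cam7_14:02:11-14:02:15")
print(query("fall"))
```

Gathering all segments for one label is also what enables the multi-screen "same action" display described in the background section.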

Description

Technical field

[0001] The invention belongs to the technical field of action semantic recognition, and in particular relates to a method for action semantic recognition and retrieval in videos.

Background technique

[0002] In daily life, people sometimes need to find a specific set of action segments in a very long video. For example, in several days of video data, one may need to determine when an elderly person fell, in order to examine the surrounding conditions at the time of the fall. However, the specific time and place, or which camera captured the action, are usually unknown, so an action-based semantic video retrieval function is needed. When the same action is retrieved from many videos in many places, these action videos can be gathered to form an overall presentation of the same action, which can be displayed on a multi-screen intelligent display system for a unified effect.

[0003] In similar work, there are methods based on face recognition and narration recognition, but there...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V20/40; G06V40/20
Inventor: 翟晓东汝乐凌涛凌婧
Owner: JIANGSU AUSTIN OPTRONICS TECH