Two-channel interaction time convolution network, close-range video action segmentation method, computer system and medium
A dual-channel interactive temporal convolutional network technology, applied in the field of video action segmentation, which can solve the problem of reducing the possibility of missed and false detection of target actions, achieving the effects of improved accuracy, enhanced public security capability, and accurate recognition
Pending Publication Date: 2021-10-22
DALIAN NATIONALITIES UNIVERSITY
Cites: 3 | Cited by: 0
AI Technical Summary
Problems solved by technology
[0005] In order to reduce the possibility of missed detection and false detection of the target action by the action segmentation network, the present invention proposes a close-range video action segmentation method based on a dual-channel interactive temporal convolutional network, which includes the following steps: sampling a single video to obtain a set of video frame sequences.
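The first step above can be sketched in plain Python. This is an illustrative sketch only, not the patented implementation: the stride and sequence length are assumed parameters, and frames are represented by their indices rather than decoded image data.

```python
# Hypothetical sketch of the sampling step: uniformly sample a single video's
# frames, then group the sampled frames into fixed-length sequences.
# `stride` and `seq_len` are assumed parameters, not values from the patent.

def sample_video(num_frames, stride, seq_len):
    """Uniformly sample frame indices and group them into sequences."""
    indices = list(range(0, num_frames, stride))  # uniform temporal sampling
    # group sampled indices into non-overlapping sequences of seq_len frames
    return [indices[i:i + seq_len]
            for i in range(0, len(indices) - seq_len + 1, seq_len)]

# e.g. a 100-frame video sampled every 2nd frame, grouped into 10-frame sequences
sequences = sample_video(num_frames=100, stride=2, seq_len=10)
```

In a real system the indices would be passed to a video decoder to fetch the corresponding frames before feature extraction.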
Method used
Examples
Experimental program
Comparison scheme
Effect test
Embodiment 1
[0158] Embodiment 1: In this embodiment, video collected by a corridor monitoring system is input into the network model to perform human action recognition. The recognition result is shown in Figure 2.
Embodiment 2
[0159] Embodiment 2: In this embodiment, video collected by an elevator monitoring system is input into the network model to perform human action recognition. The recognition result is shown in Figure 3.
Embodiment 3
[0160] Embodiment 3: In this embodiment, a network video containing human motion is input into the network model to perform human action recognition. The recognition result is shown in Figure 4.
Abstract
The invention discloses a dual-channel interactive temporal convolutional network, a close-range video action segmentation method, a computer system and a medium, belonging to the field of computer vision video understanding. The invention provides a close-range video action segmentation method based on a dual-channel interactive temporal convolutional network, aiming to reduce the possibility of missed detection and false detection of target actions by an action segmentation network. The method comprises the following steps: sampling a single video to obtain a set of video frame sequences; inputting the video frame sequences into a feature extraction network to obtain frame-level features; applying a channel adjustment convolution to the frame-level features to obtain a first feature matrix; inputting the feature matrix into the first branch and the second branch of the temporal convolutional network respectively; concatenating the output features of the first and second branches; applying a channel adjustment convolution to obtain a second feature matrix; and outputting the recognized action classification. The advantages of the invention are that loss of important features is avoided and feature richness is improved; the key information required for action classification is captured, making recognition of fine actions and small-target actions more accurate.
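The pipeline the abstract describes can be sketched end to end. The sketch below is an illustration under stated assumptions, not the patented implementation: the "channel adjustment convolution" is modeled as a 1x1 convolution with stand-in weights, the two branches are modeled as toy dilated temporal convolutions with different dilations, and all layer sizes are invented for the example.

```python
# Illustrative sketch (assumptions, not the patent's architecture details):
# frame-level features -> channel-adjust conv -> two temporal branches ->
# channel-wise concatenation -> channel-adjust conv -> per-frame class scores.

def channel_adjust(features, out_ch):
    # Stand-in for a learned 1x1 (channel adjustment) convolution: every
    # output channel is the mean of the input channels.
    return [[sum(f) / len(f)] * out_ch for f in features]

def temporal_branch(features, dilation):
    # Toy dilated temporal convolution: each frame's feature is averaged
    # with its neighbour `dilation` steps earlier (zero-padded at the start).
    out = []
    for t, f in enumerate(features):
        prev = features[t - dilation] if t - dilation >= 0 else [0.0] * len(f)
        out.append([(a + b) / 2 for a, b in zip(f, prev)])
    return out

def segment(frame_features, num_classes):
    x = channel_adjust(frame_features, out_ch=4)        # first feature matrix
    b1 = temporal_branch(x, dilation=1)                 # first branch
    b2 = temporal_branch(x, dilation=2)                 # second branch
    fused = [f1 + f2 for f1, f2 in zip(b1, b2)]         # channel-wise concat
    scores = channel_adjust(fused, out_ch=num_classes)  # second feature matrix
    # per-frame action label = argmax over the class scores
    return [max(range(num_classes), key=lambda c: s[c]) for s in scores]

labels = segment([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], num_classes=3)
```

The two branches see the same first feature matrix but aggregate temporal context at different ranges, which is one plausible reading of how concatenating their outputs enriches the features before classification.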
Description
technical field

[0001] The invention belongs to the technical field of video action segmentation in video understanding and analysis, and relates to a close-range video action segmentation method based on a dual-channel interactive temporal convolutional network.

Background technique

[0002] In the era of big data, video has become a very important communication medium due to its wide applicability and rich expressive capability, and is used in various fields to disseminate and record information. Video understanding has become a research hotspot in the field of computer vision, especially video action segmentation. Action segmentation tasks are suited to detailed scenes where multiple actions occur continuously, such as the detection and recognition of continuous actions in single scenes like production lines and video surveillance. The patent "A Method for Segmentation of Sequential Action Segments Based on Boundary Search Agents" (publication number: CN111950393...
Claims
Application Information
IPC(8): G06K9/46, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/045, G06F18/2415, Y02T10/40
Inventor: 杨大伟, 曹哲, 毛琳, 张汝波
Owner DALIAN NATIONALITIES UNIVERSITY



