
Monitoring video intelligent early warning method based on multimedia semantic analysis

A technology relating to surveillance-video monitoring and semantic analysis, classified under neural learning methods, instruments, biological neural network models, etc. It addresses problems that affect the efficiency of video positioning.

Active Publication Date: 2021-07-13
SHANDONG ARTIFICIAL INTELLIGENCE INST +3

AI Technical Summary

Problems solved by technology

Clearly, such a method seriously reduces the efficiency of video positioning. Efficient cross-modal video positioning therefore remains challenging.


Examples


Embodiment 1

[0045] Step A) Perform unit segmentation on the raw video data V_k with 16 frames as the minimum unit, and apply convolution processing to the segmented video data V_k through a bidirectional temporal network.
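As a concrete illustration of the 16-frame unit segmentation in Step A), here is a minimal sketch in Python/NumPy. The function name and the zero-padding of a partial final unit are assumptions for illustration (the patent does not say how partial units are handled), and the subsequent bidirectional temporal convolution is omitted.

```python
import numpy as np

def segment_video(frames: np.ndarray, unit_len: int = 16) -> np.ndarray:
    """Split a video of shape (T, H, W, C) into units of `unit_len` frames.

    The tail is zero-padded so every unit is full length (an assumption;
    the patent does not state how a partial final unit is treated).
    """
    t = frames.shape[0]
    n_units = -(-t // unit_len)  # ceiling division
    pad = n_units * unit_len - t
    if pad:
        padding = np.zeros((pad,) + frames.shape[1:], dtype=frames.dtype)
        frames = np.concatenate([frames, padding], axis=0)
    return frames.reshape(n_units, unit_len, *frames.shape[1:])

# 100 frames of 8x8 RGB -> 7 units of 16 frames each
units = segment_video(np.random.rand(100, 8, 8, 3))
print(units.shape)  # (7, 16, 8, 8, 3)
```

Each unit can then be fed to the convolution stage as one sample; 16 frames matches the clip length expected by common 3D-convolution video backbones.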

Embodiment 2

[0047] Step d) includes the following steps:

[0048] According to the formula, select R iconic fragments from the matrix J_k to form the anchor set A_k, where

[0049] L is the number of video units, division is rounded down by default, and p mod 2 is the remainder of dividing p by 2. Set a_{k,1} as the root node of the video semantic tree I_k, set a_{k,2} and a_{k,3} as the left and right child nodes of a_{k,1}, and repeat the above steps until a_{k,p} and a_{k,p+1} are set as the left and right child nodes of a_{k,p-2}, 1 ≤ p ≤ R, so that all R anchor nodes of the anchor set A_k are placed in the tree.
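The node-placement procedure in paragraph [0049] reads like a level-order filling of a binary tree with the R anchors. The exact child-index rule is garbled in the machine translation, so the sketch below assumes the standard heap layout (children of node i are nodes 2i+1 and 2i+2); the function name `build_anchor_tree` is hypothetical.

```python
def build_anchor_tree(anchors):
    """Arrange R anchor fragments into a binary 'video semantic tree'.

    Assumes a level-order (heap) layout: anchors[0] is the root and
    anchors[2*i + 1] / anchors[2*i + 2] are the left / right children
    of anchors[i]. Returns {node: (left_child, right_child)}.
    """
    r = len(anchors)
    tree = {}
    for i in range(r):
        left, right = 2 * i + 1, 2 * i + 2
        tree[anchors[i]] = (
            anchors[left] if left < r else None,
            anchors[right] if right < r else None,
        )
    return tree

tree = build_anchor_tree(["a1", "a2", "a3", "a4", "a5"])
print(tree["a1"])  # ('a2', 'a3')
```

A level-order layout keeps the tree balanced, which is what makes the later coarse-to-fine search over the semantic tree efficient.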

Embodiment 3

[0051] Step f) calculates the loss function φ_1 of the fully connected neural network through the formula, where ‖·‖_F is the Frobenius norm, T denotes matrix transposition, and l is the unified dimension set for the multi-modal features.
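Although the exact formula is not reproduced in this extract, a Frobenius-norm alignment loss of the kind paragraph [0051] describes is typically the squared Frobenius distance between the two modalities after projection into the common l-dimensional space. The sketch below is an assumption along those lines, not the patent's actual formula:

```python
import numpy as np

def frobenius_alignment_loss(v_feat, t_feat, w_v, w_t):
    """phi_1 = || v_feat @ w_v - t_feat @ w_t ||_F^2.

    v_feat, t_feat: video / text feature matrices (n x d_v, n x d_t).
    w_v, w_t: projections into the common l-dimensional space.
    (A hypothetical form; the patent's exact loss is not shown here.)
    """
    diff = v_feat @ w_v - t_feat @ w_t
    return float(np.sum(diff * diff))  # squared Frobenius norm

# identical features under identity projections incur zero loss
print(frobenius_alignment_loss(np.ones((2, 3)), np.ones((2, 3)),
                               np.eye(3), np.eye(3)))  # 0.0
```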



Abstract

The invention discloses a monitoring video intelligent early warning method based on multimedia semantic analysis. The method comprises the steps of: accurately understanding complex objects and interactions in a video by building a cross-modal semantic alignment model; generating a video-clip spatio-temporal position map and a video semantic tree; and introducing a text coding module based on a bidirectional long short-term memory network to deeply understand and represent the text semantics of the query statement. According to the invention, feature mapping and fusion from multi-modal features to a common space are achieved, refined video clip and query statement pairs are screened out through a coarse-granularity semantic pruning strategy, and fine-granularity semantic matching calculation is then carried out, thereby ensuring the precision and efficiency of cross-modal video positioning.
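The coarse-to-fine matching described in the abstract can be sketched as ranking clips by similarity to the query, pruning to a few candidates, then re-scoring the survivors. The cosine-similarity scoring and the function names below are illustrative assumptions, not the patent's actual matching computation:

```python
import numpy as np

def coarse_to_fine_match(clip_feats, query_feat, keep=3):
    """Coarse stage: cosine-rank all clips against the query and keep
    the top-`keep` candidates (semantic pruning). Fine stage: re-score
    the survivors (here re-using the same score as a placeholder for
    the patent's fine-grained semantic matching).
    """
    sims = clip_feats @ query_feat / (
        np.linalg.norm(clip_feats, axis=1) * np.linalg.norm(query_feat) + 1e-8
    )
    candidates = np.argsort(-sims)[:keep]                # coarse pruning
    best = int(candidates[np.argmax(sims[candidates])])  # fine re-ranking
    return best, candidates

# clip 2 is the only one aligned with the query direction
best, cands = coarse_to_fine_match(np.eye(4), np.array([0.0, 0.0, 1.0, 0.0]))
print(best)  # 2
```

Pruning first means the expensive fine-grained computation runs only on a handful of candidates, which is the efficiency argument the abstract makes.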

Description

Technical field

[0001] The present invention relates to the field of monitoring and early-warning technology, and more particularly to a monitoring video intelligent early warning method based on multimedia semantic analysis.

Background technique

[0002] In order to safeguard social security, video surveillance systems are widely used in various public places. However, most existing monitoring systems work in a record-first, review-later mode and cannot retrieve target fragments from surveillance video in real time and intelligently. To this end, the present invention develops a cross-modal video retrieval technique: given a query statement described in natural language, the video segment that semantically matches the query is detected in the surveillance video, and the temporal boundaries of that segment are explicitly returned. For example, according to the query statement described in natural language ("Black man who does not wear a mask int...


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/049, G06N3/08, G06V20/49, G06V20/41, G06V20/52, G06N3/045, G06F18/22
Inventor 胡宇鹏贾永坡高赞宋雪萌尹建华李毅仁聂礼强
Owner SHANDONG ARTIFICIAL INTELLIGENCE INST