Video target detection method, system and device based on class external memory

An object detection technique applied in the field of computer vision and pattern recognition, which addresses the marked performance degradation of feature-aggregation methods when few auxiliary frames are available, achieving the effect of enhancing robustness and discriminability and improving accuracy

Active Publication Date: 2020-09-29
INST OF AUTOMATION CHINESE ACAD OF SCI


Problems solved by technology

[0003] In order to solve the above problem in the prior art, namely that feature-aggregation-based video object detection methods degrade markedly when the number of auxiliary frames is small, the first aspect of the present invention provides a video object detection method based on a class external memory. The video object detection method comprises the following steps:



Examples


Embodiment 1

[0076] Step B20: for each normalized first frame sequence in the normalized first frame sequence set, randomly select one frame as the training frame and m frames as auxiliary frames, where m is a natural number. To balance training speed and model performance, two frames are preferably selected as the auxiliary frames corresponding to the training frame; selecting other numbers of frames achieves similar effects, and no specific limitation is imposed here. The first instance feature corresponding to each frame image is then extracted through a general-purpose deep-learning-based object detection network;
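The sampling described in Step B20 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the helper name `sample_frames` and the representation of frames as list elements are assumptions for the example.

```python
import random

def sample_frames(frame_sequence, m=2, seed=None):
    """Randomly pick one training frame and m auxiliary frames from a
    normalized frame sequence (hypothetical helper; the patent does not
    prescribe a specific sampling routine)."""
    rng = random.Random(seed)
    indices = list(range(len(frame_sequence)))
    # One frame is the training frame ...
    train_idx = rng.choice(indices)
    # ... and m auxiliary frames are drawn from the remaining frames.
    remaining = [i for i in indices if i != train_idx]
    aux_idx = rng.sample(remaining, m)
    return frame_sequence[train_idx], [frame_sequence[i] for i in aux_idx]

# With the preferred setting of m = 2 auxiliary frames per training frame:
train_frame, aux_frames = sample_frames(list(range(10)), m=2, seed=0)
```

Each frame returned here would then be passed to the object detection backbone to extract its first instance feature.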

[0077] In one embodiment of the present invention, the general object detection network is Faster R-CNN; in other embodiments, an appropriate network can be selected as needed, which the present invention does not describe in detail here;

[0078] Step B30: input the first instance features into the softmax pre-classifier to ...
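A softmax pre-classifier of the kind named in Step B30 is, in its simplest form, a linear layer followed by a softmax over categories. The sketch below assumes this minimal form; the weight matrix `W`, bias `b`, feature dimension, and category count are illustrative placeholders, not values from the patent.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the per-row maximum first.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pre_classify(instance_features, W, b):
    """Linear projection + softmax over categories: a minimal stand-in
    for the pre-classifier; W and b are assumed to be learned."""
    logits = instance_features @ W + b
    return softmax(logits, axis=-1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 16))   # 5 instance features of dimension 16
W = rng.normal(size=(16, 4))       # 4 hypothetical object categories
b = np.zeros(4)
probs = pre_classify(feats, W, b)  # one probability row per instance
```

Each output row is a probability distribution over categories, which can then be used to assign instance features to per-class memory slots.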



Abstract

The invention belongs to the field of computer vision and pattern recognition, and particularly relates to a video object detection method, system and device based on a class external memory. It aims to solve the problem in the prior art that object detection performance drops markedly when the number of auxiliary frames is small. The method comprises: first, training a video object detection model with a self-attention mechanism on training video information; then obtaining enhanced instance features of the video to be detected through the trained model and the self-attention mechanism; and finally inputting the enhanced instance features into the classification branch and the bounding-box regression branch of the general object detection network to obtain the detection result. The invention reduces the sensitivity of feature-aggregation-based video object detection to the number of auxiliary frames, so that detection can be carried out accurately with few or even no auxiliary frames.

Description

Technical Field

[0001] The invention belongs to the field of computer vision and pattern recognition, and in particular relates to a video object detection method, system and device based on a class external memory.

Background

[0002] Video object detection is an important and challenging computer vision task with wide applications in security monitoring, intelligent video analysis, autonomous driving and other fields. However, directly applying image detectors to videos yields unsatisfactory results, because some frames suffer from motion blur, defocus and similar degradations. Unlike still images, video data contains rich temporal motion information. Therefore, to address the poor performance of image detectors on low-quality frames, many methods improve detector performance by exploiting temporal context information, such as feature-aggregation-based methods. Although these methods have achieved a great improvem...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08; G06F17/18
CPC: G06N3/084; G06F17/18; G06V20/41; G06V20/46; G06V2201/07; G06N3/045; G06F18/2414
Inventors: Zhang Zhaoxiang (张兆翔), Tan Tieniu (谭铁牛), Song Chunfeng (宋纯锋), Dong Wenkai (董文恺)
Owner INST OF AUTOMATION CHINESE ACAD OF SCI