
Real-time video recognition accelerator architecture based on key object splicing

An accelerator architecture in the technical field of neural networks. It addresses the problems of long processing time, high energy consumption, difficulty in guaranteeing recognition accuracy, and difficulty in improving recognition speed, thereby reducing redundant calculations, saving computation workload, and improving both processing speed and recognition accuracy.

Active Publication Date: 2021-08-13
SHANGHAI JIAO TONG UNIV

AI Technical Summary

Problems solved by technology

[0005] The technical problem to be solved by the present invention is that existing neural networks for video recognition tasks typically must process every frame of the video, which leads to long processing times and high energy consumption, and makes it difficult to guarantee recognition accuracy or to improve recognition speed.



Examples


Embodiment 1

[0062] In order to solve the technical problems existing in the prior art, an embodiment of the present invention provides a real-time video recognition accelerator architecture based on key object splicing.

[0063] Figure 1 shows a schematic structural diagram of a real-time video recognition accelerator architecture based on key object splicing according to an embodiment of the present invention. As shown in Figure 1, the architecture includes an object tracking module, an object aggregation module, an object splitting module, a preset neural network accelerator, an object queue updating module, and a main memory module. The object tracking module, the object aggregation module, the object splitting module, and the preset neural network accelerator are each connected to the object queue updating module and the main memory module, and the main memory...
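To make the module dataflow concrete, here is a minimal software sketch of how the modules might cooperate. This is an illustration only, not the patented hardware design: the `ObjectQueue` class, the I/P/B routing rule, and all names are our own assumptions inferred from the surrounding text.

```python
from collections import deque

class ObjectQueue:
    """Shared queue of per-frame work items; an illustrative software
    stand-in for the object queue updating module."""
    def __init__(self):
        self._q = deque()
    def push(self, item):
        self._q.append(item)
    def pop(self):
        return self._q.popleft()
    def __len__(self):
        return len(self._q)

def track(frame_kind, queue):
    """Object tracking module (sketch): full frames are only queued for
    I-frames; for P/B frames, only the tracked key-object rectangles are
    queued, which is where the computation savings come from."""
    if frame_kind == "I":
        queue.push({"whole_frame": True, "boxes": 0})
    else:
        # Assume two key objects tracked in this P/B frame.
        queue.push({"whole_frame": False, "boxes": 2})

main_memory = []          # illustrative stand-in for the main memory module
queue = ObjectQueue()
for kind in ["I", "P", "P", "B"]:
    track(kind, queue)
while queue:
    work = queue.pop()    # aggregation + accelerator would consume this item
    main_memory.append(work)
print(len(main_memory))   # 4 work items, only 1 of which is a full frame
```

Of the four frames, only the I-frame costs a full-frame inference; the other three contribute only their key-object rectangles to later aggregation.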



Abstract

The invention discloses a real-time video recognition accelerator architecture based on key object splicing. The architecture comprises an object tracking module, an object aggregation module, an object splitting module, a preset neural network accelerator, an object queue updating module, and a main memory module. The object tracking module acquires the original position information of key-object rectangular frames in P-frame or B-frame image data; the object aggregation module merges key-object rectangular frames from the P-frame and/or B-frame image data into a composite frame; the preset neural network accelerator processes the composite frame to obtain a composite-frame recognition result; and the object splitting module splits the composite frame and returns the splitting result to the original image data. The architecture greatly reduces the computation workload of target video recognition tasks and improves both processing speed and recognition accuracy.
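The aggregate-process-split flow in the abstract can be sketched in a few lines. This is a simplified software illustration under assumptions of ours: the side-by-side strip packing, the `(x, y, width, height)` box layout, and the function names `aggregate` and `split` are not from the patent.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height) of a key-object rectangle

def aggregate(boxes: List[Box]) -> Tuple[Box, List[Tuple[int, int]]]:
    """Merge key-object rectangles into one composite frame.

    Illustrative strategy: place the crops side by side in a horizontal
    strip. Returns the composite frame extent and the per-crop offsets
    needed later to split results back to original-frame coordinates."""
    offsets = []
    cursor_x, strip_h = 0, 0
    for (_, _, w, h) in boxes:
        offsets.append((cursor_x, 0))      # where this crop lands in the composite
        cursor_x += w
        strip_h = max(strip_h, h)
    composite = (0, 0, cursor_x, strip_h)  # composite frame covering all crops
    return composite, offsets

def split(result_box: Box, crop_index: int,
          boxes: List[Box], offsets: List[Tuple[int, int]]) -> Box:
    """Map a recognition result on the composite frame back to the original frame."""
    rx, ry, rw, rh = result_box
    ox, oy = offsets[crop_index]           # offset of the crop inside the composite
    bx, by, _, _ = boxes[crop_index]       # origin of the crop in the original frame
    return (rx - ox + bx, ry - oy + by, rw, rh)

# Two key objects tracked in a P-frame:
boxes = [(100, 50, 64, 48), (300, 200, 32, 32)]
composite, offsets = aggregate(boxes)
print(composite)   # (0, 0, 96, 48): one small 96x48 composite instead of a full frame
# A hit at (70, 10) inside the second crop maps back to original coordinates:
print(split((70, 10, 16, 16), 1, boxes, offsets))  # (306, 210, 16, 16)
```

The accelerator then runs once on the small composite rather than once per full frame, which is the source of the claimed reduction in redundant computation.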

Description

Technical field

[0001] The invention relates to the technical field of neural networks, in particular to a real-time video recognition accelerator architecture based on key object splicing.

Background technique

[0002] Deep convolutional neural networks have been widely used in image recognition tasks such as classification, detection, and segmentation. As the field has developed, their application range has gradually been extended to video.

[0003] Typically, a video recognition task based on deep neural networks treats each video frame as an independent picture and feeds it into the network for recognition; that is, video recognition is reduced to running an image recognition task on every frame separately. However, directly applying a network model designed for image recognition to all video frames incurs a huge computational and energy overhead; on the other hand, neural networks applied to image rec...


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/063
CPC: G06N3/063, G06V20/49, G06V20/41, G06V20/46, G06N3/045, G06F18/24
Inventors: 宋卓然, 鲁恒, 景乃锋, 梁晓峣
Owner: SHANGHAI JIAO TONG UNIV