
Video perception-fused multi-task synergetic recognition method and system

A technology that integrates video perception with recognition methods, classified under character and pattern recognition, instruments, computer parts, etc.

Inactive Publication Date: 2018-11-20
BEIJING UNIV OF POSTS & TELECOMM
0 Cites · 60 Cited by

AI Technical Summary

Problems solved by technology

[0011] The purpose of the present invention is to provide a generalized feature collaborative description mechanism for multi-source heterogeneous data that effectively complements the video information obtained from different data sources, evolves the traditional single-source fixed mode into a multi-source elastic mode, removes data redundancy while retaining shared semantic information, and establishes a video-perception-fused multi-task recognition method and system with a high dynamic acceptance rate, high resource utilization, and low network consumption, thereby solving the technical problems described in the background above.



Examples


Embodiment 1

[0085] As shown in Figure 1, Embodiment 1 of the present invention provides a multi-task collaborative recognition system fused with video perception. The system includes a general feature extraction module, a collaborative feature learning module, and a deep collaborative recognition module.

[0086] The general feature extraction module combines the biological visual perception mechanism to learn a feature-collaborative shared semantic description of multi-source heterogeneous video data, obtaining a general feature description of that data;

[0087] The collaborative feature learning module uses adaptive computing theory to establish a task-collaborative feature association learning and task prediction mechanism, realizing context-aware task-associated prediction;

[0088] The deep collaborative recognition module combines long-term dependence to propose a visual multi-task de...
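The three modules described in paragraphs [0086]–[0088] form a pipeline: shared features are extracted from heterogeneous sources, per-task associations are learned, and recognition aggregates information over time. The following is a minimal illustrative sketch of that pipeline, not the patented implementation; all function names, dimensions, and fusion choices here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def general_feature_extraction(sources, dim=64):
    """Project each heterogeneous video source into a shared feature space
    and fuse them (a hypothetical stand-in for the patent's 'general
    feature description' of multi-source heterogeneous data)."""
    projected = []
    for name, x in sources.items():
        # Per-source linear projection to the common dimension (illustrative).
        w = rng.standard_normal((x.shape[-1], dim)) / np.sqrt(x.shape[-1])
        projected.append(x @ w)
    # Fuse by averaging: keeps shared semantics, suppresses source-specific noise.
    return np.mean(projected, axis=0)

def collaborative_feature_learning(shared, n_tasks=3):
    """Produce per-frame task-relevance weights (a stand-in for the
    context-aware task-association prediction mechanism)."""
    w_task = rng.standard_normal((shared.shape[-1], n_tasks))
    logits = shared @ w_task
    # Softmax over tasks: each frame's task weights sum to 1.
    return np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

def deep_collaborative_recognition(shared, task_weights):
    """Aggregate features over time (long-term dependence) and emit one
    score per task; a real system might use an LSTM here instead."""
    # Exponential moving average as a crude long-term-memory proxy.
    memory = np.zeros(shared.shape[-1])
    for frame in shared:
        memory = 0.9 * memory + 0.1 * frame
    return task_weights.mean(axis=0) * np.linalg.norm(memory)

# Two hypothetical sources: 10 frames of 128-d RGB features and
# 10 frames of 32-d motion features.
sources = {
    "rgb": rng.standard_normal((10, 128)),
    "motion": rng.standard_normal((10, 32)),
}
shared = general_feature_extraction(sources)
weights = collaborative_feature_learning(shared)
scores = deep_collaborative_recognition(shared, weights)
print(shared.shape, weights.shape, scores.shape)
```

The average-based fusion and moving-average memory are deliberately simple placeholders; the patent's actual mechanisms (biological visual perception, adaptive computing theory, long-term memory model) are not specified in enough detail here to reproduce.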

Embodiment 2

[0100] Embodiment 2 of the present invention provides a multi-task recognition method that fuses multi-source video perception data, using the system described in Embodiment 1. The method comprises the following steps:

[0101] First, the biological visual perception mechanism is combined to learn the feature-collaborative shared semantic description of multi-source heterogeneous video data, obtaining its general feature description.
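As an illustration of this first step, one simple way to obtain a compact shared description from heterogeneous per-frame features is to concatenate them and keep only the strongest principal components, discarding redundant dimensions while retaining shared structure. This is a hedged sketch under assumed dimensions, not the patent's actual procedure.

```python
import numpy as np

def shared_semantic_description(feature_sets, k=8):
    """Concatenate per-frame features from multiple sources and keep the
    top-k principal components: a hypothetical stand-in for removing
    data redundancy while retaining shared semantic information."""
    x = np.concatenate(feature_sets, axis=1)   # frames x total_dims
    x = x - x.mean(axis=0)                     # center each dimension
    # SVD yields principal directions; project onto the strongest k.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T                        # frames x k

rng = np.random.default_rng(1)
rgb = rng.standard_normal((20, 48))    # hypothetical RGB features
depth = rng.standard_normal((20, 16))  # hypothetical depth features
desc = shared_semantic_description([rgb, depth])
print(desc.shape)  # (20, 8)
```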



Abstract

The invention provides a video perception-fused multi-task synergetic recognition method and system, belonging to the technical field of multi-source heterogeneous video data processing and recognition. The method and system combine a biological visual perception mechanism to research feature-synergetic shared semantic descriptions of multi-source heterogeneous video data and obtain universal feature descriptions of that data; utilize an environment-adaptive computing theory to establish a task-synergetic feature association learning and task prediction mechanism, realizing an environment-adaptive perceptual task-association prediction mechanism; and combine long-term dependency to put forward a context-synergetic visual multi-task deep synergetic recognition mode, realizing a multi-task deep synergetic recognition model with long-term memory and solving the problems of poor generalization, low robustness, and high computational complexity in video multi-task recognition. The method and system put forward an intelligent, generalized, and mobile common video feature description method together with the multi-task deep synergetic recognition model, thereby promoting the development of intelligent information push and personalized control services for smart-city multi-source heterogeneous video data.

Description

Technical field

[0001] The invention relates to the technical field of multi-source heterogeneous video data processing and recognition, and in particular to a multi-task collaborative recognition method and system fused with video perception.

Background technique

[0002] Artificial intelligence, supported by the development of technologies such as big data, cloud computing, and smart terminals, and based on deep neural networks, is about to enter a new era of comprehensive development. Facing the urgent needs for ultra-high speed, mobility, and universality in the storage and processing of massive data, dedicated artificial intelligence based on a single mode and a single task has become an important bottleneck restricting the development of this field.

[0003] Traditional single-task recognition cannot meet the generalization requirements of the artificial intelligence era. Taking face video recognition, human body behavior recognition, vehicle classificatio...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00
CPC: G06V20/41; G06V20/46
Inventor: 明悦
Owner: BEIJING UNIV OF POSTS & TELECOMM