A multi-task collaborative identification method and system

A multi-task collaborative identification technology, applied to multimedia data indexing, multimedia data retrieval, special data processing applications, etc. It addresses problems such as the huge number of parameters in real identification tasks, the difficulty of achieving rapid and balanced allocation of network resources, and poor performance with continuously input data streams.

Active Publication Date: 2019-06-28
BEIJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0007] Deep neural network training requires a large amount of training data, which makes it ineffective for small-scale data tasks. Faced with the high training and labeling costs of massive data, it performs poorly on real recognition tasks with continuously input data streams.
[0008] Deep neural network models are complex, have a huge number of parameters, and require powerful computing facilities to train. Moreover, different recognition tasks use different convolutional layer structures, which makes it difficult to achieve rapid and balanced allocation of network resources.

Method used



Examples


Embodiment 1

[0081] As shown in Figure 1, the multi-task collaborative identification system disclosed in Embodiment 1 of the present invention includes:

[0082] The general feature extraction module is used to establish a time synchronization matching mechanism for multi-source heterogeneous data, realize a multi-source data association description model based on potential high-level shared semantics, enable efficient mutual support and information complementarity between data from different channels, and maximize data de-redundancy;

[0083] The deep collaborative feature learning module is used to build a generative memory model with long-term dependencies, explore a semi-supervised continual learning system based on collaborative attention and deep autonomy, realize dynamic self-learning with selective memory and forgetting abilities, and achieve incremental performance improvement over existing learning models;

[0084] The intelligent multi-task deep...
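The collaborative attention named in paragraph [0083] is not specified further in this excerpt; as a minimal illustrative sketch only (the function, the toy vectors, and the scaled dot-product form are assumptions, not the patent's implementation), cross-modal attention of one modality's query over another modality's stored memory can be written as:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention: a query vector from one modality
    attends over another modality's memory (keys/values)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                         # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # weighted sum of the memory values
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return fused, weights

# e.g. an audio-derived query attending over two stored video features
fused, weights = attention([1.0, 0.0],
                           [[1.0, 0.0], [0.0, 1.0]],
                           [[1.0, 2.0], [3.0, 4.0]])
```

The query is most similar to the first key, so the fused output leans toward the first value vector.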

Embodiment 2

[0093] Embodiment 2 of the present invention provides a method for multi-task identification using the above system. The method includes: general feature description of massive multi-source audiovisual media perception data, including establishing a time synchronization matching mechanism for multi-source heterogeneous data and realizing a multi-source data association description model with potential high-level shared semantics; deep collaborative feature learning for long-term memory of continuously input streaming media data, including building generative memory models with long-term dependencies and exploring a semi-supervised continual learning system based on collaborative attention and deep autonomy; and an intelligent multi-task deep collaborative enhanced feedback recognition model under an adaptive framework, including context-aware computing theory based on the cooperative work of agents, and the introduction of adaptive deep collaborative enhanced feedback and multi-task j...
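The time synchronization matching mechanism in the first step is described only at this level of abstraction; one minimal sketch (the function name, the nearest-timestamp strategy, the tolerance, and the toy data are all hypothetical, not taken from the patent) pairs samples from two heterogeneous streams by closest timestamp:

```python
from bisect import bisect_left

def sync_match(stream_a, stream_b, tol=0.05):
    """Pair each (timestamp, sample) in stream_a with the
    nearest-in-time sample from stream_b, if within `tol` seconds.
    Both streams are assumed sorted by timestamp."""
    times_b = [t for t, _ in stream_b]
    pairs = []
    for t, a in stream_a:
        i = bisect_left(times_b, t)
        best = None
        # candidates: the items just before and at the insertion point
        for j in (i - 1, i):
            if 0 <= j < len(stream_b):
                if best is None or abs(stream_b[j][0] - t) < abs(stream_b[best][0] - t):
                    best = j
        if best is not None and abs(stream_b[best][0] - t) <= tol:
            pairs.append((a, stream_b[best][1]))
    return pairs

# e.g. video frames vs. audio analysis windows with drifting clocks
video = [(0.00, "f0"), (0.04, "f1"), (0.20, "f2")]
audio = [(0.01, "a0"), (0.05, "a1"), (0.30, "a2")]
print(sync_match(video, audio))  # → [('f0', 'a0'), ('f1', 'a1')]
```

The third video frame finds no audio window within tolerance and is dropped, illustrating how unmatched heterogeneous samples can be filtered before association.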

Embodiment 3

[0100] As shown in Figure 1, the third embodiment of the present invention provides a multi-task collaborative identification method.

[0101] First, a general feature description method for multi-source audiovisual media perception data is established using a transfer (migration) algorithm.

[0102] In order to achieve efficient collaborative analysis of different visual and auditory tasks, highly robust and versatile feature descriptions are extracted from the multi-source audiovisual perception data as prototype features for subsequent collaborative learning. It is first necessary to analyze the characteristics of the audiovisual perception data. Most actually acquired audio data are one-dimensional time series whose descriptive power lies mainly in spectral-temporal cues; they should be described using a spectral transformation of the auditory receptive field combined with the prosodic information of adjacent audio frames. The visual perception data are mostl...
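As a toy illustration of a spectral-temporal description with adjacent-frame context (the framing parameters and the naive DFT below are assumptions for demonstration, not the patent's auditory receptive-field transform):

```python
import cmath
import math

def frames(signal, size, hop):
    """Split a 1-D signal into overlapping frames."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]

def mag_spectrum(frame):
    """Naive DFT magnitude spectrum of one frame (stand-in for a
    more elaborate spectral transformation)."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(frame)))
            for k in range(n // 2 + 1)]

def context_features(signal, size=8, hop=4, ctx=1):
    """Per-frame spectra with +/- ctx neighboring frames concatenated,
    approximating the 'adjacent audio frames' context idea."""
    specs = [mag_spectrum(f) for f in frames(signal, size, hop)]
    feats = []
    for i in range(ctx, len(specs) - ctx):
        feat = []
        for j in range(i - ctx, i + ctx + 1):
            feat.extend(specs[j])
        feats.append(feat)
    return feats

# a short synthetic tone as the one-dimensional time series
sig = [math.sin(0.5 * t) for t in range(32)]
feats = context_features(sig)
```

With these parameters the 32-sample signal yields 7 frames and 5 context vectors, each concatenating three 5-bin spectra into a 15-dimensional feature.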



Abstract

The invention provides a multi-task collaborative identification method and system, belonging to the technical field of artificial intelligence task identification. The system comprises a general feature extraction module, a collaborative feature learning module, and an adaptive feedback evaluation identification module. The method comprises the steps of: extracting universal features of multi-source heterogeneous data based on a time synchronization matching mechanism, realizing a universal feature description of the multi-source heterogeneous data; training the general features as prior knowledge in combination with a collaborative attention mechanism based on external dependence, generating an association memory relationship among the general features; and extracting environmental perception parameters of the multi-source heterogeneous data and realizing multi-task identification in combination with the association memory relationship. According to the method, the weight of each task to be identified is judged through deep reinforcement feedback in combination with an environment-aware adaptive computing theory, the priority of each task is adaptively adjusted according to environmental changes, and multiple visual and auditory perception identification results are output simultaneously.
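The adaptive priority adjustment described at the end of the abstract is not detailed in this excerpt; one plausible sketch (the softmax re-weighting, the function name, and the toy numbers are hypothetical, not the patent's mechanism) raises a task's priority when an environment signal favors it:

```python
import math

def adjust_priorities(base_weights, env_signal):
    """Softmax re-weighting of task priorities given a per-task
    environment score (e.g. audio noise can favor visual tasks)."""
    scores = {t: w + env_signal.get(t, 0.0) for t, w in base_weights.items()}
    m = max(scores.values())                    # stable softmax
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

base = {"speech": 0.5, "face": 0.5}
env = {"face": 1.0}        # e.g. noisy audio: boost the visual task
new = adjust_priorities(base, env)
```

The environment boost shifts probability mass toward the "face" task while the priorities still sum to one, so downstream recognition can simply allocate effort proportionally.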

Description

[0001] This application claims the priority of Chinese Invention Patent Application No. 201810746362.3 filed on July 9, 2018.

Technical field

[0002] The invention relates to the technical field of artificial intelligence task recognition, in particular to a multi-task collaborative recognition method and system.

Background technique

[0003] Artificial intelligence, based on deep neural network algorithms and supported by big data, cloud computing, and intelligent terminals, is about to enter a new era of full explosion. The continuous growth of communication bandwidth and the continuous improvement of transmission speed have rapidly lowered the threshold for obtaining massive audio/video data. Faced with the urgent need for ultra-high-speed, mobile, and ubiquitous storage and processing of massive data, weak artificial intelligence based on single-modal, single-task processing in the traditional sense has become the main bottleneck restricting the development...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/41; G06F16/432
Inventor: 明悦 (Ming Yue)
Owner: BEIJING UNIV OF POSTS & TELECOMM