122 results about "Decision fusion" patented technology

Chinese song emotion classification method based on multi-modal fusion

The invention discloses a Chinese song emotion classification method based on multi-modal fusion. The method comprises the steps of: first obtaining a spectrogram from the audio signal and extracting audio low-level features (LLDs), then performing audio feature learning with an LLD-CRNN model to obtain the audio features of a Chinese song; for lyrics and comment information, first constructing a music emotion dictionary, then constructing emotion vectors based on emotion intensity and part of speech on top of the dictionary, so that the text features of the Chinese song are obtained; and finally performing multi-modal fusion with a decision fusion method and a feature fusion method to obtain the emotion category of the Chinese song. The method is built on the LLD-CRNN music emotion classification model, which takes the spectrogram and the audio low-level features as its input sequence. The LLDs are concentrated in either the time domain or the frequency domain, whereas the spectrogram is a two-dimensional time-frequency representation of the audio signal with little information loss, so for audio whose time and frequency characteristics vary jointly the two inputs complement each other.
Owner:BEIJING UNIV OF TECH
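
As an illustration only (not the patented method), the decision-fusion step described above can be read as a weighted combination of the audio model's and the text model's class probabilities. The sketch below assumes hypothetical emotion labels, probability values, and fusion weights:

```python
import numpy as np

# Hypothetical emotion classes (assumed, not from the patent).
CLASSES = ["happy", "sad", "calm", "angry"]

def fuse_decisions(audio_probs, text_probs, w_audio=0.6, w_text=0.4):
    """Weighted-sum decision fusion of two modality-level classifiers."""
    audio_probs = np.asarray(audio_probs, dtype=float)
    text_probs = np.asarray(text_probs, dtype=float)
    fused = w_audio * audio_probs + w_text * text_probs
    fused = fused / fused.sum()               # renormalise to a distribution
    return CLASSES[int(np.argmax(fused))], fused

if __name__ == "__main__":
    # Example outputs of an audio (LLD-CRNN) and a text (emotion-vector) model.
    label, probs = fuse_decisions([0.55, 0.20, 0.15, 0.10],
                                  [0.30, 0.45, 0.15, 0.10])
    print(label, probs)
```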

Method and system for detecting network pornography videos in real time

The invention discloses a method and system for detecting network pornography videos in real time. The method comprises the following steps: according to the length of a network video, establishing a pre-extracted key-frame number queue KFN = {n1, n2, ..., nN} with n1 < n2 < ... < nN; extracting and decoding the key frames in that order; carrying out pornographic-content detection on each decoded key frame and judging whether it contains pornographic content; carrying out a decision-fusion judgment over the detection result of the current key frame together with the previously obtained results of the other key frames; if the fused judgment is that pornographic content is present, considering the video pornographic and ending the detection; if the fused judgment is that no pornographic content is present, considering the video clean and ending the detection; and if the judgment is still uncertain, continuing with single-key-frame detection on the next key frame.
Owner:INST OF COMPUTING TECH CHINESE ACAD OF SCI
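
A simplified sketch of the judge-or-continue flow described above, assuming each key frame yields a pornography score in [0, 1] and using hypothetical accept/reject thresholds on a running-mean fusion; the single-frame detector itself is not modeled here:

```python
from typing import Iterable

def detect_video(frame_scores: Iterable[float],
                 accept: float = 0.8, reject: float = 0.2) -> bool:
    """Fuse key-frame detection scores one at a time and return as soon as the
    fused evidence is conclusive (illustrative thresholds only)."""
    seen = []
    for score in frame_scores:          # scores from the single-frame detector
        seen.append(score)
        fused = sum(seen) / len(seen)   # simple running-mean decision fusion
        if fused >= accept:
            return True                 # video judged pornographic
        if fused <= reject:
            return False                # video judged clean
        # otherwise uncertain: keep examining the next key frame
    # queue exhausted without a confident decision: fall back to a majority vote
    return sum(s > 0.5 for s in seen) > len(seen) / 2

if __name__ == "__main__":
    print(detect_video([0.4, 0.6, 0.9, 0.95]))   # -> True (via fallback vote)
    print(detect_video([0.1, 0.15, 0.05]))       # -> False (early reject)
```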

Feature fusion and decision fusion mixed multi-modal emotion recognition method

The invention discloses a hybrid feature-fusion and decision-fusion multi-modal emotion recognition method, and belongs to the fields of pattern recognition and emotion recognition. The implementation comprises the following steps: 1, constructing an image emotion recognition network with a convolutional neural network framework to obtain image features and an image emotion state; 2, constructing a text emotion recognition network with a recurrent neural network framework to obtain text features and a text emotion state; and 3, constructing a multi-modal information fusion emotion recognition network: a main classifier fuses the image emotion state and the text emotion state to obtain a main emotion classification, an auxiliary classifier fuses the image features and the text features to obtain an auxiliary emotion classification, and the main and auxiliary classifications are fused to obtain the final emotion classification. The method exploits the information complementarity among modalities, avoids the low recognition accuracy caused by ambiguity or missing information in a single modality, and provides a new approach to multi-modal data fusion and emotion recognition.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
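
A minimal sketch of the hybrid scheme above, assuming the trained main and auxiliary classifiers can be stood in for by linear layers (random weights here) and that the two outputs are blended with a hypothetical coefficient alpha; dimensions and names are illustrative, not from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Assumed sizes: 4 emotion classes, 128-d image features, 64-d text features.
N_CLASSES, D_IMG, D_TXT = 4, 128, 64

# Stand-ins for the trained classifiers (random linear layers for illustration).
W_main = rng.normal(size=(N_CLASSES, 2 * N_CLASSES))        # fuses two decisions
W_aux = rng.normal(size=(N_CLASSES, D_IMG + D_TXT)) * 0.01  # fuses raw features

def hybrid_fusion(img_state, txt_state, img_feat, txt_feat, alpha=0.5):
    """Combine decision-level fusion (main) with feature-level fusion (auxiliary)."""
    main = softmax(W_main @ np.concatenate([img_state, txt_state]))
    aux = softmax(W_aux @ np.concatenate([img_feat, txt_feat]))
    final = alpha * main + (1 - alpha) * aux    # fuse the two classifications
    return int(np.argmax(final)), final

if __name__ == "__main__":
    img_state = softmax(rng.normal(size=N_CLASSES))
    txt_state = softmax(rng.normal(size=N_CLASSES))
    img_feat, txt_feat = rng.normal(size=D_IMG), rng.normal(size=D_TXT)
    print(hybrid_fusion(img_state, txt_state, img_feat, txt_feat))
```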

Human body behavior recognition method based on RGB video and skeleton sequence

The invention relates to a human body behavior recognition method based on an RGB video and a skeleton sequence, and belongs to the technical field of computer vision and pattern recognition. The method comprises the following steps: step 1, carrying out feature extraction on an input video segment through a feature stream to obtain a space-time feature map; step 2, generating a skeleton-region heat map with an attention stream; step 3, extracting the spatial and temporal features of the skeleton regions; step 4, generating local decision results with local decision blocks; and step 5, fusing the local decision results with a decision fusion block to obtain a global decision result. The invention realizes decision fusion with two plug-and-play modules, a local decision block and a decision fusion block: the local decision block makes a separate decision on the spatial and temporal features of each key region, and the decision fusion block fuses all of these decisions into the final result. The method effectively improves behavior recognition accuracy on the Penn Action and NTU RGB+D data sets.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
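
As a toy illustration of the two modules named above (not the patented architecture), each local decision below is a class-probability vector for one key skeleton region, and the fusion block combines them with a simple weighting; in practice the weights would be learned:

```python
import numpy as np

def local_decision(region_features, weight, bias):
    """Local decision block: map one region's spatio-temporal features to class probabilities."""
    logits = weight @ region_features + bias
    e = np.exp(logits - logits.max())
    return e / e.sum()

def decision_fusion(local_probs, region_weights=None):
    """Decision fusion block: combine the per-region decisions into a global decision."""
    probs = np.stack(local_probs)
    if region_weights is None:
        region_weights = np.full(len(probs), 1.0 / len(probs))
    fused = region_weights @ probs
    return int(np.argmax(fused)), fused

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_classes, feat_dim = 10, 32
    W, b = rng.normal(size=(n_classes, feat_dim)), np.zeros(n_classes)
    regions = [rng.normal(size=feat_dim) for _ in range(5)]   # 5 key regions
    local_results = [local_decision(r, W, b) for r in regions]
    print(decision_fusion(local_results))
```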

Unmanned aerial vehicle target detection method based on multi-sensor information fusion

The invention relates to an unmanned aerial vehicle target detection method based on multi-sensor information fusion. The method comprises the following steps: step 1, carrying out time and coordinate registration between the radar and the photoelectric equipment, monitoring a low-altitude protection area in real time to obtain feature information of small targets, and carrying out feature-layer fusion on this information; step 2, collecting and expanding images of various unmanned aerial vehicle targets as an unmanned aerial vehicle detection data set, and training an SSD deep learning network on it to obtain an SSD prediction model; and step 3, performing target detection with the SSD prediction model on the image information acquired by the photoelectric equipment, performing decision fusion on the multiple types of information about the same target by setting threshold ranges, and finally fusing the different prediction and judgment results to determine whether the target is an unmanned aerial vehicle. By fusing multi-sensor information, the method expands the detection range for unmanned aerial vehicle targets, improves detection efficiency, and resists a certain degree of environmental interference.
Owner:NAVAL UNIV OF ENG PLA
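
A minimal sketch of the threshold-based decision fusion step, assuming each sensor contributes a confidence in [0, 1] for one candidate target; the thresholds and the three-way outcome are illustrative, not taken from the patent:

```python
def fuse_uav_decision(radar_conf, optical_conf, radar_thr=0.5, optical_thr=0.6):
    """Threshold-based decision fusion of radar and electro-optical evidence
    for one candidate target (illustrative thresholds only)."""
    radar_says_uav = radar_conf >= radar_thr        # radar / feature-layer judgement
    optical_says_uav = optical_conf >= optical_thr  # SSD image-detection judgement
    if radar_says_uav and optical_says_uav:
        return "UAV"
    if radar_says_uav or optical_says_uav:
        return "suspected UAV"      # single-sensor hit: keep tracking / re-check
    return "not a UAV"

if __name__ == "__main__":
    print(fuse_uav_decision(0.7, 0.85))   # UAV
    print(fuse_uav_decision(0.7, 0.30))   # suspected UAV
    print(fuse_uav_decision(0.2, 0.10))   # not a UAV
```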

Oil-gas pipeline pre-warning system based on decision fusion and pre-warning method

The invention belongs to the technical field of oil-gas pipeline monitoring and pre-warning systems, and particularly relates to an oil-gas pipeline pre-warning system and pre-warning method based on decision fusion. The system comprises an infrastructure layer, a sensing actuation layer, a basic data layer and a core service layer. The infrastructure layer comprises intelligent sensing equipment; the sensing actuation layer monitors the oil-gas pipeline and acquires data through the intelligent sensing equipment; the basic data layer stores the data acquired by the sensing actuation layer and the data from the full life cycle of the oil-gas pipeline in a database; and the core service layer mines the data in the database. The pre-warning grades produced by the multiple pre-warning methods corresponding to different pre-warning tasks are subjected to decision fusion to obtain the grade of each pre-warning task, and the importance orders of the pre-warning tasks and methods are obtained through accident tree analysis, so that active pre-warning is achieved. The system can fuse multi-source data, effectively improves information-processing performance, achieves high pre-warning accuracy, and is better targeted to oil-gas pipeline pre-warning.
Owner:ZHEJIANG ZHENENG NATURAL GAS OPERATION CO LTD +2
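
For illustration only, the grade-level decision fusion described above can be sketched as an importance-weighted combination of the grades reported by the individual pre-warning methods for one task; the method names, grade scale and weights below are assumptions:

```python
def fuse_warning_grade(method_grades, method_weights):
    """Fuse the grades (1 = lowest .. 4 = highest, assumed scale) reported by several
    pre-warning methods for one task, using importance weights such as those derived
    from accident-tree analysis; all values are illustrative."""
    total = sum(method_weights.values())
    weighted = sum(method_grades[m] * w for m, w in method_weights.items()) / total
    return round(weighted)

if __name__ == "__main__":
    grades = {"vibration": 3, "strain": 2, "flow_balance": 4}
    weights = {"vibration": 0.5, "strain": 0.2, "flow_balance": 0.3}
    print(fuse_warning_grade(grades, weights))   # -> 3
```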

Oil-immersed transformer fault diagnosis method based on neural network and decision fusion

The invention provides an oil-immersed transformer fault diagnosis method based on neural networks and decision fusion. The method comprises fault coding, construction and training of the neural network models, and calculation of a decision fusion matrix. Specifically, after encoding the fault types of low-temperature overheating, medium-temperature overheating, high-temperature overheating, partial discharge, low-energy discharge and high-energy discharge, a plurality of neural networks are trained with the content of five kinds of gas dissolved in the transformer oil as the identification features, and a decision fusion matrix is calculated from the test accuracy of the neural networks to realize decision fusion of the plurality of networks. The method can adjust the weight of each specific fault in the overall model according to how well each individual neural network identifies that fault, thereby improving diagnosis accuracy, which is of practical significance for the timely handling of transformer faults and the stable, reliable operation of the power system.
Owner:WUHAN NARI LIABILITY OF STATE GRID ELECTRIC POWER RES INST
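
One plausible reading of the accuracy-based fusion above is to weight each network's vote on each fault class by that network's measured test accuracy on the class. The sketch below implements that reading with made-up numbers; it is not the patented decision-fusion matrix:

```python
import numpy as np

# Six encoded fault types: low/medium/high-temperature overheating,
# partial discharge, low-energy discharge, high-energy discharge.
N_FAULTS = 6

def fuse(network_probs, accuracy_matrix):
    """network_probs: (n_nets, N_FAULTS) class probabilities from each network.
    accuracy_matrix: (n_nets, N_FAULTS) per-class test accuracies used as the
    decision-fusion weights (illustrative scheme)."""
    weights = accuracy_matrix / accuracy_matrix.sum(axis=0, keepdims=True)
    fused = (weights * network_probs).sum(axis=0)
    return int(np.argmax(fused)), fused / fused.sum()

if __name__ == "__main__":
    probs = np.array([[0.6, 0.1, 0.1, 0.1, 0.05, 0.05],
                      [0.2, 0.5, 0.1, 0.1, 0.05, 0.05]])
    acc = np.array([[0.95, 0.70, 0.80, 0.85, 0.90, 0.75],
                    [0.60, 0.90, 0.85, 0.80, 0.70, 0.80]])
    print(fuse(probs, acc))
```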

A radar radiation source signal intra-pulse characteristic comprehensive evaluation method and system

Publication number: CN109766926A (status: Active)
The invention belongs to the technical field of radar radiation source signal characteristic evaluation in electronic countermeasures, and discloses a comprehensive evaluation method and system for the intra-pulse characteristics of radar radiation source signals. The method comprises the following steps: first, carrying out feature extraction on the received radar radiation source signal, and measuring and normalizing the feature evaluation indexes according to an established evaluation system; then, applying an improved interval analytic hierarchy process that combines expert prior knowledge with the actual environment, and establishing a nonlinear optimization model with an improved projection pursuit algorithm; and finally, performing the subjective-objective decision fusion with a projected spectral gradient algorithm. The method and system can reasonably and effectively evaluate the various intra-pulse characteristics of radar radiation source signals under actual conditions, and the scientific, effective evaluation helps to select the characteristics that best distinguish radar radiation source signals, facilitating subsequent signal sorting and identification.
Owner:XIDIAN UNIV
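
As a rough illustration only, the subjective-objective fusion above can be viewed as blending two sets of feature-evaluation weights (e.g. one from an interval AHP, one from projection pursuit) and ranking candidate features by their weighted index scores; the blend coefficient, index values and weights below are all assumptions:

```python
import numpy as np

def fuse_weights(subjective, objective, alpha=0.5):
    """Blend subjective and objective index weights; alpha is a hypothetical
    trade-off coefficient, not a value from the patent."""
    s, o = np.asarray(subjective, float), np.asarray(objective, float)
    fused = alpha * s / s.sum() + (1 - alpha) * o / o.sum()
    return fused / fused.sum()

def rank_features(scores, weights):
    """Score each candidate intra-pulse feature by its weighted evaluation indexes."""
    totals = np.asarray(scores, float) @ fuse_weights(*weights)
    return np.argsort(totals)[::-1]        # best feature first

if __name__ == "__main__":
    # rows: candidate features; columns: normalised evaluation indexes
    scores = [[0.9, 0.6, 0.7], [0.5, 0.8, 0.9], [0.4, 0.4, 0.5]]
    print(rank_features(scores, ([3, 1, 2], [0.2, 0.5, 0.3])))
```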

Stock trend classification prediction method based on intelligent fusion calculation

The method comprises the following steps: performing discretization preprocessing on the data in the complete data set of a target stock over a target time period with an equidistant discretization algorithm and a one-dimensional K-Means clustering discretization algorithm; carrying out attribute reduction of the technical indexes; using a naive Bayes classifier and a K-nearest-neighbor classifier on the attribute-reduced data set to predict the rise or fall of the target stock on the next trading day; and performing decision fusion of the two classifiers' predictions with the D-S evidence combination rule, the fused result being the final classification prediction of the stock's future rise or fall. The invention can noticeably improve the prediction accuracy over stock trend prediction methods based on neural networks, SVMs and the like. When the method is used to construct a multi-factor stock selection model, the nonlinear relationship between the various stock index data and stock returns becomes more significant.
Owner:XI AN JIAOTONG UNIV
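
The D-S (Dempster-Shafer) combination rule named above is standard; the sketch below applies it under the simplifying assumption that each classifier's class probabilities are used directly as basic probability assignments over the singleton hypotheses {up, flat, down}. The mass construction in the patent may differ:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for basic probability assignments keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                     # mass assigned to conflicting evidence
    if conflict >= 1.0:
        raise ValueError("total conflict, evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

if __name__ == "__main__":
    # Singleton masses taken from the naive-Bayes and KNN class probabilities (assumed values).
    nb = {frozenset({"up"}): 0.6, frozenset({"flat"}): 0.3, frozenset({"down"}): 0.1}
    knn = {frozenset({"up"}): 0.5, frozenset({"flat"}): 0.2, frozenset({"down"}): 0.3}
    fused = dempster_combine(nb, knn)
    print(max(fused, key=fused.get), fused)
```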

Double-flow convolution behavior recognition method based on 3D time flow and parallel spatial flow

The invention discloses a double-flow convolution behavior recognition method based on a 3D time flow and parallel spatial flows. The method comprises the following steps: first, extracting optical-flow blocks from the input video; second, segmenting the input video, extracting video frames, and cropping out the human body regions; then, feeding the optical-flow blocks into a 3D convolutional neural network and the cropped frames into the parallel spatial-flow convolutional networks; and finally, fusing the classification results of the parallel spatial flows, splicing them with the time-flow scores into a fully connected layer, and outputting the recognition result through the output layer. The invention uses human-body cropping and the parallel spatial-flow network for single-frame recognition, improving single-frame accuracy in the spatial domain, and uses the 3D convolutional neural network to extract action features from the optical flow, improving the accuracy of the time-flow part; decision fusion is then carried out by a final single-layer neural network that combines the spatial appearance features with the temporal action features, improving the overall recognition performance.
Owner:SHANDONG UNIV
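
A minimal sketch of the final fusion step described above, assuming the spatial-flow scores are averaged, spliced with the time-flow scores, and passed through a single fully connected layer (random weights stand in for the trained layer; all sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N_CLASSES = 5

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_two_stream(spatial_scores, temporal_scores, W, b):
    """Average the parallel spatial-flow scores, splice them with the time-flow
    scores, and pass the result through one fully connected layer."""
    spatial = np.mean(spatial_scores, axis=0)           # fuse the parallel spatial flows
    joint = np.concatenate([spatial, temporal_scores])  # splice spatial + temporal scores
    return int(np.argmax(softmax(W @ joint + b)))

if __name__ == "__main__":
    spatial = [softmax(rng.normal(size=N_CLASSES)) for _ in range(3)]  # per-frame scores
    temporal = softmax(rng.normal(size=N_CLASSES))                     # 3D time-flow scores
    W, b = rng.normal(size=(N_CLASSES, 2 * N_CLASSES)), np.zeros(N_CLASSES)
    print(fuse_two_stream(spatial, temporal, W, b))
```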

Vulnerability restoration method based on knowledge graph

The invention relates to the technical field of network security, and particularly discloses a vulnerability repair method based on a knowledge graph that achieves high repair efficiency, a low error rate and good security. The method comprises the following steps: S1, extracting from a new vulnerability the features that a computer can identify and retrieve; S2, submitting the extracted features to the knowledge-graph vulnerability knowledge base for retrieval and identification; S3, when the new vulnerability's features match a vulnerability in the knowledge base, repairing the new vulnerability with the preset repair method associated with that vulnerability; S4, when the features match no vulnerability in the knowledge base, summarizing the associated vulnerability information encountered during matching, performing decision fusion of this associated information with the features extracted in step S1, and outputting the resulting decision information; S5, reasoning over the output decision information to generate a vulnerability solution; and S6, repairing the new vulnerability with the solution generated in step S5 and updating the solution into the vulnerability knowledge base.
Owner:SHENZHEN Y& D ELECTRONICS CO LTD
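
A toy sketch of steps S2 to S4, assuming a generic similarity function over extracted feature sets and a hypothetical match threshold; the knowledge-base layout, field names and threshold are all illustrative, not from the patent:

```python
def repair_vulnerability(new_features, knowledge_base, similarity, threshold=0.8):
    """Look the new vulnerability up in the knowledge base; on a miss, fuse the most
    similar entries with the new features into a decision record for the reasoning step."""
    scored = [(similarity(new_features, e["features"]), e) for e in knowledge_base]
    best_score, best = max(scored, key=lambda t: t[0])
    if best_score >= threshold:
        return {"action": "apply_known_fix", "fix": best["fix"]}          # step S3
    associated = [e for s, e in scored if s >= threshold / 2]             # step S4
    return {"action": "reason_new_fix",
            "decision_info": {"new": new_features,
                              "related": [e["id"] for e in associated]}}

if __name__ == "__main__":
    jaccard = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))
    kb = [{"id": "vuln-A", "features": ["sqli", "php"], "fix": "patch-A"},
          {"id": "vuln-B", "features": ["xss", "js"], "fix": "patch-B"}]
    print(repair_vulnerability(["sqli", "php"], kb, jaccard))    # known fix applied
    print(repair_vulnerability(["sqli", "java"], kb, jaccard))   # falls through to fusion
```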