
544 results about "Feature transformation" patented technology

Method and system for inserting and transforming advertisement sign based on visual attention model

The invention discloses a method and a system for inserting and transforming an advertisement sign based on a visual attention model. The method comprises the following steps: first, on the basis of the constructed visual attention model, predicting the user's regions of interest within each video frame and the degree of attention the user pays to each frame; second, determining a time point for inserting an advertisement according to the curve of the user's attention over the frames, evaluating how suitable each region is for advertisement insertion on the basis of the predicted attention distribution, thereby obtaining a ranked sequence of candidate insertion regions, and inserting the advertisement into a region with little influence on the video content; and finally, inserting the advertisement sign at the chosen time point and position according to the predicted attention distribution, and applying multiple feature transformations to the advertisement sign so that it repeatedly attracts the attention of users or the audience. The method and the system can automatically insert and transform advertisement signs and make the inserted sign attract viewers' attention repeatedly without disturbing normal viewing.
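A minimal sketch of the time-point and region selection step described above, assuming per-frame attention scores and per-region saliency predictions are already available from the visual attention model; the array layout, quantile threshold, and function name are illustrative assumptions, not details from the patent:

```python
import numpy as np

def choose_insertion(frame_attention, region_saliency, low_quantile=0.25):
    """Pick an insertion frame and a candidate-region ranking.

    frame_attention : (T,) predicted viewer attention per frame
    region_saliency : (T, R) predicted attention per candidate region per frame
    Returns the chosen frame index and the regions ordered by increasing
    saliency, i.e. the regions least likely to disturb the content come first.
    """
    # Insert where overall attention dips below the chosen quantile of the curve.
    threshold = np.quantile(frame_attention, low_quantile)
    candidates = np.flatnonzero(frame_attention <= threshold)
    t = int(candidates[0]) if candidates.size else int(frame_attention.argmin())

    # Rank the regions of that frame by predicted attention (ascending).
    region_order = np.argsort(region_saliency[t])
    return t, region_order

# Example with synthetic predictions: a 100-frame clip with 6 candidate regions.
rng = np.random.default_rng(0)
t, regions = choose_insertion(rng.random(100), rng.random((100, 6)))
print(t, regions[:3])
```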
Owner:PEKING UNIV

Video image stabilization method for space-based platform hovering

A method for stabilizing video images while an air-based platform hovers comprises the following steps: first, selecting one frame of the video sequence as a reference frame; extracting feature points of the reference image and the current image of the video sequence with a feature extraction method such as the scale-invariant feature transform (SIFT); preliminarily matching the features, using the Euclidean distance between descriptors as the matching criterion, to form feature match point pairs; further screening the matched points according to the invariance of the relative positions of feature points on the image background, and removing wrongly matched pairs as well as pairs located on moving targets; performing a least-squares fit of a six-parameter affine transformation model with the remaining point pairs to obtain the model parameters; and applying the corresponding correction and compensation to the current image to obtain a stable video sequence with a fixed field of view. The invention also proposes switching to a new reference frame every certain number of frames, thereby reducing accumulated errors and improving stabilization accuracy; the method can be applied to traffic monitoring, target tracking and other fields, and has broad market prospects and application value.
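A rough illustration of this pipeline (SIFT features, Euclidean-distance matching, and a RANSAC/least-squares affine fit) using OpenCV; the ratio-test threshold and the use of RANSAC for rejecting mismatches are assumptions, not details taken from the patent:

```python
import cv2
import numpy as np

def stabilize_frame(reference_gray, current_gray):
    """Warp the current frame into the reference frame's field of view."""
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference_gray, None)
    kp_cur, des_cur = sift.detectAndCompute(current_gray, None)

    # Preliminary matching by Euclidean (L2) descriptor distance with a ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = [m for m, n in matcher.knnMatch(des_cur, des_ref, k=2)
               if m.distance < 0.75 * n.distance]

    src = np.float32([kp_cur[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches])

    # Six-parameter affine model fitted by least squares; RANSAC rejects
    # wrong matches and points sitting on moving targets.
    M, _inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    h, w = reference_gray.shape
    return cv2.warpAffine(current_gray, M, (w, h))
```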
Owner:BEIHANG UNIV

Face recognition method based on deep transformation learning in unconstrained scene

The invention discloses a face recognition method based on deep transformation learning in unconstrained scenes. The method comprises the following steps: obtaining a face image and detecting the facial key points; aligning the face image by a transformation that minimizes the distance between the detected key points and predefined key points; estimating the face pose and classifying the pose estimation results, so that the samples are separated into different pose classes; performing pose transformation, converting non-frontal face features into frontal face features and computing the pose transformation loss; and updating the network parameters through deep transformation learning until the threshold requirement is met. The method introduces feature transformation inside the neural network and transforms features of different poses into a shared linear feature space; by computing the pose loss and learning the pose centers and pose transformations, intra-class variation is simplified; the method thus strengthens feature transformation learning and improves the robustness and discriminative power of the deep features.
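A hedged PyTorch sketch of the pose-wise feature transformation and an associated transformation loss; the per-pose linear maps, the learned per-identity centers in the frontal space, and the squared-distance loss are illustrative assumptions, not the patented network:

```python
import torch
import torch.nn as nn

class PoseTransform(nn.Module):
    """Map features of each pose class into a shared (frontal) feature space."""
    def __init__(self, feat_dim=256, num_poses=5, num_ids=1000):
        super().__init__()
        # One linear feature transformation per pose class.
        self.transforms = nn.ModuleList(nn.Linear(feat_dim, feat_dim) for _ in range(num_poses))
        # One learned center per identity in the shared frontal space.
        self.centers = nn.Parameter(torch.randn(num_ids, feat_dim) * 0.01)

    def forward(self, features, pose_labels, id_labels):
        # Transform every feature with the map belonging to its pose class.
        frontal = torch.stack([self.transforms[int(p)](f)
                               for f, p in zip(features, pose_labels)])
        # Pose-transformation loss: pull transformed features to their identity center.
        loss = ((frontal - self.centers[id_labels]) ** 2).sum(dim=1).mean()
        return frontal, loss

# Usage: combine this loss with the usual identification loss when training the backbone.
model = PoseTransform()
feats = torch.randn(8, 256)
poses = torch.randint(0, 5, (8,))
ids = torch.randint(0, 1000, (8,))
frontal_feats, pose_loss = model(feats, poses, ids)
```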
Owner:唐晖

Brain cognitive state judgment method based on multilinear principal component analysis

Status: Inactive | Publication: CN103116764A | Benefit: good recognition and classification | Classification: character and pattern recognition | Concepts: hat matrix, decomposition
The invention discloses a brain cognitive state judgment method based on multilinear principal component analysis (MPCA). The method includes the following steps: first, inputting the sample sets and preprocessing the input data; second, computing the eigendecomposition of the training sample sets, determining the optimal feature transformation matrices, and projecting the training samples into the tensor feature subspace to obtain the feature tensor sets of the training set; third, vectorizing the dimensionality-reduced low-dimensional feature tensors as the input of linear discriminant analysis (LDA), determining the optimal LDA projection matrix, and projecting the vectorized feature tensors into the LDA feature subspace to further extract the discriminant feature vectors of the training set; and fourth, classifying the features by matching the discriminant feature vectors obtained from the projections of the training images and the test images. The method uses MPCA to perform dimensionality reduction and feature extraction directly on multi-way tensor data, overcoming the drawback of traditional PCA, whose simple vectorized dimensionality reduction destroys the structure and correlations of the original image data and cannot fully preserve the redundancy and structure of the original images, and it preserves the spatial structure information of functional magnetic resonance imaging (fMRI) data.
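A compact sketch of the MPCA-then-LDA pipeline, assuming the training samples are stacked as a 4-way array (samples × x × y × z). The unfolding helper, the single (non-iterative) pass per mode, the retained dimensions, and the use of scikit-learn's LDA are assumptions for illustration only:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mode_unfold(tensor, mode):
    """Unfold a (samples, d1, d2, d3) array along one data mode (1, 2 or 3)."""
    return np.moveaxis(tensor, mode, 1).reshape(tensor.shape[0], tensor.shape[mode], -1)

def mpca_projections(X, keep=(10, 10, 5)):
    """One eigendecomposition per mode yields the feature transformation matrices."""
    U = []
    for mode, k in zip((1, 2, 3), keep):
        unfolded = mode_unfold(X, mode)               # (n, d_mode, rest)
        scatter = sum(m @ m.T for m in unfolded)      # mode-wise scatter matrix
        _vals, vecs = np.linalg.eigh(scatter)
        U.append(vecs[:, ::-1][:, :k])                # top-k eigenvectors
    return U

def project(X, U):
    """Project the samples into the low-dimensional tensor feature subspace."""
    Y = np.einsum('nabc,ai->nibc', X, U[0])
    Y = np.einsum('nibc,bj->nijc', Y, U[1])
    return np.einsum('nijc,ck->nijk', Y, U[2])

# X_train: (n, dx, dy, dz) fMRI volumes; y_train: cognitive-state labels.
X_train = np.random.rand(40, 16, 16, 8); y_train = np.repeat([0, 1], 20)
U = mpca_projections(X_train)
features = project(X_train, U).reshape(len(X_train), -1)   # vectorised feature tensors
lda = LinearDiscriminantAnalysis().fit(features, y_train)  # discriminant feature extraction
```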
Owner:XIDIAN UNIV

Medical image expression generating system, training method and expression generating method thereof

The invention discloses a medical image expression generating system together with its training method and expression generating method. After a medical image acquisition unit acquires a two-dimensional medical image, a convolutional neural network unit extracts the image features of the medical image, converts them into an image feature vector, and outputs the vector into a pre-established first vector space. A recurrent neural network unit determines the semantic feature vector corresponding to the image feature vector according to the correspondence between the image feature vectors in the pre-established first vector space and the semantic feature vectors in a second vector space, and outputs it. An expression output unit converts the semantic feature vector matched to the image feature vector into the corresponding natural language and outputs it. The expression generating system thus enables straightforward reading and analysis of medical images, improving image reading efficiency and quality and greatly reducing the probability of misdiagnosis.
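A hedged PyTorch sketch of the encoder-decoder structure described above, in which a CNN produces the image feature vector and a recurrent decoder emits the natural-language expression; the ResNet backbone, vocabulary size, and teacher-forcing setup are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ExpressionGenerator(nn.Module):
    def __init__(self, vocab_size=2000, embed_dim=256, hidden_dim=512):
        super().__init__()
        backbone = resnet18(weights=None)                              # no pretraining
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)    # image feature vector
        self.cnn = backbone
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)    # recurrent decoder
        self.out = nn.Linear(hidden_dim, vocab_size)                   # semantic features -> words

    def forward(self, images, captions):
        img_vec = self.cnn(images).unsqueeze(1)                        # (B, 1, embed_dim)
        tokens = self.embed(captions[:, :-1])                          # teacher forcing
        hidden, _ = self.rnn(torch.cat([img_vec, tokens], dim=1))
        return self.out(hidden)                                        # word logits per step

# Dummy usage: two 3-channel images and 12-token target expressions.
model = ExpressionGenerator()
images = torch.randn(2, 3, 224, 224)
captions = torch.randint(0, 2000, (2, 12))
logits = model(images, captions)    # (2, 12, 2000), trained with cross-entropy vs. captions
```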
Owner:BOE TECH GRP CO LTD

Deep learning-based advertisement click-through rate prediction method and device

The invention discloses a deep learning-based advertisement click-through rate prediction method and device. The method includes the following steps: acquiring a preset number of training advertisements together with the training click-through rate and training features of each training advertisement; converting the training features of each training advertisement into a training vector and training a deep learning model with the training vectors and training click-through rates, wherein the deep learning model is based on a nonlinear function; and obtaining a test vector converted from the features of an advertisement to be tested, taking the test vector as the input of the deep learning model, and obtaining the predicted click-through rate of that advertisement. Because the deep learning model fully accounts for the nonlinear relationships between the features, once the test vector is input, the model can efficiently and accurately output the corresponding predicted click-through rate on the basis of the nonlinear function.
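A minimal PyTorch sketch of a nonlinear (MLP-based) click-through-rate model of the kind described above; the feature dimension, layer sizes, and training loop are assumptions, not the patented architecture:

```python
import torch
import torch.nn as nn

class CTRModel(nn.Module):
    """Nonlinear model mapping an advertisement feature vector to a CTR in [0, 1]."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),   # nonlinear feature interactions
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# Train on (training vector, training click-through rate) pairs.
model, loss_fn = CTRModel(), nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_train, ctr_train = torch.randn(256, 64), torch.rand(256)
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x_train), ctr_train)
    loss.backward()
    opt.step()

# Predict the CTR of an advertisement to be tested from its feature vector.
predicted_ctr = model(torch.randn(1, 64))
```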
Owner:SHANGHAI TRUELAND INFORMATION & TECH CO LTD

Diabetic retinopathy grade classification method based on deep learning

The invention provides a diabetic retinopathy grade classification method based on deep learning. The method comprises the steps of: constructing a sample library; removing the background and noise from the ophthalmoscope photographs in the sample library; normalizing images of different brightness and intensity to the same range with a local mean subtraction method; augmenting the data by applying random stretching and rotation to the samples, and constructing a training set and a test set; training an initial deep learning network model whose architecture is built from an input part, a multi-branch feature transformation part and an output part; and inputting the samples to be tested into the trained model for diabetic retinopathy grade classification. Compared with traditional processing methods, the method removes the dependence on prior knowledge and generalizes well; with the designed multi-branch structure, small convolution kernels can extract very tiny lesion features, making the classification results more reliable.
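A sketch of the preprocessing and augmentation steps named above (local mean subtraction to normalise brightness, random stretching and rotation for augmentation), using OpenCV; the blur scale, weighting constants, and augmentation ranges are assumed values, not taken from the patent:

```python
import cv2
import numpy as np

def local_mean_subtract(img, sigma_ratio=30):
    """Normalise fundus photographs of different brightness to a common range
    by subtracting a heavily blurred (local mean) version of the image."""
    sigma = img.shape[1] / sigma_ratio
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return cv2.addWeighted(img, 4, blurred, -4, 128)

def augment(img, rng=np.random.default_rng()):
    """Random stretching and rotation used to enlarge the training set."""
    h, w = img.shape[:2]
    angle = rng.uniform(-180, 180)
    scale = rng.uniform(0.9, 1.1)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    return cv2.warpAffine(img, M, (w, h), borderValue=(128, 128, 128))

# fundus = cv2.imread("fundus.jpg")             # ophthalmoscope photograph
# sample = augment(local_mean_subtract(fundus)) # one augmented training sample
```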
Owner:NORTHEASTERN UNIV

Fault prediction method and device for industrial equipment based on LSTM recurrent neural network

Status: Active | Publication: CN109814527A | Benefits: achieving long-term forecasts, addressing insufficient prediction accuracy | Classification: electric testing/monitoring, neural architectures | Concepts: data set, confidence interval
The invention discloses a fault prediction method and device for industrial equipment based on an LSTM recurrent neural network. The method comprises the following steps: acquiring a condition monitoring data set from a plurality of sensors around the target equipment, wherein the data set contains monitoring data from time 0 to the current moment; selecting, from the condition monitoring data set, prediction features that contain the preset fault information by means of a feature selection criterion that combines a correlation index and a monotonicity index; performing feature transformation on the prediction features to obtain a prediction feature vector; and performing single-step fault prediction, long-term fault prediction and remaining-life prediction of the target equipment from the prediction feature vector with a fault prediction network model. The method effectively avoids the insufficient prediction accuracy caused by an unreasonably preset fault threshold, can give a confidence interval for single-step performance prediction, and achieves long-term prediction of the performance and remaining service life of the equipment.
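An illustrative sketch of the two stages described above: feature selection by correlation and monotonicity indices, then an LSTM that maps a window of prediction features to the next step (longer horizons follow by feeding predictions back in). The index definitions, thresholds, window length, and network sizes are assumptions:

```python
import numpy as np
import torch
import torch.nn as nn

def select_features(monitoring, corr_thresh=0.5, mono_thresh=0.3):
    """Keep sensor channels whose trend correlates with time and is monotonic.

    monitoring : (T, S) condition-monitoring data from S sensors, time 0..T-1.
    """
    t = np.arange(monitoring.shape[0])
    keep = []
    for s in range(monitoring.shape[1]):
        x = monitoring[:, s]
        corr = abs(np.corrcoef(t, x)[0, 1])             # correlation index
        diffs = np.sign(np.diff(x))
        mono = abs(diffs.sum()) / max(len(diffs), 1)    # monotonicity index
        if corr >= corr_thresh and mono >= mono_thresh:
            keep.append(s)
    return keep

class FaultPredictor(nn.Module):
    """Single-step predictor over the prediction feature vector."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, window):                           # (B, window_len, n_features)
        out, _ = self.lstm(window)
        return self.head(out[:, -1])                     # next-step feature vector

# Example: 500 time steps from 8 sensors; keep informative channels, predict one step.
data = np.cumsum(np.random.randn(500, 8), axis=0)
cols = select_features(data)
model = FaultPredictor(n_features=len(cols) or 1)
window = torch.tensor(data[-50:, cols or [0]], dtype=torch.float32).unsqueeze(0)
next_step = model(window)
```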
Owner:TSINGHUA UNIV