56 results about "Feature valid" patented technology

Image super-resolution method based on generative adversarial network

The invention discloses an image super-resolution method based on a generative adversarial network. The method comprises the following steps: obtaining a training data set and a validation data set; constructing an image super-resolution model comprising a generative network model and a discriminative network model; initializing the weights of the generative and discriminative network models, selecting an optimizer, and setting the network training parameters; training the generative and discriminative network models simultaneously with a loss function until the two networks reach Nash equilibrium; obtaining a test data set and feeding it into the trained generative network model to produce super-resolution images; and calculating the peak signal-to-noise ratio between each generated super-resolution image and the corresponding real high-resolution image as an evaluation index of reconstruction quality. By optimizing the network structure, the method improves the network's super-resolution reconstruction performance and addresses the image super-resolution problem.
Owner:SOUTH CHINA UNIV OF TECH
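The evaluation step above scores reconstruction quality with the peak signal-to-noise ratio between the generated and real high-resolution images. A minimal sketch of that metric (assuming 8-bit images, hence a peak value of 255; the function name `psnr` is ours, not the patent's):

```python
import numpy as np

def psnr(sr: np.ndarray, hr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a super-resolved and a real HR image."""
    mse = np.mean((sr.astype(np.float64) - hr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: a uniform error of one gray level gives MSE = 1,
# so PSNR = 20 * log10(255) ~ 48.13 dB
hr = np.zeros((4, 4))
sr = hr + 1.0
print(round(psnr(sr, hr), 2))
```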

Micro-expression recognition method based on space-time appearance movement attention network

Active · CN112307958A · Suppression identifies features with small contributions · Take full advantage of complementarity · Character and pattern recognition · Neural architectures · Pattern recognition
The invention relates to a micro-expression recognition method based on a spatio-temporal appearance-motion attention network. The method comprises the following steps: preprocessing the micro-expression samples to obtain an original image sequence and an optical-flow sequence with a fixed number of frames; constructing a spatio-temporal appearance-motion network comprising a spatio-temporal appearance network (STAN) and a spatio-temporal motion network (STMN), both designed with a CNN-LSTM structure in which a CNN model learns the spatial features of micro-expressions and an LSTM model learns their temporal features; introducing hierarchical convolutional attention mechanisms into the CNN models of the STAN and STMN, applying a multi-scale-kernel spatial attention mechanism to the low-level layers and a global double-pooling channel attention mechanism to the high-level layers, yielding attention-augmented STAN and STMN networks; and training the attention-augmented STAN on the original image sequence and the attention-augmented STMN on the optical-flow sequence, then fusing their outputs through a feature-cascade SVM to perform the micro-expression recognition task with improved accuracy.
Owner:HEBEI UNIV OF TECH +2
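The "global double-pooling channel attention" named above is not specified in the abstract; a common reading (CBAM-style, which we assume here) pools each channel by both global average and global max, passes both descriptors through a shared bottleneck MLP, and sums them into per-channel weights. A numpy sketch with hypothetical weight matrices `w1`, `w2`:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def double_pool_channel_attention(feat, w1, w2):
    """feat: (C, H, W). Global average- and max-pooled descriptors go through a
    shared two-layer MLP (w1, w2), are summed, and squashed into per-channel
    weights in (0, 1) that rescale the feature map."""
    avg = feat.mean(axis=(1, 2))   # (C,) global average pooling
    mx = feat.max(axis=(1, 2))     # (C,) global max pooling
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return feat * att[:, None, None], att

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 5, 5))
w1 = rng.standard_normal((4, 8)) * 0.1   # C -> C/2 bottleneck (illustrative)
w2 = rng.standard_normal((8, 4)) * 0.1
out, att = double_pool_channel_attention(feat, w1, w2)
print(out.shape, att.shape)
```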

Image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding

The invention discloses an image super resolution (SR) reconstruction method based on subspace projection and neighborhood embedding. The method is characterized by: using first- and second-stage subspace projection to map the original high-dimensional data into a low-dimensional space, so that the dimension-reduced feature vectors representing a low-resolution image block preserve both the global and local structure information of the original data; comparing Euclidean distances between the dimension-reduced feature vectors in the low-dimensional space to find the neighborhood blocks that best match the low-resolution block to be reconstructed, which increases search speed and matching precision; then constructing a similarity and a scale factor between the feature vectors to calculate accurate embedding weight coefficients and recover more high-frequency information from the training database; and finally estimating the high-resolution image block with high precision from the weight coefficients and the neighborhood blocks. The reconstructed image has high similarity to the real object, which benefits later-stage object-recognition processing.
Owner:SOUTHWEST JIAOTONG UNIV
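The neighborhood-embedding step above computes weights that reconstruct a low-resolution block from its matched neighbors and then transfers those weights to the paired high-resolution blocks. The abstract does not give the exact similarity/scale-factor formula, so this sketch uses the standard LLE-style sum-to-one least-squares weights as a stand-in:

```python
import numpy as np

def embedding_weights(y, neighbors, reg=1e-6):
    """LLE-style embedding weights: minimize ||y - sum_i w_i n_i||^2 subject to
    sum_i w_i = 1, via the regularized local Gram matrix."""
    k = len(neighbors)
    diffs = neighbors - y            # (k, d)
    G = diffs @ diffs.T              # local Gram matrix
    tr = np.trace(G)
    G += np.eye(k) * reg * (tr if tr > 0 else 1.0)  # regularize if singular
    w = np.linalg.solve(G, np.ones(k))
    return w / w.sum()

# LR feature of the block to reconstruct, and its k best-matched LR neighbors
y = np.array([0.5, 0.5])
neighbors = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = embedding_weights(y, neighbors)

# Transfer the same weights to the paired HR blocks to estimate the HR block
hr_neighbors = np.array([[2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
hr_estimate = w @ hr_neighbors
print(w.sum(), hr_estimate)
```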

Electroencephalogram signal recognition method based on spatiotemporal feature weighted convolutional neural network

The invention claims an electroencephalogram (EEG) signal recognition method based on a spatiotemporal-feature-weighted convolutional neural network. The method comprises the following steps: first, denoising the motor-imagery EEG signal with a discrete wavelet transform; then designing a spatiotemporal-feature-weighted convolutional neural network to extract features from the processed signal, where the first convolution layer operates on the time scale of the motor-imagery EEG signal and the second on the channel scale, so that the extracted features capture the signal's spatiotemporal characteristics. Because the extracted features differ in importance, a feature-weighting module is added to the network to highlight important features and weaken unimportant ones. The features extracted by the model reflect the characteristics of the various motor-imagery EEG signals more effectively, improving recognition accuracy.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
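The two-layer design above (convolution along time first, then across channels) can be illustrated shape-by-shape; the kernel, channel weights, and sizes below are illustrative, not from the patent:

```python
import numpy as np

def temporal_conv(x, k):
    """x: (channels, time). First layer: 1-D convolution along the time axis
    of each EEG channel (valid mode, so the time axis shrinks)."""
    return np.stack([np.convolve(ch, k, mode="valid") for ch in x])

def spatial_conv(x, w):
    """Second layer: collapse the channel axis with weight vector w,
    mixing information across electrodes at each time step."""
    return w @ x   # (time,)

eeg = np.random.default_rng(1).standard_normal((3, 100))  # 3 electrodes, 100 samples
t = temporal_conv(eeg, np.ones(5) / 5)          # time-scale convolution: (3, 96)
s = spatial_conv(t, np.array([0.5, 0.3, 0.2]))  # channel-scale convolution: (96,)
print(t.shape, s.shape)
```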

Pipeline state recognition method based on monitoring data multi-attribute feature fusion

Active · CN110046651A · Achieve effective characterization · The basic probability assignment is accurate · Character and pattern recognition · Pattern recognition · Acquired characteristic
The invention discloses a pipeline state recognition method based on multi-attribute feature fusion of monitoring data. The method comprises the steps of: building a state recognition framework from pipeline state monitoring data, the framework covering the normal, blocked, and leaked state modes; extracting data features of the pipeline operating state for the state modes contained in the framework; selecting features using an evaluation index that combines sensitivity and volatility, and taking the selected features as evidence bodies of the recognition framework; processing the pipeline state data to be recognized to obtain its corresponding evidence bodies and their basic probability assignments; and fusing all evidence about the current state with the Dempster-Shafer (D-S) evidence-theory combination rule to determine which state type in the recognition framework the current state belongs to, thereby identifying and judging the pipeline operating state. The invention provides a pipeline operating-state identification technique with guiding significance, which is valuable for improving the accuracy with which technical personnel judge the pipeline operating state.
Owner:XI AN JIAOTONG UNIV
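The fusion step uses the standard Dempster combination rule over the frame {normal, blocked, leaked}. A self-contained sketch (the basic probability assignments `m1`, `m2` are invented for illustration):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two basic probability assignments, given as dicts
    mapping focal sets (frozensets of states) to mass. Mass falling on empty
    intersections is conflict K and is renormalized away by 1/(1 - K)."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict between evidence bodies")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

N, B, L = "normal", "blocked", "leaked"
m1 = {frozenset({N}): 0.6, frozenset({B}): 0.3, frozenset({N, B, L}): 0.1}
m2 = {frozenset({N}): 0.7, frozenset({L}): 0.2, frozenset({N, B, L}): 0.1}
fused = dempster_combine(m1, m2)
best = max(fused, key=fused.get)   # the state mode the current data belongs to
print(best)
```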

Target signal identification method based on rotor unmanned aerial vehicle-mounted equipment and ground equipment

Active · CN110401479A · Accurate target recognition results · Improve stability · Radio transmission · Return time · Frequency band
The invention relates to a target signal identification method based on rotor-UAV-mounted equipment and ground equipment, belongs to the technical field of communication reconnaissance, and solves the problems that prior-art airborne communication reconnaissance equipment is unsuitable for long-duration reconnaissance and has long signal return times, while ground-based reconnaissance equipment has a short reconnaissance range and long signal-processing times. The method comprises the following steps: the rotor-UAV equipment receives an external real-time target signal, identifies its frequency band, optimizes the target signal according to the identified band, amplifies the optimized signal, and forwards it to the ground equipment; the ground equipment then extracts a feature vector from the forwarded signal, performs a dual correlation operation on the feature vector using the mapping between signals stored in the database and target feature vectors, and thereby obtains the signal type and carrier target, completing target signal identification. The method applies a dual-correlation identification technique and achieves high, reliable target-identification precision.
Owner:36TH RES INST OF CETC
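The abstract does not define the "dual correlation operation", so as a simplified stand-in, the sketch below matches an extracted feature vector against database templates with a single normalized correlation (cosine similarity); the template labels and values are invented for illustration:

```python
import numpy as np

def best_match(feature, templates):
    """Match a feature vector against stored target templates by normalized
    correlation and return the best-scoring label (signal type / carrier)."""
    def ncc(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(templates, key=lambda label: ncc(feature, templates[label]))

# Hypothetical database: label -> stored target feature vector
templates = {
    "FM_voice": np.array([1.0, 0.2, 0.1, 0.0]),
    "FSK_data": np.array([0.1, 1.0, 0.3, 0.2]),
}
sig = np.array([0.9, 0.25, 0.05, 0.0])  # feature of the forwarded signal
print(best_match(sig, templates))
```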

Lp norm-based sample couple-weighting facial feature extraction method

An Lp-norm-based sample-pair-weighted facial feature extraction method belongs to the feature extraction methods of pattern recognition. It includes the following steps: (1) n facial images of size M × N are represented as column vectors Xi of dimensionality d and assembled into a sample matrix; (2) different weighting functions are adopted for same-class and different-class facial sample pairs; (3) a sample-pair-weighted optimization model with Lp-norm constraints is created, and an iterative optimization algorithm yields a locally optimal unit projection vector w; and (4) a greedy algorithm reduces the facial image features from the initial d dimensions to m dimensions, achieving dimensionality reduction and effective feature extraction. The method can flexibly extract features from different types of data sets, is less sensitive to outliers, and better accommodates the complexity of facial images; because sample pairs rather than sample means are weighted, the influence of the sample mean is avoided and the extracted features are more effective. Under occlusion, the method outperforms PCA (principal component analysis) and Lp-PCA-L1 by 2 to 5 percent.
Owner:CHINA UNIV OF MINING & TECH
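The patent's exact weighting functions and optimization model are not given in the abstract; purely as a hypothetical illustration of steps (2)-(3), the sketch below weights same-class pairs with a heat kernel and different-class pairs with its complement, and evaluates the resulting weighted Lp objective for a given projection vector (it does not perform the patent's iterative optimization):

```python
import numpy as np

def pair_weight(xi, xj, same_class, t=1.0):
    """Hypothetical pair weighting: close same-class pairs get large weight;
    close different-class pairs get the complementary (large) penalty weight."""
    w = np.exp(-np.sum((xi - xj) ** 2) / t)
    return w if same_class else 1.0 - w

def lp_pair_scatter(w, X, labels, p=1.0, t=1.0):
    """Weighted Lp scatter of projected pair differences |w . (xi - xj)|^p.
    Note that only pair differences appear, so the sample mean never enters."""
    s = 0.0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            pw = pair_weight(X[i], X[j], labels[i] == labels[j], t)
            s += pw * abs(float(w @ (X[i] - X[j]))) ** p
    return s

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])  # toy "face" features
labels = [0, 0, 1]
w = np.array([1.0, 0.0])  # a candidate unit projection vector
print(lp_pair_scatter(w, X, labels, p=1.0))
```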

Video action recognition method and system of multi-level feature fusion model based on hybrid convolution

Active · CN113128395A · Compensation for dynamic feature extraction capabilities · Efficient integration · Character and pattern recognition · Neural architectures · Visual technology · Engineering
The invention relates to a video action recognition method and system using a multi-level feature fusion model based on hybrid convolution, and belongs to the technical field of computer vision. The method comprises the steps of: constructing a hybrid convolution module from two-dimensional convolution and separable three-dimensional convolution; performing a channel-shift operation on each input feature along the time dimension to build a temporal shift module, which promotes information flow between adjacent frames and compensates for the two-dimensional convolution's weakness in capturing dynamic features; exporting multi-level complementary features from different convolutional layers of the backbone network and applying spatial and temporal modulation to them, so that the features of each level carry consistent semantic information in the spatial dimension and variable visual-tempo cues in the time dimension; constructing bottom-up and top-down feature flows so that the features complement each other, and processing the flows in parallel to achieve multi-level feature fusion; and training the model with a two-stage training strategy.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
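The channel-shift-along-time operation above can be sketched in the usual TSM style (assumed here, since the abstract gives no fractions): a fraction of channels is shifted one frame backward in time, another fraction one frame forward, and the rest left in place, with zero padding at the sequence ends:

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """x: (T, C, H, W). Shift the first C//fold_div channels one frame back in
    time, the next C//fold_div one frame forward; leave the rest unchanged."""
    c = x.shape[1]
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                   # future -> current frame
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # past -> current frame
    out[:, 2 * fold:] = x[:, 2 * fold:]              # untouched channels
    return out

# Tiny example: 2 frames, 8 channels, 1x1 spatial; value at (t, c) is t*8 + c
x = np.arange(2 * 8, dtype=float).reshape(2, 8, 1, 1)
y = temporal_shift(x)
print(y[0, 0, 0, 0], y[1, 1, 0, 0])
```

Zero padding at the boundaries means the first and last frames receive empty shifted slots, which is the usual trade-off of this shift scheme.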

Image processing method and apparatus, facial recognition method and apparatus, and computer device

Active · US11403876B2 · Improve learning effect · Suppressing ineffective or slightly effective features · Image enhancement · Image analysis · Computer equipment · Wears glasses
This application relates to an image processing method and apparatus, a facial recognition method and apparatus, a computer device, and a readable storage medium. The image processing method includes: obtaining a target image comprising an object wearing glasses; inputting the target image to a glasses-removing model comprising a plurality of sequentially connected convolution squeeze and excitation networks; obtaining feature maps of feature channels of the target image through convolution layers of the convolution squeeze and excitation networks; obtaining global information of the feature channels according to the feature maps through squeeze and excitation layers of the convolution squeeze and excitation networks, learning the global information, and generating weights of the feature channels; weighting the feature maps of the feature channels according to the weights through weighting layers of the convolution squeeze and excitation networks, respectively, and generating weighted feature maps; and generating a glasses-removed image corresponding to the target image according to the weighted feature maps through the glasses-removing model. The glasses in the image can be effectively removed using the method.
Owner:TENCENT TECH (SHENZHEN) CO LTD
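The squeeze, excitation, and weighting layers described above follow the standard squeeze-and-excitation pattern: global average pooling squeezes each feature map to a scalar, a bottleneck MLP learns per-channel weights, and the maps are rescaled. A numpy sketch (the weight matrices are illustrative, not the model's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(feat, w1, w2):
    """feat: (C, H, W). Squeeze: global average pooling gathers per-channel
    global information. Excitation: a bottleneck MLP (w1, w2) plus sigmoid
    yields channel weights in (0, 1). Weighting: rescale each feature map."""
    squeeze = feat.mean(axis=(1, 2))                       # (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,)
    return feat * excite[:, None, None]

rng = np.random.default_rng(7)
feat = rng.standard_normal((16, 6, 6))
w1 = rng.standard_normal((4, 16)) * 0.2   # reduction ratio 4 (illustrative)
w2 = rng.standard_normal((16, 4)) * 0.2
out = se_block(feat, w1, w2)
print(out.shape)
```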

Lower limb action recognition method based on pressure and acceleration sensor

The invention discloses a lower-limb movement recognition method based on pressure and acceleration sensors. The method is implemented as follows: first, the pressure sensor signal of the human lower limb is collected in real time and preprocessed; the rising and falling edges of the pressure sensor data mark the start and end of a lower-limb movement. When a rising edge of the pressure is detected, the three-axis acceleration signal begins to be collected and stored; when a falling edge is detected, collection stops, and the three-axis data gathered between the rising and falling edges is called the acceleration signal segment. Frequency-domain and statistical features are then extracted from this segment, after which dimensionality reduction is applied to the extracted features. Finally, a trained classifier classifies the reduced feature data to obtain the movement-pattern classification result.
Owner:SOUTH CHINA UNIV OF TECH +1
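The edge-marked segmentation above can be sketched with simple threshold crossings of the pressure signal (the threshold and toy signals are illustrative; the patent does not specify how edges are detected):

```python
import numpy as np

def segment_by_pressure(pressure, accel, thresh):
    """Slice the three-axis acceleration stream (T, 3) into segments bounded
    by rising/falling threshold crossings of the pressure signal (T,)."""
    above = pressure > thresh
    rising = np.flatnonzero(~above[:-1] & above[1:]) + 1   # movement starts
    falling = np.flatnonzero(above[:-1] & ~above[1:]) + 1  # movement ends
    return [accel[r:f] for r, f in zip(rising, falling)]

pressure = np.array([0, 0, 5, 5, 5, 0, 0, 5, 5, 0], dtype=float)
accel = np.arange(30, dtype=float).reshape(10, 3)  # 10 three-axis samples
segs = segment_by_pressure(pressure, accel, thresh=1.0)
print(len(segs), segs[0].shape)
```

Feature extraction would then operate on each returned segment, e.g. per-axis means, variances, and FFT-band energies.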