395 results about "How to reduce redundant information" patented technology

Micro-expression detection method based on the combination of optical flow feature vector modulus and angle in a region of interest

Active · CN107358206A · Improve the efficiency of micro-expression detection · Easy to identify · Character and pattern recognition · Optical flow · Vector mode
The invention relates to a micro-expression detection method based on the combination of the optical flow feature vector modulus and angle within a region of interest. The method first pre-processes a micro-expression video to obtain a micro-expression sequence, then extracts key facial feature points and, according to the motion characteristics of the FACS motion units of different expressions, locates the facial region of interest with the best detection effect; optical flow features are then extracted from this region. Optical flow vector angle information is introduced: the optical flow vector modulus and angle are computed, and their combination yields a more comprehensive and more discriminative feature for detecting micro-expression segments. The detection threshold is determined from the magnitude of the optical flow modulus, and a numerical-and-graphical combination method is used to obtain the micro-expression segment vividly and intuitively. The method substantially improves micro-expression detection efficiency; because optical flow feature vectors are extracted only in key facial regions, computational complexity is reduced, processing time is shortened, and robustness is high.
Owner:WUHAN MELIT COMM
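
As a rough illustration of the modulus-and-angle step, the sketch below computes dense optical flow over a face region of interest with OpenCV and flags frames whose flow modulus exceeds a data-driven threshold. The Farneback estimator, the dominant-direction rule used to combine modulus with angle, and all parameter values are assumptions for illustration, not the patent's own choices.

```python
import cv2
import numpy as np

def flow_magnitude_angle(prev_roi, curr_roi):
    """Dense optical flow between two grayscale ROI frames, as (modulus, angle)."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_roi, curr_roi, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return mag, ang

def frame_score(prev_roi, curr_roi):
    """Combine modulus and angle: average the modulus of pixels moving close to the
    dominant direction (a simple stand-in for the patent's combination rule)."""
    mag, ang = flow_magnitude_angle(prev_roi, curr_roi)
    hist, edges = np.histogram(ang, bins=36, range=(0, 2 * np.pi), weights=mag)
    lo, hi = edges[hist.argmax()], edges[hist.argmax() + 1]
    mask = (ang >= lo) & (ang < hi)
    return float(mag[mask].mean()) if mask.any() else 0.0

def detect_micro_expression(roi_frames, ratio=3.0):
    """Flag frame indices whose score exceeds a threshold set from the modulus statistics."""
    scores = np.array([frame_score(p, c) for p, c in zip(roi_frames[:-1], roi_frames[1:])])
    threshold = ratio * np.median(scores)
    return np.where(scores > threshold)[0]
```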

Self-adaptive high-resolution near-to-eye optical field display device and method on basis of eye tracking

The invention discloses an adaptive high-resolution near-to-eye optical field display device based on eye tracking. The device comprises a beam splitter, a spatial light modulator array and backlight illumination equipment arranged in sequence along the line of sight of the human eye. The beam splitter is used for acquiring eye pupil position information, the spatial light modulator array modulates the transmittance of the polarized light entering the eye, and the backlight equipment provides uniform-brightness backlight for the spatial light modulator array. The invention also discloses an adaptive high-resolution near-to-eye optical field display method based on eye tracking. In this method, a weighting function is set for each viewpoint and a multi-viewpoint optical field global optimization is applied, so that redundant information in the peripheral field of view is reduced and the perceived resolution of the three-dimensional display is improved. At the same time, an eye-detection device acquires the pupil position in real time, and the optical field density is resampled and redistributed according to visual characteristics using a single-viewpoint optical field local optimization; the computational burden of the optimization is thereby greatly reduced, and adaptive high-resolution real-time three-dimensional display is realized.
Owner:ZHEJIANG UNIV
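
A minimal sketch of the per-viewpoint weighting idea, assuming the weights fall off with distance from the tracked pupil position so that peripheral (more redundant) viewpoints contribute less to the global objective. The Gaussian falloff, the error metric and all names are illustrative assumptions rather than the patented optimization.

```python
import numpy as np

def viewpoint_weights(viewpoint_positions, pupil_position, sigma=1.5):
    """Weight each viewpoint by its distance to the tracked pupil position."""
    d = np.linalg.norm(viewpoint_positions - pupil_position, axis=1)
    return np.exp(-(d ** 2) / (2.0 * sigma ** 2))

def weighted_view_error(rendered_views, target_views, weights):
    """Weighted sum of per-viewpoint reconstruction errors; low-weight peripheral
    views tolerate coarser reconstruction, which is how redundant edge-field
    information is reduced in this sketch."""
    per_view = ((rendered_views - target_views) ** 2).reshape(len(weights), -1).mean(axis=1)
    return float(np.dot(weights, per_view))
```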

Deep learning face recognition system and method based on self-attention mechanism

The invention discloses a deep learning face recognition system and method based on a self-attention mechanism, belonging to the fields of computer vision and pattern recognition. A channel self-attention module is constructed: the three-dimensional feature map is reshaped and transposed, a cross-correlation matrix between channels is learned to represent the relative relationships between different channels, channel-optimized features are computed from the original features, and different weights are assigned to different channels, realizing channel-wise filtering and reducing redundant information in the feature channels. A spatial self-attention module is also constructed: the spatial information of the three-dimensional feature map is modeled, a cross-correlation matrix between the spatial positions of the feature map is learned to represent the relative relationships between different positions, spatially optimized features are computed together with the input features, and different weights are assigned to different positions of the face feature map, realizing the selection of important facial feature regions and concentrating the features on the important areas of the face.
Owner:HUAZHONG UNIV OF SCI & TECH
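
The channel self-attention branch can be sketched in PyTorch as below: the spatial dimensions are flattened, a channel-by-channel correlation matrix is computed from the features themselves, and the original channels are reweighted through it. The residual gamma parameter and the layer layout are assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))   # learnable residual weight

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                  # dimension conversion / transposition
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)  # (B, C, C) channel correlation
        out = (attn @ flat).view(b, c, h, w)        # channel-optimized features
        return self.gamma * out + x                 # reweight channels, keep original signal

# usage: y = ChannelSelfAttention()(torch.randn(2, 64, 28, 28))
```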

Video keyframe extraction method

The invention discloses a video keyframe extraction method comprising the steps of: performing moving-object detection on the acquired original video sequence with a ViBe algorithm fused with an inter-frame difference method, to obtain a key video sequence containing the moving object; performing coarse keyframe extraction on the key video sequence using the global peak signal-to-noise ratio (PSNR) to obtain a candidate keyframe sequence; establishing the global similarity of the candidate keyframes with the PSNR and their local similarity with SURF feature points, fusing the two by weighting to obtain a comprehensive similarity, performing adaptive keyframe extraction on the candidate keyframe sequence with this comprehensive similarity, and finally obtaining the target keyframe sequence. The method can effectively extract video keyframes, markedly reduce the redundant information in video data, and express the main content of the video concisely. Moreover, its algorithmic complexity is low, making it suitable for real-time keyframe extraction from surveillance videos.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
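
The similarity-fusion step might look like the following sketch: PSNR provides the global similarity, the SURF match ratio provides the local similarity, and a weighted sum decides whether a candidate frame is redundant. The fusion weight, threshold and PSNR normalization are illustrative assumptions, and SURF requires the opencv-contrib-python build.

```python
import cv2

def global_similarity(f1, f2):
    """PSNR between two frames, squashed to [0, 1] (hypothetical normalization)."""
    return min(cv2.PSNR(f1, f2) / 50.0, 1.0)

def local_similarity(f1, f2):
    """Fraction of SURF keypoints in f1 that have a good match in f2."""
    surf = cv2.xfeatures2d.SURF_create(400)
    k1, d1 = surf.detectAndCompute(f1, None)
    k2, d2 = surf.detectAndCompute(f2, None)
    if d1 is None or d2 is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    return len(good) / max(len(k1), 1)

def is_redundant(f1, f2, w_global=0.5, threshold=0.7):
    """Weighted fusion of global and local similarity; above the threshold the frame is dropped."""
    s = w_global * global_similarity(f1, f2) + (1 - w_global) * local_similarity(f1, f2)
    return s > threshold
```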

Text translation method, device, storage medium and computer device

The invention relates to a text translation method, device, readable storage medium and computer device, comprising the steps of: obtaining an initial source text and a reconstructed source text, the reconstructed source text being the source text supplemented with missing-word position information; semantically encoding the initial source text to obtain a source-end vector sequence; sequentially decoding the source-end vector sequence to obtain target-end vectors, each target-end vector being decoded according to the word vectors of the candidate target words determined before the current decoding step, and the candidate target word of the current step being determined from the current target-end vector; forming a target-end vector sequence from the sequentially decoded target-end vectors; performing reconstruction evaluation on the source-end and target-end vector sequences against the reconstructed source text to obtain a reconstruction score for each candidate target word; and generating the target text according to the reconstruction scores and the candidate target words. The proposed scheme can improve translation quality.
Owner:TENCENT TECH (SHENZHEN) CO LTD
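
A hedged sketch of how a reconstruction score could be combined with the decoder's own score when choosing among candidate target words. The reconstructor interface and the interpolation weight are hypothetical and only illustrate the scoring idea, not the patent's exact formulation.

```python
def rescore_candidates(candidates, reconstructor, source_vectors, target_vectors, lam=0.7):
    """candidates: list of (word, decoder_log_prob).
    reconstructor(word, src, tgt) -> log-probability of rebuilding the
    position-annotated source text (hypothetical callable).
    Returns the candidate with the best interpolated score."""
    rescored = []
    for word, decoder_score in candidates:
        recon_score = reconstructor(word, source_vectors, target_vectors)
        rescored.append((word, lam * decoder_score + (1.0 - lam) * recon_score))
    return max(rescored, key=lambda t: t[1])
```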

Unmanned aerial vehicle landing landform image classification method based on DCT-CNN model

The invention discloses an unmanned aerial vehicle landing landform image classification method based on a DCT-CNN model. The method comprises the following steps: acquiring a training image set and a test image set of UAV landing landform images; applying the DCT to the images and screening the DCT coefficients; constructing a DCT-CNN network model tailored to the complex scenes and rich information of UAV landing landform images; feeding the DCT coefficients of the training set into the improved DCT-CNN model for training, updating the network parameters until the loss function converges to a small value, then ending the training; taking the training image feature set as training samples to train an SVM classifier; and inputting the test set, using the trained model to learn the test images layer by layer, and finally feeding the obtained feature vectors into the trained SVM classifier for classification to obtain the classification result. The invention reduces data redundancy, greatly shortens training time, and effectively increases the classification accuracy of UAV landing landform images.
Owner:BEIJING UNIV OF TECH
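
The DCT screening step might be sketched as follows, assuming that keeping the low-frequency top-left block of the 2-D DCT is the intended screening rule; the block size is an illustrative parameter, not the patent's value.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(image):
    """2-D type-II DCT of a grayscale image."""
    return dct(dct(image, axis=0, norm='ortho'), axis=1, norm='ortho')

def screen_dct_coefficients(image, keep=32):
    """Keep only the top-left keep x keep low-frequency coefficients,
    discarding high-frequency (largely redundant) detail."""
    coeffs = dct2(image.astype(np.float64))
    return coeffs[:keep, :keep]

# usage: features = screen_dct_coefficients(landform_patch, keep=32)
```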

One-dimensional range profile optimal orthogonal nonlinear subspace identification method for radar targets

The invention belongs to the technical field of radar target identification and provides an optimal orthogonal nonlinear subspace identification method for one-dimensional range profiles of radar targets. A nonlinear transformation is applied to the one-dimensional range profile of each target category, mapping it into a high-dimensional linear feature space; an optimal orthogonal nonlinear transformation matrix is established in that space, features are extracted, and the nearest-neighbor rule is used for classification to finally determine the category of the input target. The method comprises the steps of: using a kernel function and the one-dimensional range profile training vectors of the radar targets to determine the matrices Ui, Vrj, (K)ij, W alpha and B alpha; determining the vectors alpha i (i = 1, 2, ..., n) of the optimal orthogonal nonlinear subspace and its transformation matrix A = [alpha 1, ..., alpha n]; determining the base template vector of each target; determining the optimal orthogonal nonlinear projection vector of the one-dimensional range profile xt of the input target; and determining the Euclidean distance between this projection vector and each target's base template vector to decide the category of the input one-dimensional range profile. The method can effectively improve target identification performance.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
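
A sketch of the classification stage only, assuming the transformation matrix A (columns alpha 1 .. alpha n) and the per-class base template vectors were already obtained during training; an RBF kernel stands in for whichever kernel function the patent uses, and all names are illustrative.

```python
import numpy as np

def rbf_kernel(x, Y, gamma=0.1):
    """k(x, y_i) against every training range profile y_i (rows of Y)."""
    return np.exp(-gamma * np.sum((Y - x) ** 2, axis=1))

def project(x, train_profiles, A, gamma=0.1):
    """Subspace projection of a range profile: A^T k(x, .)."""
    return A.T @ rbf_kernel(x, train_profiles, gamma)

def classify(x, train_profiles, A, class_templates, gamma=0.1):
    """Nearest-neighbor decision by Euclidean distance to each class's base template."""
    p = project(x, train_profiles, A, gamma)
    dists = {label: np.linalg.norm(p - t) for label, t in class_templates.items()}
    return min(dists, key=dists.get)
```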

Indoor positioning method based on WiFi

The invention relates to a WiFi-based indoor positioning method comprising the following steps: in the offline stage, acquiring the fingerprint vectors of N reference-point positions in the indoor positioning area and storing the fingerprint information of the N reference points in a fingerprint database DB; in the online stage, performing coarse positioning, namely determining the target floor; using the K-means algorithm to perform cluster analysis on the sub-fingerprint databases DBjk of the corresponding floors and further dividing the positioning sub-regions; in the real-time positioning stage, first performing AP selection, then using a KNN classification algorithm to determine the sub-region where the target is located, and finally finding the K nearest neighbors and estimating the target position (x, y) by a weighted average. For large-scale indoor positioning scenes, the method retains the strength information of all APs; for indoor floor positioning, an SVM classifier is used with an encoder added to the classifier model, the data dimension is reduced by the encoder, redundant information and noise interference are effectively reduced, and the classification precision is improved.
Owner:HEFEI UNIV OF TECH
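
The final real-time step can be sketched as follows: within the sub-region chosen by the classifier, find the K nearest fingerprints in signal space and average their coordinates with inverse-distance weights. The weighting scheme and variable names are assumptions for illustration.

```python
import numpy as np

def estimate_position(rssi, sub_fingerprints, sub_coords, k=4, eps=1e-6):
    """rssi: measured AP strength vector; sub_fingerprints: (N, n_ap) reference
    vectors in the selected sub-region; sub_coords: (N, 2) reference-point positions."""
    d = np.linalg.norm(sub_fingerprints - rssi, axis=1)   # signal-space distances
    idx = np.argsort(d)[:k]                               # K nearest neighbors
    w = 1.0 / (d[idx] + eps)                              # closer fingerprints weigh more
    w /= w.sum()
    return w @ sub_coords[idx]                            # weighted-average (x, y)
```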

Improved self-adaptive sparse sampling fault classification method

An improved self-adaptive sparse sampling fault classification method belongs to the technical field of fault diagnosis. The traditional sparse classification method is improved as follows. First, the wavelet modulus maximum and kurtosis methods are used to perform feature enhancement on the signals, and, on the premise that signal sparsity is guaranteed, an identity matrix is adopted in place of a redundant dictionary. Second, a Gaussian random measurement matrix is used to reduce the dimensionality of the data, thereby removing redundant information from the signal while retaining a small amount of effective data. Then the sparse coefficients are solved with the sparsity adaptive matching pursuit (SAMP) algorithm and the compressed signal is reconstructed; finally, the cross-correlation coefficient is adopted as the criterion for judging the fault category, yielding the improved adaptive sparse sampling fault classification method. Experimental verification shows that the redundant information in the signals is effectively reduced, the influence of time-shift deviation on fault-type judgment is avoided, the computational complexity is reduced, and both calculation speed and reconstruction accuracy are improved.
Owner:BEIJING UNIV OF CHEM TECH
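
A sketch of the dimension-reduction and recovery steps, using a Gaussian random measurement matrix and scikit-learn's orthogonal matching pursuit as a stand-in for the SAMP solver described above; the measurement ratio, sparsity level and cross-correlation decision rule shown here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def compress(signal, m):
    """Project the (sparsity-enhanced) signal onto m Gaussian random measurements."""
    n = signal.size
    phi = np.random.randn(m, n) / np.sqrt(m)   # Gaussian random measurement matrix
    return phi, phi @ signal

def reconstruct(phi, measurements, n_nonzero=20):
    """Recover the sparse signal from the compressed measurements (OMP in place of SAMP)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(phi, measurements)                 # identity dictionary: coefficients are the signal
    return omp.coef_

def classify_fault(recovered, class_templates):
    """Pick the fault class whose template is most cross-correlated with the recovery."""
    def xcorr(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(class_templates, key=lambda c: xcorr(recovered, class_templates[c]))
```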