87 results about "Discriminating" patented technology

Track and convolutional neural network feature extraction-based behavior identification method

The invention discloses a track and convolutional neural network feature extraction-based behavior identification method, which mainly addresses the computational redundancy and low classification accuracy caused by complex human-behavior video content and sparse features. The method comprises the steps of: inputting image video data; down-sampling pixel points in a video frame; deleting uniform-region sampling points; extracting tracks; extracting convolutional-layer features with a convolutional neural network; combining the tracks and the convolutional-layer features to extract track-constrained convolutional features; extracting stacked local Fisher vector features from the track-constrained convolutional features; compressing and transforming the stacked local Fisher vector features; training a support vector machine model on the final stacked local Fisher vector features; and performing human behavior identification and classification. By combining multilevel Fisher vectors with convolutional track feature descriptors, the method obtains relatively high and stable classification accuracy, and it can be widely applied in fields such as human-computer interaction, virtual reality, and video surveillance.
Owner:XIDIAN UNIV
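As a rough illustration of the track-constrained convolutional feature step described above, the sketch below pools conv-layer activations along point tracks. This is a hedged reconstruction, not the patent's code: the array shapes, the `stride` value, and the function name `track_pooled_features` are all assumptions.

```python
import numpy as np

def track_pooled_features(conv_maps, tracks, stride=8):
    """Pool convolutional features along point tracks (illustrative sketch).

    conv_maps: (T, H, W, C) conv-layer activations for T video frames.
    tracks:    list of (T, 2) arrays of (x, y) pixel coordinates per track.
    stride:    spatial stride of the conv layer relative to input pixels.
    Returns one concatenated descriptor per track, shape (n_tracks, T * C).
    """
    descriptors = []
    for track in tracks:
        feats = []
        for t, (x, y) in enumerate(track):
            # Map pixel coordinates onto the coarser feature-map grid.
            i = min(int(y) // stride, conv_maps.shape[1] - 1)
            j = min(int(x) // stride, conv_maps.shape[2] - 1)
            feats.append(conv_maps[t, i, j])
        descriptors.append(np.concatenate(feats))
    return np.stack(descriptors)
```

Descriptors like these would then feed the Fisher vector encoding and SVM stages the abstract lists.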

A behavior recognition method based on a deeply supervised convolutional neural network with training feature fusion

The invention provides a behavior recognition method based on a deeply supervised convolutional neural network with training feature fusion, belonging to the field of artificial intelligence and computer vision. The method extracts multi-layer convolutional features of a target video and designs a local evolutionary pooling layer that maps the video's convolutional features to a vector containing temporal information, thereby extracting local evolutionary descriptors of the target video. Using the VLAD coding method, multiple local evolutionary descriptors are encoded into a meta-action-based video-level representation. Exploiting the complementarity of information among the multiple levels of the convolutional network, the final classification result is obtained by integrating the results from those levels. The invention makes full use of temporal information to construct the video-level representation and effectively improves the accuracy of video behavior recognition. At the same time, integrating the multi-level prediction results improves the discriminability of the network's middle layers and thus the performance of the whole network.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
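The VLAD coding step named in this abstract is a published technique; a minimal sketch follows, using scikit-learn's KMeans as the codebook. The vocabulary size and the power/L2 normalization are conventional choices, not details taken from the patent.

```python
import numpy as np
from sklearn.cluster import KMeans

def vlad_encode(descriptors, kmeans):
    """Encode a set of local descriptors (N, D) into one VLAD vector."""
    centers = kmeans.cluster_centers_
    assign = kmeans.predict(descriptors)
    v = np.zeros_like(centers)
    for k in range(centers.shape[0]):
        members = descriptors[assign == k]
        if len(members):
            v[k] = (members - centers[k]).sum(axis=0)  # residuals to centroid
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))                # power normalization
    return v / (np.linalg.norm(v) + 1e-12)             # L2 normalization

# Usage: fit the codebook on training descriptors, then encode each video.
# kmeans = KMeans(n_clusters=64, n_init=10).fit(train_descriptors)
# video_repr = vlad_encode(video_descriptors, kmeans)
```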

Video human action recognition method based on sparse subspace clustering

The invention belongs to computer vision pattern recognition and video image processing methods. The method comprises the steps of: establishing a three-dimensional spatio-temporal sub-frame cube in a video human action recognition model; establishing a human action feature space; performing clustering; updating labels; extracting the three-dimensional spatio-temporal sub-frame cube of the video human action recognition model from surveillance video; extracting human action features; determining the category of the human sub-action in each video; and classifying and merging videos carrying sub-category labels. The method improves the best recognition accuracy on the international Hollywood2 human action database by 16.5%. It can thus automatically extract human action features with higher discriminating ability, adaptability, universality, and invariance; it reduces the overfitting phenomenon and the gradient diffusion problem in neural networks and effectively improves the accuracy of human action recognition in complex environments. It can be widely applied to on-site video surveillance and video content retrieval.
Owner:UNIV OF ELECTRONICS SCI & TECH OF CHINA
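Sparse subspace clustering itself is a published algorithm; the sketch below shows its usual form (Lasso self-representation followed by spectral clustering on the symmetrized coefficients), offered only as context for the abstract, not as the patent's exact procedure.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_cluster(X, n_clusters, alpha=0.01):
    """X: (n_samples, n_features). Each sample is expressed as a sparse
    linear combination of the others; the coefficients define an affinity."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # Columns of the design matrix are the other samples.
        model = Lasso(alpha=alpha, max_iter=5000)
        model.fit(X[others].T, X[i])
        C[i, others] = model.coef_
    W = np.abs(C) + np.abs(C).T  # symmetric, non-negative affinity matrix
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)
```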

Multi-granularity cross-modal feature fusion pedestrian re-identification method and re-identification system

Active · CN110598654A · Benefits: improved network recognition ability, guaranteed robustness · Topics: character and pattern recognition; re-identification; IR image
The invention discloses a multi-granularity cross-modal feature fusion pedestrian re-identification method and a re-identification system. The pedestrian re-identification method comprises the steps of: 1, constructing a training sample set; 2, constructing a fine-grained feature extraction network and a coarse-grained feature extraction network; 3, training the fine-grained and coarse-grained feature extraction networks on the training sample set to obtain trained networks; and 4, inputting a to-be-identified IR image into both the fine-grained and the coarse-grained feature extraction networks, extracting fine-grained and coarse-grained features of the to-be-identified image, fusing the extracted features to obtain a fused feature Ftest, obtaining the probability that the pedestrian in the to-be-identified image belongs to each category, and selecting the pedestrian category with the maximum probability value as the identification result. By combining fine-grained features of small image regions with global coarse-grained features, the method obtains more discriminative fusion features for pedestrian classification and recognition.
Owner:HEFEI UNIV OF TECH
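A minimal PyTorch sketch of the final fuse-and-classify step this abstract describes; the branch dimensions, hidden size, and identity count are placeholders, and the fine- and coarse-grained extractor networks are assumed to exist upstream.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Fuse fine- and coarse-grained features, then classify identities."""
    def __init__(self, fine_dim=512, coarse_dim=2048, n_ids=500):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(fine_dim + coarse_dim, 1024),
            nn.BatchNorm1d(1024),
            nn.ReLU(inplace=True))
        self.classifier = nn.Linear(1024, n_ids)

    def forward(self, f_fine, f_coarse):
        f_test = self.fuse(torch.cat([f_fine, f_coarse], dim=1))  # fused feature
        return self.classifier(f_test)  # per-identity logits

# The identification result is the maximum-probability category:
# pred_id = head(f_fine, f_coarse).softmax(dim=1).argmax(dim=1)
```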

Pedestrian re-identification model training method and device and pedestrian re-identification method and device

The invention discloses a pedestrian re-identification model training method and device and a pedestrian re-identification method and device. The model training method comprises the steps of: performing feature extraction on a pedestrian image through the convolutional network of a pedestrian re-identification model to obtain the original features of the pedestrian image; processing the original features with the model's attention module to obtain a plurality of local pedestrian features; determining a similarity matrix among the local pedestrian features with the model's graph neural network and adjusting the local features according to the similarity matrix; and determining a pedestrian recognition result and a training loss for the model based on the adjusted local features, then optimizing the model parameters according to the training loss. The method automatically extracts the important local pedestrian features in an image without introducing extra annotation information, so the final local features have higher discrimination capability and the model's recognition performance is improved.
Owner:BEIJING SANKUAI ONLINE TECH CO LTD
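The similarity-matrix adjustment over local features could look roughly like the sketch below: cosine similarity between parts, row-normalized into an adjacency, then one residual propagation step. The patent's actual graph neural network is certainly more elaborate; this only illustrates the mechanism.

```python
import torch
import torch.nn.functional as F

def refine_local_features(parts):
    """parts: (B, P, D) local pedestrian features from an attention module.
    Returns features adjusted by similarity-weighted message passing."""
    normed = F.normalize(parts, dim=-1)
    sim = torch.einsum('bpd,bqd->bpq', normed, normed)  # cosine similarity
    adj = F.softmax(sim, dim=-1)                        # row-normalized graph
    return parts + torch.bmm(adj, parts)                # residual update
```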

Multi-domain fusion micro-expression detection method based on motion unit

The invention relates to a multi-domain fusion micro-expression detection method based on motion units. The method comprises the steps of: (1) preprocessing a micro-expression video: obtaining a video frame sequence, performing face detection and positioning, and performing face alignment; (2) performing motion unit detection on the video frame sequence to obtain its motion unit information; (3) according to the motion unit information, finding the facial motion unit sub-block containing the maximum micro-expression motion unit information amount (ME) as the micro-expression detection area through a semi-decision algorithm, and meanwhile extracting several peak frames of the ME signal as reference climax frames for micro-expression detection by setting a dynamic threshold; and (4) realizing micro-expression detection through a multi-domain fusion detection method. The method reduces the influence of redundant information on micro-expression detection, reduces the amount of calculation, and gives the detection a higher comprehensive discrimination capability, with fast calculation and high detection precision.
Owner:SHANDONG UNIV
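Step (3)'s dynamic-threshold selection of peak frames might be realized as in the sketch below, where the threshold adapts to the per-video statistics of the motion-unit information signal. The mean-plus-k-sigma rule and the local-maximum test are assumptions for illustration.

```python
import numpy as np

def reference_climax_frames(me_signal, k=1.0):
    """me_signal: 1-D array, per-frame micro-expression motion unit
    information amount (ME). Returns indices of local maxima exceeding
    a dynamic threshold, plus the threshold itself."""
    me_signal = np.asarray(me_signal, dtype=float)
    thr = me_signal.mean() + k * me_signal.std()  # dynamic threshold
    peaks = [t for t in range(1, len(me_signal) - 1)
             if me_signal[t] > thr
             and me_signal[t] >= me_signal[t - 1]
             and me_signal[t] >= me_signal[t + 1]]
    return peaks, thr
```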

Optical flow gradient amplitude characteristic-based subtle facial expression detection method

The invention discloses an optical flow gradient amplitude characteristic-based subtle facial expression detection method and relates to processing for identifying a graphic recording carrier. In the method, face edges are obtained by fitting face key points and a facial region of interest is extracted; the optical flow field between face image frames in a video sequence is extracted with a FlowNet2 network; the optical flow gradient amplitude characteristics of the facial region of interest are extracted; characteristic distances are calculated and processed and noise is eliminated; subtle facial expression detection based on the optical flow gradient amplitude characteristics is thereby completed. In prior-art subtle facial expression detection, the extracted facial motion features fail to capture subtle expression motion and contain excessive interference information, so the detection is susceptible to head deviation, blinking, accumulated noise, and single-frame noise during characteristic-distance analysis. The present method eliminates these defects.
Owner:HEBEI UNIV OF TECH
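The optical flow gradient amplitude feature can be sketched as below. OpenCV's Farneback flow stands in for the FlowNet2 network the patent uses, purely to keep the example self-contained.

```python
import cv2
import numpy as np

def flow_gradient_amplitude(prev_gray, next_gray):
    """Per-pixel gradient amplitude of a dense optical flow field."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    grads = []
    for c in range(2):  # x and y flow components
        gx = cv2.Sobel(flow[..., c], cv2.CV_64F, 1, 0)
        gy = cv2.Sobel(flow[..., c], cv2.CV_64F, 0, 1)
        grads.append(gx**2 + gy**2)
    return np.sqrt(grads[0] + grads[1])  # gradient amplitude map
```

Characteristic distances between such maps for successive frames would then be analyzed, with denoising, to flag subtle expressions.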

Driver road rage identification method based on deep fusion of facial expressions and voice

The invention discloses a driver road rage identification method based on deep fusion of facial expressions and voice. The method comprises the following steps: extracting facial image frames and voice information from facial video of the driver; preprocessing the facial image frames and feeding them into a multilayer convolutional neural network to obtain facial expression features; extracting Mel-frequency cepstral coefficients and their first- and second-order delta coefficients from the voice information as initial features, then concatenating the initial features of two voice segments and feeding them into a fully connected network to obtain discriminative voice frame features corresponding to the facial expression frames; performing low-rank bilinear pooling fusion on the facial expression frame features and the voice frame features to obtain fusion features; and performing decision fusion on the facial expression features, the voice features, and the fusion features to obtain the final road rage recognition result. Even in a complex driving environment, the method still outputs driver anger identification results with high precision, enabling effective safe-driving early warning.
Owner:LANHAI FUJIAN INFORMATION TECH CO LTD
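Low-rank bilinear pooling is a known fusion technique; the sketch below shows its common factorized form (Hadamard product of two projected features). All dimensions are placeholders, and the upstream expression and voice encoders are assumed.

```python
import torch
import torch.nn as nn

class LowRankBilinearFusion(nn.Module):
    """Fuse a facial-expression frame feature with a voice frame feature."""
    def __init__(self, d_face=512, d_voice=128, rank=256, d_out=512):
        super().__init__()
        self.U = nn.Linear(d_face, rank, bias=False)
        self.V = nn.Linear(d_voice, rank, bias=False)
        self.P = nn.Linear(rank, d_out, bias=False)

    def forward(self, face_feat, voice_feat):
        # Element-wise product of projections approximates a full bilinear map.
        z = self.U(face_feat) * self.V(voice_feat)
        return self.P(torch.tanh(z))
```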

Point cloud classification method based on multi-level aggregation feature extraction and fusion

The invention provides a point cloud classification method based on multi-level aggregation feature extraction and fusion. The method comprises the following steps: (1) constructing a multi-level point set; (2) point set feature extraction based on LLC-LDA; (3) point set feature extraction based on multi-scale maximum pooling (LLC-MP); and (4) point cloud classification based on multi-level point set feature fusion. The invention provides a multi-level point set aggregation feature extraction and fusion method based on multi-scale maximum pooling and LDA, and realizes point cloud classification with the fused aggregation features. The algorithm performs multi-level clustering to adaptively acquire a multi-level, multi-scale target point set; single-point features of the point cloud are expressed and extracted with locality-constrained linear coding (LLC); a scale pyramid is constructed from point coordinates, and features representing the local distribution of a point set are built with maximum pooling; these features are then fused with the LLC-LDA model to extract global features of the point set; finally, point cloud classification is realized using the fused multi-level aggregation features of the point set.
Owner:NANJING FORESTRY UNIV
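Step (3)'s multi-scale maximum pooling over a point set might look like the sketch below; the centroid-based neighbourhoods and radii are assumptions standing in for the patent's scale pyramid.

```python
import numpy as np

def multiscale_max_pool(points, feats, radii=(0.5, 1.0, 2.0)):
    """points: (N, 3) coordinates; feats: (N, D) single-point features,
    e.g. LLC codes. Max-pool features within growing radii of the set
    centroid and concatenate, capturing local point-set distribution."""
    dist = np.linalg.norm(points - points.mean(axis=0), axis=1)
    pooled = []
    for r in radii:
        mask = dist <= r
        pooled.append(feats[mask].max(axis=0) if mask.any()
                      else np.zeros(feats.shape[1]))
    return np.concatenate(pooled)  # shape (len(radii) * D,)
```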

A pedestrian re-identification method and system

Pending · CN109886242A · Benefits: enhanced regional features, enhances the more prominent regional features in the input pedestrian image · Topics: character and pattern recognition; neural architectures; data set; feature extraction
The invention provides a pedestrian re-identification method and system. The method comprises the steps of: obtaining a plurality of pedestrian images and building a sample data set; training on the sample data set to build a pedestrian feature extraction network; preprocessing a pedestrian image with the pedestrian feature extraction network to obtain a first feature map with multiple feature channels, partitioning the first feature map to obtain a second feature map, and extracting features from the second feature map to obtain standard feature parameters; obtaining a to-be-detected pedestrian image; extracting its features to obtain the feature parameters of the to-be-detected pedestrian image; calculating the similarity between the feature parameters of the to-be-detected image and the standard feature parameters to obtain similarity parameters; and completing pedestrian re-identification according to the similarity parameters. The features extracted by this weighted partition scheme are more discriminative, which improves the discrimination capability of the depth model.
Owner:CHONGQING INST OF GREEN & INTELLIGENT TECH CHINESE ACADEMY OF SCI
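One common way to realize partitioned feature extraction and similarity ranking is sketched below; the stripe count and plain average pooling are placeholders rather than the patent's weighted partition scheme.

```python
import torch
import torch.nn.functional as F

def stripe_features(feat_map, n_parts=6):
    """Partition a (B, C, H, W) feature map into horizontal stripes,
    pool each stripe, and return one L2-normalized vector per image."""
    stripes = feat_map.chunk(n_parts, dim=2)            # split along height
    parts = [s.mean(dim=(2, 3)) for s in stripes]       # (B, C) per stripe
    return F.normalize(torch.cat(parts, dim=1), dim=1)  # (B, n_parts * C)

def rank_gallery(query_vec, gallery_mat):
    """Cosine similarity of one query against the standard (gallery)
    feature parameters; returns indices sorted from best to worst match."""
    sims = gallery_mat @ query_vec
    return torch.argsort(sims, descending=True)
```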

Adversarial network texture surface defect detection method based on abnormal feature editing

The invention belongs to the technical field of image processing and particularly discloses an adversarial network texture surface defect detection method based on abnormal feature editing. The method comprises the following steps: acquiring defect-free good-product images and corresponding defect images to jointly form an image data set; constructing an adversarial network comprising a generator and a discriminator, wherein the generator extracts image features, detects abnormal features, and then edits the abnormal features with normal features to obtain a reconstructed image, and the discriminator discriminates between good-product images and reconstructed images; training the adversarial network on the image data set according to a pre-constructed optimization target to obtain a reconstructed-image generation model; and inputting an image to be detected into the model to obtain its reconstructed image, then locating the texture surface defects from the image to be detected and its corresponding reconstructed image. The method achieves high detection precision for defects of different shapes, sizes, and contrasts on different textured surfaces.
Owner:HUAZHONG UNIV OF SCI & TECH
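The last step, recovering the defect from the input and its reconstruction, reduces to residual analysis; a minimal sketch, with the normalization and threshold as assumptions:

```python
import numpy as np

def defect_mask(image, reconstruction, thresh=0.1):
    """Binary defect mask from the residual between an inspected image
    and the reconstruction produced by the trained generator."""
    residual = np.abs(image.astype(np.float32)
                      - reconstruction.astype(np.float32))
    score = residual.mean(axis=-1)         # per-pixel anomaly score
    score /= (score.max() + 1e-12)         # normalize to [0, 1]
    return score > thresh
```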