
62 results about "Increase the distance between classes" patented technology

Method for extracting characteristic of natural image based on dispersion-constrained non-negative sparse coding

The invention discloses a method for extracting features of a natural image based on dispersion-constrained non-negative sparse coding (DCB-NNSC), which comprises the following steps: partitioning the image into blocks, reducing dimensionality by means of 2D-PCA, applying non-negative processing to the image data, initializing a wavelet feature basis based on 2D-Gabor, defining the ratio between the intra-class dispersion and the inter-class dispersion of the sparse coefficients, training a DCB-NNSC feature basis, and performing image recognition based on the DCB-NNSC feature basis. The method not only imitates the receptive-field characteristics of neurons in the V1 region of the human primary visual system to effectively extract local image features, but also extracts image features with clearer directionality and edge characteristics than a standard non-negative sparse coding algorithm. Minimizing the ratio between the intra-class and inter-class dispersion of the sparse coefficients pulls the intra-class coefficients more tightly together while increasing the inter-class distance as much as possible, which improves recognition performance in image identification.
Owner:SUZHOU VOCATIONAL UNIV
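
The central technical point here is the ratio constraint between intra-class and inter-class dispersion of the sparse coefficients. The following NumPy sketch shows how such a Fisher-style dispersion ratio could be computed for a batch of coefficient vectors; the function name and the way the penalty is folded into the NNSC objective are illustrative assumptions, not the patented formulation.

```python
import numpy as np

def dispersion_ratio(coeffs, labels):
    """Ratio of intra-class to inter-class dispersion of sparse coefficients.

    coeffs : (n_samples, n_atoms) sparse coefficient matrix
    labels : (n_samples,) class labels
    Minimizing this ratio pulls same-class coefficients together and
    pushes class means apart (a Fisher-style criterion).
    """
    overall_mean = coeffs.mean(axis=0)
    intra, inter = 0.0, 0.0
    for c in np.unique(labels):
        block = coeffs[labels == c]
        mean_c = block.mean(axis=0)
        intra += np.sum((block - mean_c) ** 2)                       # within-class scatter
        inter += len(block) * np.sum((mean_c - overall_mean) ** 2)   # between-class scatter
    return intra / (inter + 1e-12)

# Illustrative use: add the ratio as a penalty to a non-negative sparse coding
# objective, e.g.  ||X - B S||^2 + lam * ||S||_1 + gamma * dispersion_ratio(S.T, y)
```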

Real-time labeling method and system for breast ultrasonic focus areas based on artificial intelligence

The embodiment of the invention provides a real-time labeling method for breast ultrasonic focus areas based on artificial intelligence. The method includes the following steps: a breast ultrasonic video is split into picture sets frame by frame in timestamp order; the picture sets are detected in turn by a focus-area detection and classification model, which determines the BI-RADS category level and locates the focus areas; contour lines of all focus areas are drawn in the pictures, with the contour-line style tied to the BI-RADS category level; and all frame pictures are re-synthesized into a video in timestamp order. The embodiment also provides a method for establishing the focus-area detection and classification model used in the real-time labeling method, as well as a real-time labeling system for breast ultrasonic focus areas based on artificial intelligence. With the method, system and model-establishing method of this embodiment, focus recognition capability is enhanced, the misdiagnosis rate is reduced, and doctors are assisted in giving more accurate suggestions.
Owner:广州尚医网信息技术有限公司
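
A minimal sketch of the labeling pipeline described above, using OpenCV for frame handling; `detect_lesions`, the BI-RADS-to-style mapping and the codec choice are placeholders standing in for the patent's detection and classification model.

```python
import cv2

def detect_lesions(frame):
    """Stand-in for the trained focus-area detection/classification model.
    Should return a list of (contour, birads_level) pairs for the frame."""
    return []

# Assumed mapping from BI-RADS level to contour colour (BGR) and line thickness.
STYLE = {2: ((0, 255, 0), 1), 3: ((0, 255, 255), 2), 4: ((0, 165, 255), 2), 5: ((0, 0, 255), 3)}

def label_video(src_path, dst_path):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()            # frames arrive in timestamp order
        if not ok:
            break
        for contour, level in detect_lesions(frame):
            color, thickness = STYLE.get(level, ((255, 255, 255), 1))
            cv2.drawContours(frame, [contour], -1, color, thickness)
        out.write(frame)                  # re-synthesize the labeled video
    cap.release()
    out.release()
```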

Human face expression recognition method based on Curvelet transform and sparse learning

Inactive · CN106980848A · Improve discrimination ability · Good reconstruction ability · Acquiring/recognising facial features · Multiscale geometric analysis · Sparse learning
The invention discloses a human face expression recognition method based on Curvelet transform and sparse learning. The method comprises the following steps: 1) inputting a human face expression image, preprocessing it, and cropping an eye region and a mouth region from the processed image; 2) extracting expression features through the Curvelet transform: applying the Curvelet transform and feature extraction to the preprocessed expression image, the eye region and the mouth region, and serially fusing the three features to obtain fusion features; 3) carrying out classification and recognition based on sparse learning, using either SRC or FDDL to classify the face Curvelet features and the fusion features. The Curvelet transform employed in the method is a multi-scale geometric analysis tool that extracts multi-scale, multi-directional features. Meanwhile, the local-region fusion gives the fusion features better image representation capability and feature discrimination capability.
Owner:HANGZHOU DIANZI UNIV
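
The Curvelet transform itself requires a dedicated toolbox, but the serial fusion and the SRC step can be sketched generically. The snippet below uses scikit-learn's Lasso as a stand-in sparse coder; `fuse` and `src_classify` are illustrative names, not the patent's implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fuse(face_feat, eye_feat, mouth_feat):
    """Serial fusion: concatenate the three Curvelet feature vectors."""
    return np.concatenate([face_feat, eye_feat, mouth_feat])

def src_classify(dictionary, labels, query, alpha=0.01):
    """Sparse Representation Classification (SRC).

    dictionary : (n_train, n_features) training features, one row per sample
    labels     : (n_train,) class labels
    query      : (n_features,) fused feature of the test image
    Returns the class whose atoms reconstruct the query with least residual.
    """
    coder = Lasso(alpha=alpha, max_iter=5000)
    coder.fit(dictionary.T, query)                    # query ≈ dictionary.T @ coef
    coef = coder.coef_
    best_class, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        recon = dictionary[mask].T @ coef[mask]       # keep only class-c coefficients
        res = np.linalg.norm(query - recon)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```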

Face recognition neural network training method, system and device and storage medium

The invention discloses a face recognition neural network training method, system, device and storage medium. The method comprises the following steps: obtaining face images as a training set and a test set, and combining the loss function of the face recognition neural network with an adaptive additional loss function; inputting the preprocessed training set into the face recognition neural network for training; and inputting the test set into the trained network to verify its recognition accuracy. When the face recognition neural network is trained, the loss function is combined with the adaptive additional loss function to obtain the final loss function. The final loss function shortens the intra-class distance and increases the inter-class distance when face images are classified, while also balancing classes with many samples against classes with few samples, so the generalization performance of the network is preserved when the sample distribution is unbalanced and the accuracy and reliability of face recognition are improved.
Owner:GUANGDONG ELECTRIC POWER SCI RES INST ENERGY TECH CO LTD
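
The adaptive additional loss is not specified in the abstract; the sketch below assumes an ArcFace-style additive angular margin whose per-class margin is scaled by inverse class frequency, which is one plausible way to balance many-sample and few-sample classes. It is an illustration, not the patented loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAdditiveMarginLoss(nn.Module):
    """Additive angular margin softmax with a per-class margin.

    The margin is scaled by inverse class frequency (an assumption standing
    in for the patent's adaptive term), so few-sample classes get a larger
    margin and the inter-class distance is still enlarged for them.
    """
    def __init__(self, feat_dim, num_classes, class_counts, scale=64.0, base_margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        freq = torch.as_tensor(class_counts, dtype=torch.float32)
        self.register_buffer("margin", base_margin * freq.mean() / freq)  # adaptive margin
        self.scale = scale

    def forward(self, embeddings, labels):
        # Cosine between normalized embeddings and normalized class weights.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        target_margin = self.margin[labels]                      # one margin per sample
        onehot = F.one_hot(labels, cos.size(1)).float()
        logits = self.scale * torch.cos(theta + onehot * target_margin.unsqueeze(1))
        return F.cross_entropy(logits, labels)
```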

Image processing method and apparatus, and server

Embodiments of the invention disclose an image processing method, an image processing apparatus and a server. The method comprises the following steps: obtaining a to-be-processed human face image; inputting the face image into a convolutional neural network model with a loss function that directionally screens and increases the between-class distance after image classification according to a preset expectation; and obtaining the classification data output by the convolutional neural network model and performing content understanding on the face image according to that data. The new loss function established on the convolutional neural network model has the effect of screening and increasing the between-class distance after image classification. Because the between-class distance of the classification data output by the model trained with this loss function is increased, the between-class distance in the image identification process is increased, the saliency of differences between images is improved, the image comparison accuracy is improved, and the security of applications using the image processing method is effectively guaranteed.
Owner:BEIJING DAJIA INTERNET INFORMATION TECH CO LTD
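
One generic way to "directionally screen and increase the between-class distance according to a preset expectation" is to penalize only those class-center pairs whose distance falls below an expected value. The PyTorch sketch below illustrates that idea; it is not the patent's exact loss.

```python
import torch

def between_class_push_loss(features, labels, expected_dist=1.0):
    """Screen class-center pairs closer than a preset expectation and penalize
    them, pushing the between-class distance up.

    features : (B, d) embeddings in the current batch
    labels   : (B,) class labels
    A generic hinge on pairwise class-center distances, for illustration only.
    """
    classes = labels.unique()
    centers = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    dists = torch.cdist(centers, centers)                 # pairwise center distances
    n = centers.size(0)
    mask = ~torch.eye(n, dtype=torch.bool, device=features.device)
    # Only pairs below the expectation are "screened in" and penalized.
    violation = torch.clamp(expected_dist - dists[mask], min=0.0)
    return violation.mean()
```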

Face recognition method, system and device based on centralized coordination learning

The invention discloses a face recognition method, system and device based on centralized coordination learning. The method comprises the following steps: obtaining a to-be-recognized face image and carrying out face detection on it to obtain a first face image; carrying out alignment processing on the first face image to obtain a second face image of a preset size; inputting the second face image into a preset face recognition model based on centralized coordination learning for feature extraction, obtaining a face feature vector of the second face image; and calculating cosine similarity between the face feature vector and a preset face database, and obtaining a face recognition result according to the cosine similarity. Because a face recognition model based on centralized coordination learning is used for feature extraction, each feature is pulled toward the origin and the features are distributed across all quadrants, so the inter-class distance is larger and the classification efficiency and recognition accuracy are improved. The method can be widely applied in the technical field of face recognition.
Owner:GUANGZHOU HISON COMP TECH
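
The recognition step, matching the extracted feature vector against a preset face database by cosine similarity, can be sketched as follows; the acceptance threshold is an assumed parameter and the feature extractor itself is out of scope.

```python
import numpy as np

def recognize(query_feat, gallery_feats, gallery_ids, threshold=0.5):
    """Match a face feature against a preset database by cosine similarity.

    gallery_feats : (n_gallery, feat_dim) features extracted by the trained model
    gallery_ids   : identities aligned with gallery_feats
    threshold     : assumed acceptance threshold; below it the face is 'unknown'
    """
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q                                  # cosine similarity to every enrolled face
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return None, float(sims[best])            # no confident match
    return gallery_ids[best], float(sims[best])
```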

Image big data-oriented class increment classification method, system and device and medium

The invention discloses an image big data-oriented class-incremental classification method, system, device and medium. The method comprises an initialization training stage and an incremental learning stage. The initialization training stage comprises the following steps: constructing an initial data set of images, and training an initial classification model on the initial data set. The incremental learning stage comprises the following steps: constructing an incremental learning data set from the initial data set and new image data; deriving a new incremental learning model from the initial classification model; and training the incremental learning model on the incremental learning data set with a distillation algorithm to obtain a model that recognizes both new and old categories, wherein the distillation algorithm enlarges the inter-class distance of the model and reduces the intra-class distance. Because the incremental learning model is updated through the distillation algorithm, with the inter-class distance enlarged and the intra-class distance reduced, the model's recognition performance on both new and old data can be improved under limited storage space and computing resources, and the method, system and device can be widely applied in the field of big data applications.
Owner:SOUTH CHINA UNIV OF TECH
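
The abstract does not disclose the distillation algorithm itself; the sketch below shows a standard LwF-style objective (cross entropy on all classes plus KL distillation on the old classes) as one way an incremental model can be trained without forgetting, with `alpha` and the temperature `T` as assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def incremental_loss(new_logits, old_logits, labels, num_old_classes, T=2.0, alpha=0.5):
    """Cross entropy on all classes + distillation on the old classes.

    new_logits : logits of the incremental model, shape (B, num_old + num_new)
    old_logits : logits of the frozen initial model, shape (B, num_old)
    The distillation term keeps old-class responses close to the frozen model
    while the cross-entropy term fits the new classes.
    """
    ce = F.cross_entropy(new_logits, labels)
    p_old = F.log_softmax(new_logits[:, :num_old_classes] / T, dim=1)
    q_old = F.softmax(old_logits / T, dim=1)
    kd = F.kl_div(p_old, q_old, reduction="batchmean") * (T * T)
    return alpha * kd + (1.0 - alpha) * ce
```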

Multi-view three-dimensional model retrieval method and system based on pairing depth feature learning

The invention discloses a multi-view three-dimensional model retrieval method and system based on pairing depth feature learning. The retrieval method comprises the following steps: acquiring two-dimensional views of a to-be-retrieved three-dimensional model from different angles and extracting an initial view descriptor of each two-dimensional view; aggregating the initial view descriptors to obtain a final view descriptor; extracting potential features and category features of the final view descriptor; combining the potential features and the category features with weights to form a shape descriptor; and computing the similarity between the obtained shape descriptor and the shape descriptors of the three-dimensional models in a database to realize multi-view three-dimensional model retrieval. The method provides a multi-view three-dimensional model retrieval framework, GPDFL, which fuses the potential features and the category features of the model and can improve feature recognition capability and model retrieval performance.
Owner:SHANDONG NORMAL UNIV
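
A rough sketch of the aggregation, weighted fusion and similarity-based retrieval steps; max-pooling aggregation, the 0.5/0.5 weights and the `latent_fn`/`category_fn` placeholders are assumptions, since GPDFL's network details are not given in the abstract.

```python
import numpy as np

def shape_descriptor(view_descriptors, latent_fn, category_fn, w_latent=0.5, w_category=0.5):
    """Build a shape descriptor from multi-view descriptors.

    view_descriptors : (n_views, d) initial descriptors of the 2-D views
    latent_fn / category_fn : stand-ins for the networks that extract the
        potential (latent) features and the category features
    """
    final_view = view_descriptors.max(axis=0)          # aggregate the views
    latent = latent_fn(final_view)
    category = category_fn(final_view)
    return np.concatenate([w_latent * latent, w_category * category])

def retrieve(query_desc, db_descs, top_k=5):
    """Rank database models by cosine similarity to the query shape descriptor."""
    q = query_desc / np.linalg.norm(query_desc)
    d = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    return np.argsort(-(d @ q))[:top_k]
```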

Human body activity recognition method based on grouping residual joint spatial learning

A human body activity recognition method based on grouping residual joint spatial learning comprises the following steps: step 1, collecting human, object and environment signals with various sensors, grouping, aligning and slicing the single-channel data with a sliding window, and constructing two-dimensional activity data subsets; step 2, building a grouping residual convolutional neural network, and constructing a joint-space loss function from a center loss function and a cross-entropy loss function to optimize the network model and extract feature maps of the two-dimensional activity data subsets; and step 3, training a multi-class support vector machine on the extracted two-dimensional features to perform the human body activity classification task on the feature maps. The method can identify fine-grained human body activities; the joint-space loss function increases the inter-class distance of the extracted spatial features and reduces the intra-class distance; and classification learning on the spatial feature maps of the human activity data with the multi-class support vector machine improves the accuracy of human body activity classification.
Owner:ZHEJIANG UNIV OF TECH
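
The joint-space objective in step 2 combines a center loss with cross entropy; a minimal PyTorch sketch of that combination is shown below, with `lambda_center` as an assumed balancing weight.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointSpaceLoss(nn.Module):
    """Cross entropy + center loss as a joint spatial objective.

    Pulling each feature toward its class center shrinks the intra-class
    distance; the softmax/cross-entropy term keeps the classes separated.
    """
    def __init__(self, num_classes, feat_dim, lambda_center=0.01):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.lambda_center = lambda_center

    def forward(self, features, logits, labels):
        ce = F.cross_entropy(logits, labels)
        # Squared distance of each feature to the center of its own class.
        center_loss = ((features - self.centers[labels]) ** 2).sum(dim=1).mean()
        return ce + self.lambda_center * center_loss
```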

Internet encrypted traffic interaction feature extraction method based on graph structure

The invention discloses an Internet encrypted traffic interaction feature extraction method based on a graph structure. The method belongs to the technical field of encrypted network traffic classification and is applied to fine-grained classification of TLS-encrypted network traffic. Graph-structured encrypted traffic interaction features are extracted from the original packet sequence; the graph structure features include the sequence information, packet direction information, packet length information and burst traffic information of the data packets. Quantitative calculation shows that, compared with a plain packet length sequence, the intra-class distance is obviously reduced and the inter-class distance is increased after the graph structure features are used. The method obtains encrypted traffic features with richer dimensions and higher discrimination, which can then be combined with deep neural networks such as a graph neural network for refined classification and identification of encrypted traffic. Extensive experiments show that, compared with existing methods, using the graph structure features together with a graph neural network achieves higher accuracy and a lower false alarm rate.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
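
A minimal illustration of turning a packet sequence into a graph that keeps sequence, direction, length and burst information, using networkx; the burst threshold and edge attributes are assumptions, since the patent's exact construction rules are not given in the abstract.

```python
import networkx as nx

def build_traffic_graph(packets, burst_gap=0.01):
    """Build an interaction graph from a TLS packet sequence.

    packets : list of (timestamp, direction, length) tuples, direction being
              +1 (client->server) or -1 (server->client)
    Nodes carry direction and length; consecutive packets are chained to keep
    the sequence information, and same-direction packets arriving within
    `burst_gap` seconds are flagged as belonging to one burst.
    """
    g = nx.DiGraph()
    for i, (ts, direction, length) in enumerate(packets):
        g.add_node(i, ts=ts, direction=direction, length=length)
        if i > 0:
            prev_ts, prev_dir, _ = packets[i - 1]
            same_burst = (ts - prev_ts <= burst_gap) and (direction == prev_dir)
            g.add_edge(i - 1, i, kind="sequence", burst=same_burst)
    return g
```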