344 results about "Cosine Distance" patented technology

Surveillance video pedestrian re-recognition method based on ImageNet retrieval

The present invention discloses a surveillance video pedestrian re-recognition method based on ImageNet retrieval. The pedestrian re-recognition problem is transformed into a retrieval problem over a moving-target image database, so as to exploit the powerful classification ability of ImageNet hidden-layer features. The method comprises the steps of: preprocessing the surveillance video and removing large amounts of irrelevant static background footage; separating moving targets from dynamic video frames by a motion-compensated frame-difference method, and building a pedestrian image database with an organization index table; aligning the size and brightness of the images in the pedestrian image database with the target pedestrian image; extracting hidden features of the target pedestrian image and the database images with an ImageNet deep learning network, and retrieving images by cosine-distance similarity; and, in time order, assembling the relevant videos containing recognition results into a clip that reproduces the pedestrian's activity trace. The disclosed method adapts well to changes in lighting, viewpoint, pose and scale, effectively improving the accuracy and robustness of pedestrian recognition across cameras.
Owner:WUHAN UNIV
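The cosine-distance retrieval step above can be sketched as follows. This is a minimal illustration, not the patented implementation; the helper names are hypothetical, and the vectors stand in for the ImageNet hidden-layer features:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query, database, top_k=3):
    # Rank database images by cosine similarity of their hidden-layer
    # features to the query pedestrian's feature vector.
    scores = [(i, cosine_similarity(query, v)) for i, v in enumerate(database)]
    scores.sort(key=lambda t: t[1], reverse=True)
    return scores[:top_k]
```

The top-ranked indices would then be mapped back, via the organization index table, to the source video frames to assemble the activity-trace clip.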

Method for face recognition scene adaptation based on convolutional neural network

A method for face recognition scene adaptation based on a convolutional neural network comprises the steps of: 1) collecting face data, making classification labels, preprocessing and augmenting the data, and dividing it into a training set and a validation set; 2) feeding the training data into the designed convolutional neural network for training to obtain a pre-trained model; 3) testing the pre-trained model on the validation set and adjusting the training parameters for retraining according to the test result; 4) repeating step 3) to obtain the optimal pre-trained model; 5) collecting face image data for different application scenes, fine-tuning the pre-trained model on the newly collected data, and obtaining a scene-adapted model; 6) extracting features of the face image under test with the scene-adapted model, weighting the facial-feature regions (eyes, nose, mouth, etc.) within those features, and obtaining the final feature vectors; and 7) measuring the final feature vectors by cosine distance, determining whether the face is the target face, and outputting the result. The method ensures both the accuracy of face recognition and the scene adaptability of the model.
Owner:ANHUI UNIVERSITY
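Steps 6) and 7) — weighting the facial-feature regions and thresholding the cosine measure — might look like the sketch below; the region slices, weights and threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

def weight_facial_regions(features, regions, weights):
    # Emphasize the sub-vectors corresponding to facial regions
    # (e.g. eyes, nose, mouth); slices and weights are illustrative.
    out = np.asarray(features, dtype=float).copy()
    for (start, stop), w in zip(regions, weights):
        out[start:stop] *= w
    return out

def same_face(a, b, threshold=0.5):
    # Compare the final feature vectors by cosine similarity.
    sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sim >= threshold
```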

Multi-channel network-based video human face detection and identification method

The invention discloses a multi-channel network-based video human face detection and identification method. The method comprises the following steps: S1, video preprocessing: adding time information to each frame image; S2, detecting target human faces and calculating a pose coefficient; S3, human face pose correction: adjusting the pose of the m faces obtained in step S2; S4, extracting human face features with a deep neural network; and S5, comparing human face features: for an input face, obtaining its eigenvector via step S4 and matching it against the vectors in a feature library by cosine distance; a class is added to the candidate classes when its cosine distance to the face to be identified exceeds a set threshold phi, and if the cosine distances between the feature of the face to be identified and the central features of all classes fall below the threshold phi, the database is regarded as containing no information on this person and the identification ends. A multi-channel network-based video human face detection and identification method with relatively high accuracy is thus provided.
Owner:ENJOYOR COMPANY LIMITED
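The rejection logic of step S5 can be illustrated as below. The patent states the rule only in terms of the threshold phi, so the similarity convention, threshold value and function name here are assumptions:

```python
import numpy as np

def identify(query, class_centers, threshold=0.6):
    # Compare the query feature against each class's central feature;
    # return the best class, or None when no class clears the threshold
    # (i.e. the database holds no information on this person).
    best_label, best_sim = None, -1.0
    for label, center in class_centers.items():
        sim = float(np.dot(query, center) /
                    (np.linalg.norm(query) * np.linalg.norm(center)))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label if best_sim >= threshold else None
```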

Variational automatic encoder-based zero-sample image classification method

Inactive · CN107679556A · Effective semantic association · Fully consider the probability distribution characteristics · Character and pattern recognition · Neural architectures · Classification methods · Sample image
The present invention relates to zero-sample classification in the computer vision field, in particular a variational auto-encoder-based zero-sample image classification method. The method fits the distribution of the mappings of category semantic features and visual features in a semantic space, building more efficient semantic associations between visual features and category semantics. A variational auto-encoder is adopted to generate embedded semantic features from the visual features; its latent variable Z<^> is taken as the embedded semantic feature. For a zero-sample image classification task with the visual feature xj of a sample of unknown category, the encoding network of the variational auto-encoder, trained on visible categories, computes the latent variable Z<^>j generated through encoding; Z<^>j is taken as the embedded semantic feature, and the cosine distances between Z<^>j and the semantic feature of each invisible category (represented by a symbol described in the description of the invention) are calculated; the category whose semantic feature is at the smallest distance from Z<^>j is taken as the category of the visual sample. The method is mainly applied to video classification scenarios.
Owner:TIANJIN UNIV
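The final nearest-category assignment can be sketched as follows; the latent code `z_hat` and the semantic dictionary are placeholders for the quantities the patent describes:

```python
import numpy as np

def cosine_distance(a, b):
    # Cosine distance: 1 minus the cosine of the angle between vectors.
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_zero_shot(z_hat, unseen_semantics):
    # Pick the unseen category whose semantic feature is nearest to the
    # embedded latent code under cosine distance.
    return min(unseen_semantics,
               key=lambda c: cosine_distance(z_hat, unseen_semantics[c]))
```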

Method and device for human face recognition

The invention discloses a method and a device for human face recognition. The method comprises the steps of: extracting, with a trained first convolutional neural network, a first feature group from a first face region in a picture, the first feature group representing the facial features in the picture; extracting, with a trained second convolutional neural network, a second feature group from a second region, determined from the area around the face in the picture, the second feature group representing the clothing features in the picture; combining the first and second feature groups, applying dimension reduction to the combined features, and obtaining a third feature group; and determining, according to the cosine distance between the third feature group and previously extracted reference face features, whether the face in the picture and the face corresponding to the reference features are the same face. With this technical scheme, the clothing and ornaments surrounding the user's face region are combined with the facial features for recognition, greatly improving face recognition accuracy.
Owner:XIAOMI INC
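The fuse-reduce-compare pipeline can be sketched as below. The projection matrix stands in for whatever learned dimension-reduction the patent uses, and the threshold is an illustrative assumption:

```python
import numpy as np

def fuse_and_verify(face_feat, clothes_feat, projection, reference, threshold=0.7):
    # Concatenate the face and clothing feature groups, project them to a
    # lower dimension, then compare to the reference face feature by
    # cosine similarity.
    combined = np.concatenate([face_feat, clothes_feat])
    reduced = projection @ combined
    sim = float(np.dot(reduced, reference) /
                (np.linalg.norm(reduced) * np.linalg.norm(reference)))
    return sim >= threshold
```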

Method and system for retrieving license plate

The invention discloses a method for retrieving a license plate. The method comprises the steps of: obtaining precise license plate positioning coordinates from an input license plate image; cropping a predetermined license plate region from the image using those coordinates; feeding the cropped region into a convolutional neural network model, obtaining fully connected layer features, and forming a feature vector in a predetermined order; reducing the feature vector to a predetermined dimension with a dimension-reduction model; and comparing, by cosine distance, the predetermined-dimension feature vector of the corresponding vehicle with that of a specified vehicle to determine the license plate information. With the method, when direct license plate matching fails, the license plate information of the vehicle can still be accurately retrieved within a specified range, realizing license plate retrieval and reducing the number of security personnel needed at each gate of a parking lot. The invention also discloses a system for retrieving a license plate with the above advantages.
Owner:SHENZHEN JIESHUN SCI & TECH IND
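The final comparison step might be sketched as picking, among candidate vehicles in the specified range, the plate whose reduced feature vector is nearest in cosine distance; the gallery structure and plate labels here are invented for illustration:

```python
import numpy as np

def retrieve_plate(query_vec, plate_gallery):
    # Return the candidate plate whose predetermined-dimension feature
    # vector has the smallest cosine distance to the query vehicle's.
    def cos_dist(a, b):
        return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return min(plate_gallery, key=lambda p: cos_dist(query_vec, plate_gallery[p]))
```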

Deep learning-based face recognition and face verification supervised learning method

The invention discloses a deep learning-based supervised learning method for face recognition and face verification. The method comprises the following steps: a softmax loss function is used to enlarge the between-class distance of the fully connected layer output features of a convolutional neural network model; a center is learnt for the deep features of each class through a center loss function; and a hyper-parameter balances the two functions so that they jointly supervise feature learning. Backward propagation of the convolutional neural network model is computed, a mini-batch stochastic gradient descent algorithm optimizes the model, and the weight matrix and the deep-feature center of each class are updated. After principal component analysis and dimension reduction of the deep features, the cosine distance between each pair of features is calculated as a score, which is used for target matching by nearest neighbor and threshold comparison to recognize and verify faces. The discriminative ability of the features learnt by the neural network is effectively improved, and a robust face recognition and verification model is acquired.
Owner:TIANJIN UNIV
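The joint supervision described above — softmax loss plus center loss, balanced by a hyper-parameter — can be written out as a loss computation for a single sample. This is a generic sketch of the technique, not the patented training procedure; `lam` is the balancing hyper-parameter:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # Softmax loss term that enlarges between-class distances.
    e = np.exp(logits - logits.max())
    p = e / e.sum()
    return float(-np.log(p[label]))

def center_loss(feature, centers, label):
    # Center loss term pulling each deep feature toward its class center.
    d = feature - centers[label]
    return 0.5 * float(d @ d)

def joint_loss(logits, feature, centers, label, lam=0.01):
    # Hyper-parameter lam balances the two supervision signals.
    return softmax_cross_entropy(logits, label) + lam * center_loss(feature, centers, label)
```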

Vehicle load dynamic weighing method for orthotropic bridge deck steel box girder bridge

The invention discloses a vehicle load dynamic weighing method for an orthotropic-deck steel box girder bridge, relating to the field of bridge health monitoring. The method comprises the following steps: mounting fiber grating strain sensors at the bottoms of the U-ribs of the internal top plate of the orthotropic-deck steel box girder; measuring the longitudinal strain of a U-rib as a vehicle passes the sensor position; converting the strain into an optical signal at the sensor and demodulating it with a fiber grating demodulator; carrying out cross-correlation analysis on the measured strains at points on the same U-rib at different sections of the steel box girder to determine the vehicle's speed; analyzing the measured strain-area vectors at points on different U-ribs at the same section; and carrying out angle cosine distance analysis with the strain-influence-line area vectors of the U-ribs to determine the transverse position and weight of each vehicle in its running lane. The disclosed method is convenient to install and inexpensive, requires no interruption of traffic and no excavation or damage of the road surface, and achieves nondestructive, automatic dynamic weighing of bridge vehicle loads.
Owner:CHINA RAILWAY BRIDGE SCI RES INST LTD +1
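The two signal-processing steps — cross-correlation for vehicle speed and angle cosine for transverse position — can be sketched as follows. The sensor spacing, sample rate and reference vectors are illustrative assumptions; a real system would also need peak detection and lane calibration:

```python
import numpy as np

def vehicle_speed(strain_a, strain_b, sensor_spacing_m, sample_rate_hz):
    # Lag at the peak of the cross-correlation between two strain time
    # histories (same U-rib, different sections) gives the travel time;
    # assumes the vehicle actually passes both sensors (nonzero lag).
    corr = np.correlate(strain_b, strain_a, mode="full")
    lag = int(np.argmax(corr)) - (len(strain_a) - 1)
    return sensor_spacing_m * sample_rate_hz / lag

def angle_cosine(measured_areas, influence_line_areas):
    # Cosine of the angle between the measured strain-area vector across
    # U-ribs and a reference influence-line area vector for a lane position.
    a, b = np.asarray(measured_areas), np.asarray(influence_line_areas)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Comparing `angle_cosine` against the reference vectors of each lane position would pick out where the vehicle ran; the axle weight then follows from the magnitude of the strain areas.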

Fabric property picture collection and recognition method and system based on deep learning

The invention discloses a fabric property picture collection and recognition method based on deep learning. The method comprises the steps that: multiple fabric property pictures are acquired, their macro and micro information is collected, and a training set is generated; the training set is used to train a deep learning model; deep features that simultaneously contain global and local information are extracted from the trained model, and linear discriminant analysis is performed on them to complete the training of the deep learning model; and fabric recognition is then performed by cosine distance with the trained model. The method addresses multiple fabric property recognition problems, including weaving, background color, surface, printing and spinning processes; at the same time, because the trained model contains both local and global information, the recognition and matching rates for local and global fabric patterns are increased. The invention further discloses a fabric property picture collection and recognition system based on deep learning.
Owner:湖州易有科技有限公司

Low-quality face comparison method based on deep convolution neural network

The invention discloses a low-quality face comparison method based on a deep convolutional neural network. The method comprises the following steps: each image in a face training database is fed to the constructed deep convolutional neural network for feature extraction; the extracted features are input to a fully connected layer and affinely projected, via a projection matrix, into a low-dimensional space; the feature vectors computed by the projection matrix are trained with a two-norm-normalized spherical loss function; the weight of each filter in the fully connected layer and the deep convolutional neural network is found by gradient descent, and the network with the highest comparison pass rate is selected; and the cosine distance between the feature vector of the face image under test and that of each ground-truth image in a low-quality face test database is calculated, the pair being judged the same person when the cosine distance is smaller than a threshold. The method performs efficient comparison on low-quality faces with few computational resources, balancing comparison precision and speed.
Owner:上海敏识网络科技有限公司
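After two-norm normalization, the cosine comparison reduces to a plain dot product on the unit sphere; the sketch below shows that decision rule with an assumed threshold:

```python
import numpy as np

def l2_normalize(v):
    # Two-norm normalization projects a feature onto the unit sphere.
    return v / np.linalg.norm(v)

def same_person(feat_a, feat_b, threshold=0.4):
    # For unit-norm features, cosine distance is 1 minus a dot product;
    # the pair is accepted when the distance falls below the threshold.
    dist = 1.0 - float(np.dot(l2_normalize(feat_a), l2_normalize(feat_b)))
    return dist < threshold
```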

Learning type image processing method, system and server

The embodiment of the invention discloses a learning type image processing method, system and server. The method comprises the steps of: acquiring a target image to be detected; inputting it into a preset convolutional neural network model; obtaining the classification data output by the model in response to the input face image, wherein the model takes a loss function as a constraint condition under which the cosine distance of the in-class features in the classification data is defined to tend toward the Euclidean distance; and acquiring the classification data and performing content understanding of the target image according to it. In the invention, the classification data are screened through a cosine-distance-based loss function within a joint loss function, maximizing the between-class cosine distance; even for a simple image with a single dominant color, maximization of the cosine distance with relatively strong internal convergence is achieved. Since the cosine distance can then tend toward the result of a Euclidean distance calculation, implementation complexity is reduced.
Owner:BEIJING DAJIA INTERNET INFORMATION TECH CO LTD
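The stated equivalence between cosine and Euclidean distance holds exactly for unit-norm features, where ||a - b||^2 = 2 (1 - cos(a, b)); the sketch below checks this identity numerically:

```python
import numpy as np

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def squared_euclidean(a, b):
    d = a - b
    return float(d @ d)

# For unit-norm features, ||a - b||^2 = 2 * (1 - cos(a, b)), so a loss
# driving the cosine distance behaves like one driving Euclidean distance.
```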

User interface display identification method and terminal device

Active · CN108363599A · Improve display recognition accuracy · Solve the problem of low script maintenance efficiency · Execution for user interfaces · Feature vector · Graphical user interface
The invention is applicable to the technical field of mobile terminals, and provides a user interface display identification method and a terminal device. The method comprises the steps of: obtaining a target interface screenshot of the user interface to be detected, and extracting its eigenvector; obtaining a pre-stored normal interface screenshot of the normal user interface, and extracting its eigenvector; determining, from the two eigenvectors, the cosine distance between the target and normal interface screenshots; judging whether the cosine distance exceeds a preset threshold; if it does, obtaining the levels and positions of the element controls of the user interface to be detected; detecting whether those levels and positions are abnormal; and if they are detected to be normal, judging that the display of the user interface to be detected is normal. This solves the low script-maintenance efficiency of existing user interface display detection methods.
Owner:ONE CONNECT SMART TECH CO LTD SHENZHEN
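The screenshot-comparison gate can be sketched as below; the threshold value is an illustrative assumption, and a distance above it would trigger the element-control level and position checks described above:

```python
import numpy as np

def display_suspect(target_vec, normal_vec, threshold=0.2):
    # Cosine distance between the target and normal screenshot eigenvectors;
    # exceeding the preset threshold flags the interface for further checks.
    dist = 1.0 - float(np.dot(target_vec, normal_vec) /
                       (np.linalg.norm(target_vec) * np.linalg.norm(normal_vec)))
    return dist > threshold
```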

Fingerprint identification method and device

The invention discloses a fingerprint identification method and device. The method comprises the following steps: extracting features, with a convolutional neural network, from a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first image and a second fingerprint feature corresponding to the second image, the two features having the same dimension; carrying out dimensionality reduction on the first and second fingerprint features to obtain a third and a fourth fingerprint feature respectively, also of the same dimension; and determining whether the first and second fingerprint images are the same fingerprint according to the cosine distance between the third and fourth fingerprint features. The technical scheme avoids the prior-art limitation of identifying fingerprints only by their global and local feature points, and improves identification accuracy for low-quality fingerprint images.
Owner:XIAOMI INC