1686 results about "Positive sample" patented technology

Triplet loss-based improved neural network pedestrian re-identification method

The invention discloses a pedestrian re-identification method based on an improved neural network with a triplet loss. The method comprises the following steps: constructing a sample database, building positive- and negative-sample libraries from it, and randomly selecting two positive samples and one negative sample to form a triplet; constructing and training a triplet-loss neural network formed by three parallel convolutional neural networks connected to a triplet loss layer; feeding the picture to be tested together with each sample picture in the expanded sample database, as one group of inputs, into the trained network in sequence, with the third input of the network set to zero; and computing the Euclidean distance between the feature vectors the network outputs for the two input pictures, sorting the distances in ascending order, taking the first 20 results, and applying simple manual screening to obtain the final identification result. The benefit of the method is that it is suitable for scenes with relatively large changes between pictures, remains robust, and achieves relatively high identification accuracy.
Owner:CHINACCS INFORMATION IND
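
As an illustration of the triplet training and ranking steps described above, the following minimal PyTorch sketch computes a triplet loss over embeddings and ranks gallery pictures by Euclidean distance. The EmbedNet architecture, margin value, and input sizes are assumptions for illustration, not the patent's actual network; a single embedding module stands in for the three parallel CNNs, which in a triplet setup typically share weights.

import torch
import torch.nn as nn

class EmbedNet(nn.Module):
    # Hypothetical embedding CNN standing in for the parallel branches.
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

net = EmbedNet()
triplet_loss = nn.TripletMarginLoss(margin=0.3)  # margin is an assumed value

# One training step on a batch of (anchor, positive, negative) triplets.
anchor, positive, negative = (torch.randn(8, 3, 128, 64) for _ in range(3))
loss = triplet_loss(net(anchor), net(positive), net(negative))
loss.backward()

# Retrieval: rank gallery pictures by Euclidean distance to the query
# and keep the 20 closest results for manual screening.
with torch.no_grad():
    query = net(torch.randn(1, 3, 128, 64))
    gallery = net(torch.randn(100, 3, 128, 64))
    dists = torch.cdist(query, gallery).squeeze(0)
    top20 = torch.argsort(dists)[:20]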

Surface defect detection method based on positive case training

The invention relates to a surface defect detection method based on positive-sample training. The method comprises two steps: image reconstruction and defect detection. Image reconstruction rebuilds an input original image into a defect-free image. The reconstruction steps are as follows: during training, artificial defects and noise are added to the positive-sample image, an autoencoder is used for reconstruction, the L1 distance between the reconstruction result and the noise-free original image is calculated and minimized as the reconstruction objective, and a generative adversarial network is used in cooperation to optimize the quality of the reconstructed image. Defect detection is performed after image reconstruction: LBP features of the reconstructed image and the original image are calculated, the difference between the two feature images is taken, and the difference image is binarized with a fixed threshold so that defects are found. The advantages of the method are that, by using deep learning, it is sufficiently robust to environmental changes when enough positive samples are available; moreover, because it is trained only on normal (positive) samples, it does not rely on a large number of negative samples or manual annotation, is suitable for real-world scenarios, and detects surface defects better.
Owner:NANJING UNIV +2
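
The detection stage described above can be sketched as follows. The code assumes a trained reconstruction model is available as a callable named reconstruct (a placeholder, not part of the patent) and uses scikit-image's LBP implementation; the LBP parameters and the fixed threshold are chosen arbitrarily for illustration.

import numpy as np
from skimage.feature import local_binary_pattern

def detect_defects(original, reconstruct, threshold=0.2, P=8, R=1.0):
    # Compare LBP features of the original image and its defect-free
    # reconstruction; large differences are flagged as defects.
    recon = reconstruct(original)                        # defect-free version
    lbp_orig = local_binary_pattern(original, P, R, method="uniform")
    lbp_recon = local_binary_pattern(recon, P, R, method="uniform")
    diff = np.abs(lbp_orig - lbp_recon)
    if diff.max() > 0:
        diff = diff / diff.max()                         # normalize to [0, 1]
    return diff > threshold                              # binary defect mask

# Usage (hypothetical): mask = detect_defects(gray_image, trained_model.predict)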

State prediction method and device

The invention discloses a state prediction method and device. The method comprises the following steps: sampling target users; generating negative samples, positive samples, and verification samples from the account information of churned and retained users with respect to an identified sampling moment; training a decision tree model used to predict the users' churn state after the sampling moment; inputting the verification samples into the trained decision tree model to obtain predicted churn states; and, if the recall rate computed from the predicted churn states and the actual churn states is not smaller than a threshold, determining that training of the decision tree model is complete and carrying out user churn prediction with it. Because the decision tree model is trained on samples generated by sampling the target users, and churn prediction is performed with the trained model, the method solves the technical problems of low recognition efficiency and poor reusability that arise when user churn is predicted from human experience or hand-crafted rules, as in the prior art.
Owner:BAIDU ONLINE NETWORK TECH (BEIJING) CO LTD
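
A minimal scikit-learn sketch of the training-and-validation loop described above. The feature matrices, the 0.8 recall threshold, and the tree depth are illustrative assumptions, not the patent's actual parameters.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

def train_churn_model(X_train, y_train, X_val, y_val, recall_threshold=0.8):
    # Train a decision tree on positive (churned) and negative (retained)
    # samples; accept it only if recall on the verification set meets the bar.
    model = DecisionTreeClassifier(max_depth=6, random_state=0)
    model.fit(X_train, y_train)
    recall = recall_score(y_val, model.predict(X_val))  # churned users caught
    if recall >= recall_threshold:
        return model      # ready for churn prediction
    return None           # resample or retrain

# Example with synthetic account features (rows: users, columns: features):
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(size=1000) > 0.5).astype(int)
model = train_churn_model(X[:800], y[:800], X[800:], y[800:])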

Target detection method based on high resolution optical satellite remote sensing images, and system thereof

Active CN108304873A: Object detection optimization, Image enhancement, Image analysis, Positive sample, Computer vision
The invention relates to a target detection method and system based on high-resolution optical satellite remote sensing images. The method includes the following steps: obtaining labeled target positive samples and labeled background negative samples to form training samples; extracting a plurality of different weak feature channels from the training samples and obtaining candidate regions from these weak channels; obtaining the context scene of each candidate region, extracting features from the candidate region and its context scene, and fusing the extracted features to form the features of the candidate region; training a classifier on the training samples; classifying the features of the candidate regions with the classifier to obtain target regions containing the targets; and performing duplicate removal on the target regions to obtain the detected targets. The method enables target detection on large-format remote sensing images and improves detection of closely spaced targets and of targets with unusual aspect ratios.
Owner:深圳市国脉畅行科技股份有限公司
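
The final duplicate-removal step corresponds to standard non-maximum suppression over the classified target regions; the patent may use a different scheme, but a plain NumPy sketch of greedy NMS with an assumed IoU threshold looks like this.

import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    # Greedy non-maximum suppression: keep the highest-scoring box and drop
    # any remaining box that overlaps it by more than iou_thresh.
    # boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) classifier scores.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]
    return np.array(keep)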

Integrated method for automatic registration of optical remote-sensing images with GIS and water body extraction

Active CN103400151A: Taking into account relative positional constraints, Exact match, Image analysis, Character and pattern recognition, Positive sample, SVM classifier
An integrated method for automatic registration of optical remote-sensing images with a GIS (Geographic Information System) and for water body extraction comprises the following steps: (1) segmenting the input optical remote-sensing image to obtain an initial segmentation of the water systems; (2) performing local level-set evolution segmentation to obtain a set R; (3) matching vector objects of the basic geographic information water-system layer with the objects in the set R and registering the images; if registration succeeds, proceeding to step (4), otherwise returning to step (2); (4) obtaining unchanged water bodies, suspected newly added water bodies, and suspected changed water bodies through buffer detection, further filtering the suspected water body objects, and confirming the truly changed and unchanged water bodies; and (5) taking the multispectral values of the pixels in the unchanged water body objects as positive samples, randomly selecting the multispectral values of pixels on the optical remote-sensing image outside the area covered by the set R as negative samples, training an SVM (Support Vector Machine) classifier, verifying whether the objects in the filtered suspected water body set are water bodies, and obtaining the final segmentation, registration, and water body extraction results.
Owner:WUHAN UNIV
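
Step (5) amounts to training a pixel-level SVM on multispectral values. A short scikit-learn sketch under assumed variable names: water_pixels, nonwater_pixels, and suspect_pixels are hypothetical arrays of per-pixel band values, and the RBF kernel is an assumption.

import numpy as np
from sklearn.svm import SVC

def verify_water_object(water_pixels, nonwater_pixels, suspect_pixels):
    # water_pixels:    pixels from unchanged water bodies (positive samples)
    # nonwater_pixels: randomly drawn pixels outside the set R (negatives)
    # suspect_pixels:  pixels of a suspected water-body object to verify
    X = np.vstack([water_pixels, nonwater_pixels])
    y = np.concatenate([np.ones(len(water_pixels)), np.zeros(len(nonwater_pixels))])
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, y)
    # Call the object a water body if most of its pixels classify as water.
    return clf.predict(suspect_pixels).mean() > 0.5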

Target tracking device and method, and computer readable storage medium

The invention discloses a target tracking device based on a convolutional neural network. The device comprises a memory and a processor; a target tracking program runnable on the processor is stored in the memory and executed by the processor. Execution comprises the following steps: collecting picture samples from a video frame image according to a sampling-point distribution and recording the position coordinates of each picture sample; extracting sample features from the picture samples with a CNN model and, from these features, computing a confidence coefficient between each picture sample and the tracking target; adjusting the weight of each picture sample according to its confidence coefficient and computing the position coordinate of the tracking target from the sample position coordinates and weights; collecting positive and negative samples from the video frame image around that position to form a training set, training the CNN model on it, and updating the model parameters; and repeating the above steps until tracking of the video is completed. The invention also provides a target tracking method based on the convolutional neural network and a computer-readable storage medium. The invention increases the accuracy of target tracking.
Owner:PING AN TECH (SHENZHEN) CO LTD
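
The confidence-weighted localization step described above can be illustrated in a few lines of NumPy. The softmax-style weighting and the names coords and confidences are assumptions for illustration rather than the patent's exact scheme.

import numpy as np

def estimate_target_position(coords, confidences):
    # coords: (N, 2) positions of sampled picture patches;
    # confidences: (N,) similarity of each patch to the tracked target,
    # e.g. derived from CNN features. Patches that look more like the
    # target get larger weights; the target position is their weighted mean.
    w = np.exp(confidences - confidences.max())   # softmax-style weights
    w /= w.sum()
    return (coords * w[:, None]).sum(axis=0)

# e.g. position = estimate_target_position(sample_xy, cnn_confidences)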

Pedestrian re-identification method based on collaborative metric learning

The invention discloses a pedestrian re-identification method based on collaborative metric learning, belonging to the technical field of surveillance video retrieval. First, metric learning is carried out on the color and texture features of the images in a labeled training sample set L, yielding the covariance matrices Mc and Mt of the corresponding Mahalanobis distances. Query targets are then selected at random, Mc and Mt are used for Mahalanobis distance measurement, the corresponding ranking results are obtained, and positive and negative samples are identified; these yield a new labeled training sample set L, and Mc and Mt are updated, repeating until the unlabeled training sample set U is empty and a final labeled sample set L* is obtained. The color and texture features are then fused to obtain Mf, and the Mahalanobis distance function based on Mf can be used for pedestrian re-identification. Under a semi-supervised framework, pedestrian re-identification based on metric learning is studied: metric learning on the labeled samples is assisted by the unlabeled samples, which meets the practical need of video investigation applications where labeled training samples are hard to obtain, and effectively improves re-identification performance when few labeled samples are available.
Owner:WUHAN UNIV
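
A small NumPy sketch of the Mahalanobis distances involved above. The way the color and texture metrics are combined here (a weighted sum controlled by alpha) is an illustrative assumption, not necessarily the patent's fusion rule for Mf.

import numpy as np

def mahalanobis(x, y, M):
    # Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)).
    d = x - y
    return float(np.sqrt(d @ M @ d))

def fused_distance(xc, yc, Mc, xt, yt, Mt, alpha=0.5):
    # Combine the color metric Mc and texture metric Mt; alpha is assumed.
    return alpha * mahalanobis(xc, yc, Mc) + (1 - alpha) * mahalanobis(xt, yt, Mt)

# Ranking a gallery for one query, as used to pick positive/negative samples:
# dists = [fused_distance(q_color, g_color, Mc, q_tex, g_tex, Mt)
#          for g_color, g_tex in gallery]
# ranking = np.argsort(dists)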

Video target detection and tracking method based on optical flow features

The invention provides a video target detection and tracking method based on optical flow features. In the first step, the input image frame sequence is background-sampled and the optical flow vector of each sampled pixel is calculated; the background motion is estimated with the Mean Shift algorithm, the overall saliency of the target is then estimated, and finally a threshold is applied to the saliency detection result to separate the target region from the background region. In the second step, the video target is tracked: the target region is taken as the positive sample and the background region as the negative sample; the target is described by its Haar features and global color features; the original features are sampled and compressed by projection with a random matrix; the similarity between the current target and the target in the previous frame is judged under the Bayesian criterion; and the target is tracked continuously with a particle filter. In this way, multiple cues including the target's motion saliency, color, and texture are fused, which improves the success rate of target detection and allows the target to be tracked quickly, effectively, and continuously.
Owner:湖南优象科技有限公司
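
The "compressed by projection with a random matrix" step above is commonly realized as a very sparse random projection of the high-dimensional Haar/color feature vector, as in compressive tracking. The sketch below follows that reading; the projection dimension and sparsity parameter are assumed values.

import numpy as np

def sparse_random_matrix(n_features, n_compressed, s=3, seed=0):
    # Very sparse random projection matrix: entries are +sqrt(s) or -sqrt(s)
    # each with probability 1/(2s), and 0 otherwise.
    rng = np.random.default_rng(seed)
    probs = [1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]
    return rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)],
                      size=(n_compressed, n_features), p=probs)

def compress_features(features, R):
    # Project the original Haar/color feature vector to a low dimension.
    return R @ features

# e.g. R = sparse_random_matrix(n_features=10_000, n_compressed=50)
#      v = compress_features(haar_and_color_features, R)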

Target tracking method based on hard positive sample generation

The invention discloses a target tracking method based on hard positive sample generation. For each video in the training data, a variational autoencoder is used to learn the corresponding manifold, i.e. a positive-sample generation network; the latent codes obtained by encoding an input image are slightly perturbed, and a large number of positive samples are generated. These positive samples are fed into a hard-positive-sample conversion network, in which an agent is trained to occlude the target object with a background image patch and to keep adjusting the bounding box so that the samples become hard to recognize, achieving the goal of hard positive sample generation and outputting occluded hard positive samples. Based on the generated hard positive samples, a Siamese network is trained and used to match the target image patch against candidate image patches, localizing the target in the current frame, until the whole video has been processed. With this method, the manifold distribution of the target is learned directly from the data, and a large number of diverse positive samples can be obtained.
Owner:ANHUI UNIVERSITY
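
The positive-sample generation step above amounts to perturbing VAE latent codes and decoding them. A hedged PyTorch sketch, where encoder and decoder are placeholders for the trained VAE halves (assumed interfaces: encoder returns (mu, logvar), decoder maps latents to image patches) and the perturbation scale is an arbitrary illustrative value.

import torch

def generate_positive_samples(encoder, decoder, target_patch, n=64, scale=0.1):
    # Encode a target image patch, jitter the latent code slightly, and decode
    # to obtain diverse positive samples lying on the learned manifold.
    with torch.no_grad():
        mu, logvar = encoder(target_patch)              # latent distribution
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn(n, *mu.shape[1:])    # reparameterized samples
        z = z + scale * torch.randn_like(z)             # small extra perturbation
        return decoder(z)                               # batch of positive samples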