334 results about "Pixel classification" patented technology

Stereo matching method based on disparity map pixel classification correction optimization

Inactive · CN103226821A · Match pixel thinning · Accurate matching accuracy · Image enhancement · Image analysis · Cost aggregation · Parallax
The invention relates to the technical field of stereo vision, in particular to a stereo matching method, and addresses the insufficient accuracy of disparity correction and optimization in existing stereo matching methods. The stereo matching method based on disparity-map pixel classification, correction and optimization comprises the following steps: (I) cost aggregation is conducted with the left and right views as references, using a method that combines gray-scale difference with gradient; the resulting left and right disparity maps are subjected to left-right consistency detection to generate an initial reliable disparity map; (II) correlation-credibility detection and weak-texture-area detection are conducted, and each pixel is classified as a stable matching point, an unstable matching point, an occlusion-area point or a weak-texture-area point; (III) the unstable matching points are corrected by an improved adaptive-weight algorithm, and the occlusion-area and weak-texture-area points are corrected by a mismatched-pixel correction method; and (IV) the corrected disparity maps are optimized by a division-based algorithm to obtain dense disparity maps.
Owner: SHANXI UNIV
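The left-right consistency detection of step (I), which seeds the pixel classification of step (II), can be sketched as follows. This is a minimal illustration, not the patent's implementation; the one-disparity-level tolerance is an assumption the patent does not specify.

```python
def lr_consistency(d_left, d_right, tol=1):
    """Mark each left-view pixel reliable if its disparity agrees with the
    disparity of the corresponding right-view pixel (within tol levels)."""
    h, w = len(d_left), len(d_left[0])
    reliable = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = d_left[y][x]
            xr = x - d  # column this pixel maps to in the right view
            if 0 <= xr < w and abs(d_right[y][xr] - d) <= tol:
                reliable[y][x] = True
    return reliable
```

Pixels failing the check are exactly the candidates that steps (II)-(III) go on to classify and correct.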

Method and system for forming very low noise imagery using pixel classification

A method and system for generating images from projection data, comprising: inputting, from at least one data-receiving element, first values representing correlated positional and recorded data, each of said first values forming a point in an array of k data points; forming an image by processing the projection data with a pixel-characterization imaging subsystem that combines the positional and recorded data to form SAR imagery using either a back-projection algorithm or a range-migration algorithm; and integrating positional and recorded data from many aperture positions, comprising: forming the complete aperture A0 for SAR image formation by collecting the return radar data and the coordinates of the receiver and transmitter for each position k along the aperture of N positions; forming an imaging grid of M image pixels, wherein each pixel Pi in the imaging grid is located at coordinate (xP(i), yP(i), zP(i)); selecting and removing a substantial number of aperture positions to form a sparse aperture Ai; repeating the selecting and removing step for L iterations; and classifying each pixel in the image into a target or non-target class based on the statistical distribution of its amplitude across the L iterations (1 ≤ i ≤ L), whereby if an image pixel is classified as associated with a physical object, its value is computed from its statistics; otherwise the pixel is assumed to come from a non-physical object and is given the value zero.
Owner: US SEC THE ARMY +1
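The final classification step can be sketched per pixel. The intuition is that a physical scatterer keeps a stable amplitude across the L sparse-aperture images, while noise and sidelobes fluctuate. The spread-to-mean decision rule and the 0.5 threshold below are illustrative assumptions; the patent only states that the class is decided from the amplitude's statistical distribution.

```python
import statistics

def classify_pixel(amplitudes, ratio_thresh=0.5):
    """Classify one image pixel from its amplitudes across L sparse-aperture
    iterations: a low spread-to-mean ratio marks a stable (physical) target,
    whose value is computed from its statistics; otherwise the pixel is
    treated as non-physical and zeroed."""
    mean = statistics.mean(amplitudes)
    spread = statistics.pstdev(amplitudes)
    if mean > 0 and spread / mean < ratio_thresh:
        return mean    # target: value from its statistics
    return 0.0         # non-physical (noise/sidelobe): suppressed
```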

Semantic segmentation network training method, image semantic segmentation method and devices

Inactive · CN108537292A · Improve training recognition effect · Make up for edge feature loss · Character and pattern recognition · Neural architectures · Feature extraction · Visual perception
The embodiments of the present invention belong to the field of computer vision and provide a semantic segmentation network training method, an image semantic segmentation method, and corresponding devices. The semantic segmentation network training method includes the following steps: a to-be-trained image is acquired; the to-be-trained image is input into a pre-established semantic segmentation network, whose front network layers extract the features of the image, yielding a feature map containing its block, global and edge features; this feature map is input into the rear network layers of the semantic segmentation network for image pixel classification, producing a semantic segmentation image containing the segmentation pixel classes; and the parameters of the semantic segmentation network are updated according to the semantic segmentation image. Compared with the prior art, the method separately extracts and restores the edge features of the to-be-trained image, thereby improving how well training recognizes the edges of a segmentation region.
Owner: Shanghai Baize Network Technology Co., Ltd.
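The "image pixel classification" in the rear layers is typically trained with a per-pixel softmax cross-entropy whose mean drives the parameter update described above. A minimal pure-Python sketch (the patent does not name its loss; cross-entropy is the standard choice this sketch assumes):

```python
import math

def pixel_ce_loss(logits, labels):
    """Mean per-pixel softmax cross-entropy.
    logits: H x W x C nested lists of class scores; labels: H x W class ids."""
    total, n = 0.0, 0
    for row_logits, row_labels in zip(logits, labels):
        for scores, y in zip(row_logits, row_labels):
            m = max(scores)  # shift for numerical stability
            log_z = m + math.log(sum(math.exp(s - m) for s in scores))
            total += log_z - scores[y]
            n += 1
    return total / n
```

A uniform two-class prediction, for example, costs ln 2 per pixel regardless of the label.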

An image semantic segmentation method based on an area and depth residual error network

Active · CN109685067A · Avoids the tendency toward rough segmentation boundaries · Good segmentation effect · Character and pattern recognition · Network model · Pixel classification
The invention discloses an image semantic segmentation method based on regions and a deep residual network. Region-based semantic segmentation extracts mutually overlapping regions at multiple scales, can identify targets of multiple scales, and obtains fine object segmentation boundaries. Methods based on a fully convolutional network use a convolutional neural network to learn features autonomously and can be trained end-to-end on the pixel-by-pixel classification task, but they usually produce rough segmentation boundaries. The method combines the advantages of both: first, candidate regions are generated in the image by a region proposal network; then features are extracted from the image by a deep residual network with dilated convolution to obtain a feature map; the region features are obtained by combining the candidate regions with the feature map and are mapped back to each pixel in the region; and finally, pixel-by-pixel classification is carried out using a global average pooling layer. In addition, a multi-model fusion method is used: different inputs are fed into the same network model during training to obtain several models, and feature fusion is then carried out at the classification layer to obtain the final segmentation result. Experimental results on the SIFT FLOW and PASCAL Context data sets show that the proposed algorithm achieves relatively high average accuracy.
Owner: JIANGXI UNIV OF SCI & TECH
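The multi-model fusion at the classification layer can be illustrated as simple late fusion: average the per-pixel class scores of the several models, then take the per-pixel argmax. This is a stand-in sketch; the patent's actual feature-fusion operator is not specified.

```python
def fuse_predictions(score_maps):
    """Average per-pixel class scores from several models (each an
    H x W x C nested list), then label each pixel by the fused argmax."""
    n_models = len(score_maps)
    h, w = len(score_maps[0]), len(score_maps[0][0])
    labels = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n_classes = len(score_maps[0][y][x])
            fused = [sum(m[y][x][c] for m in score_maps) / n_models
                     for c in range(n_classes)]
            labels[y][x] = max(range(n_classes), key=fused.__getitem__)
    return labels
```

Averaging before the argmax lets a confident model overrule a marginal one, which is why fusion tends to smooth out per-model errors.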

Medical ultrasound assisted automatic diagnosis device and medical ultrasound assisted automatic diagnosis method

The invention relates to a medical ultrasound assisted automatic diagnosis device and method. The method comprises the following steps: obtaining an ultrasound echo signal of a tested position on a human body through an ultrasound probe; obtaining an ultrasound gray-scale image for diagnosis through host processing; inputting the ultrasound gray-scale image into an automatic diagnosis module; and performing computer analysis on the image in the automatic diagnosis module, where the analysis includes pixel classification, parameter calculation and parameter abnormality judgment. In the pixel classification, the pixel data of the image is divided into lesion-suspected pixels and normal-tissue pixels; in the parameter calculation, geometric and gray-scale parameters are calculated for the lesion-suspected pixels and their surrounding pixels; and in the parameter abnormality judgment, each geometric and gray-scale parameter is checked for abnormality, the disease degree is judged from the abnormal parameters, and a detection report is finally output. By applying ultrasound image computer processing, the device and method provide gland-lesion assisted automatic diagnosis equipment that improves the efficiency and accuracy of gland ultrasound examination by the user.
Owner: CHISON MEDICAL TECH CO LTD
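The pixel classification and the first geometric parameter can be sketched together. The fixed intensity threshold below is purely illustrative (hypoechoic lesions appear darker in gray-scale ultrasound); the patent does not disclose its classifier.

```python
def classify_pixels(image, thresh=60):
    """Split a gray-scale image (H x W intensity lists, 0-255) into
    lesion-suspected (True) and normal-tissue (False) pixels by a
    darkness threshold -- an assumed stand-in classifier."""
    return [[px < thresh for px in row] for row in image]

def lesion_area(mask):
    """Geometric parameter: count of lesion-suspected pixels."""
    return sum(sum(row) for row in mask)
```

Downstream, an abnormality judgment would compare such parameters (area, mean gray level, and so on) against reference ranges.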

High-resolution remote sensing image impervious surface extraction method and system based on deep learning and semantic probability

Active · CN108985238A · Good, reasonable impervious surface extraction results · Mathematical models · Ensemble learning · Conditional random field · Sample image
A high-resolution remote sensing image impervious surface extraction method and system based on deep learning and semantic probability. The method includes: obtaining a high-resolution remote sensing image of a target region, normalizing the image data, and dividing it into sample images and test images; constructing a deep convolutional network composed of multiple convolution layers, pooling layers and corresponding deconvolution layers, and extracting the image features of each sample image; predicting each sample image pixel by pixel, constructing a loss function from the error between the predicted and true values, and updating and training the network parameters; and extracting the test image features with the deep convolutional network, carrying out pixel-by-pixel classification prediction, then constructing a conditional random field model of the test image from the semantic association information between pixel points, globally optimizing the test image prediction results, and obtaining the extraction results. The invention can accurately and automatically extract the impervious surface of a remote sensing image and meets practical application requirements of urban planning.
Owner: WUHAN UNIV
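The effect of the conditional-random-field refinement can be illustrated with a much cruder stand-in: a single majority-vote pass over 4-neighborhoods, which, like a CRF's pairwise term, overturns isolated pixel predictions that disagree with their spatial context. This sketch is not the patent's CRF model, only an intuition aid.

```python
def majority_smooth(labels):
    """One majority-filter pass over each pixel and its 4-neighbors,
    a crude stand-in for CRF-style spatial smoothing of a label map."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for y in range(h):
        for x in range(w):
            votes = {}
            for dy, dx in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    c = labels[ny][nx]
                    votes[c] = votes.get(c, 0) + 1
            out[y][x] = max(votes, key=votes.get)
    return out
```

A real CRF additionally weights the pairwise term by appearance similarity, so smoothing stops at genuine image edges rather than at every label boundary.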

Remote sensing image semantic segmentation method based on region description self-attention mechanism

The invention discloses a remote sensing image semantic segmentation method based on a region-description self-attention mechanism. In the method, a visible-light remote sensing image is input into an encoder, its high-level semantic features are extracted, and feature maps of different levels are obtained; based on these feature maps, global scene extraction and essential feature extraction driven by a self-attention module are conducted, yielding a scene-guiding feature map and a noiseless feature map, respectively; the scene-guiding feature map and the noiseless feature map are then input into a decoder, up-sampled back to the size of the original image, and classified pixel by pixel to obtain the remote sensing image semantic segmentation result. Through the encoder that extracts the semantic features, the self-attention module that strengthens the internal relations of the image, and the decoder that maps the attention-weighted semantic features back to the original space for pixel-by-pixel classification, the receptive field of the model is enlarged, the model can adapt to scale changes in the data, and the problem of class imbalance can be alleviated.
Owner: BEIHANG UNIV
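The self-attention module's core operation can be sketched as scaled dot-product attention over the flattened feature map. This minimal version omits the learned query/key/value projections of a full module (an assumption for brevity): every position is re-expressed as a weighted sum of all positions, which is what enlarges the effective receptive field.

```python
import math

def self_attention(feats):
    """Scaled dot-product self-attention over a list of feature vectors
    (queries = keys = values = feats; no learned projections)."""
    d = len(feats[0])
    scale = math.sqrt(d)
    out = []
    for q in feats:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in feats]
        m = max(scores)  # shift for a numerically stable softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * k[j] for w, k in zip(weights, feats))
                    for j in range(d)])
    return out
```

Because the weights form a softmax, each output vector is a convex combination of the inputs; each position still attends most strongly to itself when features are distinct.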