
734 results for patented technologies achieving a "good segmentation effect"

Method for recognizing license plate characters based on coarse-grid feature extraction and a BP neural network

The invention discloses a license plate character recognition method based on coarse-grid feature extraction and a BP neural network. The method comprises the following steps: (1) preprocessing the license plate image to eliminate various interferences and obtain the minimal license plate region; (2) segmenting the license plate characters by combining vertical projection with a drop-fall algorithm; (3) screening the segmentation results to eliminate interference from vertical frame lines, separators, rivets and the like; (4) normalizing the characters according to the positions of their centroids; (5) taking each pixel of the normalized character dot matrix as a grid cell to extract the characters' original features; (6) designing a BP neural network with a secondary classifier according to the characteristics of the license plate; and (7) constructing a training sample database to train the neural network, and adjusting the training samples according to the recognition results to achieve accurate recognition. The method effectively eliminates noise interference, segments characters quickly and accurately, recognizes Chinese characters stably and effectively, and balances real-time performance and accuracy throughout the recognition process.
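A minimal sketch of step (2)'s vertical-projection stage, assuming a binarized plate image given as a 2-D list of 0/1 pixels. The drop-fall refinement for touching characters is omitted, and all names are illustrative:

```python
def vertical_projection(binary_img):
    """Count foreground pixels in each column."""
    return [sum(col) for col in zip(*binary_img)]

def segment_columns(binary_img):
    """Return (start, end) column ranges whose projection is non-zero."""
    segments, start = [], None
    for x, v in enumerate(vertical_projection(binary_img)):
        if v > 0 and start is None:
            start = x                       # a character run begins
        elif v == 0 and start is not None:
            segments.append((start, x))     # a blank column ends the run
            start = None
    if start is not None:
        segments.append((start, len(binary_img[0])))
    return segments

# Two toy "characters" separated by one blank column
plate = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
]
print(segment_columns(plate))  # → [(0, 2), (3, 5)]
```

Runs of blank columns give the cut positions; drop-fall would then split any run wider than a plausible character.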
Owner:ZHEJIANG NORMAL UNIVERSITY

A retinal blood vessel image segmentation method based on a multi-scale feature convolutional neural network

The invention belongs to the technical field of image processing. Its aims are to realize automatic extraction and segmentation of retinal blood vessels, to improve robustness against interfering factors such as vessel shadows and tissue deformation, and to raise the average accuracy of the vessel segmentation result. The invention relates to a retinal blood vessel image segmentation method based on a multi-scale feature convolutional neural network. First, the retinal images are preprocessed, including adaptive histogram equalization and gamma brightness adjustment. To address the scarcity of retinal image data, data amplification is carried out by cropping the experimental images into blocks. Second, a multi-scale retinal vessel segmentation network is constructed by introducing spatial pyramid atrous pooling into a convolutional neural network with an encoder-decoder structure; the model parameters are optimized over many iterations to realize automatic pixel-level segmentation of the retinal blood vessels and obtain the retinal vessel segmentation map. The invention is mainly applied to the design and manufacture of medical devices.
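The preprocessing and data-amplification steps above can be sketched as follows, assuming 8-bit gray images stored as lists of rows; the gamma value and block size are illustrative assumptions:

```python
def gamma_adjust(row, gamma=0.8):
    """Gamma brightness adjustment for one row of 8-bit gray values."""
    return [round(255 * (p / 255) ** gamma) for p in row]

def crop_blocks(img, size):
    """Split a 2-D image into non-overlapping size x size patches (amplification)."""
    h, w = len(img), len(img[0])
    return [[row[x:x + size] for row in img[y:y + size]]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

img = [[0, 64, 128, 255]] * 4          # a toy 4x4 image
patches = crop_blocks(img, 2)
print(len(patches))                     # → 4 patches from a 4x4 image
```

Gamma values below 1 brighten mid-tones, which helps expose faint vessels before the network sees the patches.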
Owner:TIANJIN UNIV

Generative adversarial network-based pixel-level portrait matting method

The invention discloses a generative adversarial network-based pixel-level portrait matting method, and solves the problem that massive, costly data sets are needed to train and optimize a network in the field of automatic matting. The method comprises the steps of presetting a generative network and a judgment network in an adversarial learning mode, wherein the generative network is a deep neural network with skip connections; inputting a real image containing a portrait to the generative network to output a person-and-scene segmentation image; inputting the first and second image pairs to the judgment network to output a judgment probability and determine the loss functions of the generative network and the judgment network; minimizing the values of the two loss functions to adjust the configuration parameters of the two networks and finish training the generative network; and inputting a test image to the trained generative network to generate the person-and-scene segmentation image, converting the generated image into a probability matrix, and finally inputting the probability matrix to a conditional random field for further optimization. The method reduces the number of training images required and improves both efficiency and segmentation precision.
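A hedged sketch of the adversarial objective described above: binary cross-entropy losses for the judgment network (real pairs labeled 1, generated pairs 0) and for the generative network (which wants its outputs judged real). The probabilities below stand in for the judgment network's outputs; no specific framework is assumed:

```python
import math

def bce(p, label):
    """Binary cross-entropy for one predicted probability p in (0, 1)."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)   # clamp to avoid log(0)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def discriminator_loss(d_real, d_fake):
    """Judgment network: score real pairs toward 1, generated pairs toward 0."""
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    """Generative network: push the judgment of its output toward 1."""
    return bce(d_fake, 1.0)

# An undecided judge (p = 0.5) yields loss 2·ln 2 for D and ln 2 for G.
print(discriminator_loss(0.5, 0.5), generator_loss(0.5))
```

Minimizing these two losses alternately is the adversarial training loop the abstract refers to.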
Owner:XIDIAN UNIV

Mura defect detection method based on sample learning and human visual characteristics

The invention discloses a mura defect detection method based on sample learning and human visual characteristics, belonging to the field of TFT-LCD display defect detection. The method comprises the following steps: first, preprocessing the TFT-LCD display image with Gaussian filter smoothing and Hough-transform rectangle detection, removing a large amount of noise and segmenting the image regions to be inspected; then applying the PCA algorithm to learn from a large number of defect-free samples, automatically extracting the differential characteristics between background and target, and reconstructing a background image; and then thresholding the difference between a test image and the reconstructed background, with the background reconstruction and threshold calculation jointly forming the detection model. Based on training-sample learning, a relationship model between the background structure information and the threshold value is established, and a self-adaptive segmentation algorithm based on human visual characteristics is proposed. The main purpose of the invention is to detect different mura defects in a TFT-LCD and to raise the accuracy and qualification rate of mura defect detection.
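A minimal sketch of the final thresholding step above: pixels whose deviation from the PCA-reconstructed background exceeds a threshold are marked as defect candidates. The fixed threshold here is an illustrative assumption; the patent derives it adaptively from human visual characteristics:

```python
def defect_mask(test_img, background, thresh):
    """Mark pixels whose deviation from the reconstructed background exceeds thresh."""
    return [[1 if abs(t - b) > thresh else 0 for t, b in zip(t_row, b_row)]
            for t_row, b_row in zip(test_img, background)]

background = [[100, 100], [100, 100]]
test_img   = [[102, 100], [100, 140]]   # one low-contrast pixel, one strong deviation
print(defect_mask(test_img, background, thresh=10))  # → [[0, 0], [0, 1]]
```

The low-contrast deviation (2 gray levels) falls below the threshold and is ignored, matching the idea that only visually perceptible mura should be flagged.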
Owner:NANJING UNIV

Image-based fire flame identification method

The invention discloses an image-based fire flame identification method. The method comprises the following steps: 1, image capture; 2, image processing, which comprises 201, image preprocessing, and 202, fire identification. Fire identification is conducted with a prebuilt binary classification model, a support vector machine that classifies flame versus non-flame situations. The building process of the binary classification model comprises the steps of: I, image information capture; II, feature extraction; III, training sample acquisition; IV, binary classification model building, consisting of IV-1, kernel function selection, and IV-2, classification function determination, in which parameters C and D are optimized with the conjugate gradient method and the optimized parameters are converted into gamma and sigma^2; and V, binary classification model training. The method has simple steps, is convenient to operate, and offers high reliability and a good practical effect; it effectively solves the problems of low reliability, high false and missed alarm rates, and poor performance that existing video fire detection systems exhibit in complex environments.
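A hedged sketch of steps IV-1 and IV-2: an RBF (Gaussian) kernel parameterized by sigma^2 and the resulting SVM decision function whose sign gives the flame / non-flame class. The support vectors, multipliers, and bias below are illustrative stand-ins, not trained values:

```python
import math

def rbf_kernel(x, y, sigma2):
    """Gaussian kernel exp(-||x - y||^2 / (2 * sigma^2))."""
    d2 = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-d2 / (2 * sigma2))

def svm_decision(x, support_vectors, alphas, labels, bias, sigma2):
    """Kernel SVM decision value; its sign is the predicted class."""
    return sum(a * yl * rbf_kernel(x, sv, sigma2)
               for sv, a, yl in zip(support_vectors, alphas, labels)) + bias

svs    = [[0.0, 0.0], [1.0, 1.0]]   # toy support vectors
alphas = [1.0, 1.0]
labels = [-1.0, 1.0]                 # -1 = non-flame, +1 = flame
print(svm_decision([0.9, 0.9], svs, alphas, labels, bias=0.0, sigma2=0.5))
```

A query point near the positive support vector yields a positive decision value, i.e. a flame classification.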
Owner:东开数科(山东)产业园有限公司

Method for HMT image segmentation based on the nonsubsampled Contourlet transform

The invention discloses an HMT image segmentation method based on the nonsubsampled Contourlet transform. The method mainly addresses the poor region consistency and edge preservation of prior segmentation methods, and comprises the following steps: (1) applying the nonsubsampled Contourlet transform to the image to be segmented and to the training images of all categories to obtain multi-scale transform coefficients; (2) estimating the model parameters from the nonsubsampled Contourlet coefficients of the training images and a hidden Markov tree representing the one-to-one parent-child state relationship; (3) calculating the likelihood values of the image to be segmented in each scale's coefficient subbands, and obtaining the maximum-likelihood multi-scale classification by combining a label tree with the multi-scale likelihood function; (4) updating the category labels at each scale using the context-5 contextual information model; and (5) updating the category labels with a Markov random field model that accounts for the spatial correlation between adjacent pixels in the image, obtaining the final segmentation result. The invention achieves good region consistency and edge preservation, and can be applied to the segmentation of synthetic texture images.
Owner:探知图灵科技(西安)有限公司

An image semantic segmentation method based on regions and a deep residual network

Active CN109685067A — Solves the drawback of rough segmentation boundaries; good segmentation effect. Tags: character and pattern recognition, network model, pixel classification
The invention discloses an image semantic segmentation method based on regions and a deep residual network. Region-based semantic segmentation extracts mutually overlapping regions at multiple scales, can identify targets of multiple sizes, and obtains fine object segmentation boundaries. Fully convolutional methods use a convolutional neural network to learn features autonomously and can be trained end-to-end on a pixel-wise classification task, but they usually produce rough segmentation boundaries. The invention combines the advantages of both: first, candidate regions are generated in an image by a region proposal network; then features are extracted from the image by a deep residual network with dilated convolution to obtain a feature map; region features are obtained by combining the candidate regions with the feature map, and each region's features are mapped to every pixel in that region; finally, pixel-wise classification is carried out with a global average pooling layer. In addition, a multi-model fusion method is used: the same network model is trained with different inputs to obtain multiple models, and feature fusion is carried out at the classification layer to obtain the final segmentation result. Experimental results on the SIFT Flow and PASCAL Context data sets show that the proposed algorithm achieves relatively high average accuracy.
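A 1-D sketch of the dilated (atrous) convolution used in the deep residual network above: the kernel taps are spaced `dilation` samples apart, enlarging the receptive field without adding parameters. Valid-mode only; the signal and kernel are illustrative:

```python
def dilated_conv1d(signal, kernel, dilation):
    """Valid-mode 1-D convolution with kernel taps spaced `dilation` apart."""
    span = (len(kernel) - 1) * dilation   # extent of the dilated kernel
    return [sum(k * signal[i + j * dilation] for j, k in enumerate(kernel))
            for i in range(len(signal) - span)]

# A 2-tap averaging kernel with dilation 2 pairs samples two steps apart
print(dilated_conv1d([1, 2, 3, 4, 5], [1, 1], dilation=2))  # → [4, 6, 8]
```

With dilation 1 this reduces to an ordinary convolution; larger dilation rates let deep layers see wider context at the same cost, which is why they suit dense pixel-wise prediction.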
Owner:JIANGXI UNIV OF SCI & TECH

Automatic segmentation method for pulmonary parenchyma in CT images

The invention provides an automatic segmentation method for the pulmonary parenchyma in CT images. The CT image is segmented by running a random walker algorithm twice to obtain the precise pulmonary parenchyma: the first pass segments an approximate pulmonary parenchyma mask, and the second pass repairs defects around the lung periphery and segments the precise pulmonary parenchyma result. The seed points required by the random walker segmentation are obtained quickly and automatically by methods including Otsu thresholding and mathematical morphology, so manual calibration is not needed and the workload and operating time of doctors are greatly reduced. The method follows a coarse-to-fine automatic process of "selecting seed points twice, segmenting twice", which reduces the dependence of the segmentation result on the initial seed point selection and ensures the accuracy, integrity, real-time performance, and robustness of the result. The method is funded by the Natural Science Foundation of China (No. 61375075).
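A self-contained sketch of the Otsu thresholding used above to seed the segmentation automatically: pick the gray level that maximizes between-class variance of the histogram. Input is a flat list of 8-bit gray values; the toy data is illustrative:

```python
def otsu_threshold(pixels, levels=256):
    """Gray level maximizing between-class variance (Otsu's method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]                       # background weight (levels <= t)
        if w_b == 0:
            continue
        w_f = total - w_b                    # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy "image": dark lung parenchyma vs. bright surrounding tissue
print(otsu_threshold([10] * 50 + [200] * 50))  # → 10
```

Pixels at or below the returned level form the dark (parenchyma) class, from which seed points can be drawn after morphological cleanup.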
Owner:HEBEI UNIVERSITY

Character segmentation method for images with complicated backgrounds

The invention discloses a character segmentation method for images with complicated backgrounds. The method mainly comprises the following steps: reading an image and locating its character region; extracting low-level color and texture features of the character region and fusing them into a low-level local feature; extracting a label-layer global feature of the character region; fusing the low-level local feature with the label-layer global feature to obtain the feature vector of every pixel in the character region; training on these feature vectors to obtain a first-stage segmentation classifier and carrying out first-stage character segmentation with it; labeling the connected components of the first-stage result and extracting connected-component features to carry out second-stage character segmentation; and outputting the character segmentation result. The method improves the accuracy of character segmentation in complicated background images and has a degree of generality and practicability.
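A minimal sketch of the feature-fusion step above: each pixel's vector is the concatenation of its local color feature, its local texture feature, and the shared label-layer global feature of the character region. Feature dimensions are illustrative assumptions:

```python
def fuse_features(color_feats, texture_feats, global_feat):
    """Per-pixel fused vector: local color ++ local texture ++ region-global."""
    return [c + t + global_feat for c, t in zip(color_feats, texture_feats)]

color   = [[0.1, 0.2], [0.3, 0.4]]   # a 2-D color feature per pixel
texture = [[0.5], [0.6]]             # a 1-D texture feature per pixel
fused = fuse_features(color, texture, global_feat=[0.9, 0.8])
print(fused[0])  # → [0.1, 0.2, 0.5, 0.9, 0.8]
```

Every pixel in the region shares the same global component, so the classifier sees both where a pixel sits locally and what kind of region it belongs to.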
Owner:SHANDONG UNIV OF SCI & TECH

Image segmentation method based on rapid density clustering algorithm

Active CN106447676A — Efficient adaptive segmentation; improved accuracy. Tags: image analysis, pattern recognition, scale variation
The invention discloses an image segmentation method based on a fast density clustering algorithm. The method comprises the following steps: 1) preprocessing and initializing the natural image to be processed, including noise-reduction filtering, gray-level registration, region division, and scale zooming; 2) computing similarity distances between data points in the scale-adjusted sub-images to obtain the correlation between pixels; 3) segmenting each sub-image concurrently: drawing a decision graph based on the density clustering algorithm, determining cluster centers from the decision graph by residual analysis, and classifying the remaining points of the original-scale sub-image by comparing similarity distances; and 4) merging the segmented sub-images and performing a secondary re-clustering to obtain a segmentation result of the original size. The parameter-robust method can automatically determine the number of segmentation classes and achieves a relatively high segmentation accuracy.
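A hedged sketch of the decision-graph quantities used by fast density-peak clustering in step 3: each point's local density rho (neighbors within cutoff `d_c`) and delta, its distance to the nearest point of higher density. Cluster centers are points with both large rho and large delta; the toy points and cutoff are illustrative:

```python
import math

def density_and_delta(points, d_c):
    """Return (rho, delta) lists for the density-peak decision graph."""
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < d_c)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        # Highest-density points get the maximum distance by convention
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta

pts = [(0, 0), (0, 1), (1, 0), (10, 10)]   # a tight cluster plus one outlier
rho, delta = density_and_delta(pts, d_c=2.0)
print(rho)  # → [2, 2, 2, 0]
```

Plotting rho against delta is the "decision graph" the abstract mentions; the residual analysis then selects the outstanding points as centers.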
Owner:ZHEJIANG UNIV OF TECH

Yellow River ice semantic segmentation method based on multi-attention mechanism double-flow fusion network

The invention discloses a Yellow River ice semantic segmentation method based on a multi-attention mechanism double-flow fusion network, addressing the poor accuracy of existing Yellow River ice detection methods. According to the technical scheme, data sets are first collected and labeled, and the labeled data are divided into a training set and a test set. A segmentation network is then constructed comprising a shallow branch and a deep branch: a channel attention module is added to the deep branch, a position attention module is added to the shallow branch, and a fusion module fuses the two branches. The training data are fed into the network in batches, and the constructed neural network is trained with a cross-entropy loss and the RMSprop optimizer. Finally, a test image is input and evaluated with the trained model. The method can selectively carry out multi-level, multi-scale feature fusion, captures context information through the attention mechanism, obtains feature maps of higher resolution, and achieves a better segmentation result.
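A compact sketch of the channel-attention idea used in the deep branch: squeeze each channel to a scalar by global average pooling, squash it with a sigmoid, and rescale the channel by that weight. The learned fully connected layers of a real attention module are omitted here for brevity (an assumption):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps):
    """feature_maps: list of channels, each a 2-D list of activations."""
    out = []
    for ch in feature_maps:
        gap = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))  # squeeze
        w = sigmoid(gap)                                            # excite
        out.append([[v * w for v in row] for row in ch])            # rescale
    return out

maps = [[[1.0, 1.0], [1.0, 1.0]]]       # a single 2x2 channel of ones
print(channel_attention(maps)[0][0][0])  # sigmoid(1) ≈ 0.731
```

Channels with stronger average response get weights closer to 1, so informative channels dominate the fused features.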
Owner:NORTHWESTERN POLYTECHNICAL UNIV

BP neural network image segmentation method and device based on adaptive genetic algorithm

Active CN106023195A — Solves the problem of evolutionary stagnation; avoids local convergence. Tags: image enhancement, image analysis, mutation, chromosome encoding
The invention relates to a BP neural network image segmentation method and device based on an adaptive genetic algorithm. The method comprises the following steps: 1) analyzing the image to be segmented and generating training samples for a neural network; 2) setting the neural network parameters and population parameters, and carrying out chromosome encoding; 3) inputting the training samples to train the network, optimizing the weights and thresholds of the network with a new adaptive genetic algorithm whose crossover and mutation operations are adaptive and which introduces an adjustment coefficient; and 4) inputting the image to be segmented and classifying it with the trained neural network to achieve the segmentation. The device comprises a training sample generation module, a neural network structure determination module, a network training module, and an image segmentation module. The adjustment coefficient, which is related to the number of evolutionary generations, solves the problem of individual evolution stagnating in the initial stage of population evolution, and also avoids the local convergence caused when individual fitness values are close, thereby obtaining a neural network that best represents the image features and achieving more precise image segmentation.
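A hedged sketch of an adaptive crossover probability in the spirit of step 3: individuals fitter than the population average receive a lower crossover probability (Srinivas-Patnaik style), and a generation-dependent adjustment coefficient keeps the operator exploratory early on. The decay schedule and bounds are illustrative assumptions, not the patent's exact formula:

```python
def adaptive_pc(f_best, f_avg, f_ind, gen, max_gen, pc_max=0.9, pc_min=0.6):
    """Adaptive crossover probability for a maximization problem."""
    if f_ind < f_avg or f_best == f_avg:
        return pc_max                      # weak individuals always cross over
    # Fitter-than-average individuals are protected by a lower probability
    pc = pc_max - (pc_max - pc_min) * (f_ind - f_avg) / (f_best - f_avg)
    adj = 1.0 - gen / max_gen              # adjustment coefficient (assumed decay)
    return pc_min + (pc - pc_min) * adj

print(adaptive_pc(f_best=10, f_avg=5, f_ind=5, gen=0, max_gen=100))
print(adaptive_pc(f_best=10, f_avg=5, f_ind=10, gen=0, max_gen=100))
```

Early in the run (`adj` near 1) probabilities span the full range; late in the run they compress toward `pc_min`, damping disruption once fitness values cluster.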
Owner:HENAN NORMAL UNIV