875 results about "Superpixel segmentation" patented technology

Significant object detection method based on sparse subspace clustering and low-order expression

Active, CN105574534A | Solves the problem that large-scale salient objects are difficult to detect; overcomes the difficulty of detecting large-scale saliency objects completely and consistently | Image enhancement, Image analysis, Goal recognition, Image compression
The invention discloses a significant object detection method based on sparse subspace clustering and low-order expression. The method comprises the steps of: 1, carrying out superpixel segmentation and clustering on an input image; 2, extracting the color, texture and edge characteristics of each superpixel in the clusters, and constructing cluster characteristic matrixes; 3, ranking all superpixel characteristics according to the magnitude of color contrast, and constructing a dictionary; 4, according to the dictionary, constructing a combined low-order expression model, solving the model and decomposing the characteristic matrixes of the clusters to obtain low-order expression coefficients, and calculating the significance factors of the clusters; and 5, mapping the significance value of each cluster back onto the input image according to the spatial position, and obtaining a saliency map of the input image. According to the invention, significant objects of relatively large size in the image can be detected completely and consistently, noise in the background is suppressed, and the robustness of significant object detection for images with complex backgrounds is improved. The significant object detection method is applicable to image segmentation, object identification, image restoration and self-adaptive image compression.
Owner:XIDIAN UNIV
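A minimal Python sketch of the early stages of this pipeline is given below: SLIC superpixel segmentation, a mean Lab color feature per superpixel (standing in for the patent's color/texture/edge features), k-means clustering of the superpixels, and a plain color-contrast score per cluster in place of the joint low-order expression model. The file name, segment count and cluster count are illustrative assumptions.

```python
# Sketch: superpixel features -> clustering -> contrast-based cluster saliency.
# The low-order (low-rank) expression step is replaced by a simple color contrast score.
import numpy as np
from skimage.io import imread
from skimage.color import rgb2lab
from skimage.segmentation import slic
from sklearn.cluster import KMeans

image = imread("input.jpg")                       # hypothetical input path
labels = slic(image, n_segments=300, compactness=10, start_label=0)
lab = rgb2lab(image)

# Mean Lab color per superpixel as a stand-in for the color/texture/edge features.
n_sp = labels.max() + 1
feats = np.array([lab[labels == i].mean(axis=0) for i in range(n_sp)])

# Cluster superpixels, then score each cluster by its color contrast to the other clusters.
n_clusters = 8
cluster_ids = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)
centers = np.array([feats[cluster_ids == c].mean(axis=0) for c in range(n_clusters)])
contrast = np.array([
    np.linalg.norm(centers[c] - np.delete(centers, c, axis=0), axis=1).mean()
    for c in range(n_clusters)
])
contrast = (contrast - contrast.min()) / (np.ptp(contrast) + 1e-8)

# Map each cluster's saliency back to the pixels of its superpixels.
saliency_map = contrast[cluster_ids][labels]
```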

Target detection method in Codebook dynamic scene based on superpixels

The invention discloses a target detection method in a Codebook dynamic scene based on superpixels. The method comprises the following steps: (1) a superpixel partition method is used to partition the video frames into K superpixels; (2) a Codebook background modeling method is used to establish a Codebook for each superpixel obtained in step (1); each Codebook comprises one or more Codewords, each Codeword has maximum and minimum threshold values for learning and for detection, and background modeling is completed; (3) after background modeling is completed, each newly arriving video frame is subjected to target detection: if a pixel value of the current frame accords with the distribution of the background pixel values, it is marked as background, otherwise it is marked as foreground; finally, the current video frame is used to update the background model. The method solves the problems that the traditional Codebook background modeling algorithm requires a large amount of computation and memory and that the established Codewords are not accurate; it improves the accuracy and speed of target detection and meets the requirement for real-time accuracy, thereby meeting the requirement for intelligent monitoring in real life.
Owner:苏州华创智城科技有限公司
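The sketch below is a heavily simplified take on the idea of per-superpixel background modeling: superpixels are computed once on the first frame, each keeps a single running-mean "codeword" with a fixed color tolerance instead of a full Codebook of Codewords, and each new frame is classified superpixel by superpixel. The video path, update rate and threshold are assumptions.

```python
# Sketch: one simplified "codeword" (running mean color + tolerance) per superpixel.
import numpy as np
import cv2
from skimage.segmentation import slic

cap = cv2.VideoCapture("video.mp4")               # hypothetical video path
ok, frame = cap.read()
labels = slic(frame, n_segments=400, compactness=10, start_label=0)
n_sp = labels.max() + 1

def sp_means(img, labels, n_sp):
    """Mean BGR color of every superpixel."""
    return np.array([img[labels == i].mean(axis=0) for i in range(n_sp)])

background = sp_means(frame, labels, n_sp)        # initial per-superpixel model
alpha, thresh = 0.05, 30.0                        # assumed update rate and color tolerance

while True:
    ok, frame = cap.read()
    if not ok:
        break
    means = sp_means(frame, labels, n_sp)
    dist = np.linalg.norm(means - background, axis=1)
    foreground_sp = dist > thresh                 # superpixels that left the background model
    mask = foreground_sp[labels].astype(np.uint8) * 255
    # Update only the superpixels that still match the background.
    background[~foreground_sp] = ((1 - alpha) * background[~foreground_sp]
                                  + alpha * means[~foreground_sp])
    cv2.imshow("foreground", mask)
    if cv2.waitKey(1) == 27:
        break
```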

Tracking method based on integral and partial recognition of object

Active, CN103413120A | Valid representation; flexible representation | Character and pattern recognition, Partial representation, Superpixel segmentation
The invention discloses a tracking method based on integral and partial recognition of an object. For recognition based on partial information, superpixel division is performed on all candidate areas, different weights are given to the superpixels according to the characteristics of the partial representation of the object, a weighted similarity measurement is proposed, and the confidence coefficients of all candidate target areas are calculated. For recognition based on integrality, an object property measurement is introduced into the target detection part of the current frame; color, edges and superpixels are chosen as three types of cues for the object property measurement, marking rules are provided for each cue, and the confidence coefficients are calculated by combining the three types of cues with the recognition based on partial information, so as to mark all candidate target areas in the extension area; the target area is then determined according to the marks. The tracking method can better describe the target object in a dynamically changing tracking scene; by combining the object property measurement, the target area converges on the target object more tightly, the probability of background appearing in the target area is reduced, and the tracking accuracy and stability are improved.
Owner:SOUTH CHINA AGRI UNIV
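The weighted superpixel similarity for a candidate region might look roughly like the sketch below, where each superpixel's weight reflects how well its mean color matches a simple target color model. The patent's actual partial-representation features and weighting rules are not specified here, so both the feature and the weighting rule are invented placeholders.

```python
# Sketch: weighted similarity of a candidate region to a target color model,
# with per-superpixel weights (simplified placeholder for the patent's weighting rules).
import numpy as np
from skimage.segmentation import slic

def candidate_confidence(region, target_mean_color, n_segments=50):
    """Weighted superpixel similarity of a candidate region (H x W x 3) to a target color."""
    labels = slic(region, n_segments=n_segments, compactness=10, start_label=0)
    n_sp = labels.max() + 1
    means = np.array([region[labels == i].mean(axis=0) for i in range(n_sp)])
    dist = np.linalg.norm(means - np.asarray(target_mean_color), axis=1)
    weights = np.exp(-dist / (dist.mean() + 1e-8))   # superpixels matching the target count more
    similarity = np.exp(-dist / 50.0)                # per-superpixel similarity (assumed scale)
    return float((weights * similarity).sum() / weights.sum())
```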

Moving object extraction method based on optical flow method and superpixel division

The invention discloses a moving object extraction method based on superpixel division and an optical flow method, and mainly addresses problems of existing moving object extraction methods such as heavy noise, loss of high-frequency information and inaccurate boundaries. The implementation steps are as follows: (1) input an image and pre-divide it into a superpixel set S to obtain a label map I2; (2) take two adjacent frames of a video sequence and determine the rough position of the moving object with the Horn-Schunck optical flow method; (3) use the optical flow method to obtain the horizontal velocity u and the vertical velocity v, where V is the magnitude of the optical flow; (4) perform median filtering, Gaussian filtering, binarization and morphological opening and closing operations on the optical flow result V to obtain V4; (5) use the superpixel division result to further correct the optical flow result and extract the accurate moving object, so that the superpixels belonging to the moving area are extracted accurately. Simulation experiments show that, compared with the prior art, the moving object extraction method has the advantages of simple operation, low noise and clear boundaries, and can be used for extracting moving objects from video sequences.
Owner:XIDIAN UNIV
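A sketch of the flow-plus-superpixel refinement follows. OpenCV's Farneback dense flow stands in for the Horn-Schunck flow named in the abstract; the magnitude V is cleaned with median filtering, Gaussian filtering, binarization and morphological opening/closing, and a superpixel is kept only if most of its pixels are flagged as moving. The frame paths, flow threshold and kernel sizes are assumptions.

```python
# Sketch: dense optical flow -> cleaned magnitude mask -> superpixel-level refinement.
import numpy as np
import cv2
from skimage.segmentation import slic

frame1 = cv2.imread("frame1.png")                 # hypothetical adjacent frames
frame2 = cv2.imread("frame2.png")
g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

# Farneback dense flow (stand-in for Horn-Schunck); V is the flow magnitude.
flow = cv2.calcOpticalFlowFarneback(g1, g2, None, 0.5, 3, 15, 3, 5, 1.2, 0)
u, v = flow[..., 0], flow[..., 1]
V = np.sqrt(u ** 2 + v ** 2).astype(np.float32)

# Median filter, Gaussian filter, binarization, morphological opening and closing.
V = cv2.medianBlur(V, 5)
V = cv2.GaussianBlur(V, (5, 5), 0)
_, mask = cv2.threshold(V, 1.0, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# Keep a superpixel only if the majority of its pixels are flagged as moving.
labels = slic(frame1, n_segments=400, compactness=10, start_label=0)
moving = np.zeros_like(mask)
for i in range(labels.max() + 1):
    region = labels == i
    if mask[region].mean() > 127:
        moving[region] = 255
```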

Low-resolution airport target detection method based on hierarchical reinforcement learning

The invention provides a low-resolution airport target detection method based on hierarchical reinforcement learning. The method comprises the steps of: (1) carrying out superpixel division on an input remote sensing image; (2) extracting the boundary superpixels of the input image to construct a background information set; (3) learning the feature similarity between each superpixel and the background information set through a minimum-distance similarity measurement operator, and extracting a deep-layer feature; (4) defining the ending condition of the learning process and judging whether step (3) satisfies it; if so, executing step (6), otherwise executing step (5); (5) using the back-propagation principle, applying the deep-layer feature from step (3) as a reinforcement factor to the input image of the current layer, taking the reinforced image as the input image of the next layer of the learning process, and returning to step (1) to continue learning at the next layer; (6) stopping the learning, taking the deep-layer feature learned at the current layer in step (3) as the saliency feature of each superpixel, and obtaining the final saliency map; (7) generating the linear feature map of the original image, fusing the linear feature map with the saliency map, determining the airport target area through salient area positioning and area combination, and completing the target detection.
Owner:BEIHANG UNIV
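A single layer of the hierarchy could be sketched as below: superpixel segmentation, a background information set built from the boundary superpixels, and a per-superpixel score taken as the minimum feature distance to that set. The mean Lab color is an assumed feature, the input path is hypothetical, and the cross-layer reinforcement loop (steps 4 and 5) is omitted.

```python
# Sketch of one layer: boundary superpixels as background set, min-distance scoring.
import numpy as np
from skimage.io import imread
from skimage.color import rgb2lab
from skimage.segmentation import slic

image = imread("remote_sensing.jpg")              # hypothetical remote sensing image
labels = slic(image, n_segments=300, compactness=10, start_label=0)
lab = rgb2lab(image)
n_sp = labels.max() + 1
feats = np.array([lab[labels == i].mean(axis=0) for i in range(n_sp)])

# Superpixels touching the image border form the background information set.
border = np.zeros(labels.shape, dtype=bool)
border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
bg_feats = feats[np.unique(labels[border])]

# Score each superpixel by its minimum feature distance to the background set.
dists = np.linalg.norm(feats[:, None, :] - bg_feats[None, :, :], axis=2)
score = dists.min(axis=1)
score = (score - score.min()) / (np.ptp(score) + 1e-8)
saliency_map = score[labels]
```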

Video foreground object extracting method based on visual saliency and superpixel division

The invention discloses a video foreground object extraction method based on visual saliency and superpixel division. The method includes the following steps: a, dividing the video into multiple layers of superpixels: treating the video as a three-dimensional video volume, dividing it into superpixels and grouping the elements of the video volume into volume regions; b, detecting the visually salient areas of the key frames and extracting the foreground objects of the key frames: analyzing the visually salient areas in the key-frame images with a visual saliency detection method, then using the salient areas as initial values and obtaining the key-frame foreground objects with an image foreground extraction method; and c, matching the key-frame foreground objects with the video superpixel division result and propagating the key-frame foreground extraction results between frames: diffusing the regions of the video volume covered by the key-frame foreground objects and continuously propagating the foreground extraction results between frames. The method is efficient, accurate, robust, and requires little manual intervention.
Owner:TSINGHUA UNIV +1
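For the key-frame stage, the sketch below uses a simple frequency-tuned-style saliency map (per-pixel distance to the mean Lab color of a blurred frame) in place of the unspecified visual saliency detector, and GrabCut as the image foreground extractor seeded by the salient area. Propagation through the video superpixel volume is not shown; the key-frame path and threshold are assumptions.

```python
# Sketch: simple saliency map on a key frame -> GrabCut foreground extraction.
import numpy as np
import cv2

frame = cv2.imread("keyframe.png")                # hypothetical key frame
blurred = cv2.GaussianBlur(frame, (5, 5), 0)
lab = cv2.cvtColor(blurred, cv2.COLOR_BGR2LAB).astype(np.float32)
saliency = np.linalg.norm(lab - lab.reshape(-1, 3).mean(axis=0), axis=2)
saliency = cv2.normalize(saliency, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Use the salient area as a probable-foreground seed for GrabCut.
mask = np.full(frame.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
mask[saliency > 128] = cv2.GC_PR_FGD              # assumed saliency threshold
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(frame, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
foreground = np.where(np.isin(mask, [cv2.GC_FGD, cv2.GC_PR_FGD]), 255, 0).astype(np.uint8)
```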

Image classification method and system based on image salient region

The invention discloses an image classification method and system based on an image salient region. The method includes offline training and online testing. The offline training comprises: performing superpixel segmentation on an image to obtain multidimensional segmentation blocks, and calculating the feature contrast of the segmentation blocks to obtain a target saliency map; performing threshold segmentation on the target saliency map to obtain a binary image, performing morphological processing on the binary image, and performing automatic segmentation and extraction on the target saliency map with a segmentation algorithm to obtain the salient region; and inputting the salient region into a convolutional neural network for training to obtain an image classifier based on the image salient region. The online testing includes: performing automatic segmentation and extraction of the salient region on a test image, inputting the salient region image of the test image into the trained image classifier, and performing image classification to obtain the image class label. According to the method and system, the segmentation result is guaranteed, the workload of manual interaction is reduced, and the accuracy of image classification is improved.
Owner:HUAZHONG UNIV OF SCI & TECH
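The online-test path might be sketched as follows: Otsu thresholding of a precomputed saliency map, morphological cleanup, cropping the bounding box of the largest salient component, and classifying the crop. A pretrained ResNet-18 stands in for the patent's own trained CNN, and the image and saliency-map paths are assumptions.

```python
# Sketch: threshold + morphology -> crop largest salient region -> classify the crop.
import numpy as np
import cv2
import torch
from torchvision import models, transforms

image = cv2.imread("test.jpg")                                    # hypothetical test image
saliency = cv2.imread("test_saliency.png", cv2.IMREAD_GRAYSCALE)  # hypothetical saliency map

_, binary = cv2.threshold(saliency, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
binary = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

# Crop the bounding box of the largest connected salient component.
n, cc, stats, _ = cv2.connectedComponentsWithStats(binary)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
x, y, w, h = stats[largest, :4]
crop = image[y:y + h, x:x + w]

# Classify the salient crop (pretrained ResNet-18 as a placeholder classifier).
preprocess = transforms.Compose([
    transforms.ToPILImage(), transforms.Resize((224, 224)), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
model = models.resnet18(weights="IMAGENET1K_V1").eval()
with torch.no_grad():
    logits = model(preprocess(cv2.cvtColor(crop, cv2.COLOR_BGR2RGB)).unsqueeze(0))
print(int(logits.argmax()))                       # predicted class index
```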

Saliency object detection method based on Faster R-CNN

Inactive, CN107680106A | Solves the problem that the saliency detection effect is not ideal | Image enhancement, Image analysis, Saliency map, Round complexity
The invention discloses a saliency object detection method based on Faster R-CNN. The method comprises the steps of first performing multi-scale segmentation on an image, then outlining possible saliency objects using Faster R-CNN and establishing an object likelihood map, thereafter assigning a foreground weight to each superpixel via foreground connectivity, then obtaining smooth saliency maps in combination with the foreground and background weights using a saliency optimization technique, and at last performing fusion using an MCA (Multi-layer Cellular Automata) to obtain the final saliency map. The input image is segmented at three scales using a superpixel segmentation algorithm, which aggregates adjacent and similar pixels into image areas of different sizes according to low-level characteristics such as color, texture and brightness, so that the complexity of saliency detection can be effectively reduced; and by taking each scale of segmented image as a layer of cells and fusing the superpixel-segmented images of different scales with the MCA, the consistency of the image saliency detection result is guaranteed.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
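The object-proposal stage can be approximated with torchvision's pretrained Faster R-CNN (COCO weights, standing in for the detector the patent trains), whose scored boxes are accumulated into a rough object prior map; the superpixel foreground-connectivity weighting, saliency optimization and MCA fusion steps are not shown. The input path and the 0.5 confidence cutoff are assumptions.

```python
# Sketch: pretrained Faster R-CNN proposals accumulated into a rough object prior map.
import numpy as np
import torch
from torchvision import io
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
image = io.read_image("input.jpg").float() / 255.0           # C x H x W in [0, 1]

with torch.no_grad():
    det = model([image])[0]                                   # dict of boxes, labels, scores

prior = np.zeros(image.shape[1:], dtype=np.float32)           # H x W object prior map
for box, score in zip(det["boxes"], det["scores"]):
    if score < 0.5:                                           # assumed confidence cutoff
        continue
    x1, y1, x2, y2 = box.int().tolist()
    prior[y1:y2, x1:x2] += float(score)
prior /= prior.max() + 1e-8                                   # normalize to [0, 1]
```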

SAR image change detecting method based on superpixel segmentation and characteristic learning

The invention discloses an SAR image change detection algorithm based on superpixel segmentation and characteristic learning. The algorithm comprises the following steps: first, starting the SAR image change detection method based on superpixel segmentation and characteristic learning; second, performing superpixel segmentation on two registered SAR images of the same area acquired at different time phases; third, using a difference-degree clustering method to generate an initial change result; fourth, selecting an equal number of samples from the changed class and the unchanged class as training samples according to the initial change result; fifth, inputting the training samples into the designed deep neural network for training; sixth, inputting the two images to be detected into the trained deep neural network to obtain the final change detection result; seventh, finishing. The algorithm adopts the superpixel block as the basic processing unit, which shortens the time spent on processing the data to some degree, greatly alleviates the sensitivity to noise, and obviously improves the detection result and the detection accuracy.
Owner:XIDIAN UNIV
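A sketch of the front end of this pipeline: a log-ratio difference image from two co-registered SAR acquisitions, superpixel segmentation so that the superpixel block is the processing unit, and a two-class k-means on the per-superpixel mean log-ratio as the initial change result. The deep-neural-network training and refinement stages are omitted; the image paths and segment count are assumptions.

```python
# Sketch: log-ratio difference image -> superpixel averaging -> 2-class k-means.
import numpy as np
import cv2
from skimage.segmentation import slic
from sklearn.cluster import KMeans

img1 = cv2.imread("sar_t1.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
img2 = cv2.imread("sar_t2.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Classic log-ratio difference image for co-registered SAR acquisitions.
log_ratio = np.abs(np.log((img1 + 1.0) / (img2 + 1.0)))

# Superpixel blocks on the first acquisition act as the basic processing units.
labels = slic(img1, n_segments=1000, compactness=0.1, channel_axis=None, start_label=0)
n_sp = labels.max() + 1
sp_change = np.array([log_ratio[labels == i].mean() for i in range(n_sp)]).reshape(-1, 1)

# Two-class k-means: the cluster with the larger mean log-ratio is the "changed" class.
km = KMeans(n_clusters=2, n_init=10).fit(sp_change)
changed_cluster = int(np.argmax(km.cluster_centers_.ravel()))
initial_change_map = (km.labels_ == changed_cluster)[labels]
```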

SAR image segmentation method based on wavelet pooling convolutional neural networks

The invention discloses an SAR image segmentation method based on wavelet pooling convolutional neural networks. The method comprises: 1. constructing a wavelet pooling layer and forming wavelet pooling convolutional neural networks; 2. selecting image blocks, inputting them into the wavelet pooling convolutional neural networks and training the networks; 3. inputting all the image blocks into the trained networks and testing them to obtain a first class label of the SAR image; 4. performing superpixel segmentation of the SAR image and fusing the superpixel segmentation result with the first class label to obtain a second class label of the SAR image; 5. obtaining a third class label of the SAR image according to a Markov random field model and fusing it with the superpixel segmentation result to obtain a fourth class label of the SAR image; and 6. fusing the second class label with the fourth class label according to the SAR image gradient map to obtain the final segmentation result. The SAR image segmentation method based on wavelet pooling convolutional neural networks improves the segmentation quality of the SAR image and can be used for target detection and identification.
Owner:XIDIAN UNIV
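The wavelet pooling layer of step 1 can be sketched as a module that replaces max pooling with a single-level Haar transform and keeps only the low-frequency (LL) sub-band, halving the spatial resolution. Wiring it into a small patch classifier, as below, is only an assumption about how such a network might be assembled; the patch size and class count are illustrative.

```python
# Sketch: Haar wavelet pooling layer (keep the LL sub-band) used in place of max pooling.
import torch
import torch.nn as nn

class HaarWaveletPool2d(nn.Module):
    """Downsample by keeping the LL sub-band of a single-level Haar DWT."""
    def forward(self, x):                          # x: N x C x H x W, with H and W even
        a = x[:, :, 0::2, 0::2]
        b = x[:, :, 0::2, 1::2]
        c = x[:, :, 1::2, 0::2]
        d = x[:, :, 1::2, 1::2]
        return (a + b + c + d) / 2.0               # Haar LL coefficients

# Example: a tiny patch classifier using wavelet pooling instead of max pooling.
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), HaarWaveletPool2d(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), HaarWaveletPool2d(),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 4),        # 4 assumed terrain classes, 32 x 32 patches
)
logits = net(torch.randn(8, 1, 32, 32))            # batch of SAR image patches
```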