
137 results about "Salient object detection" patented technology

Salient object detection is a task based on the visual attention mechanism, in which algorithms aim to identify the objects or regions that attract more attention than their surroundings in a scene or image.

Deep learning-based weakly supervised salient object detection method and system

The invention discloses a deep learning-based weakly supervised salient object detection method and system. The method comprises the steps of: generating saliency maps for all training images with an unsupervised saliency detection method; taking these saliency maps and the corresponding image-level class labels as noisy supervision for the initial iteration, training a multi-task fully convolutional neural network and, after training converges, generating a new class activation map and a salient object prediction map; refining the class activation map and the salient object prediction map with a conditional random field model; updating the saliency labels for the next iteration with a label-update policy; repeating the training over multiple iterations until a stopping condition is met; and finally training on a data set containing images of unknown classes to obtain the final model. The method and system automatically suppress noise during optimization and achieve good predictions using only image-level annotations, thereby avoiding complex and time-consuming pixel-level manual labeling.
Owner: SUN YAT SEN UNIV
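
The core of this abstract is the iterative pseudo-label update loop. The sketch below shows one plausible pixel-wise update rule in NumPy: confident pixels of the new prediction overwrite the previous pseudo label, uncertain pixels keep it. The thresholds, the function name, and the rule itself are illustrative assumptions; the CRF refinement and the multi-task network are omitted.

```python
import numpy as np

def update_pseudo_labels(prev_label, new_pred, hi=0.8, lo=0.2):
    """Keep the previous pseudo label where the new prediction is uncertain,
    overwrite it where the prediction is confident (illustrative rule only)."""
    updated = prev_label.astype(float).copy()
    updated[new_pred > hi] = 1.0   # confidently salient pixels
    updated[new_pred < lo] = 0.0   # confidently background pixels
    return updated

# toy 4x4 example: previous pseudo label and a new network prediction
prev = np.array([[0, 0, 1, 1]] * 4, dtype=float)
pred = np.random.rand(4, 4)
print(update_pseudo_labels(prev, pred))
```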

RGB-D salient object detection method based on foreground and background optimization

The invention discloses an RGB-D salient object detection method based on foreground and background optimization. The method comprises the following steps: initial foreground modeling is performed based on low-level feature contrast to obtain a superpixel-level initial saliency map; mid-level aggregation is applied to the initial saliency map to obtain a mid-level saliency map; a high-level prior is introduced into the mid-level saliency map to improve detection and generate a foreground probability; edge connectivity combined with depth information is calculated and converted into a background probability; the foreground and background probabilities are combined into an objective function; and the objective function is solved to obtain the optimal saliency map, realizing salient object detection. By fully exploiting an optimization framework based on foreground and background measures together with the depth information of the scene, the method achieves a high recall rate with high accuracy; it can accurately locate salient objects of different sizes in different scenes and assigns nearly uniform saliency values within the target object.
Owner: TIANJIN UNIV
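
The foreground/background optimization step can be made concrete with the quadratic objective commonly used in this line of work, solved in closed form below with NumPy. The exact weighting, the affinity construction, and the use of depth in the patent may differ, so this is a generic stand-in rather than the patented objective.

```python
import numpy as np

def optimize_saliency(fg, bg, W):
    """Solve  min_s  sum_i bg_i*s_i^2 + fg_i*(s_i-1)^2 + sum_ij W_ij*(s_i-s_j)^2
    fg, bg: per-superpixel foreground / background probabilities, shape (N,)
    W:      symmetric superpixel affinity matrix, shape (N, N)
    Returns the optimal saliency vector s."""
    L = np.diag(W.sum(axis=1)) - W      # graph Laplacian for the smoothness term
    A = np.diag(bg + fg) + 2.0 * L      # quadratic term of the objective
    return np.linalg.solve(A, fg)       # linear term comes from fg*(s-1)^2

# toy example: 4 superpixels on a chain graph
fg = np.array([0.9, 0.7, 0.2, 0.1])
bg = 1.0 - fg
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
print(optimize_saliency(fg, bg, W))
```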

Perceptual high-definition video coding method based on salient target detection and saliency guidance

The invention discloses a perceptual high-definition video coding method based on salient target detection and saliency guidance. The method comprises the following steps: constructing a salient target detection model based on a multi-scale pyramid shuffling network; predicting salient regions of the video data with this model; and guiding the HEVC video compression standard with the prediction result, performing video coding with adaptive quantization parameters and a saliency-based coding unit partitioning strategy. The salient target detection model of the multi-scale pyramid shuffling network generalizes better and outputs more accurate salient object segmentation maps. Guided by these prediction maps, the video image is divided into salient and non-salient regions, and rate-distortion optimization and quantization parameter selection are adapted dynamically, finally yielding a coding result with better metrics: a smaller video bitstream and better image quality.
Owner: 深圳市北辰星途科技有限公司
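
The saliency-guided adaptive quantization step can be illustrated with a simple per-CTU QP assignment: salient blocks get a lower QP (finer quantization), background blocks a higher one. The linear saliency-to-offset mapping, the base QP of 32, and the +-6 range below are assumptions, not values from the patent.

```python
import numpy as np

def ctu_qp_offsets(saliency, base_qp=32, ctu=64, max_offset=6):
    """Map a per-pixel saliency map (H, W, values in [0, 1]) to one QP per CTU."""
    h, w = saliency.shape
    rows, cols = h // ctu, w // ctu
    qp = np.empty((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            block = saliency[r * ctu:(r + 1) * ctu, c * ctu:(c + 1) * ctu]
            # mean saliency 1.0 -> base_qp - max_offset, 0.0 -> base_qp + max_offset
            qp[r, c] = base_qp + int(round(max_offset * (1.0 - 2.0 * block.mean())))
    return qp

sal = np.zeros((128, 192)); sal[32:96, 64:160] = 1.0   # a bright "object" region
print(ctu_qp_offsets(sal))
```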

Video saliency object detection model and system based on cross attention mechanism

The invention relates to a video salient object detection method and system based on a cross attention mechanism. The method comprises the following steps: (a) inputting adjacent frame images into a shared-parameter network structure and extracting high-level and low-level features; (b) re-registering and aligning the saliency features within each single frame with a self-attention module; (c) using an inter-frame cross attention mechanism to capture the position-wise dependencies of the salient object in the inter-frame spatio-temporal relationship, applying them as weights on the high-level features to capture the spatio-temporal consistency of salient object detection; (d) fusing the extracted intra-frame high-level and low-level features of adjacent frames with the spatio-temporal features carrying the inter-frame dependencies; (e) reducing the dimensionality of the fused features and outputting a pixel-level classification result with a classifier; and (f) building the deep video salient object detection model based on the cross attention mechanism and accelerating its training with GPU parallel computing.
Owner: HARBIN INST OF TECH SHENZHEN GRADUATE SCHOOL
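
The inter-frame cross attention of step (c) is, in essence, scaled dot-product attention with queries from frame t and keys/values from frame t+1. The NumPy sketch below shows that computation on toy feature maps; the random projections stand in for learned ones and the shapes are illustrative, not the patented architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_frame_attention(feat_t, feat_t1, d_key=32, rng=np.random.default_rng(0)):
    """Queries from frame t, keys/values from frame t+1, so features of frame t
    are re-weighted by spatially corresponding content of the next frame.
    feat_t, feat_t1: (H, W, C) high-level feature maps."""
    h, w, c = feat_t.shape
    wq, wk, wv = (rng.standard_normal((c, d_key)) for _ in range(3))  # toy projections
    q = feat_t.reshape(-1, c) @ wq        # (HW, d)
    k = feat_t1.reshape(-1, c) @ wk       # (HW, d)
    v = feat_t1.reshape(-1, c) @ wv       # (HW, d)
    attn = softmax(q @ k.T / np.sqrt(d_key))   # (HW, HW) inter-frame affinities
    return (attn @ v).reshape(h, w, d_key)

f_t, f_t1 = np.random.rand(8, 8, 64), np.random.rand(8, 8, 64)
print(cross_frame_attention(f_t, f_t1).shape)   # (8, 8, 32)
```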

Salient object detection method based on central rectangular composition prior

Active CN106204615A · Mechanism of visual attention · Conforms to human visual attention mechanism · Image enhancement · Image analysis · Pattern recognition · Salient objects
The invention provides a salient object detection method based on a central-rectangle composition prior, where the central rectangle is the rectangle enclosed by the four intersections of the rule-of-thirds composition lines. The method comprises the following steps: assuming the salient object lies along the composition lines of the central rectangle, the superpixels on the four sides of the central rectangle are ranked by correlation to obtain a central-rectangle composition-line saliency map; assuming the salient object is located at the intersections of the central-rectangle composition lines, the intersections unlikely to contain the salient object are removed according to the composition-line saliency map, each remaining intersection is then taken as a center node and the spatial distances between all superpixel nodes in the image and that center node are computed to form a corresponding saliency map, and these maps are summed and fused into a central-rectangle composition-intersection saliency map; a compactness-relation saliency map is then obtained from the compactness relation; and finally the three maps are fused into the final saliency map. The method follows the principles of photographic composition and conforms to the human visual attention mechanism.
Owner: ANHUI UNIVERSITY
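
The central rectangle is defined by the four rule-of-thirds intersections. A minimal way to turn that composition prior into a map is a Gaussian peak at each intersection, as sketched below; the spread and the max-fusion are assumptions, and the patent's correlation ranking of boundary superpixels is not reproduced.

```python
import numpy as np

def thirds_intersection_prior(h, w, sigma_frac=0.15):
    """Prior map that peaks at the four rule-of-thirds intersections
    (the corners of the 'central rectangle'); sigma_frac is an assumed spread."""
    ys, xs = np.mgrid[0:h, 0:w]
    sigma = sigma_frac * max(h, w)
    prior = np.zeros((h, w))
    for cy in (h / 3.0, 2 * h / 3.0):
        for cx in (w / 3.0, 2 * w / 3.0):
            peak = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
            prior = np.maximum(prior, peak)   # fuse the four peaks by taking the max
    return prior

print(thirds_intersection_prior(6, 9).round(2))
```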

Unconstrained in-video salient object detection method combined with objectness degree

The invention discloses an unconstrained in-video salient object detection method combined with an objectness degree. The method comprises the following steps: (1) inputting an original video sequence F = {F<1>, F<2>, ..., F<M>}, where the t-th frame is denoted F<t>; (2) applying a video saliency model and an objectness-based object detection algorithm to frame F<t> to obtain an initial rectangular region for salient object detection; (3) iteratively updating an objectness-degree probability map and an object probability map for frame F<t>, continually adjusting the size of the rectangular detection region to obtain a single-frame salient object detection result; and (4) using a dense optical flow algorithm to obtain the motion vector field of the pixels of frame F<t> and computing the overlap between the rectangular detection regions of adjacent frames, as illustrated in the sketch after this entry, to obtain the final salient object detection result. By iteratively updating the objectness-degree and object probability maps, the method improves the precision of spatial-domain detection results, improves temporal consistency through sequence-level refinement, and can detect salient objects in a video more accurately and completely.
Owner: SHANGHAI UNIV
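
Step (4) relies on the overlap of the detection rectangles in adjacent frames. The sketch below computes that overlap as intersection-over-union, a common choice; the patent may define its overlap measure differently.

```python
def rect_overlap(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

# detection rectangles from frames t and t+1
print(rect_overlap((10, 10, 60, 60), (30, 20, 80, 70)))
```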

Robust sparse representation and Laplacian regularization based salient object detection method

The invention discloses a salient object detection method based on robust sparse representation and Laplacian regularization terms, mainly aiming at the problem that existing methods cannot detect the salient objects in complicated images completely and consistently. The method comprises the following steps: 1, segmenting the input image to obtain a set of superpixels; 2, constructing a background dictionary from the superpixels in the boundary region; 3, constraining the consistency of the representation coefficients and of the reconstruction errors in a robust sparse representation model with two Laplacian regularization terms; 4, solving the model with the background dictionary to obtain a representation coefficient matrix and a reconstruction error matrix; 5, constructing saliency factors by combining the representation coefficient matrix and the reconstruction error matrix to obtain a superpixel-level saliency map; and 6, mapping the superpixel-level saliency map to a pixel-level saliency map. Experiments indicate that the method suppresses the background comparatively well, detects the salient objects of an image completely, and can be used for salient object detection in complicated scene images.
Owner: 重庆江雪科技有限公司
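
The saliency cue behind steps 2 to 5 is that superpixels poorly reconstructed by a dictionary of boundary (background) superpixels are likely salient. The sketch below uses a plain least-squares fit as a stand-in for the robust sparse representation with Laplacian regularization, so it only illustrates the reconstruction-error idea, not the patented model.

```python
import numpy as np

def reconstruction_saliency(features, boundary_idx):
    """Saliency from reconstruction error against a background dictionary built
    from boundary superpixels.
    features: (N, d) superpixel descriptors; boundary_idx: indices on the image border."""
    D = features[boundary_idx].T                        # background dictionary (d, M)
    coeffs, *_ = np.linalg.lstsq(D, features.T, rcond=None)
    residual = features.T - D @ coeffs                  # (d, N) reconstruction errors
    err = np.linalg.norm(residual, axis=0)
    return (err - err.min()) / (np.ptp(err) + 1e-12)    # background fits well -> low saliency

basis = np.random.rand(3, 32)                  # background lies in a 3-dim subspace
bg = np.random.rand(20, 3) @ basis             # 20 background-like superpixels
obj = np.random.rand(5, 32) * 2.0              # 5 superpixels standing in for the object
feats = np.vstack([bg, obj])
print(reconstruction_saliency(feats, boundary_idx=np.arange(10)).round(2))
```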

Automatic detection method of salient object based on salience density and edge response

The invention provides an automatic salient object detection method based on salience density and edge response. It addresses the problem that conventional salient object detection methods use only the salience attribute and ignore the edge attribute of the salient object, so their detection accuracy is relatively low. The method comprises the following steps: computing a saliency map S of the input image with a regional salience calculation method combining global color contrast and color spatial distribution; generating an edge response map E from the saliency map S with a bank of Gabor filters; efficiently searching the input image for the globally optimal sub-window containing the salient object with a branch-and-bound algorithm that maximizes salience density and edge response; taking the obtained optimal sub-window as input to initialize the GrabCut graph-cut method; running GrabCut; and automatically extracting the salient object with well-preserved edges. The method is applicable to the image processing field.
Owner: 中数(深圳)时代科技有限公司
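
The window-search step can be illustrated with an exhaustive search over a coarse grid using an integral image. The patent instead uses a branch-and-bound search that jointly maximizes salience density and the Gabor edge response; this sketch scores density only and is therefore just an illustration of the sub-window idea.

```python
import numpy as np

def integral(img):
    """Zero-padded integral image so that ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def best_window(sal, step=8, min_size=16):
    """Coarse-grid exhaustive search for the sub-window with the highest mean saliency."""
    ii = integral(sal)
    h, w = sal.shape
    best, best_score = None, -1.0
    for y1 in range(0, h - min_size + 1, step):
        for x1 in range(0, w - min_size + 1, step):
            for y2 in range(y1 + min_size, h + 1, step):
                for x2 in range(x1 + min_size, w + 1, step):
                    s = ii[y2, x2] - ii[y1, x2] - ii[y2, x1] + ii[y1, x1]
                    score = s / ((y2 - y1) * (x2 - x1))    # saliency density
                    if score > best_score:
                        best, best_score = (x1, y1, x2, y2), score
    return best, best_score

sal = np.zeros((64, 64)); sal[16:48, 24:56] = 1.0
print(best_window(sal))
```

As the toy output suggests, pure density tends to favor the smallest fully salient window; the patented score additionally incorporates the edge response when ranking candidate windows.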

Amplitude spectrum analysis based salient object detection method

Active CN106296632A · Even detection · Image enhancement · Image analysis · Pattern recognition · Quaternion Fourier transform
The invention discloses an amplitude spectrum analysis based salient object detection method, which comprises the following steps: extracting a brightness feature I and two color-opponent features RG and BY from the acquired image; transforming the extracted features into the frequency domain with a quaternion Fourier transform to acquire the amplitude spectrum, phase spectrum and intrinsic axis spectrum of the image; detecting the size and central position of each salient object in the image with an image signature operator; obtaining the optimal filtering scale for each salient object from the relation between the optimal filtering scale of the amplitude spectrum and the size of the salient object, and applying Gaussian filtering at the corresponding scales to the amplitude spectrum of the image; determining the weight of the optimal saliency map for each salient object according to a center-bias Gaussian distribution and the salient object locations, fusing the resulting saliency maps with adaptive Gaussian weights, and computing a fused saliency map; applying Gaussian filtering to the fused saliency map; and normalizing the saliency values to acquire the final saliency map. The disclosed method suppresses the background quickly and effectively, highlights the salient objects uniformly, and retains more of the salient information of the image.
Owner: OCEAN UNIV OF CHINA
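
The frequency-domain idea can be shown on a single gray channel: smooth the amplitude spectrum at a scale tied to the expected object size, keep the original phase, and invert. The NumPy/SciPy sketch below is that single-channel reduction; the patent applies the idea to a quaternion Fourier transform of the brightness and color-opponent features, with per-object scales chosen via the image signature operator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def amplitude_spectrum_saliency(gray, sigma=3.0):
    """Single-channel stand-in: Gaussian-smooth the (shifted) amplitude spectrum,
    keep the phase, transform back, and square to get a saliency map."""
    F = np.fft.fft2(gray)
    amp, phase = np.abs(F), np.angle(F)
    amp_s = np.fft.ifftshift(gaussian_filter(np.fft.fftshift(amp), sigma))
    sal = np.abs(np.fft.ifft2(amp_s * np.exp(1j * phase))) ** 2
    sal = gaussian_filter(sal, 2.0)                 # final smoothing of the map
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)

img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0
print(amplitude_spectrum_saliency(img).max())
```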

Visual attention fusion method for redirected image quality evaluation

The invention relates to a visual attention fusion method for redirected (retargeted) image quality evaluation. The method comprises the following steps: step one, the original image is read and two kinds of saliency maps are generated with two salient object detection algorithms; step two, equalization is carried out to reduce the distribution difference between the two saliency maps, generating two equalized saliency maps; step three, the equalized saliency maps are fused by adding the saliency values of corresponding points and taking the average, then a normalization operation is performed to generate a fused saliency map; step four, face and line information in the original image is detected; step five, the saliency values of the face rectangles and line regions in the fused saliency map are magnified adaptively, with the magnification bounded by an extreme value, to generate a fused saliency map containing facial and line information; and step six, the contrast of the fused saliency map is enhanced with a saliency enhancement model and normalization is performed to generate the visual attention fusion saliency map. The consistency between objective quality assessment results and subjective perception is thereby enhanced.
Owner: FUZHOU UNIV
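
Steps one through five can be condensed into a small fusion routine: equalize the two maps, average, renormalize, then boost face (or line) regions under a cap. The rank-based equalization, the gain of 1.5, and the cap in the sketch below are illustrative assumptions, not the patented formulas.

```python
import numpy as np

def fuse_saliency(s1, s2, face_boxes=(), boost=1.5, cap=1.0):
    """Equalize two saliency maps, average them, renormalize, then amplify
    the given face regions (x1, y1, x2, y2) with a capped gain."""
    def equalise(s):
        flat = s.ravel()
        ranks = flat.argsort().argsort()              # rank-based histogram equalization
        return (ranks / (flat.size - 1)).reshape(s.shape)
    fused = 0.5 * (equalise(s1) + equalise(s2))
    fused = (fused - fused.min()) / (np.ptp(fused) + 1e-12)
    for x1, y1, x2, y2 in face_boxes:                 # magnify detected face regions
        fused[y1:y2, x1:x2] = np.minimum(fused[y1:y2, x1:x2] * boost, cap)
    return fused

a, b = np.random.rand(32, 32), np.random.rand(32, 32)
print(fuse_saliency(a, b, face_boxes=[(4, 4, 12, 12)]).max())
```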