78 results about "Texture enhancement" patented technology

Polyp segmentation method and device, computer equipment and storage medium

The invention discloses a polyp segmentation method and device, computer equipment, and a storage medium. The method comprises the steps of: obtaining a to-be-segmented polyp image and performing feature extraction on it with a Res2Net network to obtain a multi-layer feature map; refining each layer of the feature map with a texture enhancement module, then fusing the multi-layer feature maps with a cross-layer feature fusion module to obtain a rough prediction map of the target polyp segmentation; inputting each layer of the feature map into a grouping supervision context module, which combines the rough prediction map with the multi-layer feature maps in context; taking the context combination result as the final polyp segmentation prediction map, thereby constructing a polyp segmentation network; and segmenting the polyp image with this network. By considering the complementary information between hierarchical feature maps and fusing features under multiple views, the method extracts richer polyp features and improves the segmentation precision of the polyp image.
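The patent does not disclose the internals of its fusion module, but the general cross-layer fusion idea it relies on — upsampling coarse feature maps to the finest resolution so semantic context and spatial detail complement each other — can be sketched in plain numpy (shapes and the averaging rule below are illustrative assumptions, not the patented design):

```python
import numpy as np

def upsample_nearest(feat, factor):
    # Nearest-neighbour upsampling of a (C, H, W) feature map.
    return feat.repeat(factor, axis=1).repeat(factor, axis=2)

def cross_layer_fuse(features):
    """Fuse a fine-to-coarse list of (C, H, W) feature maps.

    Each coarser map is upsampled to the finest resolution and the maps
    are averaged element-wise, so coarse semantic context complements
    fine spatial detail.
    """
    target_h = features[0].shape[1]          # finest map comes first
    fused = np.zeros_like(features[0], dtype=np.float64)
    for feat in features:
        factor = target_h // feat.shape[1]
        fused += upsample_nearest(feat, factor)
    return fused / len(features)

# Three-level feature pyramid: 64x64, 32x32, 16x16, 8 channels each.
rng = np.random.default_rng(0)
pyramid = [rng.standard_normal((8, 64 // 2**i, 64 // 2**i)) for i in range(3)]
fused = cross_layer_fuse(pyramid)
print(fused.shape)  # (8, 64, 64)
```

A real implementation would learn the combination weights (e.g. 1x1 convolutions) rather than averaging.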
Owner:SHENZHEN UNIV

Illumination-robust facial image local texture enhancement method

Active · CN107392866A · Reduce loss · Alleviate the problem of contrast imbalance · Image enhancement · Pattern recognition · Texture enhancement
The present invention relates to an illumination-robust facial image local texture enhancement method, which includes the following steps: logarithmic transformation is performed on the gray values of an input original face image I to obtain a logarithmic transformation result image I'; Gaussian differential filtering and bilateral differential filtering are performed on I' to obtain differential filtering result images IDoG and IDoB, and image information fusion of IDoG and IDoB yields a fusion result image I"; I" is divided into sub-image blocks, grayscale equalization is performed on each block by a mean normalization method, the blocks are spliced back according to their division positions, the pixel gray range of the spliced image is compressed by a hyperbolic tangent function, and the image is output. With this method, facial images captured under different illumination conditions can be processed, illumination influence eliminated, local facial texture information enhanced, and recognition accuracy in face recognition applications improved. The method has low algorithmic complexity and high illumination robustness.
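A simplified sketch of this pipeline is below: logarithmic transform, difference-of-Gaussians filtering, block-wise mean normalization, and hyperbolic-tangent compression. The bilateral-differential branch and the IDoG/IDoB fusion step are omitted, and the sigma and block-size values are illustrative assumptions, not the patent's parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_local_texture(img, sigma1=1.0, sigma2=2.0, block=32):
    """Simplified illumination-robust texture enhancement:
    log transform -> DoG filtering -> block-wise mean
    normalization -> tanh gray-range compression."""
    log_img = np.log1p(img.astype(np.float64))            # logarithmic transform
    dog = gaussian_filter(log_img, sigma1) - gaussian_filter(log_img, sigma2)
    out = np.empty_like(dog)
    h, w = dog.shape
    for y in range(0, h, block):                          # per-block normalization
        for x in range(0, w, block):
            tile = dog[y:y+block, x:x+block]
            out[y:y+block, x:x+block] = tile - tile.mean()
    return np.tanh(out)                                   # compress range to (-1, 1)

face = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(np.float64)
enhanced = enhance_local_texture(face)
print(enhanced.shape)
```

The log transform converts multiplicative illumination into an additive offset, which the band-pass DoG filter and block-wise mean subtraction then remove.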
Owner:WUHAN UNIV OF SCI & TECH

A picture texture enhancement super-resolution method based on a deep feature translation network

The invention relates to a picture texture enhancement super-resolution method based on a deep feature translation network, and belongs to the technical field of computer vision. The method comprises the steps of: first, processing the training data; then designing a network structure model comprising a super-resolution reconstruction network, a fine-grained texture feature extraction network, and a discrimination network; then designing a loss function that combines several loss terms, and training the network structure model with the processed training data to obtain a super-resolution reconstruction network with a texture enhancement function; and finally, inputting a low-resolution image into the super-resolution reconstruction network and reconstructing a high-resolution image. The method can extract picture texture information at finer granularity and, by combining multiple loss functions, remains faithful to the original picture while recovering texture feature information, so the result is clearer than with other methods. The method is applicable to any picture, works well, and has good universality.
Owner:BEIJING INSTITUTE OF TECHNOLOGY

Method for detecting overall dimension of running vehicle based on binocular vision

Pending · CN112991369A · Solve the problem of incomplete contour measurement · Improve relevance · Image enhancement · Image analysis · Pattern recognition · Stereo matching
The invention discloses a method for detecting the overall dimensions of a running vehicle based on binocular vision. The method comprises the following steps: calibrating and correcting a binocular camera; performing moving-object recognition and tracking on the corrected views to obtain the vehicle feature region; applying texture enhancement to the identified vehicle surface, which addresses the low detection precision of weakly textured surfaces; based on the characteristics of the vehicle driving scene, applying a stereo matching algorithm based on time-sequence propagation to generate a standard disparity map, improving the measurement precision of the vehicle's overall dimensions; performing three-dimensional reconstruction on the disparity map to generate a point cloud map; and applying a spatial coordinate fitting algorithm to fit multiple frames of point clouds of the tracked vehicle into a standard overall-dimension image. This solves the problem that a single frame of point cloud cannot display the vehicle's overall dimensions completely. The measurement effect is not limited by vehicle speed, and the method offers high measurement precision, a wide measurement range, and low cost. The binocular camera is flexible in structure, convenient to install, and suitable for measurement on all road sections.
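The three-dimensional reconstruction step rests on the standard stereo relations Z = f·B/d, X = (u − cx)·Z/f, Y = (v − cy)·Z/f. A minimal back-projection sketch (camera parameters below are made-up example values, not from the patent):

```python
import numpy as np

def disparity_to_points(disparity, focal_px, baseline_m, cx, cy):
    """Back-project a disparity map to 3-D points using the standard
    pinhole stereo relations Z = f*B/d, X = (u-cx)*Z/f, Y = (v-cy)*Z/f."""
    v, u = np.indices(disparity.shape)
    valid = disparity > 0                       # zero disparity = no match
    safe_d = np.where(valid, disparity, 1.0)    # avoid division by zero
    z = np.where(valid, focal_px * baseline_m / safe_d, 0.0)
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.stack([x, y, z], axis=-1), valid

disp = np.full((4, 4), 8.0)                     # uniform 8-pixel disparity
pts, valid = disparity_to_points(disp, focal_px=400.0, baseline_m=0.12,
                                 cx=2.0, cy=2.0)
print(pts[0, 0, 2])  # depth Z = 400 * 0.12 / 8 = 6.0 (metres)
```

The vehicle's overall dimensions then follow from the extents of the (multi-frame, fitted) point cloud.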
Owner:HUBEI UNIV OF TECH

Texture-based method of calculating index of building zone of high-resolution remote sensing image

The invention discloses a texture-based method of calculating an index of a building zone in a high-resolution remote sensing image. The method comprises the following steps: NSCT transformation is performed on the image to form multi-scale, multi-directional sub-band coefficients; the local texture energy of each sub-band coefficient is computed; building-zone texture enhancement is applied to the local texture energy to make the characteristics of the building zone stand out; visual saliency is defined from an information-theoretic perspective; and the building-zone index is generated by a visual attention mechanism based on self-information maximization, where a larger index indicates higher significance of the building zone in human visual processing. The method fully considers the characteristics of building zones in high-resolution remote sensing images, builds their multi-scale, multi-directional textural features, and calculates the index through a visual attention process so that it describes the building zone intuitively. A good building-zone extraction effect is achieved, and extraction accuracy is maintained even in complex environments.
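The self-information idea behind the saliency step can be illustrated without the NSCT front end: estimate the probability of each local texture-energy level from its histogram, then score each pixel by −log₂ p, so statistically rare (building-like) texture energies receive high saliency. The bin count and the synthetic "building patch" below are illustrative assumptions:

```python
import numpy as np

def self_information_index(energy, bins=32):
    """Map a local texture-energy image to a saliency index via
    self-information: rarer energy levels get larger -log2(p) scores,
    following the self-information-maximization attention idea."""
    hist, edges = np.histogram(energy, bins=bins)
    p = hist / hist.sum()
    idx = np.clip(np.digitize(energy, edges[1:-1]), 0, bins - 1)
    info = -np.log2(np.where(p > 0, p, 1.0))    # 0 bits for empty bins
    return info[idx]

# Mostly low-energy background with one high-energy "building" patch.
energy = np.zeros((32, 32))
energy[12:20, 12:20] = 5.0
saliency = self_information_index(energy)
print(saliency[16, 16] > saliency[0, 0])  # rare patch is more salient
```

In the patented method this scoring would be applied to NSCT-derived texture energy rather than raw values.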
Owner:WUHAN UNIV

Method for super-resolution of image and video based on fractal analysis, and method for enhancing super-resolution of image and video

The invention discloses a method for super-resolution of images and videos based on fractal analysis, and a method for enhancing that super-resolution. The methods comprise the following steps: reading an image or a video frame, recorded as I, and calculating its gradient Grad_ori; performing interpolation-based super-resolution on I to obtain an estimate H' of the high-definition image; calculating the gradient Grad_est of H'; calculating, from Grad_ori and Grad_est, the fractal dimensions D_ori and D_est and fractal lengths L_ori and L_est at corresponding pixels of I and H'; re-estimating the gradient Grad_H of the high-definition image according to the scale invariance of the fractal dimensions and fractal lengths; and re-estimating the high-definition image with H' and Grad_H as constraints. The methods treat the pixels of the image as a fractal set and the gradients at those pixels as a measure on the set, and compute the image's local fractal dimensions and local fractal lengths. The super-resolution problem is constrained by the scale invariance of the fractal dimensions, and by introducing a further constraint from the scale invariance of the fractal lengths, the methods can be used for quality enhancement of images and videos, and especially for texture enhancement.
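The patent's pixel-wise fractal quantities are defined via the gradient measure; a simpler, classical way to see the scale-invariance property it exploits is the box-counting estimate of fractal dimension, sketched here on a binary set (this is a standard estimator, not the patent's formulation):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a binary set:
    count occupied s x s boxes at several scales s, then fit the slope
    of log N(s) versus log s; the dimension is the negated slope."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        trimmed = mask[:n - n % s, :n - n % s]
        view = trimmed.reshape(trimmed.shape[0] // s, s, -1, s)
        occupied = view.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# A filled square region should have dimension ~2.
square = np.zeros((64, 64), dtype=bool)
square[8:56, 8:56] = True
print(round(box_counting_dimension(square), 1))  # 2.0
```

Because this quantity is (ideally) invariant under rescaling, matching it between the low-resolution input and the super-resolved estimate gives a usable constraint on the re-estimated gradient field.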
Owner:SHANGHAI JIAO TONG UNIV

Appearance texture synthesis method and device for three-dimensional model

The invention provides an appearance texture synthesis method and device for a three-dimensional model, and relates to the field of three-dimensional model technology. The method comprises the following steps: the global geometric features of an appearance texture model, the global geometric features of the three-dimensional model, and the similarity information between the two are determined; first-stage geometric texture enhancement is performed on the three-dimensional model to generate a first-stage geometric-texture-enhanced three-dimensional model, and a geometric feature map is determined; appearance and material cross-correlation synthesis is performed under the guidance of the geometric feature map to generate synthesized geometric texture and synthesized material texture; second-stage geometric texture enhancement is performed on the first-stage model according to the synthesized geometric texture to generate a second-stage geometric-texture-enhanced three-dimensional model; and the synthesized material texture is applied to the second-stage model, which is rendered to obtain the appearance texture synthesis result. The method and device solve the prior-art problems that appearance texture looks visually unrealistic and that appearance texture information cannot be captured sufficiently.
Owner:SHENZHEN UNIV

Underwater image enhancement and restoration method based on convolutional neural network

Active · CN111462002A · Solve the problem of missing training data · Quality improvement · Image enhancement · Image analysis · Texture enhancement · Image pair
The invention discloses an underwater image enhancement and restoration method based on a convolutional neural network. The method comprises the steps of: degrading a number of conventional images according to a to-be-processed underwater image, forming a training set from the conventional images and their degraded counterparts, and feeding the training set to the convolutional neural network for training; then inputting the to-be-processed underwater image into the trained network and outputting a first image; transforming that image into the CIELAB color space and extracting its L brightness channel and A and B color channels; and finally, performing texture enhancement on the to-be-processed underwater image, replacing the L brightness channel of the first image with the texture-enhanced image, and combining it with the A and B color channels of the first image to obtain the final image. The method solves the problem of missing underwater training data, avoids the complex calculations of traditional underwater imaging models, and substantially improves underwater image quality.
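The final fusion step — keep the network's colour channels, swap in a texture-enhanced luminance — can be sketched directly on Lab-space arrays. The Lab conversion itself is assumed done elsewhere (e.g. by an image library), the arrays below are synthetic stand-ins, and unsharp masking stands in for the patent's unspecified texture enhancement:

```python
import numpy as np

def box_blur(img):
    # 3x3 box blur with edge padding (pure-numpy stand-in for a Gaussian).
    p = np.pad(img, 1, mode="edge")
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def fuse_texture_into_lab(lab_first, raw_l, amount=1.5):
    """Texture-enhance the raw luminance via unsharp masking, then use
    it as the L channel while keeping the network output's A and B
    colour channels."""
    enhanced_l = np.clip(raw_l + amount * (raw_l - box_blur(raw_l)), 0, 100)
    out = lab_first.copy()
    out[..., 0] = enhanced_l                # replace L, keep A and B
    return out

lab_first = np.random.default_rng(2).uniform(0, 100, (16, 16, 3))
raw_l = np.random.default_rng(3).uniform(0, 100, (16, 16))
fused = fuse_texture_into_lab(lab_first, raw_l)
print(fused.shape)
```

Working in CIELAB keeps the colour correction (A, B) and the detail enhancement (L) independent of each other.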
Owner:CHONGQING UNIV OF TECH

Target clothing image processing method based on generative adversarial network model

The invention provides a target clothing image processing method based on a generative adversarial network model, comprising the following steps: pairing a sample standard image with corresponding sample area images to form a sample pairing image set; optimizing the loss-function parameters of the generative adversarial network model on the sample pairing image set; inputting a to-be-processed area image into the generative adversarial network model and outputting a template image; stretching and deforming the to-be-processed area image into a distorted image aligned with the frame of the template image; and obtaining a pixel weight matrix, fusing the distorted image with the template image, and outputting the target clothing image. The method constructs a generative adversarial network model based on a perceptual loss function together with a step-by-step image fusion technique, converting garment images of varying angles and postures into target garment images with regular postures and enhanced textures for search and use by an intelligent system, improving both the quality of the target garment images and the retrieval accuracy of the intelligent system.
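The pixel-weight-matrix fusion at the end is a per-pixel convex blend of the warped image and the GAN template. A minimal sketch (the constant-valued images and the hand-built weight matrix are illustrative; the patent would derive the weights from the images):

```python
import numpy as np

def fuse_with_weights(distorted, template, weights):
    """Pixel-weight-matrix fusion: each output pixel is a convex blend
    of the warped (distorted) image and the GAN template image."""
    w = weights[..., None]                  # broadcast weight over channels
    return w * distorted + (1.0 - w) * template

distorted = np.full((8, 8, 3), 200.0)       # stand-in warped garment image
template = np.full((8, 8, 3), 100.0)        # stand-in GAN template image
weights = np.zeros((8, 8))
weights[2:6, 2:6] = 1.0                     # trust the warp in the centre
fused = fuse_with_weights(distorted, template, weights)
print(fused[4, 4, 0], fused[0, 0, 0])  # 200.0 100.0
```

With weights between 0 and 1 the blend preserves the template's regular pose near the silhouette while keeping real garment texture where the warp is reliable.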
Owner:HARBIN INSTITUTE OF TECHNOLOGY SHENZHEN (INSTITUTE OF SCIENCE AND TECHNOLOGY INNOVATION HARBIN INSTITUTE OF TECHNOLOGY SHENZHEN)