491 results about "How to keep details" patented technology

Gray scale image fitting enhancement method based on local histogram equalization

Inactive · CN105654438A · Suppresses "cold reflection" images · Evenly distributed · Image enhancement · Image analysis · Image contrast · Block effect
The invention provides a gray-scale image fitting enhancement method based on local histogram equalization, which improves gray-scale image contrast and detail information while eliminating blocking artifacts and over-enhancement. The method comprises: applying a piecewise linear transformation to a gray-scale image with an over-wide dynamic range to obtain an image in an appropriate dynamic range, dividing the gray-level distribution interval into two or more segments, adjusting the segment points and the slope of the transformation line in each segment, and expanding or compressing any gray-level interval; applying partially overlapped sub-block histogram equalization to the transformed result, where the transformation function of the current sub-block is obtained by weighted summation of the transformation functions of the neighboring sub-blocks and is then used to equalize the current sub-block; and performing nonlinear fitting on the equalized gray-scale map and correcting the histogram distribution of the image after the partially overlapped sub-block histogram equalization.
Owner:SOUTH WEST INST OF TECHN PHYSICS
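
A minimal sketch of the two stages described above, assuming NumPy for the piecewise linear mapping and OpenCV's CLAHE as a stand-in for the patented weighted overlapped sub-block equalization; the breakpoints, slopes, and clip limit are illustrative assumptions, not values from the patent.

```python
import numpy as np
import cv2

def piecewise_linear(img, breaks=(0.2, 0.8), slopes=(0.5, 1.5, 0.5)):
    """Piecewise linear gray-level mapping: expand the middle segment and
    compress the outer segments (breakpoints and slopes are illustrative)."""
    x = img.astype(np.float32) / 255.0
    b1, b2 = breaks
    s1, s2, s3 = slopes
    y = np.where(x < b1, s1 * x,
        np.where(x < b2, s1 * b1 + s2 * (x - b1),
                 s1 * b1 + s2 * (b2 - b1) + s3 * (x - b2)))
    y = y / y.max()                      # renormalize to [0, 1]
    return (y * 255).astype(np.uint8)

def enhance(gray):
    stretched = piecewise_linear(gray)
    # CLAHE plays the role of the overlapped sub-block equalization: each tile
    # is equalized and neighboring tiles are interpolated, which suppresses
    # blocking artifacts and over-enhancement.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(stretched)

if __name__ == "__main__":
    gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
    cv2.imwrite("enhanced.png", enhance(gray))
```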

Double exposure implementation method for inhomogeneous illumination image

Inactive · CN103530848A · Keep details · Remove color distortion · Image enhancement · Visual matching · Illuminance
The invention discloses a double-exposure implementation method for an inhomogeneously illuminated image, comprising the following steps: obtaining an illumination image; obtaining a reflection image; combining the illumination image with the reflection image to obtain a globally enhanced result; fusing the globally enhanced result with the original image; and applying color correction to the enhanced result to obtain a visually matched image. The method constrains the smoothness of the illumination image and sharpens the reflection image using a visual-threshold characteristic to preserve the detail information of the image. The image fusion step effectively preserves the luminance, contrast, and color information of the bright regions of the original image. Because the human visual perception of average background brightness is incorporated, the fused image effectively eliminates color distortion near shadow boundaries. The color of low-illuminance regions is restored by color correction, so that neither the low-illuminance regions nor the bright regions show obvious distortion; continuity is good and the visual effect is more natural.
Owner:AIR FORCE UNIV PLA
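
A minimal Retinex-style sketch of this pipeline, assuming a Gaussian-smoothed luminance as the illumination image, an unsharp mask in place of the visual-threshold sharpening, and a fixed blend factor instead of the patent's perceptual fusion weights.

```python
import numpy as np
import cv2

def double_exposure(bgr, sigma=31, blend=0.6):
    img = bgr.astype(np.float32) / 255.0
    luminance = img.max(axis=2) + 1e-6             # rough luminance channel

    # Illumination image: constrained to be smooth (Gaussian surround here).
    illumination = cv2.GaussianBlur(luminance, (0, 0), sigma)

    # Reflection image, sharpened with an unsharp mask (stand-in for the
    # visual-threshold sharpening) to preserve detail.
    reflection = luminance / (illumination + 1e-6)
    reflection = cv2.addWeighted(reflection, 1.5,
                                 cv2.GaussianBlur(reflection, (0, 0), 3), -0.5, 0)

    # Global enhancement: lifted illumination times sharpened reflection.
    enhanced_l = np.clip(np.sqrt(illumination) * reflection, 0, 1)

    # Fuse with the original luminance, then restore color per channel.
    fused_l = blend * enhanced_l + (1 - blend) * luminance
    gain = fused_l / luminance
    out = np.clip(img * gain[..., None], 0, 1)
    return (out * 255).astype(np.uint8)

if __name__ == "__main__":
    result = double_exposure(cv2.imread("lowlight.jpg"))
    cv2.imwrite("enhanced.jpg", result)
```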

Video noise reduction device and video noise reduction method

The invention discloses a video noise reduction device and a video noise reduction method. The method comprises the following steps: obtaining a brightness-difference histogram of the current image from the denoising result of the previous frame and the gradient-magnitude histogram of the current image; estimating the noise level of the current image from the brightness-difference histogram; computing the spatial distance between any two pixels in the current image to obtain their spatial similarity, and denoising the current image according to that spatial similarity; computing the temporal distance between any pixel in the current image and the pixel at the corresponding position in the previous denoised frame, and computing the corresponding temporal similarity; and performing three-dimensional recursive denoising on the video image according to the temporal similarity, the spatial-similarity denoising result, and the previous frame's denoising result. Because the device and method exploit the correlation of pixels in both space and time for three-dimensional recursive denoising, strong and complex noise can be removed, image detail can be preserved, and the stability of the denoising effect is ensured.
Owner:SHANGHAI TONGTU SEMICON TECH
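
A simplified sketch of the spatio-temporal recursion, assuming a bilateral filter as the spatial stage and a per-pixel temporal similarity weight driving the recursive blend with the previous denoised frame; the brightness-difference-histogram noise estimation of the patent is folded into the fixed sigma_t parameter.

```python
import numpy as np
import cv2

class RecursiveDenoiser:
    """Simplified spatio-temporal recursive denoiser: a bilateral filter supplies
    the spatial stage, and a per-pixel temporal similarity controls how strongly
    the previous denoised frame is recursively blended in."""

    def __init__(self, sigma_t=10.0):
        self.prev = None
        self.sigma_t = sigma_t      # temporal similarity scale (assumed)

    def process(self, frame):
        f = frame.astype(np.float32)
        # Spatial stage: bilateral filtering stands in for the spatial-similarity
        # weighting of the patent.
        spatial = cv2.bilateralFilter(frame, d=5, sigmaColor=25, sigmaSpace=5)
        spatial = spatial.astype(np.float32)
        if self.prev is None:
            self.prev = spatial
            return spatial.astype(np.uint8)

        # Temporal stage: similarity from the brightness difference against the
        # previously denoised frame; similar pixels lean on the recursive result.
        diff = np.abs(f - self.prev).mean(axis=2, keepdims=True)
        w_prev = np.exp(-(diff ** 2) / (2 * self.sigma_t ** 2))   # in [0, 1]
        out = w_prev * self.prev + (1 - w_prev) * spatial
        self.prev = out
        return out.astype(np.uint8)

if __name__ == "__main__":
    cap, dn = cv2.VideoCapture("noisy.mp4"), RecursiveDenoiser()
    ok, frame = cap.read()
    while ok:
        clean = dn.process(frame)
        ok, frame = cap.read()
```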

Three-dimensional human head and face model reconstruction method based on random face image

The invention provides a three-dimensional human head and face model reconstruction method based on an arbitrary face image. The method includes: building a face bilinear model and an optimization algorithm from a three-dimensional face database; progressively separating, from the two-dimensional feature points, the spatial pose of the face, the camera parameters, and the identity and expression features that determine the geometric shape of the face, and correcting the generated three-dimensional face model with Laplacian deformation to obtain a low-resolution three-dimensional face model; and finally computing the face depth and achieving high-precision three-dimensional reconstruction of the target face by registering a high-resolution template model with the point-cloud model, so that the reconstructed face model conforms to the shape of the target face. The method removes distorted facial details while keeping the original principal details of the face, so the reconstruction is more accurate; in particular, facial detail distortion and expression influence are effectively reduced, and the generated face model looks more realistic.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
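
A toy sketch of one step mentioned above, separating identity and expression coefficients from 2D feature points: it assumes a simple linear blendshape model, an orthographic camera with pose already resolved, and alternating least squares; the actual bilinear model, pose/camera estimation, Laplacian correction, and depth registration are not reproduced.

```python
import numpy as np

def fit_identity_expression(landmarks2d, mean, id_basis, exp_basis, iters=5):
    """Alternating least-squares fit of identity and expression coefficients so
    that the orthographically projected model matches the 2D landmarks.
    Shapes: mean (3N,), id_basis (3N, Ki), exp_basis (3N, Ke), landmarks2d (N, 2)."""
    n = landmarks2d.shape[0]
    proj = np.zeros((2 * n, 3 * n))                  # orthographic projection: drop z
    rows = np.arange(2 * n)
    proj[rows, (rows // 2) * 3 + rows % 2] = 1.0

    target = landmarks2d.reshape(-1)
    alpha = np.zeros(id_basis.shape[1])              # identity coefficients
    beta = np.zeros(exp_basis.shape[1])              # expression coefficients
    for _ in range(iters):
        # Solve for identity with expression fixed, then the reverse.
        resid = target - proj @ (mean + exp_basis @ beta)
        alpha, *_ = np.linalg.lstsq(proj @ id_basis, resid, rcond=None)
        resid = target - proj @ (mean + id_basis @ alpha)
        beta, *_ = np.linalg.lstsq(proj @ exp_basis, resid, rcond=None)
    return alpha, beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, Ki, Ke = 68, 10, 5                            # toy sizes
    mean = rng.normal(size=3 * N)
    id_basis = rng.normal(size=(3 * N, Ki))
    exp_basis = rng.normal(size=(3 * N, Ke))
    truth = mean + id_basis @ rng.normal(size=Ki) + exp_basis @ rng.normal(size=Ke)
    landmarks = truth.reshape(N, 3)[:, :2]
    a, b = fit_identity_expression(landmarks, mean, id_basis, exp_basis)
```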

Self-adaptive wavelet threshold image de-noising algorithm and device

The invention proposes an adaptive wavelet-threshold image denoising algorithm and device. The algorithm comprises the following steps: the noisy image is wavelet-transformed to obtain the wavelet coefficients of every level; taking signal correlation into account, the coefficients in the neighborhood of each coefficient are averaged within every level; thresholds are determined from the wavelet coefficients using an absolute-mean estimation method, and an adaptive thresholding scheme determines a suitable threshold for each scale; adaptive threshold functions are constructed for every direction at every level from the wavelet coefficients and thresholds; and the inverse wavelet transform and reconstruction are performed to obtain the denoised image. Because the adaptive scheme replaces a single global threshold with scale-dependent thresholds, and denoising is carried out with the adaptive thresholds and adaptive threshold functions, the detail information of the image is protected; the algorithm outperforms conventional wavelet-threshold denoising in both peak signal-to-noise ratio and visual quality.
Owner:JINAN UNIVERSITY
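
A minimal sketch of per-scale adaptive wavelet thresholding with PyWavelets, assuming a BayesShrink-style threshold per subband and a 3x3 neighborhood average standing in for the coefficient smoothing described above; the specific threshold function of the patent is not reproduced.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def wavelet_denoise(img, wavelet="db4", levels=3):
    """Adaptive per-scale wavelet soft-thresholding. The noise level is estimated
    from the finest diagonal subband (median absolute deviation), and each
    subband gets its own BayesShrink-style threshold."""
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # noise std estimate

    new_coeffs = [coeffs[0]]                                # keep approximation
    for detail_level in coeffs[1:]:
        thresholded = []
        for band in detail_level:
            band = uniform_filter(band, size=3)             # neighborhood average
            # Scale-adaptive threshold: sigma^2 / (signal std of this subband).
            sig_var = max(band.var() - sigma ** 2, 1e-12)
            thr = sigma ** 2 / np.sqrt(sig_var)
            thresholded.append(pywt.threshold(band, thr, mode="soft"))
        new_coeffs.append(tuple(thresholded))
    return pywt.waverec2(new_coeffs, wavelet)

if __name__ == "__main__":
    clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
    noisy = clean + 0.1 * np.random.randn(128, 128)
    denoised = wavelet_denoise(noisy)
```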

Dynamic scale distribution-based retinal vessel extraction method and system

The present invention discloses a dynamic scale distribution-based retinal vessel extraction method and system. The method includes the following steps: retinal image preprocessing, in which contrast enhancement is applied to the green channel of a color retinal image; image segmentation, in which the preprocessed retinal image is divided into a set number of sub-images; vessel classification, in which the vessels of each sub-image are divided into three categories, namely large, medium, and small; dynamic scale allocation, in which filters of different scales are dynamically selected to enhance vessels of different widths; multi-scale matched filtering; threshold processing, in which vascular structures are extracted, non-vascular structures are removed, and the extraction results of all sub-images are re-stitched to obtain a binary image of the retinal vessel network; and post-processing, which yields a retinal vessel network image with high segmentation accuracy. With the method and system, vessel extraction from retinal images is realized, over-estimation of vessel width is avoided while complex non-vascular structures are removed, and simpler and more accurate retinal vessel extraction is achieved.
Owner:SHANDONG UNIV
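
A sketch of the multi-scale matched filtering stage on the green channel, assuming the classic Gaussian matched-filter kernel rotated over several angles; the per-sub-image dynamic scale allocation is simplified to taking the maximum response over a few fixed scales, and the threshold is illustrative.

```python
import numpy as np
import cv2

def matched_filter_kernel(sigma, length=9, angle_deg=0.0):
    """Gaussian matched-filter kernel for dark vessels on a bright background:
    a zero-mean inverted Gaussian profile across the vessel, rotated to angle."""
    half = int(3 * sigma)
    xs = np.arange(-half, half + 1, dtype=np.float32)
    profile = -np.exp(-xs ** 2 / (2 * sigma ** 2))
    profile -= profile.mean()                        # zero-mean cross-section
    kernel = np.tile(profile, (length, 1))           # extend along the vessel
    center = ((kernel.shape[1] - 1) / 2, (kernel.shape[0] - 1) / 2)
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    return cv2.warpAffine(kernel, rot, (kernel.shape[1], kernel.shape[0]))

def vessel_response(green, sigmas=(1.0, 1.5, 2.5), n_angles=12):
    """Maximum matched-filter response over scales (vessel widths) and angles."""
    g = cv2.normalize(green.astype(np.float32), None, 0, 1, cv2.NORM_MINMAX)
    response = np.zeros_like(g)
    for sigma in sigmas:
        for k in range(n_angles):
            kernel = matched_filter_kernel(sigma, angle_deg=180.0 * k / n_angles)
            response = np.maximum(response, cv2.filter2D(g, -1, kernel))
    return response

if __name__ == "__main__":
    bgr = cv2.imread("retina.png")
    green = bgr[:, :, 1]                             # green channel has best contrast
    vessels = (vessel_response(green) > 0.15).astype(np.uint8) * 255
    cv2.imwrite("vessels.png", vessels)
```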

Real-time transparent object GPU (graphic processing unit) parallel generating method based on three-dimensional point cloud

The invention discloses a real-time GPU (graphics processing unit) parallel generating method for transparent objects based on three-dimensional point clouds, which comprises the steps of: (1) generating a background map of the opaque objects; (2) generating a geometry buffer of the transparent object by rendering all three-dimensional points as spheres, using hardware depth testing to obtain the depth of an approximate surface, and simultaneously saving the material information of the transparent object; (3) smoothing the depth, i.e. applying a smoothing filter to the depth information in the geometry buffer to obtain a smooth surface; (4) computing the thickness of the transparent object by rendering all three-dimensional points and accumulating the thickness with hardware alpha blending; and (5) shading the transparent object, using the depth and material information for lighting computation, using the thickness to compute the refraction and reflection of the transparent object, and using the background map to finish the shading. The method avoids the surface-reconstruction step of traditional methods and meets the requirement of real-time generation of point-cloud transparent objects with millions of points.
Owner:BEIHANG UNIV
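
A CPU-only NumPy sketch of the buffer logic in steps (2)-(4), assuming points already projected to pixel coordinates: the depth buffer keeps the nearest z per pixel (mimicking the hardware depth test), a Gaussian pass smooths the depth, and a constant is accumulated per splat (mimicking additive alpha blending) as the thickness. The real-time behavior of the patent requires the GPU pipeline, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def splat(points, width=256, height=256):
    """points: iterable of (x, y, z) with x, y in pixel coordinates."""
    depth = np.full((height, width), np.inf, dtype=np.float32)
    thickness = np.zeros((height, width), dtype=np.float32)
    for x, y, z in points:
        ix, iy = int(round(x)), int(round(y))
        if 0 <= ix < width and 0 <= iy < height:
            depth[iy, ix] = min(depth[iy, ix], z)    # depth test: keep nearest
            thickness[iy, ix] += 0.05                # additive blend: accumulate
    depth[np.isinf(depth)] = 0.0
    # Smoothing pass over the depth buffer to obtain a smooth front surface.
    smooth_depth = gaussian_filter(depth, sigma=2.0)
    return smooth_depth, thickness

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = np.column_stack([rng.uniform(60, 200, 5000),
                           rng.uniform(60, 200, 5000),
                           rng.uniform(1.0, 2.0, 5000)])
    d, t = splat(pts)
```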

Visible light and infrared image fusion algorithm based on NSCT domain bottom layer visual features

The invention provides a visible-light and infrared image fusion algorithm based on low-level visual features in the nonsubsampled contourlet transform (NSCT) domain. First, the visible-light and infrared images are decomposed with the NSCT to obtain their high- and low-frequency subband coefficients. Phase congruency, neighborhood spatial frequency, neighborhood energy, and other information are then combined to comprehensively measure the pixel activity level of the low-frequency subband coefficients, giving the fusion weights of the low-frequency subbands of the visible-light and infrared images and hence the low-frequency subband coefficients of the fused image. The activity level of the high-frequency subband coefficients is measured by combining phase congruency, sharpness, brightness, and other information, giving the fusion weights of the high-frequency subbands of the two images and hence the high-frequency subband coefficients of the fused image. Finally, the inverse NSCT is applied to obtain the final fused image. The algorithm effectively preserves the detail information of the source images while combining the useful information of the visible-light and infrared images.
Owner:云南联合视觉科技有限公司
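
A hedged sketch of the fusion rule: since NSCT implementations are not part of the standard Python libraries, this substitutes a discrete wavelet decomposition (PyWavelets) and uses local neighborhood energy alone as the activity measure; the phase congruency, spatial frequency, sharpness, and brightness terms of the patent are omitted.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse(visible, infrared, wavelet="db2", levels=3):
    """Multiresolution fusion sketch: low-frequency bands are combined with
    weights from local neighborhood energy; each high-frequency coefficient is
    taken from the source with the larger local energy (activity level)."""
    cv = pywt.wavedec2(visible.astype(np.float64), wavelet, level=levels)
    ci = pywt.wavedec2(infrared.astype(np.float64), wavelet, level=levels)

    def energy(band):
        return uniform_filter(band ** 2, size=5)

    # Low-frequency: weighted average by relative local energy.
    ev, ei = energy(cv[0]), energy(ci[0])
    w = ev / (ev + ei + 1e-12)
    fused = [w * cv[0] + (1 - w) * ci[0]]

    # High-frequency: choose-max on local energy, per subband.
    for lv, li in zip(cv[1:], ci[1:]):
        fused.append(tuple(np.where(energy(bv) >= energy(bi), bv, bi)
                           for bv, bi in zip(lv, li)))
    return pywt.waverec2(fused, wavelet)

if __name__ == "__main__":
    vis = np.random.rand(128, 128)
    ir = np.random.rand(128, 128)
    out = fuse(vis, ir)
```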

Convolutional neural network-based low-dosage CT image noise inhibition method

The invention relates to a convolutional neural network-based low-dose CT image noise suppression method, comprising the following steps: (1) normalizing the input original low-dose CT image L, obtained by scanning at low tube current and tube voltage, by estimating the mean and standard deviation of the gray levels of all its pixels, subtracting the mean from L, and dividing by the standard deviation to obtain the CT image L0; (2) feeding the preprocessed low-dose CT image L0 into the convolutional neural network to predict the noise CT image D0 corresponding to the low-dose CT image; and (3) subtracting the predicted noise image D0 from L0, multiplying by the standard deviation of the low-dose CT image, and adding its mean to obtain the denoised image H0. Denoising the low-dose CT image with a convolutional neural network ensures that the image meets diagnostic quality while reducing the radiation dose to the subject, increasing the lesion detection rate, and enabling earlier diagnosis.
Owner:SOUTHERN MEDICAL UNIVERSITY
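
A minimal PyTorch sketch of the normalize / predict-noise / subtract / denormalize pipeline above, assuming a small DnCNN-style network; the depth, width, and training procedure are illustrative and not taken from the patent.

```python
import torch
import torch.nn as nn

class NoiseCNN(nn.Module):
    """Small DnCNN-style network: predicts the noise image D0 from the
    normalized low-dose CT image L0; the denoised image is L0 - D0."""
    def __init__(self, channels=1, features=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)                          # predicted noise D0

def denoise(model, low_dose):
    """Normalize, predict the noise, subtract it, and undo the normalization."""
    mean, std = low_dose.mean(), low_dose.std()
    l0 = (low_dose - mean) / (std + 1e-8)
    with torch.no_grad():
        d0 = model(l0)
    return (l0 - d0) * std + mean                    # denoised image H0

if __name__ == "__main__":
    net = NoiseCNN().eval()
    ct = torch.randn(1, 1, 128, 128)                 # placeholder CT slice
    h0 = denoise(net, ct)
```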

Multi-spectrum remote sensing image change detection method

The invention discloses a multispectral remote sensing image change detection method. The traditional approach forms a difference image from two remote sensing images of different time phases by an algebraic method, models the difference image, determines a change threshold, and then detects the changes. This process suffers from two problems: the difference image may not fit the assumed model, and the change threshold is difficult to determine. To solve these problems, a remote sensing image change detection method based on an SVM (support vector machine) mixed kernel function is proposed. The implementation comprises the following steps: first, constructing the difference image by combining a PCA (principal component analysis) transform with a correlation-coefficient integration method; then extracting the gray-level and texture features of the difference image and normalizing and formatting them; selecting a training area and constructing the SVM mixed kernel function for training; and finally classifying the difference image with the SVM mixed-kernel classifier to obtain the change detection result of the target image.
Owner:HOHAI UNIV +1
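
A sketch of the classification stage with scikit-learn, assuming the mixed kernel is a convex combination of an RBF kernel and a polynomial kernel passed to SVC as a callable; the PCA/correlation-coefficient construction of the difference image and the texture features are assumed to have been computed already, and the feature array here is synthetic.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def mixed_kernel(X, Y, lam=0.7, gamma=0.5, degree=3):
    """Mixed kernel: convex combination of a local RBF kernel and a global
    polynomial kernel (lam, gamma, degree are illustrative parameters)."""
    return lam * rbf_kernel(X, Y, gamma=gamma) + \
           (1 - lam) * polynomial_kernel(X, Y, degree=degree)

def detect_changes(features, train_idx, train_labels):
    """features: (n_pixels, n_features) gray-level + texture features of the
    difference image, already normalized; train_idx selects the training area."""
    clf = SVC(kernel=mixed_kernel)
    clf.fit(features[train_idx], train_labels)
    return clf.predict(features)                     # 1 = changed, 0 = unchanged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(1000, 6))
    labels = (feats[:, 0] > 0).astype(int)           # toy ground truth
    idx = rng.choice(1000, size=200, replace=False)
    change_map = detect_changes(feats, idx, labels[idx])
```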

Synthetic aperture radar image denoising method based on nonsubsampled contourlet transform

Active · CN101482617A · Avoid jitter distortion · Adaptive denoising · Image enhancement · Radio wave reradiation/reflection · Synthetic aperture radar · Radar
The invention discloses a denoising method for synthetic aperture radar (SAR) images based on the nonsubsampled contourlet transform, mainly to solve the problem that existing methods have difficulty preserving image detail effectively. The method comprises: (1) inputting a SAR image X and performing an L-level nonsubsampled contourlet transform; (2) computing the speckle-noise variance of the high-frequency directional subbands at each scale; (3) classifying each high-frequency directional subband coefficient C_{l,i}(a, b) as a signal or a noise coefficient according to its local mean mean[C_{l,i}(a, b)]; (4) retaining the signal part of the classified high-frequency directional subband coefficients to obtain the denoised high-frequency directional subband coefficients; and (5) applying the inverse nonsubsampled contourlet transform to the low-frequency subband and the denoised high-frequency directional subband coefficients to obtain the denoised SAR image. The invention effectively removes coherent speckle noise while preserving image detail; the denoised image shows no jitter or distortion, and the method can be used in the preprocessing stage of synthetic aperture radar image processing.
Owner:XIDIAN UNIV
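
A hedged sketch of the coefficient classification rule: the NSCT is replaced by a stationary wavelet transform (PyWavelets), the image is processed in the log domain to make the multiplicative speckle approximately additive, and a high-frequency coefficient is kept as signal only where the local mean of its magnitude exceeds a noise-dependent threshold; the factor k and the per-subband noise estimate are illustrative assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def sar_denoise(img, wavelet="sym4", levels=2, k=1.5):
    """Speckle-reduction sketch in the log domain with a stationary wavelet
    transform standing in for the NSCT: zero out high-frequency coefficients
    whose local-mean magnitude does not exceed k times the subband noise level."""
    log_img = np.log1p(img.astype(np.float64))
    coeffs = pywt.swt2(log_img, wavelet, level=levels)

    cleaned = []
    for approx, details in coeffs:
        kept = []
        for band in details:
            sigma = np.median(np.abs(band)) / 0.6745         # per-subband noise std
            local_mean = uniform_filter(np.abs(band), size=3)
            kept.append(np.where(local_mean > k * sigma, band, 0.0))
        cleaned.append((approx, tuple(kept)))
    return np.expm1(pywt.iswt2(cleaned, wavelet))

if __name__ == "__main__":
    scene = np.ones((128, 128)) * 50.0
    speckled = scene * np.random.gamma(4.0, 1.0 / 4.0, scene.shape)
    denoised = sar_denoise(speckled)
```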

Semi-inverse method-based rapid single image dehazing algorithm

The invention discloses a semi-inverse method-based fast single-image dehazing algorithm, which comprises the following steps: based on the atmospheric scattering model, the global atmospheric light is estimated with an improved semi-inverse algorithm, which is more robust than taking the maximum gray value of the dark channel; second, taking the edge information of the image as a constraint, the edge information and the scene-depth information are fused according to the characteristics of atmospheric scattered light to accurately estimate the atmospheric veil; then an initial haze-free restoration is obtained from the atmospheric scattering model; finally, color adjustment and detail enhancement are applied to the initially dehazed image to obtain a realistic haze-free image. The algorithm handles depth discontinuities and foreground pixels very well, eliminating halo and vignetting effects. Extensive experiments show that the algorithm preserves color and detail information well, offers good automation and robustness, and can further be used in a video dehazing system.
Owner:SOUTHWEST UNIV OF SCI & TECH
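
A minimal sketch of semi-inverse dehazing under the atmospheric scattering model I = J*t + A*(1 - t): pixels whose semi-inverse max(I, 1 - I) barely differs from I in all channels are treated as hazy and used to estimate the atmospheric light A, a rough transmission is derived dark-channel style, and the model is inverted. The edge/depth fusion and the color/detail post-processing of the patent are omitted, and the thresholds are illustrative.

```python
import numpy as np
import cv2

def dehaze(bgr, beta=0.95, t_min=0.1):
    img = bgr.astype(np.float32) / 255.0
    semi_inverse = np.maximum(img, 1.0 - img)

    # Haze mask: all three channels changed by less than a small threshold.
    haze_mask = np.all(np.abs(semi_inverse - img) < 0.1, axis=2)
    if haze_mask.any():
        airlight = img[haze_mask].max(axis=0)        # per-channel estimate of A
    else:
        airlight = img.reshape(-1, 3).max(axis=0)

    # Rough transmission from the normalized dark channel (erosion = min filter).
    dark = cv2.erode((img / (airlight + 1e-6)).min(axis=2),
                     np.ones((15, 15), np.uint8))
    t = np.clip(1.0 - beta * dark, t_min, 1.0)

    # Invert the scattering model and rescale to 8-bit.
    j = (img - airlight) / t[..., None] + airlight
    return (np.clip(j, 0, 1) * 255).astype(np.uint8)

if __name__ == "__main__":
    cv2.imwrite("dehazed.png", dehaze(cv2.imread("hazy.jpg")))
```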

Control method of a monitoring dome camera (ball machine)

The invention discloses a control method for a monitoring dome camera (ball machine). The method comprises the following steps: (1) dividing the monitored space vertically and horizontally into a number of small partitions and setting a shooting focal length for each partition; (2) setting corresponding preset positions for the center point and the edge points of each partition, and storing the camera's horizontal position, vertical position, and shooting focal-length information; (3) reading a video frame and performing target detection on it; (4) mapping the detected target to the corresponding preset position according to its direction in the monitored scene; and (5) having the dome camera call the preset position to acquire the monitored image. The method overcomes the shortcomings of dome cameras in traditional video surveillance systems, namely limited automation, insufficient real-time performance and flexibility, and the need for manual intervention; it is convenient to operate, highly automated, responsive in real time, and particularly effective at capturing monitored images of fast-moving targets.
Owner:HUAZHONG UNIV OF SCI & TECH
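
A small sketch of the partition-to-preset mapping in steps (1), (2), and (4), assuming a fixed grid of partitions with illustrative pan/tilt/zoom values; sending the selected preset to a real camera would go through the vendor's PTZ protocol, which is not shown.

```python
from dataclasses import dataclass

@dataclass
class Preset:
    pan: float       # horizontal position
    tilt: float      # vertical position
    zoom: float      # shooting focal length for this partition

class DomeCameraController:
    """Maps a detected target's position in the monitored scene to the preset
    (pan/tilt/zoom) of the partition that contains it."""

    def __init__(self, rows=3, cols=4, scene_w=1920, scene_h=1080):
        self.rows, self.cols = rows, cols
        self.scene_w, self.scene_h = scene_w, scene_h
        # One preset per partition: pan/tilt at the partition center; here the
        # farther rows simply get a longer focal length (illustrative).
        self.presets = {
            (r, c): Preset(pan=360.0 * (c + 0.5) / cols,
                           tilt=90.0 * (r + 0.5) / rows,
                           zoom=1.0 + r)
            for r in range(rows) for c in range(cols)
        }

    def preset_for_target(self, x, y):
        """x, y: target center in frame coordinates from the detector."""
        col = min(int(x * self.cols / self.scene_w), self.cols - 1)
        row = min(int(y * self.rows / self.scene_h), self.rows - 1)
        return self.presets[(row, col)]

if __name__ == "__main__":
    controller = DomeCameraController()
    print(controller.preset_for_target(1500, 300))   # fast target in upper right
```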