
116 results about "Human visual perception" patented technology

Human visual perception simulation-based self-adaptive low-illumination image enhancement method

The invention provides a human visual perception simulation-based self-adaptive low-illumination image enhancement method. The method is proposed to address the low brightness and low contrast of low-illumination color images, and is built on a study of how the pupils and photoreceptor cells automatically adjust to the environment. The method comprises the following steps: the adjustment of the pupils to light is simulated to raise the overall brightness level of the image; the self-adaptive regulation of human vision in a low-illumination environment is simulated, and a nonlinear mapping model is designed to mimic the adjustment of rod cells and cone cells, yielding a light-and-dark adaptation function; a light-and-dark information fusion function is determined according to the illumination distribution, and global self-adaptive adjustment is performed on the brightness component; since the local contrast of the enhanced brightness image decreases, local self-adaptive contrast enhancement is applied to the image using an exponential function; and finally, color restoration is performed on the enhanced image. With the method adopted, the brightness, local contrast and detail information of a low-illumination color image can be effectively improved, and the method is especially effective in enhancing dark and highlight areas.
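The two-stage idea in the abstract — a global nonlinear brightness lift followed by an exponential local-contrast step — can be sketched as below. This is a minimal illustration, not the patented method: the Naka-Rushton-style mapping, the window size, and the exponent form are all assumptions standing in for the unpublished details.

```python
import numpy as np

def box_mean(img, k):
    # local mean via padded 2-D cumulative sums (no SciPy needed)
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def enhance(v, sigma=0.25, k=15):
    """Global Naka-Rushton-style adaptation followed by an
    exponential local-contrast step (illustrative parameters)."""
    v = np.asarray(v, dtype=float) / 255.0
    # simulated photoreceptor adaptation: lifts dark regions,
    # compresses highlights (maps 0 -> 0 and 1 -> 1)
    g = v * (1.0 + sigma) / (v + sigma)
    # exponential local contrast: pixels brighter than their
    # neighbourhood get exponent < 1 (boost), darker get > 1 (cut)
    m = box_mean(g, k)
    return np.clip(g ** ((m + 0.5) / (g + 0.5)), 0.0, 1.0)
```

Applied to a dark constant image, the global stage alone raises mean brightness while leaving black and white fixed points untouched.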
Owner:SOUTHWEST UNIV OF SCI & TECH

Stereoscopic video objective quality evaluation method based on machine learning

Inactive · CN103338379A · Reflect changes in visual quality · Get evaluation result · Television systems · Stereoscopic systems · Stereoscopic video · Singular value decomposition
The invention discloses a stereoscopic video objective quality evaluation method based on machine learning. When the spatial-domain quality of the luminance-component image of a single frame is evaluated, each image block of the luminance component of every frame in the original and distorted stereoscopic videos is subjected to singular value decomposition, and the dot product of the resulting singular vectors is used to measure the distortion degree of each frame of the distorted stereoscopic video. Because singular vectors strongly reflect the structural information of an image, using their dot product to evaluate image quality takes changes of structural information into account, so the evaluation results reflect changes in the visual quality of the stereoscopic video more objectively under various kinds of distortion. The method further uses machine learning to model the relations between the objective quality prediction and the quality of the left-viewpoint and right-viewpoint videos, as well as the degree of difference between the two viewpoints, so that evaluation results more consistent with human visual perception can be obtained.
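The core measurement — comparing singular vectors of corresponding blocks — can be illustrated as follows. This is a sketch of the block-SVD idea only, not the patented pipeline: the block size, the use of only the leading singular vectors, and the averaging are assumptions.

```python
import numpy as np

def svd_block_score(ref, dist, b=8):
    """Illustrative block-SVD similarity: average dot product of the
    leading left/right singular vectors over corresponding blocks."""
    scores = []
    h, w = ref.shape
    for i in range(0, h - b + 1, b):
        for j in range(0, w - b + 1, b):
            Ur, _, Vr = np.linalg.svd(ref[i:i + b, j:j + b])
            Ud, _, Vd = np.linalg.svd(dist[i:i + b, j:j + b])
            # abs() removes the sign ambiguity of singular vectors
            scores.append(abs(Ur[:, 0] @ Ud[:, 0]) * abs(Vr[0] @ Vd[0]))
    return float(np.mean(scores))  # 1.0 means structurally identical
```

Undistorted frames score 1.0 by construction, and any structural distortion rotates the singular vectors and lowers the score.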
Owner:NINGBO UNIV

Content-based color animal image retrieval method and system

The invention provides a content-based color animal image retrieval method and system. The method comprises the following steps: step 1, inputting a query image which serves as the retrieval object; step 2, extracting color, texture and skeleton feature descriptions of the content of interest in the query image under the guidance of a feature template library, and performing normalization, wherein the feature template library stores feature descriptions of different animal colors, textures and skeletons built from sample statistics; step 3, retrieving the feature descriptions extracted in step 2 against an R-tree index structure, performing search computations on the image feature indexes in a feature index library, and extracting the corresponding images from an image database according to the matching result. The method and system can, to a certain extent, overcome the influence of illumination changes on retrieval decisions, describe the angle and posture semantics of the query animal through skeleton feature analysis during retrieval, and produce retrieval results consistent with human visual perception.
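The extract-normalize-match loop of steps 1-3 can be sketched with the color descriptor alone. This is a toy stand-in: the texture and skeleton descriptors are omitted, a linear scan replaces the R-tree index, and histogram parameters are assumptions.

```python
import numpy as np

def color_descriptor(img, bins=8):
    # per-channel colour histogram, L1-normalised (colour part only;
    # the patent's texture and skeleton descriptors are omitted here)
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
         for c in range(3)]
    v = np.concatenate(h).astype(float)
    return v / v.sum()

def retrieve(query, library, top_k=3):
    # linear scan stands in for the patent's R-tree index structure
    q = color_descriptor(query)
    ranked = sorted(library,
                    key=lambda item: np.abs(q - color_descriptor(item[1])).sum())
    return [name for name, _ in ranked[:top_k]]
```

`library` here is a list of `(name, image)` pairs; a real system would search pre-computed descriptors through the index rather than recomputing them per query.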
Owner:NANJING NORTH OPTICAL ELECTRONICS

Bionic vision transformation-based image RSTN (rotation, scaling, translation and noise) invariant attributive feature extraction and recognition method

Active · CN105809173A · The RSTN invariant property has · Invariant properties have · Character and pattern recognition · Feature extraction · Image resolution
The invention discloses a bionic vision transformation-based image RSTN (rotation, scaling, translation and noise) invariant attribute feature extraction and recognition method. The method includes the following steps: 1) grayscale conversion is performed on the original image, and the image is resized using bilinear interpolation; 2) directional edges of the target image are detected using a Gabor filter combined with a bipolar filter F, so that an edge image E is obtained; 3) the spatial-resolution pitch detection value of the edge image E is calculated, giving a first-stage output image S1; and 4) the directional edge detection of step 2) and the spatial-resolution pitch detection of step 3) are applied again to the first-stage output image S1, giving a second-stage feature output image S2, from which the invariant attribute features are obtained. The method simulates the human visual perception mechanism and combines it with bionic vision transformation-based RSTN invariant attribute features, so the accuracy of image recognition is improved and robustness to noise is enhanced.
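The first stage — oriented Gabor edge detection followed by spatial pooling — can be sketched as below. The kernel parameters, the max over orientations, and the pooling step are illustrative assumptions; the patent's bipolar filter F and pitch-detection step are not reproduced.

```python
import numpy as np

def gabor(theta, size=9, lam=4.0, sigma=2.0):
    # even (cosine) Gabor kernel at orientation theta (assumed params)
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

def conv2_valid(img, k):
    # naive 'valid' 2-D correlation, adequate for a small sketch
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def s1_output(img, n_orient=4, pool=2):
    # directional edge energy, max over orientations, then local
    # max pooling as a crude spatial-resolution reduction
    maps = [np.abs(conv2_valid(img, gabor(t)))
            for t in np.linspace(0.0, np.pi, n_orient, endpoint=False)]
    e = np.max(maps, axis=0)
    h = e.shape[0] // pool * pool
    w = e.shape[1] // pool * pool
    return e[:h, :w].reshape(h // pool, pool, w // pool, pool).max(axis=(1, 3))
```

Running the same two operations again on S1, as in step 4), would give the second-stage output S2.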
Owner:CENT SOUTH UNIV

Perceptual color matching method between two different polychromatic displays

The invention relates to a color matching method for transforming a color representation from a first set of color primaries with a plurality of first signals to a second set of color primaries with a plurality of second signals in a first domain. The method takes the characteristics of human visual perception into account: since humans are more sensitive to luminous intensity than to chrominance, the method matches luminous intensity first, and minimizes the intensity difference by exploiting the optimality of resource distribution. An additional step smooths the intensity difference among color primaries at the primary level, which improves visual quality especially for images with gradual changes across many color levels. When a color falls outside the gamut, the luminance information is preserved by adding extra white; this out-of-gamut handling provides higher contrast, which is especially beneficial for displaying color changes with many levels, such as sunrise or sunset scenes. The method further considers the color interactions of each color primary with the configuration of its surrounding primaries. By exploiting the perceived luminous intensity instead of the physical luminous intensity, a superior color matching algorithm is obtained.
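The luminance-preserving out-of-gamut step can be illustrated with a toy per-primary transform. Everything here is an assumption for illustration: the Rec.709 luma weights, the per-primary gains, and the clip-then-add-white rule stand in for the patent's actual matching and smoothing stages.

```python
import numpy as np

# Rec.709 luma weights -- an assumption; the patent fixes no colour space
LUMA = np.array([0.2126, 0.7152, 0.0722])

def map_to_display(rgb, primary_gain=np.array([0.9, 1.0, 0.8])):
    """Toy per-primary transform: clip out-of-gamut values, then add
    white to restore the clipped-away luminance (hypothetical)."""
    out = np.asarray(rgb, dtype=float) / primary_gain
    clipped = np.clip(out, 0.0, 1.0)
    lost = LUMA @ (out - clipped)        # luminance removed by clipping
    if lost > 0.0:
        # adding the same amount to all primaries is adding white
        clipped = np.clip(clipped + lost / LUMA.sum(), 0.0, 1.0)
    return clipped
```

In-gamut colors pass through the transform unchanged, while a saturated out-of-gamut red picks up a small white component instead of simply losing intensity.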
Owner:VP ASSETAB