
394 results about "Color consistency" patented technology

Color consistency refers to the average amount of variation in chromaticity among a batch of supposedly identical lamp samples. Generally speaking, the more complicated the physics and chemistry of the light source, the more difficult it is to manufacture with consistent color properties.
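As a rough illustration of this definition, the sketch below computes the average chromaticity spread of a batch of lamps in the CIE 1976 u'v' plane from hypothetical XYZ measurements. Practical specifications usually express color consistency in MacAdam ellipses (SDCM) rather than this simple average distance, so treat it only as a minimal quantification of "variation in chromaticity among a batch of samples".

```python
# A minimal sketch of quantifying color consistency as the average chromaticity
# spread of a batch of lamp samples. The XYZ measurements below are hypothetical;
# real assessments typically use MacAdam ellipses / SDCM rather than this simple
# Euclidean deviation in u'v' space.
import numpy as np

def xyz_to_uv_prime(xyz):
    """Convert CIE XYZ tristimulus values to CIE 1976 u'v' chromaticity."""
    x, y, z = xyz[..., 0], xyz[..., 1], xyz[..., 2]
    denom = x + 15.0 * y + 3.0 * z
    return np.stack([4.0 * x / denom, 9.0 * y / denom], axis=-1)

def average_chromaticity_variation(xyz_samples):
    """Mean Euclidean distance of each sample's u'v' point from the batch centroid."""
    uv = xyz_to_uv_prime(np.asarray(xyz_samples, dtype=float))
    centroid = uv.mean(axis=0)
    return float(np.linalg.norm(uv - centroid, axis=1).mean())

# Hypothetical measurements of five "identical" lamps.
batch = [[95.0, 100.0, 108.0],
         [94.6, 100.0, 107.1],
         [95.3, 100.0, 109.0],
         [94.9, 100.0, 108.4],
         [95.1, 100.0, 107.8]]
print("average Delta u'v':", average_chromaticity_variation(batch))
```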

Method for acquiring range image by stereo matching of multi-aperture photographing based on color segmentation

Inactive · CN101720047A · Eliminate mismatch points · Improved parallax accuracy · Image analysis · Stereoscopic systems · Parallax · Stereo matching
The invention discloses a method for acquiring a range image by stereo matching of multi-aperture photography based on color segmentation. The method comprises the following steps: (1) correcting all input images; (2) performing color segmentation on a reference image and extracting regions of consistent color; (3) performing local window matching on the multiple input images to obtain multiple parallax (disparity) images; (4) removing mismatched points generated during matching by applying a bilateral matching strategy; (5) synthesizing the multiple parallax images into a single parallax image and filling in parallax values at the mismatched points; (6) applying post-processing optimization to obtain a dense parallax image; and (7) converting the parallax image into a range image according to the relationship between parallax and depth. By acquiring range information from multiple viewpoint images and exploiting the image information they provide, the method not only resolves mismatches caused by periodic repetitive textures, occlusion and the like, but also improves matching precision and yields an accurate range image.
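The bilateral (left-right) consistency check in step (4) and the parallax-to-depth conversion in step (7) can be sketched as follows; the tolerance value, array layout and NaN marking of mismatched points are illustrative assumptions, not details taken from the patent.

```python
# A minimal sketch of two steps the abstract mentions: the bilateral (left-right)
# consistency check used to reject mismatched points, and the parallax-to-depth
# conversion depth = f * B / d. Shapes and the tolerance are assumptions.
import numpy as np

def cross_check(disp_left, disp_right, tol=1.0):
    """Invalidate pixels whose left and right disparities disagree by more than tol."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    # Position of each left pixel in the right image.
    xr = np.clip((xs - np.round(disp_left)).astype(int), 0, w - 1)
    consistent = np.abs(disp_left - disp_right[ys, xr]) <= tol
    out = disp_left.copy()
    out[~consistent] = np.nan          # mark mismatched points for later filling
    return out

def disparity_to_depth(disp, focal_px, baseline_m):
    """Convert a disparity map (pixels) to a range image (metres)."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return focal_px * baseline_m / disp
```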
Owner:SHANGHAI UNIV

System and method for obtaining color consistency for a color print job across multiple output devices

A method for maintaining color consistency in an environment of networked devices is disclosed. The method involves identifying a group of devices on which a job is intended to be rendered; obtaining color characteristics from devices in the identified group; modifying the job based on the obtained color characteristics; and rendering the job on one or more of the devices. More specifically, device controllers associated with each of the output devices are queried to obtain color characteristics specific to the associated output device. Preferably, the original job and the modified job employ device-independent color descriptions. Modifications are computed by a transform determined from the color characteristics of the output devices along with the content of the job itself. The method further comprises mapping colors in the original job to the output devices' common gamut, i.e., the intersection of the gamuts of the individual printers, where the color gamut of each device is obtained from a device characterization profile either by retrieving the gamut tag or by derivation from the characterization data in the profile. The color gamut of each device is computed with knowledge of the transforms that relate device-independent color to device-dependent color, using a combination of device calibration and characterization information. Alternatively, transformations are determined dynamically based on the characteristics of the target group of output devices. From the individual color gamuts of the devices, a common intersection gamut is derived. Deriving the common intersection gamut generally amounts to intersecting two three-dimensional volumes in color space. This may be performed geometrically by intersecting the surfaces representing the boundaries of the gamut volumes, which are typically represented as triangles. Alternatively, the intersection may be computed by generating a grid of points known to include all involved device gamuts; this grid is mapped sequentially to each individual gamut in turn, yielding a set of points that lie within the common gamut, from which a connected gamut surface is produced. Once the common intersection gamut is derived, the input job colors are mapped to this gamut. The optimal mapping technique generally depends on the characteristics of the input job and the user's rendering intent. Final color correction employs a standard colorimetric transform for each output device that does not involve any gamut mapping.
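The grid-based derivation of the common intersection gamut can be illustrated with the sketch below; the spherical `in_gamut` tests and the nearest-point mapping are placeholder assumptions standing in for real profile-derived gamut checks and rendering-intent-aware gamut mapping.

```python
# A minimal sketch of the grid-based common-gamut derivation described above:
# sample a grid of candidate colors, keep only those inside every device's gamut,
# and then map job colors to the nearest surviving point. The `in_gamut` callables
# are placeholders for per-device checks derived from characterization data.
import numpy as np

def common_gamut_points(grid_points, in_gamut_tests):
    """Return the grid points that lie inside the gamut of every output device."""
    mask = np.ones(len(grid_points), dtype=bool)
    for test in in_gamut_tests:
        mask &= np.array([test(p) for p in grid_points])
    return grid_points[mask]

def map_to_common_gamut(colors, common_points):
    """Clip each color to its nearest neighbour in the common intersection gamut."""
    d = np.linalg.norm(colors[:, None, :] - common_points[None, :, :], axis=2)
    return common_points[d.argmin(axis=1)]

# Hypothetical spherical gamuts in Lab space, just to make the sketch runnable.
grid = np.mgrid[0:101:10, -60:61:10, -60:61:10].reshape(3, -1).T.astype(float)
printer_a = lambda p: np.linalg.norm(p - [50, 0, 0]) < 55
printer_b = lambda p: np.linalg.norm(p - [55, 5, -5]) < 50
common = common_gamut_points(grid, [printer_a, printer_b])
print(map_to_common_gamut(np.array([[80.0, 40.0, 40.0]]), common))
```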
Owner:XEROX CORP

Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment

The invention relates to an automatic matching correction method for video texture projection in a three-dimensional virtual-real fusion environment, and to a method of fusing a real video image with a virtual scene. The automatic matching correction method comprises the steps of constructing the virtual scene, obtaining video data, fusing video textures, and correcting the projector. Captured real video is fused into the virtual scene on complex surfaces such as terrain and buildings by texture projection, which improves the expression and presentation of dynamic scene information in the virtual-real environment and enhances the sense of depth in the scene. Dynamic video texture coverage of a large-scale virtual scene can be achieved by increasing the number of videos shot from different angles, producing a dynamic, realistic virtual-real fusion of the virtual environment and the real scene. Performing color consistency processing on the video frames in advance eliminates obvious color jumps and improves the visual effect. With the automatic correction algorithm, the virtual scene and the real video can be fused more precisely.
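One common way to perform the kind of color consistency pre-processing mentioned above is to pull each frame's per-channel statistics toward a reference frame; the sketch below shows that generic statistics-transfer approach and is not necessarily the algorithm used in the patent.

```python
# A minimal sketch of one common way to enforce color consistency across video
# frames before projection: shift each frame's per-channel mean and standard
# deviation toward those of a reference frame. This is a generic illustration,
# not the patent's specific processing.
import numpy as np

def match_frame_statistics(frame, reference):
    """Match per-channel mean/std of `frame` (H, W, 3 float in [0, 1]) to `reference`."""
    out = np.empty_like(frame, dtype=float)
    for c in range(3):
        src, ref = frame[..., c].astype(float), reference[..., c].astype(float)
        scale = ref.std() / (src.std() + 1e-8)
        out[..., c] = (src - src.mean()) * scale + ref.mean()
    return np.clip(out, 0.0, 1.0)
```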
Owner:北京微视威信息科技有限公司

White balance processing method and device

The embodiment of the invention discloses a white balance processing method and device. The method comprises the following steps: capturing a picture with the camera module to be corrected under a preset light source, calculating the average values of red (R), green (G) and blue (B) within a set region of the picture, and deriving from these averages a white balance actual value corresponding to a white balance standard value, where the standard value is obtained in the same way from a picture captured by a standard module under the same preset light source; computing the red, green and blue white balance gains from the white balance actual value and the white balance standard value; and writing these gains to a control register of the sensor of the camera to be corrected. By applying this white balance correction to the RAW data output by the camera, the method addresses color cast within a single module, color consistency across different modules, and overall image color cast.
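The gain computation reduces to a per-channel ratio between the standard module's averages and those of the module under test, as the hypothetical sketch below illustrates; the region layout, value ranges and register write are assumptions.

```python
# A minimal sketch of the gain computation the abstract describes: average R, G, B
# over a region of a picture taken under the preset light source, compare against
# the averages of a standard module, and derive per-channel gains to write to the
# sensor's control registers. Numbers and region layout are hypothetical.
import numpy as np

def channel_averages(raw_rgb, region):
    """Average R, G, B inside a rectangular region (y0, y1, x0, x1) of an RGB image."""
    y0, y1, x0, x1 = region
    return raw_rgb[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

def white_balance_gains(standard_avg, actual_avg):
    """Per-channel gains that bring the module under test onto the standard module."""
    return np.asarray(standard_avg, dtype=float) / np.asarray(actual_avg, dtype=float)

# Hypothetical averages measured under the preset light source.
gains = white_balance_gains([128.0, 140.0, 118.0], [120.0, 140.0, 130.0])
print("R/G/B gains:", np.round(gains, 3))   # values to be written to the registers
```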
Owner:TRULY OPTO ELECTRONICS

LED (light emitting diode) pixel unit device structure and preparation method thereof

The invention discloses an LED (light emitting diode) pixel unit device structure. In the structure, three LED chip units sharing the same substrate but electrically isolated from one another are packaged face-down to reduce the packaging area of the device, so the resolution of an LED display screen can be improved. Each LED chip unit has an identical structure, and R, G and B color filter films are respectively formed on the chip units so that they emit red, green and blue light; because the chip units therefore attenuate uniformly, the color consistency of the display screen is improved. The invention also discloses a preparation method for the structure. The LED modules are arranged face-down on a heat-conducting substrate, so the die-bonding and gold-wire-bonding steps can be omitted, manufacturing cost is reduced, production efficiency is improved, the light-blocking problem of pads and leads in small-sized LED packaging is solved, the light extraction efficiency of the LED is considerably improved, packaging space is saved, and further miniaturization and integration of the LED package size can be realized.
Owner:ENRAYTEK OPTOELECTRONICS

360-degree lighting bulb and processing method thereof

A 360-degree lighting bulb and a processing method thereof are provided. The bulb comprises a glass shield and a core rod; the glass shield opening and the bell mouth of the core rod are fused to form a vacuum-sealed cavity filled with a heat-dissipating gas, and the inner wall of the glass shield is coated with a phosphor layer. A 360-degree lighting element arranged inside the glass shield comprises a rack mounted on the core rod; the rack carries a plurality of LED lighting chips, and the electrodes of each chip are connected by metal wires to conduction pins on the rack according to their polarity, forming one or more circuits. The rack with its LED chips is coated with a glue layer, the conduction pins on the rack are welded to conduction lead wires on the core column according to polarity, and the glass shield is connected to a lamp holder. The bulb illuminates with the LED chips of the 360-degree lighting element, and reflection and refraction within the glue excite the phosphor layer on the inner surface of the glass shield, so the bulb gives uniform lighting with good color consistency and no color difference, and achieves a high yield, low cost, and a simple process.
Owner:王志根

Correcting method of multiple projector display wall colors of arbitrary smooth curve screens independent of geometric correction

The invention relates to a color correction method for multi-projector display walls on arbitrary smooth curved screens that is independent of geometric correction, and belongs to the field of image display processing in computer systems. The method comprises the following steps: first, obtaining the correspondence between the projector image and the camera image using an image alignment algorithm; second, capturing four sets of projected images in the R, G and B channels under different inputs; third, calculating the color information of each projector pixel from the image alignment relation and the captured results; and finally, combining all the results to obtain the projectors' common lightness response region and achieving color consistency correction by adjusting the inputs in real time on the GPU. The correction method is decoupled from geometric correction results, establishes a pixel-level alignment between the projector image and the CCD camera image using an adaptive image alignment algorithm, can measure the lightness of every projector pixel, and adapts to arbitrary smooth projection screens. The alignment algorithm requires no calibration of projector, camera or screen-curve parameters, and is simple, stable, self-adaptive, and fast.
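A simplified reading of the "common lightness response region" step is sketched below: per-pixel minimum and maximum luminance maps measured for each projector define the range every pixel can reach, and inputs are rescaled into that range assuming a linear per-pixel response. This linear-response assumption is a simplification of the per-channel measurements described above.

```python
# A minimal sketch of deriving a common luminance response range from per-pixel
# measurements and rescaling projector inputs into it. A linear per-pixel response
# fitted from the captured images is assumed, which simplifies the patent's
# per-channel measurement procedure.
import numpy as np

def common_luminance_range(l_min_maps, l_max_maps):
    """Largest [low, high] luminance interval every pixel of every projector can reach."""
    low = max(float(np.max(m)) for m in l_min_maps)
    high = min(float(np.min(m)) for m in l_max_maps)
    return low, high

def rescale_input(image, l_min, l_max, low, high):
    """Per-pixel input scaling so the displayed luminance falls inside [low, high].

    `image` is a normalized input in [0, 1]; `l_min` and `l_max` are this
    projector's measured per-pixel luminance maps.
    """
    target = low + image.astype(float) * (high - low)       # desired luminance
    return np.clip((target - l_min) / (l_max - l_min + 1e-8), 0.0, 1.0)
```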
Owner:WISESOFT CO LTD

RGB-D camera depth image recovery method combining color images

The invention relates to an RGB-D camera depth image recovery method that combines color images, and belongs to the field of depth image recovery. In the method, Zhang's calibration method is adopted to calibrate the color and depth cameras and acquire their intrinsic and extrinsic parameters; coordinate alignment of the color and depth cameras is achieved according to the pinhole imaging model and coordinate transformation; color and depth images of a scene are acquired and the depth image is binarized with a depth threshold; the size of each hole's connected component is checked to determine whether a hole exists; a dilation operation is applied to obtain each hole's neighborhood; the depth variance of the pixels around each hole is calculated and, according to this variance, holes are divided into occlusion holes and in-plane holes; color consistency and neighborhood similarity are respectively used to recover the two types of holes; and finally a local filtering step is applied to the recovered images to remove noise. The method recovers the depth image while preserving as much of its original information as possible and improves recovery efficiency.
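The hole classification and filling steps can be sketched as follows; the variance threshold, the dilation radius and the median fill for in-plane holes are illustrative choices rather than the patent's exact parameters.

```python
# A minimal sketch of the hole handling described above: label hole regions in the
# binarized depth map, use the depth variance of each hole's dilated neighbourhood
# to separate occlusion holes from in-plane holes, and fill in-plane holes from that
# neighbourhood. Threshold and fill rule are illustrative assumptions.
import numpy as np
from scipy import ndimage

def classify_and_fill_holes(depth, var_threshold=50.0):
    depth = depth.astype(float)
    holes = depth <= 0                                   # binarized invalid-depth mask
    labels, n = ndimage.label(holes)
    filled = depth.copy()
    for i in range(1, n + 1):
        region = labels == i
        ring = ndimage.binary_dilation(region, iterations=3) & ~holes
        if not ring.any():
            continue
        neighbour_depths = depth[ring]
        if neighbour_depths.var() < var_threshold:       # in-plane hole: fill it
            filled[region] = np.median(neighbour_depths)
        # otherwise it is treated as an occlusion hole (recovered with color
        # consistency in the method above; left untouched in this sketch)
    return filled
```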
Owner:CHONGQING UNIV OF POSTS & TELECOMM

Optical image color consistency self-adaption processing and quick mosaic method

Active · CN105528797A · Implement automatic mosaic processing · Fix color inconsistencies · Image enhancement · Image analysis · Batch processing · Triangulation
The invention discloses an adaptive color consistency processing and quick mosaic method for optical images, applied to optical images from micro and mini unmanned aerial vehicles. The method comprises the following steps. A: aero-triangulation: aero-triangulation is performed on the raw images based on POS data and camera data to obtain high-precision POS data and a DEM of the test zone. B: ortho-rectification: digital differential rectification is performed on the images based on the high-precision POS data and the test-zone DEM to obtain single ortho-images. C: image color homogenization: a global color-homogenizing template is constructed, the single images are color-homogenized in batch, and color blending is applied to the overlapping regions of each image. D: image mosaicking: a mosaic-line algorithm is adopted, images of any overlap degree are stitched, and a photo map is output. The method enables fully automatic processing of micro and mini UAV optical images; it solves the problem of color inconsistency within single images and across overlapping regions, and realizes automatic mosaic processing of images with any degree of overlap.
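Step C can be illustrated with the sketch below, which homogenizes each image toward a global mean/std template and feather-blends an overlap strip; the template construction, the linear feather and the use of 8-bit imagery are assumptions.

```python
# A minimal sketch of two steps from the pipeline above: homogenizing each
# ortho-image toward a global color template (here simply the mean/std of all
# images) and feather-blending the overlap between two aligned neighbouring
# strips. 8-bit RGB images are assumed.
import numpy as np

def homogenize(images):
    """Shift every image's per-channel statistics toward the global template."""
    stack = np.concatenate([im.reshape(-1, 3).astype(float) for im in images])
    g_mean, g_std = stack.mean(axis=0), stack.std(axis=0)
    out = []
    for im in images:
        f = im.astype(float)
        mean = f.reshape(-1, 3).mean(axis=0)
        std = f.reshape(-1, 3).std(axis=0) + 1e-8
        out.append(np.clip((f - mean) / std * g_std + g_mean, 0, 255))
    return out

def feather_blend(left, right):
    """Linearly blend two aligned overlap strips of identical shape (H, W, 3)."""
    w = left.shape[1]
    alpha = np.linspace(1.0, 0.0, w)[None, :, None]
    return alpha * left.astype(float) + (1.0 - alpha) * right.astype(float)
```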
Owner:深圳飞马机器人科技有限公司

Method for detecting bird nests in power transmission line poles based on unmanned plane images

The invention provides a method for detecting bird nests on power transmission line poles based on unmanned aerial vehicle images, a method of perceiving and analyzing the structural features of transmission lines. First, line segments in different directions are extracted from an inspection image, Gestalt perception theory is applied to merge small interrupted segments, and the merged segments are clustered into parallel line sets. The image is then divided into 8x4 blocks according to a structural feature of the pole (its nearly symmetrical intersection pattern), the counts of line segments in four different directions in each block are analyzed, and the area of the image containing the pole is detected. The invention further provides a bird nest detection method that fuses color and texture. First, color-consistent regions in the image are obtained by mean-shift clustering segmentation. Then, based on the H-channel histogram of a bird-nest sample, the regions most similar to the sample are selected as nest candidate areas using a histogram intersection method. Three co-occurrence matrix features that best characterize a bird nest, namely entropy, inertia moment and dissimilarity, are then computed as texture features of the candidate areas. Finally, each candidate area is matched against the bird-nest sample by texture similarity to complete the detection.
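The histogram intersection and co-occurrence features can be sketched as follows; the hue range follows the OpenCV 0-180 convention, and the bin count, gray-level quantization and horizontal co-occurrence offset are assumptions.

```python
# A minimal sketch of the two matching cues described above: intersection of
# H-channel histograms to rank candidate regions against a bird-nest sample, and a
# few gray-level co-occurrence (GLCM) statistics (entropy, inertia/contrast,
# dissimilarity) computed with plain NumPy. Parameters are assumptions.
import numpy as np

def hist_intersection(h_a, h_b, bins=32):
    """Histogram intersection of two hue-channel images (values in [0, 180))."""
    ha, _ = np.histogram(h_a, bins=bins, range=(0, 180))
    hb, _ = np.histogram(h_b, bins=bins, range=(0, 180))
    ha = ha / max(ha.sum(), 1)
    hb = hb / max(hb.sum(), 1)
    return float(np.minimum(ha, hb).sum())

def glcm_features(gray, levels=16):
    """Entropy, inertia (contrast) and dissimilarity of a horizontal-offset GLCM."""
    q = (gray.astype(float) / 256.0 * levels).astype(int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)   # offset (0, 1)
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    inertia = np.sum(p * (i - j) ** 2)
    dissimilarity = np.sum(p * np.abs(i - j))
    return entropy, inertia, dissimilarity
```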
Owner:上海深邃智能科技有限公司

Depth integration and curved-surface evolution based multi-viewpoint three-dimensional reconstruction method

The invention discloses a multi-viewpoint three-dimensional reconstruction method based on depth integration and surface evolution. The method comprises the following steps: first, calibrating a camera array consisting of depth cameras and visible-light cameras, projecting the depth maps obtained by the depth cameras to a viewpoint in the array using the calibration parameters to obtain an initial depth map for that viewpoint, and deriving an initial depth surface from the initial depth map; second, defining a surface-evolution energy function whose terms include the color consistency cost across multiple cameras, the uncertainty of the initial depth surface, and a surface smoothness term; and finally, driving the evolution of the depth surface by convexifying and minimizing the energy function to obtain the final depth surface, i.e., the three-dimensional scene information. Because the initial depth information and the multi-camera color consistency cost are fused simultaneously, the method effectively recovers spatial scene information and can meet a variety of application requirements in three-dimensional television, virtual reality, and the like.
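The multi-camera color consistency cost can be sketched as the spread of colors sampled at a 3D point's projections; the pinhole projection, nearest-pixel sampling and variance-based cost below are simplifying assumptions about how the term in the energy function is evaluated.

```python
# A minimal sketch of a multi-camera color consistency cost: project a 3D point
# into each visible-light camera, sample the color at the projected pixel and
# measure the spread of those samples. Pinhole projection and nearest-pixel
# sampling are simplifying assumptions.
import numpy as np

def project(point, K, R, t):
    """Project a 3D point into pixel coordinates with intrinsics K and pose (R, t)."""
    p_cam = R @ point + t
    p_img = K @ p_cam
    return p_img[:2] / p_img[2]

def color_consistency_cost(point, cameras, images):
    """Mean per-channel variance of the colors sampled at the point's projections."""
    samples = []
    for (K, R, t), img in zip(cameras, images):
        u, v = project(point, K, R, t)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < img.shape[0] and 0 <= ui < img.shape[1]:
            samples.append(img[vi, ui].astype(float))
    if len(samples) < 2:
        return np.inf                   # point not observed by enough cameras
    return float(np.var(np.stack(samples), axis=0).mean())
```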
Owner:ZHEJIANG UNIV

Colour calibration method of display unit

Active · CN101719362A · Achieve regulation · Achieve consistent results with color adjustments · Cathode-ray tube indicators · Color calibration · Color consistency
The invention discloses a colour calibration method for a display unit, comprising the following steps: setting a splicing wall (tiled video wall) to display pure-colour pictures in turn; photographing the pure-colour pictures and establishing the positional correspondence between the pixels of the splicing wall in the photo and the pixels of the actual splicing wall; calibrating the photos according to a pre-set calibration line; establishing a general table for storing colour values and initializing it with the photo colour values; adjusting the colour values of the pixels at the splicing joints of the wall in the photo and updating the general table; separating the data corresponding to each display unit according to the general table and transmitting it to that unit; and, during normal operation, decomposing the display data of each pixel into the three primary colours and looking up the colour value in the sub-table of the same primary: if the looked-up colour value is less than the value the display unit currently shows, the unit is adjusted to display the looked-up value; otherwise no adjustment is made. The invention allows local adjustment of the display units and practically satisfies the requirement for consistent colour adjustment.
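The run-time rule in the last step is effectively an element-wise minimum between the display data and the calibrated values stored for the unit, as the sketch below shows; representing each unit's sub-tables as a single H x W x 3 array is an assumption.

```python
# A minimal sketch of the run-time rule described above: for each primary, look up
# the calibrated value for the pixel in the unit's sub-table and display the lookup
# result only if it is lower than the value the unit would otherwise show. Storing
# the sub-tables as one H x W x 3 array per display unit is an assumption.
import numpy as np

def apply_calibration(display_data, calibration_table):
    """Clamp each pixel's R, G, B to the calibrated values stored for this unit."""
    return np.minimum(display_data.astype(float), calibration_table.astype(float))
```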
Owner:GUANGDONG VTRON TECH CO LTD

3D-RGB point cloud registration method based on local gray scale sequence model descriptor

The invention belongs to the technical fields of computer vision, image processing and three-dimensional measurement, and particularly relates to a 3D-RGB point cloud registration method based on a local gray-scale sequence model descriptor. The method comprises the following steps: 1, calculating the four-neighborhood gray average of each point in the two point clouds; 2, dividing the neighboring points of each keypoint into six parts according to gray value and concatenating the feature vectors of the six parts to form the keypoint feature descriptor; 3, constructing mutual point-to-point correspondences between the source and target point clouds using a nearest-neighbor ratio test and a Euclidean distance threshold, and removing erroneous correspondences using random sample consensus and color consistency; and 4, solving the transformation matrix between the source and target point clouds from the correspondences and spatially transforming the source point cloud to complete registration. The method effectively reduces the influence of weak geometric structure and varying light intensity on point cloud registration, has a wider range of application, and improves the precision and robustness of three-dimensional point cloud registration.
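Steps 3 and 4 can be sketched as follows: a simple color-consistency filter on candidate correspondences followed by an SVD-based (Kabsch) estimate of the rigid transform; the color threshold and the use of raw RGB distance are illustrative assumptions.

```python
# A minimal sketch of steps 3 and 4 above: prune correspondences whose RGB colors
# disagree (a simple color-consistency filter alongside RANSAC) and solve the rigid
# transform from the surviving pairs with the SVD-based Kabsch method. Thresholds
# are illustrative.
import numpy as np

def color_consistent(colors_src, colors_tgt, max_diff=30.0):
    """Keep correspondences whose source/target colors differ by less than max_diff."""
    d = np.linalg.norm(colors_src.astype(float) - colors_tgt.astype(float), axis=1)
    return d < max_diff

def rigid_transform(src, tgt):
    """Least-squares rotation R and translation t with tgt ~= src @ R.T + t."""
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - cs).T @ (tgt - ct)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = ct - R @ cs
    return R, t
```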
Owner:HARBIN ENG UNIV