
585 results about "Color map" patented technology

Method and apparatus for seismic signal processing and exploration

A method, a map and an article of manufacture for the exploration of hydrocarbons. In one embodiment of the invention, the method comprises the steps of: accessing 3D seismic data; dividing the data into an array of relatively small three-dimensional cells; determining in each cell the semblance/similarity, the dip and dip azimuth of the seismic traces contained therein; and displaying dip, dip azimuth and the semblance/similarity of each cell in the form of a two-dimensional map. In one embodiment, semblance/similarity is a function of time, the number of seismic traces within the cell, and the apparent dip and apparent dip azimuth of the traces within the cell; the semblance/similarity of a cell is determined by making a plurality of measurements of the semblance/similarity of the traces within the cell and selecting the largest of the measurements. In addition, the apparent dip and apparent dip azimuth, corresponding to the largest measurement of semblance/similarity in the cell, are deemed to be estimates of the true dip and true dip azimuth of the traces therein. A color map, characterized by hue, saturation and lightness, is used to depict semblance/similarity, true dip azimuth and true dip of each cell; true dip azimuth is mapped onto the hue scale, true dip is mapped onto the saturation scale, and the largest measurement of semblance/similarity is mapped onto the lightness scale of the color map.
Owner:CORE LAB GLOBAL
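The final color-mapping step described above (azimuth to hue, dip to saturation, semblance to lightness) can be sketched as follows. The value ranges and the `max_dip_deg` normalization constant are illustrative assumptions, not taken from the patent:

```python
import colorsys

def attribute_color(dip_azimuth_deg, dip_deg, semblance, max_dip_deg=45.0):
    """Map three seismic cell attributes to one RGB color via HSL.

    Assumed ranges: azimuth in [0, 360) degrees, dip clipped to
    [0, max_dip_deg] degrees, semblance in [0, 1].
    """
    hue = (dip_azimuth_deg % 360.0) / 360.0               # true dip azimuth -> hue
    saturation = min(dip_deg, max_dip_deg) / max_dip_deg  # true dip -> saturation
    lightness = max(0.0, min(semblance, 1.0))             # semblance -> lightness
    return colorsys.hls_to_rgb(hue, lightness, saturation)

# Example: a cell dipping due east (90 deg) with moderate dip, high semblance
r, g, b = attribute_color(90.0, 20.0, 0.8)
```

Note that zero dip gives zero saturation, so flat cells render as a neutral gray whose brightness encodes only semblance, which is one reason the HSL mapping is convenient here.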

Method for recovering real-time three-dimensional body posture based on multimodal fusion

Inactive · CN102800126A · The motion capture process is easy · Improved stability · 3D-image rendering · 3D modelling · Color image · Time domain
The invention relates to a method for recovering a real-time three-dimensional body posture based on multimodal fusion. The method recovers the three-dimensional skeleton of a human body by combining depth-map analysis, color identification, face detection and other techniques to obtain the real-world coordinates of the main joint points. On the basis of scene depth images and scene color images synchronously acquired at different moments, the position of the head is acquired by face detection; the positions of the four limb end points, which carry color marks, are acquired by color identification; the positions of the elbows and knees are computed from the limb end-point positions and the mapping relation between the color maps and the depth maps; and the acquired skeleton is smoothed using time-domain information to reconstruct the motion of the human body in real time. Compared with the conventional technique of recovering a three-dimensional body posture with near-infrared equipment, the method improves recovery stability and makes the motion-capture process more convenient.
Owner:ZHEJIANG UNIV
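The step that turns a color-marked pixel into a real-world joint coordinate via the color-to-depth mapping can be sketched as a standard pinhole back-projection. The intrinsics `fx, fy, cx, cy` and the assumption that the color and depth maps are already registered pixel-to-pixel are hypothetical, not stated numerics from the patent:

```python
import numpy as np

def pixel_to_3d(u, v, depth_map, fx, fy, cx, cy):
    """Back-project an image pixel (u, v) to a 3-D camera-space point.

    Assumes the color map and depth map are registered pixel-to-pixel,
    so (u, v) found in the color image indexes directly into depth_map,
    which holds metric depth in metres.
    """
    z = depth_map[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Example with a synthetic 4x4 depth map, 2 m everywhere
depth = np.full((4, 4), 2.0)
p = pixel_to_3d(2, 1, depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

Elbow and knee positions would then be interpolated between such back-projected end points and the shoulder/hip joints, but that step depends on the skeleton model and is omitted here.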

Identifying apparatus and method, position detecting apparatus and method, robot apparatus and color extracting apparatus

An identifying apparatus and method and a robot apparatus capable of reliably identifying other moving objects, a position detecting apparatus and method and a robot apparatus capable of accurately detecting the position of a moving object or of itself within a region, and a color extracting apparatus capable of accurately extracting a desired color have been difficult to realize. Objects are provided with identifiers having different color patterns, such that the color patterns are detected and identified through image processing. The objects of interest are also given color patterns different from each other, so that the position of an object can be detected by identifying its color pattern through image processing. Further, a plurality of wall surfaces having different colors are provided along the periphery of the region, so that the position of an object is detected on the basis of the colors of the wall surfaces through image processing. Finally, a luminance level and color-difference levels are detected sequentially for each pixel, and a color is extracted by determining whether or not the color-difference levels fall within a predetermined range.
Owner:SONY CORP
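The per-pixel color extraction described above (luminance and color-difference levels checked against predetermined ranges) might look like the following sketch. The BT.601 conversion coefficients and the example threshold ranges are assumptions; the patent does not specify a particular color space:

```python
import numpy as np

def extract_color_mask(rgb, y_range, cb_range, cr_range):
    """Keep pixels whose luminance (Y) and color-difference (Cb, Cr)
    levels all fall within predetermined ranges.

    rgb: HxWx3 array of 8-bit RGB values. BT.601 coefficients assumed.
    Returns a boolean HxW mask of extracted pixels.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b            # luminance level
    cb = -0.169 * r - 0.331 * g + 0.500 * b          # blue color difference
    cr = 0.500 * r - 0.419 * g - 0.081 * b           # red color difference
    return ((y_range[0] <= y) & (y <= y_range[1]) &
            (cb_range[0] <= cb) & (cb <= cb_range[1]) &
            (cr_range[0] <= cr) & (cr <= cr_range[1]))

# A 1x2 image: one pure red pixel, one pure blue pixel
img = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)
mask = extract_color_mask(img, y_range=(50, 100),
                          cb_range=(-200, 0), cr_range=(50, 200))
```

Separating luminance from chrominance this way makes the extraction more tolerant of brightness changes than raw RGB thresholding, which matches the patent's motivation for using color-difference levels.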

Method for generating virtual-real fusion image for stereo display

The invention discloses a method for generating a virtual-real fusion image for stereo display. The method comprises the following steps: (1) using a monocular RGB-D camera to acquire a depth map and a color map of the real scene; (2) reconstructing a three-dimensional scene surface model and calculating the camera parameters; (3) mapping to obtain the depth map and the color map at a virtual viewpoint position; (4) completing the three-dimensional registration of a virtual object, rendering to obtain the depth map and the color map of the virtual object, and performing virtual-real fusion to obtain virtual-real fused content for stereo display. Because the monocular RGB-D camera is used for shooting and the three-dimensional scene surface model is rebuilt frame by frame and used simultaneously for camera tracking and virtual-viewpoint mapping, higher camera-tracking and virtual-object registration precision can be achieved, the holes that appear in image-based virtual-viewpoint rendering can be effectively handled, occlusion judgment and collision detection between the virtual and real scenes can be realized, and a stereo display device can be used to obtain a vivid stereo display effect.
Owner:ZHEJIANG UNIV
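The virtual-real fusion step, including its occlusion judgment, can be sketched as a per-pixel depth comparison between the real and virtual depth maps. The convention that invalid virtual pixels carry infinite depth is an assumption for illustration:

```python
import numpy as np

def fuse(real_color, real_depth, virt_color, virt_depth):
    """Depth-based compositing: at each pixel keep whichever sample is
    closer to the camera, so a nearer virtual object occludes the real
    scene and vice versa. Pixels not covered by the virtual object are
    marked with virt_depth = inf and always fall through to the real scene.
    """
    use_virtual = virt_depth < real_depth                 # virtual object in front
    return np.where(use_virtual[..., None], virt_color, real_color)

# 2x2 example: black real scene at 3 m, white virtual object covering
# only pixel (0, 0) at 1 m
real_c = np.zeros((2, 2, 3)); real_d = np.full((2, 2), 3.0)
virt_c = np.ones((2, 2, 3));  virt_d = np.full((2, 2), np.inf)
virt_d[0, 0] = 1.0
out = fuse(real_c, real_d, virt_c, virt_d)
```

For stereo output this comparison would be run once per eye, with both maps rendered from the corresponding virtual viewpoint.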

Method and system for dynamic identification and location of charging pile based on Kinect

The present invention provides a method and system for dynamic identification and location of a charging pile based on Kinect. The method comprises: step 1, calculating a conversion matrix from the camera coordinate system to a set world coordinate system according to three-dimensional point-cloud data obtained by a Kinect sensor; step 2, aligning the pixels of the color map and the depth map one to one; step 3, removing invalid pixel points from the image obtained in step 2, converting the remaining pixel points to three-dimensional spatial points, and eliminating points higher than 50 cm or lower than 3 cm; step 4, downsampling the point cloud obtained in step 3 to reduce the computational load of subsequent processing, and applying radius filtering to remove outliers; step 5, performing Euclidean clustering on the point cloud obtained in step 4 to obtain one or more cluster objects; step 6, processing the cluster objects obtained in step 5 and screening for clusters having two feature planes; step 7, processing each cluster screened in step 6 and calculating whether the geometric relationship between its two feature planes accords with the three-dimensional shape of the charging pile; and step 8, performing a geometric calculation from the relative positions of the two feature planes determined in step 7 to determine the position and deflection angle of the charging pile relative to the origin of the world coordinate system, thereby locating the charging pile. The method and system offer accurate recognition, strong robustness, stable dynamic tracking and low susceptibility to light interference, and the target positioning requires little computation while yielding accurate results.
Owner:STANDARD ROBOTS (KUNSHAN) CO LTD
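Steps 3 and 5 of the method above (height filtering, then Euclidean clustering) can be sketched as follows. The clustering here is a naive O(n²) flood fill rather than the kd-tree-accelerated version typically used in practice, and the distance tolerance is illustrative:

```python
import numpy as np

def height_filter(points, z_min=0.03, z_max=0.50):
    """Keep points whose height lies within [z_min, z_max] metres,
    mirroring the step that discards points above 50 cm or below 3 cm."""
    z = points[:, 2]
    return points[(z >= z_min) & (z <= z_max)]

def euclidean_cluster(points, tol=0.05):
    """Naive Euclidean clustering: transitively connect any two points
    closer than `tol` metres. Returns an integer cluster label per point."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = cluster
        while stack:                                   # flood-fill one cluster
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((d < tol) & (labels == -1))[0]:
                labels[k] = cluster
                stack.append(k)
        cluster += 1
    return labels

pts = np.array([[0.00, 0.0, 0.10],
                [0.01, 0.0, 0.10],    # 1 cm from the first point
                [1.00, 1.0, 0.20],    # isolated point
                [0.00, 0.0, 0.80]])   # above the 50 cm cutoff
kept = height_filter(pts)             # drops the 0.80 m point
labels = euclidean_cluster(kept)      # two clusters remain
```

Each resulting cluster would then be passed to the plane-fitting and geometric-verification steps (steps 6 and 7) to test for the two feature planes of the charging pile.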