97 results about "Texture rendering" patented technology

Real-time three-dimensional scene reconstruction method for UAV based on EG-SLAM

Active · CN108648270A · Reducing repetition-rate requirements · Improve realism · Image enhancement · Image analysis · Point cloud · Texture rendering
The present invention provides a real-time three-dimensional scene reconstruction method for a UAV (unmanned aerial vehicle) based on EG-SLAM. Visual information is acquired with the UAV's camera to reconstruct a large-scale three-dimensional scene with texture details. Compared with multiple existing methods, the method runs directly on the CPU as images are collected, so positioning and reconstruction of a three-dimensional map can be carried out quickly and in real time. Rather than using the conventional PnP method to solve the pose of the UAV, the EG-SLAM method of the present invention solves the pose directly from the feature-point matching relationship between two frames, which reduces the requirement on the repetition rate of the collected images. In addition, the large amount of environmental information obtained gives the UAV a more sophisticated and meticulous perception of the environment's structure; texture rendering is performed on the large-scale three-dimensional point cloud map generated in real time, so that reconstruction of a large-scale three-dimensional map is realized and a more intuitive and realistic three-dimensional scene is obtained.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
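
The two-frame pose step described in this abstract (solving the relative pose from feature-point matches instead of PnP) is commonly implemented with an essential-matrix estimation and decomposition. A minimal sketch using OpenCV is given below; the function wrapper, the matched-point arrays and the intrinsic matrix `K` are illustrative assumptions, not taken from the patent.

```python
import cv2
import numpy as np

def relative_pose_from_matches(pts1, pts2, K):
    """Estimate the relative camera pose from matched feature points in two frames.

    pts1, pts2: Nx2 float arrays of matched pixel coordinates.
    K:          3x3 camera intrinsic matrix (assumed known from calibration).
    Returns the rotation R and unit-scale translation t of frame 2 w.r.t. frame 1.
    """
    # Essential matrix from the epipolar constraint, with RANSAC to reject outliers.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E and keep the (R, t) pair that places points in front of both cameras.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```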

Texture rendering method and system for real-time three-dimensional human body reconstruction, chip, equipment and medium

Inactive · CN111243071A · Quality improvement · Meet real-time rendering requirements · Animation · 3D-image rendering · Pattern recognition · Human body
The invention discloses a texture rendering method and system for real-time three-dimensional human body reconstruction, together with a chip, equipment and a medium. The method comprises: obtaining the current human body model and a depth image of the shooting object; selecting the current human body model as a standard model, reprojecting the vertices of the standard model onto the depth image, and extracting the color information and image coordinates corresponding to each vertex, wherein the color information serves as the initial color value and the image coordinates are converted into texture coordinates; calculating a weighted sum of the subsequent color observations at each vertex and the initial color value to serve as the new color of the standard-model vertex; calculating sub-texture maps and sub-masks of the current human body model, and combining them into a complete texture map and mask; and performing rendering according to the texture map and the texture coordinates. With the method, generation and optimization of the required textures can be completed rapidly on the GPU, a high-quality texture atlas is obtained, and color cracks caused by illumination changes are eliminated. A human body model generated in a multi-camera system can be rendered with a good sense of visual reality.
Owner:PLEX VR DIGITAL TECH CO LTD
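
The per-vertex color accumulation described in this abstract amounts to projecting each model vertex into the current camera image, sampling a color, and blending it into a running estimate. Below is a minimal NumPy sketch under assumed pinhole intrinsics `K` and a world-to-camera pose (`R`, `t`); all names and the fixed blending weight are illustrative, not the patent's formulation.

```python
import numpy as np

def accumulate_vertex_colors(vertices, vertex_colors, color_image, K, R, t, weight=0.3):
    """Reproject model vertices into the camera image and blend sampled colors.

    vertices:      Nx3 model vertices in world coordinates.
    vertex_colors: Nx3 float array, running per-vertex color estimate.
    color_image:   HxWx3 image aligned with the depth camera.
    K, R, t:       camera intrinsics and extrinsics (world -> camera).
    weight:        blending weight of the newly observed color.
    """
    cam = (R @ vertices.T + t.reshape(3, 1)).T      # world -> camera space
    uv = (K @ cam.T).T                               # project with intrinsics
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective divide
    h, w = color_image.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    visible = cam[:, 2] > 0                          # keep vertices in front of the camera
    sampled = color_image[v[visible], u[visible]].astype(np.float32)
    # Weighted sum of the new observation and the current color estimate.
    vertex_colors[visible] = (1 - weight) * vertex_colors[visible] + weight * sampled
    return vertex_colors
```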

Face texture image acquisition method and device, equipment and storage medium

Active · CN111325823A · Improve texture rendering · Smooth and detailed texture rendering · Character and pattern recognition · 3D-image rendering · Pattern recognition · Point cloud
The invention provides a face texture image acquisition method and device, equipment and a storage medium. The method comprises: acquiring the point cloud of a three-dimensional face model of a target object and face images of the target object in n head postures; calculating index information of the three-dimensional data points in the point cloud through cylindrical expansion; obtaining a mapping relationship between the three-dimensional data points in the point cloud and pixel points in the face images; obtaining, from the n face images, the image areas corresponding to the respective head postures to yield n effective areas; generating region texture images corresponding to the n effective areas according to the index information and the mapping relationship; and performing image fusion on the n region texture images to generate the face texture image of the target object. With the method, a corresponding face texture image can be generated for a three-dimensional face model obtained through arbitrary reconstruction, so that the texture rendering effect of the three-dimensional face model and the authenticity of the finally generated face texture image are improved.
Owner:TENCENT TECH (SHENZHEN) CO LTD
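
The cylindrical-expansion step that assigns index (texture) coordinates to the point cloud can be pictured as unwrapping each 3D point onto a cylinder around the vertical head axis. A minimal NumPy sketch follows; the axis convention and the normalization to [0, 1] are assumptions for illustration.

```python
import numpy as np

def cylindrical_uv(points):
    """Map 3D face points to cylindrical texture coordinates in [0, 1].

    points: Nx3 array (x, y, z), with y as the vertical (head) axis
            and the face roughly centered on the origin.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Horizontal angle around the vertical axis -> u coordinate.
    u = (np.arctan2(x, z) + np.pi) / (2.0 * np.pi)
    # Height along the axis, normalized to [0, 1] -> v coordinate.
    v = (y - y.min()) / (y.max() - y.min() + 1e-8)
    return np.stack([u, v], axis=1)
```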

Haptic texture rendering method based on practical measurement

Inactive · CN102054122A · Constant force · The measurement method is scientific and simple · Measurement devices · Special data processing applications · Touch perception · Static friction
The invention discloses a haptic texture rendering method based on practical measurement. After the operating handle of the haptic texture display equipment collides with the surface of a virtual texture, the resultant of the normal textural force, the normal binding force and the tangential friction is output to the operator as the contact force. The normal textural force is obtained by measuring the pressure produced while actually scratching the texture surface: a mechanical arm that applies a constant force and carries a pressure sensor at the bottom scratches the surface of the textural material at a constant speed while data are collected, and after error-term correction, smoothing, voltage-value conversion and subtraction of the mechanical arm's constant force, the data are converted into the normal textural force. The normal binding force is modeled with a spring-damper model. The tangential friction model combines a static friction stage and a sliding friction stage: the static friction stage is modeled as the product of the maximal static friction and a sine function, and the coefficient of kinetic friction in the sliding friction stage is calculated from the normal textural force, which reflects the concave-convex degree of the texture. The naturalness of the texture rendering is thereby improved.
Owner:SOUTHEAST UNIV
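
The force composition described in this abstract (measured normal textural force, spring-damper binding force, and a friction term that switches between a sine-modulated static stage and a sliding stage) can be sketched as below. All parameter values, the switch condition and the kinetic-coefficient mapping are illustrative assumptions, not the patent's exact model.

```python
import math

def contact_force(texture_force_n, penetration, penetration_rate,
                  tangential_speed, phase,
                  k=800.0, b=2.0, mu_static_max=1.2, slide_threshold=1e-3):
    """Return (normal_force, friction_force) magnitudes for haptic texture rendering.

    texture_force_n:  normal textural force looked up from measured scratch data (N).
    penetration:      probe depth below the virtual surface (m).
    penetration_rate: rate of change of the penetration depth (m/s).
    tangential_speed: probe speed along the surface (m/s).
    phase:            position-dependent phase driving the static-friction sine term.
    """
    # Normal binding force from a spring-damper model.
    binding = k * penetration + b * penetration_rate
    normal = texture_force_n + binding
    if abs(tangential_speed) < slide_threshold:
        # Static friction stage: maximal static friction times a sine function.
        friction = mu_static_max * normal * math.sin(phase)
    else:
        # Sliding stage: kinetic coefficient derived from the textural force,
        # which encodes the concave-convex degree of the texture (assumed mapping).
        mu_kinetic = 0.1 + 0.05 * texture_force_n
        friction = mu_kinetic * normal
    return normal, friction
```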

Video processing method and system

The embodiment of the invention discloses a video processing method applied to a video processing system comprising an on-screen module and an off-screen module. The on-screen module obtains special-effect parameters determined from the video to be processed, the parameters comprising template information used for identifying the special-effect type and text information used for identifying the special-effect content. The on-screen module stores the input texture corresponding to the video to be processed and sends the special-effect parameters to the off-screen module; the off-screen module completes texture rendering of the input texture in the background according to the special-effect parameters to obtain an output texture. The output texture is then obtained by the on-screen module and drawn on the display interface, so that the user sees a video with a text special effect. Because the special-effect processing is completed by the video processing system, the user does not need video processing skills, and the user's video communication experience is improved. The embodiment of the invention further discloses a video processing system.
Owner:TENCENT TECH (SHENZHEN) CO LTD
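
The hand-off between the two modules can be pictured as the on-screen module storing the input texture, passing the special-effect parameters to the off-screen module, and later drawing the returned output texture. The following framework-free Python sketch only illustrates that message flow; the class and field names are assumptions, and the actual GPU rendering pass is reduced to a placeholder.

```python
from dataclasses import dataclass

@dataclass
class EffectParams:
    template: str   # identifies the special-effect type
    text: str       # identifies the special-effect content

class OffScreenModule:
    def render(self, input_texture, params: EffectParams):
        # Render the text effect onto the input texture in the background;
        # placeholder for the real off-screen GPU pass.
        return f"{input_texture}+{params.template}:{params.text}"

class OnScreenModule:
    def __init__(self, off_screen: OffScreenModule):
        self.off_screen = off_screen
        self.input_texture = None

    def process_frame(self, frame_texture, params: EffectParams):
        # 1. Store the input texture for the current video frame.
        self.input_texture = frame_texture
        # 2. Send the special-effect parameters to the off-screen module.
        output_texture = self.off_screen.render(self.input_texture, params)
        # 3. Draw the returned output texture on the display interface.
        return output_texture

# Usage: apply a "subtitle" text effect to one frame's texture.
on_screen = OnScreenModule(OffScreenModule())
print(on_screen.process_frame("frame_0_texture", EffectParams("subtitle", "Hello")))
```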

Three-dimensional object display method and device

The invention discloses a three-dimensional object display method and device, belonging to the technical field of three-dimensional display. The method comprises the following steps: a first thread and a second thread are run by a first processing unit; when a first display instruction is received, it is handled by the first thread, which renders the three-dimensional object to be displayed into textures and saves the rendered textures in a second processing unit, a GPU (Graphics Processing Unit); during rendering, the first thread sends a second display instruction carrying the rendered textures to the second thread; and the second thread responds to the second display instruction by displaying the rendered textures. With the method and device, the textures serve as a cache and the three-dimensional object is displayed in a two-dimensional UI (User Interface); texture rendering is 3 to 5 times faster than bitmap rendering, and because the GPU allocates resources for the textures, occupation of memory resources is avoided.
Owner:LENOVO (BEIJING) LTD
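
The two-thread pattern described above (one thread renders the 3D object into a cached texture, the other displays that texture in the 2D UI) can be sketched with a queue handing over the texture handle. Thread structure and names below are illustrative assumptions; the GPU texture is reduced to a placeholder string.

```python
import queue
import threading

texture_queue = queue.Queue()

def render_thread():
    # First thread: render the 3D object into a texture and hand the
    # (GPU-side) texture handle over to the display thread.
    texture_handle = "object_texture_42"   # placeholder for a real GPU texture id
    texture_queue.put(texture_handle)

def display_thread():
    # Second thread: receive the rendered texture and draw it in the 2D UI.
    texture_handle = texture_queue.get()
    print(f"displaying cached texture {texture_handle} in the 2D UI")

t1 = threading.Thread(target=render_thread)
t2 = threading.Thread(target=display_thread)
t1.start(); t2.start()
t1.join(); t2.join()
```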