844 results for "Image texture" patented technology

An image texture is a set of metrics calculated in image processing designed to quantify the perceived texture of an image. Image texture gives us information about the spatial arrangement of color or intensities in an image or selected region of an image.
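By way of a concrete illustration (not tied to any patent below), one widely used family of texture metrics is computed from the gray-level co-occurrence matrix (GLCM). The sketch below uses scikit-image's graycomatrix/graycoprops; the 16-level quantization and the choice of properties are illustrative assumptions.

```python
# A minimal sketch of GLCM-based texture metrics using scikit-image.
# Assumes a grayscale uint8 image; the 16-level quantization is an
# illustrative choice, not taken from any patent in this listing.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_metrics(gray: np.ndarray, levels: int = 16) -> dict:
    # Quantize intensities so the co-occurrence matrix stays small.
    quantized = (gray.astype(np.float64) * levels / 256).astype(np.uint8)
    # Co-occurrence of pixel pairs at distance 1, in four directions.
    glcm = graycomatrix(quantized, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    # Average each Haralick-style property over the four directions.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    print(texture_metrics(img))
```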

Coding and decoding methods and devices for three-dimensional video

The invention discloses a coding method for three-dimensional video. The method comprises the following steps: inputting a first frame image that contains image texture information and depth information captured simultaneously at a plurality of different viewpoints, so as to form depth-pixel images for those viewpoints; selecting the viewpoint closest to the center as the main viewpoint and mapping the depth-pixel image of each viewpoint onto the main viewpoint; acquiring motion information from the texture information with a moving-target detection method, and rebuilding all depth pixels in the mapped depth-pixel images using the depth information and/or the motion information to obtain a background layer image and one or more foreground layer images; and coding the background layer image and the foreground layer images separately, with the depth information and the texture information also coded separately. The invention further discloses a decoding method for the three-dimensional video, a coder, and a decoder. The coding method is particularly suitable for multi-viewpoint video sequences with a stationary background; it improves prediction-compensation accuracy and decreases the code rate while preserving subjective quality.
Owner:华雁智科(杭州)信息技术有限公司
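As a rough sketch of the layer-separation idea in this abstract, the snippet below splits a mapped depth image into a background layer (distant, static pixels) and a foreground layer using depth and motion cues. The percentile threshold and the NaN hole-marking are assumptions for illustration, not details from the patent.

```python
# A hypothetical sketch of layer separation: split mapped depth pixels
# into a background layer and a foreground layer using depth and motion
# cues. Threshold choices are illustrative assumptions.
import numpy as np

def split_layers(depth: np.ndarray, motion_mask: np.ndarray,
                 depth_percentile: float = 75.0):
    # Pixels that are both distant and static are treated as background.
    far = depth >= np.percentile(depth, depth_percentile)
    background = far & ~motion_mask
    # Each layer keeps its depth values; missing pixels are marked NaN
    # so a later inpainting / prediction step could fill them.
    bg_layer = np.where(background, depth, np.nan)
    fg_layer = np.where(~background, depth, np.nan)
    return bg_layer, fg_layer

depth = np.random.rand(8, 8).astype(np.float32)
motion = np.zeros((8, 8), dtype=bool)
motion[2:4, 2:4] = True  # pretend a moving object was detected here
bg, fg = split_layers(depth, motion)
```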

Rendering method and apparatus

The present invention is directed to a real-time, controllable reflection-mapping function that covers all stereoscopic directions of space. More specifically, the surface of a mirrored object is segmented into a plurality of polygonal elements (for example, triangles). Then a polyhedron (for example, a cube) that contains a predetermined point of the three-dimensional space (for example, the center of the mirrored object) in its interior is generated, and a rendering process is performed for each surface of the polyhedron with the predetermined point as the view point. The rendered image is stored. Thereafter, a reflection vector is calculated at each vertex of the polygonal elements with respect to the view point used when the entire three-dimensional space is rendered. Next, the surface of the polyhedron is found at which the reflection vector, cast from the predetermined point, intersects the polyhedron. The coordinate in the stored image that corresponds to each vertex of the polygonal elements is calculated from that surface and the reflection vector. The image is then texture-mapped onto the surface of the object using these per-vertex image coordinates, and the result of the texture mapping is displayed.
Owner:GOOGLE LLC
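The two geometric steps in this abstract can be sketched compactly: reflect the view direction about a vertex normal, then pick the cube face the reflection ray exits through by its dominant axis. This is a minimal sketch of standard cube-style reflection mapping; the function names and the axis-aligned-cube assumption are illustrative.

```python
# Reflect the view direction about a vertex normal, then find which face
# of an axis-aligned cube (centered on the predetermined point) the
# reflection ray exits through.
import numpy as np

def reflect(view_dir: np.ndarray, normal: np.ndarray) -> np.ndarray:
    # R = I - 2 (N . I) N, with I pointing from the eye toward the vertex.
    n = normal / np.linalg.norm(normal)
    return view_dir - 2.0 * np.dot(n, view_dir) * n

def cube_face(reflection: np.ndarray) -> str:
    # The dominant component of the reflection vector picks the cube face.
    axis = int(np.argmax(np.abs(reflection)))
    sign = "+" if reflection[axis] >= 0 else "-"
    return sign + "xyz"[axis]   # e.g. "+x", "-z"

r = reflect(np.array([0.0, 0.0, -1.0]),
            np.array([0.0, 0.70710678, 0.70710678]))
print(cube_face(r))  # reflects upward off a 45-degree surface: "+y"
```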

Three-dimensional facial reconstruction method

InactiveCN101751689AGeometry reconstruction speed reducedImplement automatic rebuild3D-image rendering3D modellingAdaBoostFace model
The invention relates to a three-dimensional facial reconstruction method that can automatically reconstruct a three-dimensional face model from a single frontal face image, and it puts forward two schemes. The first scheme is as follows: a deformable face model is generated off line; AdaBoost is used to automatically detect the face position in the input image; an active appearance model is used to automatically locate key points of the face in the input image; the geometry of the three-dimensional face is reconstructed from the shape components of the deformable face model and the key points of the face in the image; with a shape-free texture as the target image, the texture components of the deformable face model are used to fit the face texture, so that a whole face texture is obtained; and after texture mapping, the reconstructed result is obtained. The second scheme differs from the first in that, after the geometry of the three-dimensional face is reconstructed, no texture fitting is carried out; instead, the input image is used directly as the texture image of the reconstructed result. The first scheme is applicable to fields such as film and television production and three-dimensional face recognition, while the second scheme reconstructs faster.
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI +1
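To make the shape-fitting step concrete, the hedged sketch below estimates deformable-model coefficients so that projected 3D key points match the 2D landmarks located by the active appearance model. The orthographic projection, the ridge regularization, and all names are simplifying assumptions rather than the patent's exact method.

```python
# A hedged sketch of deformable-model shape fitting: solve for PCA shape
# coefficients so projected model key points match detected 2D landmarks.
import numpy as np

def fit_shape(mean_shape, basis, landmarks_2d, lam=0.1):
    """mean_shape: (3K,) stacked [x0, y0, z0, x1, y1, z1, ...] of K key points.
    basis: (3K, M) PCA shape components.
    landmarks_2d: (2K,) stacked [all x, then all y] of detected landmarks."""
    K = landmarks_2d.shape[0] // 2
    # Keep only the x/y rows under an orthographic projection assumption.
    idx = np.concatenate([np.arange(K) * 3, np.arange(K) * 3 + 1])
    A = basis[idx]                       # (2K, M)
    b = landmarks_2d - mean_shape[idx]   # residual the coefficients must explain
    # Ridge-regularized least squares keeps the coefficients plausible.
    M = A.shape[1]
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(M), A.T @ b)
    return mean_shape + basis @ coeffs   # reconstructed 3D shape, (3K,)
```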

Image texture tactile representation system based on force/haptic interaction equipment

The invention discloses an image-texture tactile representation system based on force/haptic interaction equipment for virtual-reality human-computer interaction. When the virtual proxy of the force/haptic interaction equipment slides over the textured surface of a virtual object in a virtual environment, the surface height of the object texture at the contact point and a kinetic friction coefficient reflecting the roughness of the contact point are first obtained with an image-processing method. A continuous normal contact-force model reflecting the concave-convex profile of the contact point and a tangential friction model reflecting its roughness are then established. Finally, the texture contact force is fed back to the operator in real time through the force/haptic interaction equipment, reproducing the haptic sensation of fingers sliding over the surface texture of the virtual object. The continuously varying normal force makes the human-computer interaction both more realistic and more stable, and the roughness-dependent friction feedback further enhances the sense of realism when the texture is reproduced.
Owner:NANTONG MINGXIN CHEM +1
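The two force channels described here can be sketched as follows: a spring-like continuous normal force driven by probe penetration into the local texture height, and a kinetic-friction force opposing the sliding direction, scaled by the per-pixel friction coefficient. The spring contact model and the stiffness value are assumptions for illustration.

```python
# An illustrative two-channel haptic texture model: spring-like normal
# force from penetration depth, plus velocity-opposing kinetic friction.
import numpy as np

def texture_forces(height_map, mu_map, x, y, probe_z, vel_xy, k=500.0):
    """height_map, mu_map: 2D arrays sampled from the texture image.
    (x, y): contact pixel; probe_z: probe height; vel_xy: sliding velocity."""
    penetration = height_map[y, x] - probe_z
    fn = k * max(penetration, 0.0)          # continuous normal force
    speed = np.linalg.norm(vel_xy)
    if speed > 1e-9 and fn > 0.0:
        # Kinetic friction opposes the sliding direction.
        ft = -mu_map[y, x] * fn * (np.asarray(vel_xy) / speed)
    else:
        ft = np.zeros(2)
    return fn, ft
```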

Encoding and decoding method and device, image element interpolation processing method and device

The invention discloses an inter-frame predictive coding method, a decoding method, a coder, a decoder, and a sub-pixel interpolation processing method and device. In the inter-frame predictive coding and decoding scheme, sub-pixel interpolation is performed several times on the integer-pixel samples of the image blocks to be coded, taking into account the effect of the image's texture distribution direction on the precision of the sub-pixel reference samples and adjusting the interpolation direction accordingly; multiple groups of sub-pixel reference sample values with different precisions are obtained, and the reference sample values with the highest precision are then selected from among the integer-pixel sample values and the various sub-pixel reference sample values, improving coding efficiency at the encoding end. In the sub-pixel interpolation scheme, at least two different interpolation directions are set for each sub-pixel to be interpolated and the corresponding predicted values are computed; the best of these predicted values is then taken as the optimal predicted value of the sub-pixel sample, improving the precision of the sub-pixel reference sample values.
Owner:HONOR DEVICE CO LTD
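A simplified sketch of the direction-adaptive half-pel idea: compute candidate values along two directions and keep the one aligned with the weaker local gradient, i.e. along the texture. The H.264-style 6-tap filter and the gradient-based selection rule are assumptions standing in for the patent's actual selection criterion.

```python
# Direction-adaptive half-pel interpolation sketch: two candidates, one
# per direction, with a local-gradient rule choosing between them.
import numpy as np

TAP = np.array([1, -5, 20, 20, -5, 1]) / 32.0  # H.264-style 6-tap filter

def halfpel(block: np.ndarray, y: int, x: int) -> float:
    """Half-pel value between (y, x) and its right/lower neighbor.
    The caller must keep (y, x) at least 2 pixels from the block border."""
    h = TAP @ block[y, x - 2:x + 4]       # horizontal candidate
    v = TAP @ block[y - 2:y + 4, x]       # vertical candidate
    # Interpolate along the direction of smaller intensity change,
    # which tends to follow the local texture orientation.
    grad_h = abs(float(block[y, x + 1]) - float(block[y, x]))
    grad_v = abs(float(block[y + 1, x]) - float(block[y, x]))
    return float(h) if grad_h <= grad_v else float(v)
```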

Efficient GPU three-dimensional video fusion rendering method

The invention relates to an efficient GPU three-dimensional video fusion rendering method. The method includes the following steps: acquiring video data input from multiple video streams through a video-object buffer region; performing scalable layered decoding of the acquired video data on the GPU, where each decoding thread is controlled and driven by the local visual characteristics of the three-dimensional scene on which its video object depends, the visual characteristics comprising visibility, layer attribute, and temporal consistency; after decoding, binding all image sequences and texture IDs decoded for the corresponding time slices according to a synchronization time, and storing them in an image-texture buffer region; and sampling the textures in the image-texture buffer region with spatio-temporal texture-mapping functions, mapping them onto object surfaces in the three-dimensional scene, completing the other operations relevant to realistic rendering, and outputting a video-based virtual-real fusion rendering result. The method meets the efficiency, precision, and reliability demands of virtual-real three-dimensional video fusion.
Owner:北京道和智信科技发展有限公司
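A hypothetical sketch of the visibility-driven decode control described above: each video object is decoded only up to the scalable layer its on-screen footprint justifies, and invisible objects are skipped entirely. The class, the thresholds, and the layer counts are illustrative assumptions.

```python
# Visibility-driven decode gating: decide per video object how many
# scalable layers to decode, based on view-frustum visibility and the
# fraction of the viewport the object covers.
from dataclasses import dataclass

@dataclass
class VideoObject:
    name: str
    visible: bool         # is the surface in the current view frustum?
    screen_area: float    # fraction of the viewport it covers

def decode_layers(obj: VideoObject, max_layers: int = 3) -> int:
    """Return how many scalable layers to decode for this object."""
    if not obj.visible:
        return 0                       # skip decoding entirely
    if obj.screen_area > 0.25:
        return max_layers              # large on screen: full quality
    return 1 if obj.screen_area < 0.05 else 2

objs = [VideoObject("gate_cam", True, 0.30),
        VideoObject("lobby_cam", False, 0.0)]
print({o.name: decode_layers(o) for o in objs})
```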

Monitoring method for tool wear state based on image features and the LLTSA algorithm

Active · CN107378641A · Realization of wear status monitoring · Fully automated · Measurement/indication equipments · Time–frequency analysis · Tool wear
The invention relates to a monitoring method for tool wear state based on image features and the LLTSA algorithm. The method introduces image-texture feature extraction into the field of tool-wear fault diagnosis and monitors the tool wear state by combining three stages: signal denoising, feature extraction and optimization, and pattern recognition. The method comprises the steps of: first, acquiring the acoustic emission signal during tool cutting with an acoustic emission sensor and denoising it with the EEMD method; second, performing time-frequency analysis of the denoised signal via the S-transform, converting the time-frequency image to a contour gray-level map, extracting image texture features with a gray-level co-occurrence matrix, and further reducing and optimizing the dimensionality of the extracted feature vector with a scatter matrix and the LLTSA algorithm to obtain a fused feature vector; and finally, training a discrete hidden Markov model of the tool wear state with the fused feature vector and building a classifier, thereby realizing automatic monitoring and recognition of the tool wear state.
Owner:NORTHEAST DIANLI UNIVERSITY
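One stage of this pipeline can be illustrated with scatter-matrix feature screening: rank the GLCM-derived features by a between-class over within-class variance ratio before the LLTSA projection. This Fisher-style score is a common reading of scatter-matrix screening, offered as a sketch rather than the patent's exact algorithm.

```python
# Scatter-matrix feature screening: score each texture feature by its
# between-class over within-class variance ratio, then keep the best.
import numpy as np

def scatter_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """X: (n_samples, n_features) feature vectors; y: integer wear-state labels.
    Returns one separability score per feature (higher = more discriminative)."""
    overall = X.mean(axis=0)
    sb = np.zeros(X.shape[1])   # between-class scatter per feature
    sw = np.zeros(X.shape[1])   # within-class scatter per feature
    for c in np.unique(y):
        Xc = X[y == c]
        sb += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        sw += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return sb / (sw + 1e-12)

X = np.random.rand(60, 12)           # e.g. 12 texture features per signal
y = np.repeat([0, 1, 2], 20)         # three hypothetical wear states
keep = np.argsort(scatter_scores(X, y))[::-1][:6]   # keep the top 6 features
```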