164 results about "Skin" patented technology

In computing, a skin (also known as visual styles in Windows XP) is a custom graphical-appearance preset package, achieved through a graphical user interface (GUI), that can be applied to specific computer software, operating systems, and websites to suit the purpose, topic, or tastes of different users. As such, a skin can completely change the look and feel and navigation interface of a piece of application software or an operating system.

Multimedia player and browser system

A multimedia software application that can include audio, video, and/or graphics, in a manner that combines the multimedia experience with the transfer of information from and between a variety of sources (126), in a variety of directions, and subject to a variety of prompts. The application provides a “Web in Page” approach, in which a series of windows have the same or similar “look and feel”, yet can be used to access and display information from a variety of sources (126), including local content (112) (hard drive or other locally stored media), and web-based online content (118), including that available from a dedicated, integrated server (114), affiliated servers (114), or even other computer users. The application of the present invention can be provided in stand-alone form, to be loaded on a client device (116) (e.g., personal computer) either from a recorded medium or by online download. In addition to this “Web in Page” application interface, a “Web in Skin” interface may be provided, by which the application interface may be varied based on client user (110) or advertiser (126) preferences to provide a customized interface format. Optionally, and preferably, the application is provided in a form where it is recorded on, and thereby combined with, digitally recorded content, such as a music CD or DVD.
Owner:DEMERS TIMOTHY B +5

Optical flow-based gesture motion direction recognition method

The invention discloses an optical flow-based gesture motion direction recognition method. The method comprises the following steps: acquiring an image sequence in front of the computer with an ordinary VGA-resolution camera and preprocessing it; exploiting the fact that skin samples cluster in an approximately elliptical region of the CbCr plane, and classifying each pixel as skin color according to whether it falls inside that elliptical region; performing morphological reconstruction on the binary images produced by skin color detection, using the morphological closing operation; labeling each white connected region, calculating its area, sorting the regions from largest to smallest, and retaining the three largest; reducing the image resolution and computing optical flow motion vectors in the skin color regions with the pyramid Lucas-Kanade (LK) optical flow method; and judging the direction of the optical flow motion vector, making one judgment every two frames and reporting a result only when two consecutive judgments agree. Once the user is familiar with this gesture operation rule, moving the hand up, down, left, or right in front of the camera completes real-time interaction, and the gesture motion direction recognition accuracy can exceed 95 percent.
Owner:COMMUNICATION UNIVERSITY OF CHINA
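The skin-color ellipse test, the direction judgment, and the two-judgment consistency rule described above can be sketched in a few lines. This is a minimal illustration: the ellipse center and semi-axes below are assumed placeholder values, not the ones claimed in the patent.

```python
# Assumed skin-color ellipse parameters in the CbCr plane (illustrative,
# not the patent's actual values).
CB0, CR0 = 110.0, 153.0   # assumed ellipse centre
A, B = 25.0, 15.0         # assumed semi-axes

def is_skin(cb, cr):
    """True if the (Cb, Cr) pair falls inside the skin-colour ellipse."""
    return ((cb - CB0) / A) ** 2 + ((cr - CR0) / B) ** 2 <= 1.0

def flow_direction(dx, dy):
    """Map a mean optical-flow vector to one of four gesture directions.
    Image coordinates: y grows downward, so negative dy means 'up'."""
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

def confirmed(first, second):
    """Report a direction only when two successive judgments agree,
    mirroring the patent's once-every-two-frames consistency check."""
    return first if first == second else None
```

In the full pipeline, `dx` and `dy` would be the mean pyramid-LK flow vector over the retained skin-color regions; here they are plain numbers for clarity.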

Method and device for playing large number of animations of same character, medium and equipment

Active · CN111462283A · Effects: reduce rendering consumption; save computing resources · Topics: animation, skin
The invention discloses a method and device, a medium, and equipment for playing a large number of animations of the same character. The method comprises the following steps: acquiring source model information, skin information, and animation data; calculating the position of each bone relative to the center point of the source model in every animation frame, and storing that position information in a mapping file; obtaining each user's operation instruction for the corresponding character, calculating from the instruction which animation frame the to-be-rendered model should play, obtaining the index information corresponding to that frame, and sending the index information to a vertex shader; and having the vertex shader look up, via the index information, the bone positions relative to the source model's center point for the playing frame and calculate the position of each vertex, so that the animation is played from the per-vertex positions. Rendering consumption and GPU consumption when a large number of same-character animations play on the same screen can thereby be effectively reduced, saving computing resources.
Owner:XIAMEN MENGJIA NETWORK TECHNOLOGY CO LTD
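The per-vertex lookup the vertex shader performs can be sketched on the CPU side: bone positions per animation frame are baked into a table standing in for the mapping file, and each vertex blends the bones it is skinned to. All names and data here are illustrative assumptions, not the patent's actual file format.

```python
# Baked table ("mapping file"): frame index -> bone index -> (x, y, z)
# bone position relative to the source model's centre point.
BAKED = {
    0: [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
    1: [(0.0, 0.1, 0.0), (0.0, 1.2, 0.0)],
}

def skin_vertex(frame, bind_offset, influences):
    """Blend baked bone positions for one vertex.
    bind_offset: the vertex's offset from its bones (one shared offset
    assumed here for simplicity).
    influences: list of (bone_index, weight), weights summing to 1."""
    x = y = z = 0.0
    for bone, w in influences:
        bx, by, bz = BAKED[frame][bone]
        x += w * (bx + bind_offset[0])
        y += w * (by + bind_offset[1])
        z += w * (bz + bind_offset[2])
    return (x, y, z)
```

Because the table is indexed by frame, many instances of the same character can each pass only a frame index to the shader, which is what keeps per-instance rendering cost low.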

Realistic virtual human multi-modal interaction implementation method based on UE4

Pending · CN111724457A · Effects: easy to accept; in line with interaction habits · Topics: animation, skin, engineering
The invention discloses a realistic virtual human multi-modal interaction implementation method based on UE4. The method comprises resource production, resource assembly, and function production. The resource production module handles character model creation, facial expression BlendShape creation, skeletal skin binding, action creation, texture map creation, and material adjustment. The resource assembly module handles scene building, lighting design, and UI interface building. The function production module recognizes the user's voice input, answers intelligently according to the input, and plays voice, lip animation, expression animation, and body movement, reflecting the multi-modality of the interaction; it specifically comprises a voice recognition module, an intelligent question-and-answer module, a Chinese natural language processing module, a voice synthesis module, a lip animation module, an expression animation module, and a body movement module. The system has an affinity similar to that of a real person and is more easily accepted by users; it better matches human interaction habits, giving the application a wider range of popularization; and its responses are more in line with human logic, making the application truly intelligent.
Owner:CHANGSHA QIANBO INFORMATION TECHNOLOGY CO LTD

Method for manufacturing computer skin animation based on high-precision three-dimensional scanning model

The invention belongs to the technical field of computer graphics and relates to a three-dimensional model skin animation generation method. The method for producing computer skin animation based on a high-precision three-dimensional scanning model specifically comprises: first completing skeleton binding from the skeleton line of a digital three-dimensional model captured by a three-dimensional scanner and an action template library of skeletons, then calculating skin weights, updating model vertex coordinates in real time, and completing the computer skin animation. The method can process three-dimensional models captured by a high-precision three-dimensional scanner and calculates the weight of each model vertex through an algorithm, greatly reducing the workload of model processing compared with manual processing. Using the skin animation principle, only the model's texture coordinates, indexes, and node weights are stored, and new vertex data are calculated from the node weights during each frame's rendering. Compared with frame-data merging and skeletal animation methods, the method occupies little memory, achieves non-rigid vertex deformation, and produces a more natural animation effect.
Owner:SUZHOU DEKA TESTING TECH CO LTD
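The per-frame vertex update described above, in which each vertex position is recomputed from stored node weights, corresponds to the standard linear blend skinning formula; the notation below is the conventional one and is assumed, not taken from the patent:

```latex
v'_j(t) \;=\; \sum_{i \in B_j} w_{ij}\, T_i(t)\, T_i(0)^{-1}\, v_j,
\qquad \sum_{i \in B_j} w_{ij} = 1
```

Here $v_j$ is the bind-pose position of vertex $j$, $B_j$ the set of bones influencing it, $w_{ij}$ the stored skin weight, and $T_i(t)$ the transform of bone $i$ at frame $t$; storing only $w_{ij}$ plus indexes and texture coordinates is what keeps the memory footprint small.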

Three-dimensional virtual character intelligent skinning method

The invention belongs to the technical field of three-dimensional dynamic graphics processing and particularly relates to a deep-learning-based intelligent skinning method for three-dimensional virtual characters, comprising skeleton generation (geometric contraction, edge folding, and skeleton optimization) and intelligent skinning prediction (feature extraction, model construction, model training, and skinning prediction). The method automatically completes skeleton generation, binding, and skin weight assignment from nothing more than the mesh model of the three-dimensional virtual character, so it performs well on mesh models of different degrees of complexity and directly completes skeleton binding and weight assignment for the constructed character; it is highly practical and easy to popularize. Applied to a high-precision three-dimensional model with a huge number of vertices and patches, the method can greatly shorten the production time of skinned skeletal animation, reduce cost, and produce a vivid and natural animation effect while simplifying three-dimensional animation production; it can therefore be applied to the technical field of skinned skeletal animation production and has broad prospects.
Owner:QINGDAO LIANHE CHUANGZHI TECHNOLOGY CO LTD
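The patent predicts skin weights with a trained network, which cannot be reproduced here; as a hedged baseline sketch of what such a predictor replaces, weights can be assigned by inverse distance to the nearest bones and normalized. All names and data are illustrative assumptions.

```python
import math

def heuristic_weights(vertex, bone_points, k=2, eps=1e-6):
    """Inverse-distance skin weights from a vertex to its k nearest
    bones, normalised to sum to 1. A classical baseline, not the
    patent's learned predictor."""
    nearest = sorted(
        (math.dist(vertex, b), i) for i, b in enumerate(bone_points)
    )[:k]
    inv = [(1.0 / (d + eps), i) for d, i in nearest]
    total = sum(w for w, _ in inv)
    return {i: w / total for w, i in inv}
```

A learned model improves on this by accounting for mesh geometry (e.g. geodesic rather than Euclidean proximity), which is why a vertex near a bone through empty space does not wrongly inherit its weight.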

Dynamic gesture recognition method and device, readable storage medium and computer equipment

The invention discloses a dynamic gesture recognition method and device, a readable storage medium, and computer equipment. The method comprises the following steps: performing hand target detection on a target image with a trained deep learning hand detection model to obtain the geometric information of the minimum enclosing rectangle of the hand region; from the geometric information of the minimum bounding rectangles at a second moment and a first moment, calculating the center distance and slope between the two rectangles; segmenting the hand skin region of the target image with a skin detection algorithm and, in combination with the depth map, calculating the average depth values of the hand skin region at the second and first moments; and judging the gesture movement direction and the movement amount in that direction from the center distance, the slope, and the average depth values at the two moments. This addresses the prior-art problems that only movement direction in the two-dimensional plane can be judged, the calculation process is complex, and gesture recognition real-time performance is low.
Owner:NANCHANG VIRTUAL REALITY RES INST CO LTD
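The two-moment comparison above can be sketched directly: take the centers of the two minimum bounding rectangles, derive distance and slope, and let the depth change decide whether motion is toward or away from the camera. The thresholds and the push/pull naming are illustrative assumptions, not the patent's actual rules.

```python
import math

def rect_center(x, y, w, h):
    """Centre of an axis-aligned rectangle given as (x, y, width, height)."""
    return (x + w / 2.0, y + h / 2.0)

def motion(rect1, rect2, depth1, depth2, min_move=10.0):
    """Classify gesture motion between two moments from bounding-rect
    centres and average hand-region depths. Returns (direction, amount)."""
    c1, c2 = rect_center(*rect1), rect_center(*rect2)
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    dist = math.hypot(dx, dy)
    dz = depth2 - depth1
    if abs(dz) > max(dist, min_move):        # depth change dominates
        return ("push" if dz < 0 else "pull"), abs(dz)
    if dist < min_move:                      # below assumed noise floor
        return "still", 0.0
    slope = abs(dy / dx) if dx else float("inf")
    if slope <= 1.0:                         # mostly horizontal
        return ("right" if dx > 0 else "left"), dist
    return ("down" if dy > 0 else "up"), dist  # image y grows downward
```

Using the depth map this way is what lifts the judgment out of the two-dimensional plane: in-plane distance and slope give four planar directions, and the depth delta adds the toward/away axis.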