Kinect-based body shape adaptive three-dimensional virtual human body model building method and animation system

A human body model and three-dimensional virtual technology, applied in animation production, image data processing, instruments, etc. It addresses the problems of low animation reuse, complicated operation, and difficulty getting started, with the effect of enhancing realism and fun and making animation production easy.

Publication status: Inactive | Publication date: 2018-11-23
ZHEJIANG UNIV
Cites: 4 | Cited by: 7

AI-Extracted Technical Summary

Problems solved by technology

In general, commercial animation software such as Maya and 3DMax can produce 3D virtual human animations, but such software is complex to use, requires a great deal of work, relies heavily on prior experience, is difficult for beginners, and often requires trained professionals to operate.
Human body modeling and the animation production process require a great deal of human-computer interaction,...

Abstract

The invention discloses a Kinect-based body shape adaptive three-dimensional virtual human body model building method. The method comprises the steps of acquiring color and depth images with Kinect; recognizing a human body from the images; extracting joint point data of the human body; smoothing the joint point data; extracting the hip width and body height of the real human body from the depth image; calculating a proportionality coefficient between the body shape data of the real human body and that of a standard human body; adjusting a virtual human body model with the proportionality coefficient; calculating the rotation matrices between skeletons; and performing skeleton movement and skinning operations on the virtual human body model and rendering the updated three-dimensional virtual human model. Correspondingly, the invention further provides a Kinect-based body shape adaptive three-dimensional virtual human body animation system. By combining Kinect's human body joint point recognition with skeleton animation and virtual reality technology, a three-dimensional virtual human body model suited to the body shape of the real human body is built, and Kinect can be used to drive the virtual human body model to move; realism is enhanced, so the user experience is greatly improved.

Application Domain

Animation

Technology Topic

Self adaptive, Skin operation +8

Examples

  • Experimental program (7)

Example Embodiment

[0069] Example 1:
[0070] Referring to Figure 1, a Kinect-based method for constructing a body shape adaptive 3D virtual human body model includes the following steps:
[0071] S1. Kinect scans in real time to obtain color images and depth images of the human body;
[0072] S2. Use the acquired color image and depth image to recognize the human body;
[0073] During human body recognition, if no human body is recognized, the color image and depth image are re-acquired; if a human body is recognized, step S3 is executed;
[0074] S3. Obtain human body joint point data from the recognized human body; during joint point recognition, if not all of the relevant joints are obtained, re-acquire the color image and depth image; if the relevant joints of the human body are obtained, perform step S4;
[0075] S4. Smooth the human body joint point data;
[0076] S5. Obtain the hip width data and body height data of the human body;
[0077] S6. Calculate the proportionality coefficient between the real human body shape data and the standard human body shape data;
[0078] S7. Adjust the body shape of the human body model: starting from the human body model that matches the standard body shape, adjust the weight coefficient and the height coefficient to obtain a human body model that matches the real body shape;
[0079] S8. Calculate the rotation matrix of each bone: according to the bone relationships defined by the real human body joint points and the bone relationships in the initial posture of the human body model, starting from the root node of the model, use the Rodrigues rotation formula to compute the rotation matrix between bones (a sketch follows this step list);
[0080] S9. Use the obtained bone rotation matrices and the linear blend skinning algorithm to render the updated human body model.
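As a concrete illustration of step S8, the following is a minimal Python/NumPy sketch (not the patent's actual implementation) of the Rodrigues rotation formula applied to a single bone: it computes the rotation matrix that aligns the bone's direction in the model's initial posture with the corresponding direction measured from the Kinect joint points. The bone vectors used here are hypothetical.

```python
import numpy as np

def rodrigues_rotation(v_from, v_to):
    """Rotation matrix aligning direction v_from with v_to (Rodrigues formula)."""
    a = v_from / np.linalg.norm(v_from)
    b = v_to / np.linalg.norm(v_to)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)   # sin(theta)
    c = np.dot(a, b)           # cos(theta)
    if s < 1e-8:
        if c > 0:              # directions already aligned
            return np.eye(3)
        # anti-parallel: rotate by pi about any axis perpendicular to a
        perp = np.cross(a, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(perp) < 1e-8:
            perp = np.cross(a, np.array([0.0, 1.0, 0.0]))
        perp /= np.linalg.norm(perp)
        return 2.0 * np.outer(perp, perp) - np.eye(3)
    k = axis / s               # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])   # skew-symmetric cross-product matrix
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

# Hypothetical bone (hip -> knee): direction in the model's initial pose
# versus the direction obtained from the captured joint points.
bone_rest = np.array([0.0, -1.0, 0.0])
bone_live = np.array([0.2, -0.9, 0.1])
R = rodrigues_rotation(bone_rest, bone_live)
print(R @ bone_rest)   # approximately bone_live, normalized
```

In the full method such a rotation would be computed for every bone, traversing the skeleton outward from the root node.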
[0081] Specifically, in step S4, a mean filter with a kernel size of 5 is used to smooth the coordinate data of each joint in three-dimensional space.
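A possible realization of this kernel-size-5 mean filter is sketched below in Python/NumPy; the joint trajectory is hypothetical, and the patent does not specify details such as how boundary frames are handled (edge padding is assumed here).

```python
import numpy as np

def smooth_joint(trajectory, kernel_size=5):
    """Mean-filter a joint's 3D coordinates over time.

    trajectory: (num_frames, 3) array of the joint's (x, y, z) per frame.
    Boundary frames are handled by edge padding (an assumption).
    """
    pad = kernel_size // 2
    padded = np.pad(trajectory, ((pad, pad), (0, 0)), mode="edge")
    kernel = np.ones(kernel_size) / kernel_size
    return np.stack(
        [np.convolve(padded[:, d], kernel, mode="valid") for d in range(3)],
        axis=1,
    )

# Hypothetical noisy hip-joint trajectory over 10 frames.
raw = np.cumsum(np.random.normal(0.0, 0.01, size=(10, 3)), axis=0)
print(smooth_joint(raw).shape)   # (10, 3)
```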
[0082] Specifically, in step S5, the hip width value in the depth image and the value from the human head to the lowest point of the ankles are used to represent the hip width data and body height data of the human body; this specifically includes the following steps (a sketch follows the list):
[0083] a) Convert the camera coordinates of the hip joint, head joint, left ankle joint, and right ankle joint obtained through Kinect joint point recognition into coordinates in the depth image;
[0084] b) Binarize the depth image to obtain the contour of the human body;
[0085] c) Expand horizontally from the hip joint position in the depth image, identify the hip boundaries within the human contour, obtain the pixel width between them, and record it as the real human hip width data;
[0086] d) Obtain the longitudinal pixel distance from the head joint to the lower of the left and right ankle joints in the depth image, and record it as the real human body height data.
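The sketch below illustrates steps b)-d) under simplifying assumptions: the depth frame is a NumPy array in millimetres, the joints have already been projected to depth-image pixel coordinates (step a), and a fixed depth threshold stands in for whatever person segmentation the Kinect pipeline actually provides. It is an illustration, not the patent's implementation.

```python
import numpy as np

def body_measurements_px(depth, hip_px, head_px, lankle_px, rankle_px,
                         near=500, far=4000):
    """Return (hip_width_px, body_height_px) from a depth frame.

    depth: (H, W) depth image in millimetres.
    *_px: (row, col) joint coordinates already converted to depth-image pixels.
    near / far: assumed depth thresholds used to binarize the silhouette.
    """
    # b) binarize the depth image to get the human silhouette
    mask = (depth > near) & (depth < far)

    # c) expand left and right from the hip joint until leaving the silhouette
    row, col = hip_px
    left, right = col, col
    while left - 1 >= 0 and mask[row, left - 1]:
        left -= 1
    while right + 1 < mask.shape[1] and mask[row, right + 1]:
        right += 1
    hip_width_px = right - left + 1

    # d) vertical pixel distance from the head joint to the lower ankle joint
    lowest_ankle_row = max(lankle_px[0], rankle_px[0])
    body_height_px = lowest_ankle_row - head_px[0]

    return hip_width_px, body_height_px
```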
[0087] Specifically, step S6 further includes fitting the relationship between the standard human body shape data and the distance from the standard human body to the Kinect, which specifically includes the following steps (a sketch follows the list):
[0088] a) Designate a standard human body, and collect its hip width and height data at different distances from the Kinect;
[0089] b) Fit the hip width data and the height data against the reciprocal of the distance by the least squares method, obtaining a linear relationship between the reciprocal of the distance and each measurement.
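A minimal sketch of the fit in step b), assuming the calibration measurements have already been collected into arrays (the sample values below are made up purely for illustration):

```python
import numpy as np

# Hypothetical calibration data for one standard human body: distance from
# the body to the Kinect (metres) and the hip width / height measured in
# the depth image (pixels) at each distance.
distance_m = np.array([1.5, 2.0, 2.5, 3.0, 3.5])
hip_px     = np.array([120.0, 90.0, 72.0, 60.0, 51.0])
height_px  = np.array([400.0, 300.0, 240.0, 200.0, 171.0])

inv_d = 1.0 / distance_m

# Least squares fit of each measurement against 1/distance: value ≈ k/d + b
k_hip, b_hip = np.polyfit(inv_d, hip_px, deg=1)
k_h, b_h     = np.polyfit(inv_d, height_px, deg=1)

def standard_hip_px(d):      # predicted standard hip width (pixels) at distance d
    return k_hip / d + b_hip

def standard_height_px(d):   # predicted standard body height (pixels) at distance d
    return k_h / d + b_h

print(standard_hip_px(2.2), standard_height_px(2.2))
```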
[0090] Specifically, step S7 further includes constructing a human body model that matches the standard human body shape, specifically including the following steps (a sketch follows the list):
[0091] a) Establish short, thin, and tall human body models respectively;
[0092] b) Use interpolation to adjust the human body model, including the surface vertex data and the joint point data of the model;
[0093] c) Manually adjust the weight coefficient and height coefficient so that the body shape of the human body model matches the standard human body shape.
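A simplified sketch of the interpolation in step b), assuming each pre-built base model is reduced to surface vertices and joint positions stored as arrays with identical topology; how the patent combines the weight and height coefficients into blend weights is not specified, so generic per-model weights are used here.

```python
import numpy as np

def blend_models(base_models, blend_weights):
    """Blend pre-built base human models by linear interpolation.

    base_models: list of dicts, each with 'vertices' (N, 3) and 'joints'
    (J, 3) arrays sharing the same topology (e.g. short, thin and tall
    base models). blend_weights: one weight per base model; in the
    patent's terms these would be derived from the weight and height
    coefficients.
    """
    w = np.asarray(blend_weights, dtype=float)
    w = w / w.sum()
    blended = {}
    for key in ("vertices", "joints"):
        stacked = np.stack([m[key] for m in base_models])   # (M, ..., 3)
        blended[key] = np.tensordot(w, stacked, axes=1)
    return blended

# Hypothetical usage: 60% of the tall model, 40% of the short model.
# model = blend_models([short, thin, tall], [0.4, 0.0, 0.6])
```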

Example Embodiment

[0094] Example 2:
[0095] Referring to Figure 2, a Kinect-based body shape adaptive 3D virtual human body animation system includes:
[0096] An image acquisition module, used to acquire color images and depth images with Kinect;
[0097] A human body recognition module, used to recognize the human body from the acquired color image and depth image;
[0098] A human body joint point extraction module, used to extract the human body joint point data from the recognized human body;
[0099] A skeleton data smoothing module, used to smooth the human body joint point data;
[0100] A real human body shape extraction module, used to extract the hip width data and body height data of the human body from the depth image;
[0101] A body shape ratio calculation module, used to calculate the proportionality coefficient between the real human body shape data and the standard human body shape data;
[0102] A virtual human body shape adjustment module, used to adjust the weight coefficient and height coefficient of the human body model that matches the standard body shape, to obtain a human body model that matches the real body shape;
[0103] A bone rotation matrix calculation module, used to compute the rotation matrix between bones, starting from the root node of the human body model and using the Rodrigues rotation formula, according to the bone relationships of the real human body joint points and of the initial posture of the human body model;
[0104] A skeleton deformation and skinning module, used to drive the skeleton movement of the model with the bone rotation matrices and to perform the skinning operation with the linear blend skinning algorithm, rendering the updated human body model (see the sketch after this module list);
[0105] A human joint data access module, used to save the motion sequences of human joints scanned by Kinect and to read back saved motion sequences.
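To make the skinning operation concrete, the following is a minimal linear blend skinning sketch in Python/NumPy. The rest-pose vertices, skinning weights, and per-bone transforms are hypothetical, and the patent's own module may assemble its bone transforms differently from the rotation matrices of the previous step.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, skin_weights, bone_transforms):
    """Deform vertices with linear blend skinning.

    rest_vertices: (N, 3) vertex positions in the rest pose.
    skin_weights: (N, B) per-vertex bone weights, each row summing to 1.
    bone_transforms: (B, 4, 4) homogeneous transforms mapping rest-pose
    space to each bone's posed space (e.g. the Rodrigues rotation combined
    with the bone's translation).
    """
    n = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((n, 1))])           # (N, 4)
    per_bone = np.einsum('bij,nj->bni', bone_transforms, homo)   # (B, N, 4)
    posed = np.einsum('nb,bni->ni', skin_weights, per_bone)      # (N, 4)
    return posed[:, :3]

# e.g. posed = linear_blend_skinning(V, W, T) for each frame before rendering.
```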
[0106] Specifically, the real human body shape extraction module uses the hip width value in the depth image and the value from the human head to the lowest point of the ankles to represent the hip width data and body height data of the human body, and calculates the proportionality coefficient between the real human body shape and the standard human body shape; this specifically includes the following steps:
[0107] a) Convert the camera coordinates of the hip joint, head joint, left ankle joint, and right ankle joint obtained through Kinect joint point recognition into coordinates in the depth image;
[0108] b) Binarize the depth image to obtain the contour of the human body;
[0109] c) Expand horizontally from the hip joint position in the depth image, identify the hip boundaries within the human contour, obtain the pixel width between them, and record it as the real human hip width data;
[0110] d) Obtain the longitudinal pixel distance from the head joint to the lower of the left and right ankle joints in the depth image, and record it as the real human body height data.
[0111] Specifically, the virtual human body shape adjustment module further includes fitting the relationship between the standard human body shape data and the distance from the standard human body to the Kinect, which specifically includes the following steps:
[0112] a) Designate a standard human body, and collect its hip width and height data at different distances from the Kinect;
[0113] b) Fit the hip width data and the height data against the reciprocal of the distance by the least squares method, obtaining a linear relationship between the reciprocal of the distance and each measurement.
[0114] Specifically, the virtual human body shape adjustment module further includes constructing a human body model that matches the standard human body shape, which specifically includes the following steps:
[0115] a) Establish short, thin, and tall human body models respectively;
[0116] b) Use interpolation to adjust the human body model, including the surface vertex data and the joint point data of the model;
[0117] c) Manually adjust the weight coefficient and height coefficient so that the body shape of the human body model matches the standard human body shape.

Example Embodiment

[0118] Example 3:
[0119] Referring to Figures 3.1-3.3, standard human hip width data fitting proceeds as follows (a sketch applying the resulting fit follows the list):
[0120] a) Select a standard human body;
[0121] b) Collect the hip width data in the depth images at different distances from the standard human body to the Kinect, as shown in Figure 3.1;
[0122] c) Take the reciprocal of the distance data from the human body to the Kinect, as shown in Figure 3.2;
[0123] d) Obtain the relationship between the reciprocal distance and the hip width in the depth image by least squares curve fitting, as shown in Figure 3.3.
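Once the fit above is available, the proportionality coefficient of step S6 can be obtained by comparing the measured pixel values of the real person with the values predicted for the standard body at the same distance. The sketch below takes the fitted slope/intercept pairs and the runtime measurements as inputs; all names are illustrative, not the patent's API.

```python
def body_shape_ratios(measured_hip_px, measured_height_px, distance_m,
                      k_hip, b_hip, k_height, b_height):
    """Ratios of the real body's depth-image measurements to the standard
    body's measurements predicted at the same distance from the Kinect.

    k_*, b_*: slope and intercept from the least squares fit of each
    measurement against the reciprocal of the distance.
    """
    std_hip_px = k_hip / distance_m + b_hip
    std_height_px = k_height / distance_m + b_height
    hip_ratio = measured_hip_px / std_hip_px            # drives the weight coefficient
    height_ratio = measured_height_px / std_height_px   # drives the height coefficient
    return hip_ratio, height_ratio

# Hypothetical usage with the coefficients fitted earlier:
# w_ratio, h_ratio = body_shape_ratios(105, 360, 2.2, k_hip, b_hip, k_h, b_h)
```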

