Mobile device human body scanning and 3D model creation and analysis

Inactive Publication Date: 2017-02-23

AI-Extracted Technical Summary

Problems solved by technology

The mass market available to most mobile devices does not currently provide a solution for scanning/imaging a human body with a 2D camera to create and analyze a 3D digital model of the human body, as w...

Benefits of technology

[0005]The invention uses any 2D camera for the purpose of performing 3D full-body self-scanning and body-metrics analysis. A mobile device camera, for example, may take 2 or more, or 3 or more, 2D images, while the mobile device's accelerometer is used to determine the angular camera position. This information is shared with a processor (cloud, server, etc.) to form an accurate 3D body model of a user. These steps may all be performed in a non-controlled environment (for example, home conditions) and in self-mode (i.e., performed by the user alone). The method further comprises performing an automatic segmentation of the 2D images to create a 3D model reconstruction, which can be further adjusted by the user. Once the 3D model is created, measurements are automat...


A 2D camera is used to create a 3D full-body image. The camera takes 3 or more 2D images, an accelerometer is used to calculate the camera's position, and a CPU is employed to construct a 3D body model. This may be performed in a non-controlled environment and by the user alone. Automatic segmentation of the 2D images provides spatial information for the 3D model reconstruction. Once measurements are extracted from the 3D model, the user has the option to further refine them. In one embodiment, the 3D model is shared via the cloud and social media, and is also used to assist in shopping while ensuring accurate measurements for the user. In another embodiment, the digital model of a product designed for the target consumer body is automatically adjusted and shown as a 3D image on the user's body. The 3D model may also be shared with businesses for manufacturing using 3D morphology.

Application Domain

Image enhancement; Details involving processing steps

Technology Topic

Human body; Computer graphics (images)




Experimental program (1)


[0069]Example 1. A 3D image of a subject/user is obtained via the following steps:
[0070]Step 1. The subject uses a mobile device (e.g., smartphone) to make a self-scan in uncontrolled conditions using an application on the mobile device.
[0071]Step 2. The mobile device is placed upright on any available surface, preferably a flat, level one.
[0072]Step 3. The subject stands between 2 and 2.5 meters away from the mobile device, facing the camera located on the device. If the surface on which the mobile device is placed is not exactly level, the application's interface may assist the subject in standing approximately parallel to the plane of the device's screen. The interface may also assist the subject in centering themselves for an optimal view and image capture. The camera captures a first 2D image of the subject facing the camera.
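The patent does not specify how the interface detects an off-level device. A minimal sketch of one plausible check, assuming the accelerometer reports the gravity vector and that an upright device puts gravity mostly along its Y axis (the function name and threshold are illustrative, not from the patent):

```python
import math

def device_tilt_degrees(ax: float, ay: float, az: float) -> float:
    """Estimate the device's tilt from upright using the gravity vector
    reported by the accelerometer (any consistent units).

    For a phone standing upright, gravity falls almost entirely on the
    Y axis; the angle between (ax, ay, az) and (0, 1, 0) is the tilt
    the interface would ask the user to compensate for.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0.0:
        raise ValueError("accelerometer reported zero acceleration")
    # Clamp to [-1, 1] to guard against floating-point drift in acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, ay / g))))

# A phone leaning back slightly while propped against a wall:
tilt = device_tilt_degrees(0.0, 9.5, 1.7)  # roughly a 10-degree lean
```

The interface could compare this angle against a small tolerance and either prompt the user to reposition the device or feed the measured tilt into the later reconstruction steps.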
[0073]Step 4. The subject turns clockwise or counter-clockwise until standing in profile relative to the camera. The camera then captures a second 2D image showing either the subject's left or right profile.
[0074]Step 5. The subject exits the scene captured by the camera, and the camera captures a background image without the subject in the image.
[0075]Step 6. The mobile device application runs one or more image segmentation algorithms, which extract extremity feature points 60 (top, bottom) from the front and side images of the subject. See FIG. 14. The application also obtains the camera's angular position from the mobile device's accelerometer. Using the algorithmic data, the camera's optical data, the photo-sensor parameters, and the subject's reported height, the application runs a further set of algorithms/functions that compute the camera's position and a mapping from image coordinates to the extremity points (x, y, z) 60 in a world coordinate system.
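The patent does not publish the formulas behind Step 6. Under simplifying assumptions (a pinhole camera with focal length in pixels, the subject standing on flat ground, camera pitch known from the accelerometer, and the subject's reported height), the camera's position and the image-to-world mapping for the top and bottom extremity points can be sketched as follows; all names are hypothetical:

```python
import math

def locate_subject(v_head: float, v_feet: float, height_m: float,
                   f_px: float, cy: float, pitch_rad: float):
    """Back-project the head and feet pixel rows to the subject's
    horizontal distance and the camera's height above the ground.

    v_head, v_feet: image rows of the top/bottom extremity points.
    f_px, cy: focal length in pixels and principal-point row.
    pitch_rad: camera pitch from the accelerometer, downward positive.
    """
    # Ray angle below the optical axis for each pixel row.
    a_head = math.atan2(v_head - cy, f_px)
    a_feet = math.atan2(v_feet - cy, f_px)
    # On flat ground, the subject's height spans the vertical spread
    # between the two rays at the (unknown) distance d:
    #   height_m = d * (tan(pitch + a_feet) - tan(pitch + a_head))
    spread = math.tan(pitch_rad + a_feet) - math.tan(pitch_rad + a_head)
    distance = height_m / spread
    cam_height = distance * math.tan(pitch_rad + a_feet)
    return distance, cam_height
```

With the distance and camera height recovered, any other image point on the subject can be mapped to (x, y, z) world coordinates by intersecting its pixel ray with the subject's estimated plane, which is the map the step describes.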
[0076]Step 7. Further image segmentation isolates the front and side silhouettes of the subject, and a mapping between image coordinates can be constructed as described in Step 6 above, wherein the image coordinates now correspond to boundary points 70 of both silhouettes. See FIG. 15. More points 70 can be included in the set of images (both along the boundary of and within the silhouettes) using machine learning on the database of users' models, wherein the relative spatial positions of such points 70 are continuously analyzed and determined.
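The segmentation algorithms themselves are not disclosed. A simplistic stand-in that exploits the empty-background image captured in Step 5 is plain background differencing, followed by tracing the silhouette boundary; this is an illustrative sketch, not the patent's method:

```python
import numpy as np

def silhouette_mask(frame: np.ndarray, background: np.ndarray,
                    threshold: float = 30.0) -> np.ndarray:
    """Segment the subject by differencing the frame against the
    empty-background shot. Both inputs are H x W x 3 uint8 arrays
    taken from the same camera pose; returns a boolean H x W mask."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.sum(axis=2) > threshold

def boundary_points(mask: np.ndarray) -> np.ndarray:
    """Return (row, col) pixels on the silhouette boundary: foreground
    pixels with at least one 4-connected background neighbour."""
    padded = np.pad(mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(mask & ~interior)
```

The boundary pixels returned here play the role of the points 70; a production system would add shadow suppression and morphological cleanup, and could refine the point set with the machine-learned priors the step mentions.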
[0077]Step 8. A model body (test model) with an explicit surface boundary is used. The model body is represented by a polygon mesh for each gender. Coordinate values x1, y1, and z1 are obtained for all topologically corresponding points using the information from Steps 6 and 7.
[0078]Step 9. A mapping function, Fm, is introduced. The mapping function, Fm, maps the surface of the test model (the model body) to the subject's body model as a linear combination of the 3D shape functions obtained in Steps 6-8.
[0080]Step 10. The subject's 3D model is represented by a final function, Ff, which transforms the test model surface polygon mesh into the subject's body model surface polygon mesh. The latter can be created on demand for multiple purposes, among them 3D rendering, body-metric analysis, and product-fit computations.
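Steps 9-10 describe Fm as a linear combination of 3D shape functions applied to the test model. One standard realization of that idea is a linear shape (blend-shape) model whose weights are fit by least squares to the sparse 3D points recovered in Steps 6-8; the code below is a hypothetical sketch of that construction, not the patent's disclosed implementation:

```python
import numpy as np

def fit_mapping(template: np.ndarray, shape_basis: np.ndarray,
                target_points: np.ndarray, point_ids: np.ndarray) -> np.ndarray:
    """Least-squares fit of the weights of a linear shape model (a
    stand-in for Fm) so the deformed template matches sparse targets.

    template:      (V, 3) test-model vertices.
    shape_basis:   (K, V, 3) per-vertex 3D shape functions.
    target_points: (P, 3) measured points at template indices point_ids.
    """
    # Each shape function restricted to the measured vertices, as columns.
    A = shape_basis[:, point_ids, :].reshape(len(shape_basis), -1).T  # (3P, K)
    b = (target_points - template[point_ids]).ravel()
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    return weights

def apply_mapping(template: np.ndarray, shape_basis: np.ndarray,
                  weights: np.ndarray) -> np.ndarray:
    """The final transform (a stand-in for Ff): deform the full mesh."""
    return template + np.tensordot(weights, shape_basis, axes=1)
```

Because the mapping is linear in the weights, the fitted weight vector compactly identifies the subject relative to the test model, which is consistent with the unique per-subject mapping function F described in the following paragraph.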
[0080]It is noted that all 3D body models created via the process described in Example 1 above have a unique mapping function F corresponding to a particular general test model. This relationship will further allow for the creation of a 3D image containing all possible mutual mappings inside the database pertaining to that particular test model.
[0081]The description of a preferred embodiment of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in this art. It is intended that the scope of the invention be defined by the following claims and their equivalents.
[0082]Moreover, the words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.

