
Aggregated facial tracking in video

A face-tracking technology for video, applied in the field of image processing, which addresses the problems that face tracking is difficult, that face detector algorithms may be unable to detect faces (for example when the person turns away from the camera), and that detection from other viewpoints can be inaccurate.

Inactive Publication Date: 2012-09-19
MICROSOFT TECH LICENSING LLC

AI Technical Summary

Problems solved by technology

[0002] Face tracking in video can be difficult. Many face tracker algorithms can detect faces when a person is facing the camera, but may be less accurate when viewing the person from the side. As the person turns away from the camera, the face detector algorithm may not be able to detect the face at all.



Examples


Embodiment 100

[0037] The system of embodiment 100 is shown contained within a single device 102. In many embodiments, various software components can be implemented on many different devices. In some cases, a single software component may be implemented on a cluster of computers. Certain embodiments may operate using cloud computing technology for one or more of these components.

[0038] The system of embodiment 100 is accessible by various client devices 132. Client devices 132 may access the system through a web browser or other application. In one such embodiment, the device 102 can be implemented as a web service that processes video in a cloud-based system: such an embodiment can receive video images from various clients, process them in a large data center, and return the analyzed results to the client computers for further operation.
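For the cloud deployment described above, a minimal sketch is given below, assuming a hypothetical HTTP endpoint named /analyze, the Flask framework, and a placeholder analyze_video() helper; none of these are specified by the patent text.

```python
# Minimal sketch of a cloud-hosted video analysis endpoint (illustrative only;
# the endpoint name, the Flask framework, and the analyze_video() helper are
# assumptions, not details from the patent).
from flask import Flask, request, jsonify

app = Flask(__name__)

def analyze_video(video_bytes):
    # Placeholder for the face-tracking pipeline of embodiments 200 and 300.
    return {"tracks": []}

@app.route("/analyze", methods=["POST"])
def analyze():
    video_bytes = request.get_data()      # raw video uploaded by a client device
    result = analyze_video(video_bytes)   # heavy processing runs in the data center
    return jsonify(result)                # analyzed result returned to the client

if __name__ == "__main__":
    app.run()
```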

[0039] In another embodiment, the operations of device 102 may be performed by a personal computer, server computer, or other computing platform...

Embodiment 200

[0062] Embodiment 200 illustrates one method by which a video may be analyzed to create a track of faces in the video. After the video is split into shots, each shot can be analyzed on a frame-by-frame basis for static face detection. The frame-by-frame analysis results can then be used to link multiple frames together in order to show the movement or progression of a single face in the video.
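As a rough illustration of this flow, the outline below wires the three stages together; the function names and the trivial stubs are assumptions chosen for readability rather than an API described in the patent, and concrete sketches of each stage follow under the individual blocks.

```python
# Illustrative end-to-end outline of embodiment 200. The names and stub bodies
# are assumptions; the later sketches fill in each stage.
from typing import Dict, List

def split_into_shots(frames: List) -> List[List]:
    # Stub: treat the whole video as one shot (embodiment 300 describes a real split).
    return [frames]

def detect_faces(frame) -> List[Dict]:
    # Stub: no detections (a concrete detector is sketched for block 210).
    return []

def link_detections(per_frame_detections: List[List[Dict]]) -> List[List]:
    # Stub: real linking traverses the frames forwards and backwards.
    return []

def analyze_video(frames: List) -> List[List]:
    tracks = []
    for shot in split_into_shots(frames):               # block 204: divide into shots
        detections = [detect_faces(f) for f in shot]    # blocks 206-210: per-frame detection
        tracks.extend(link_detections(detections))      # group detections into face tracks
    return tracks
```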

[0063] At block 202, video to be analyzed may be received. A video may be any type of video image consisting of a series or sequence of individual frames. At block 204, the video may be divided into discrete shots. Each shot may represent a single scene or a group of related frames. An example of a process that may be performed in block 204 is found at embodiment 300 later in this specification.
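For block 202, the frames can be pulled from a video file in the usual way; the OpenCV-based reader below is an assumed choice of decoder, since the patent does not name one.

```python
# Reading a video into a list of frames (block 202). OpenCV is an assumed
# decoder; any frame source would do.
import cv2

def read_frames(path: str):
    frames = []
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()    # ok becomes False at the end of the video
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames
```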

[0064] At block 206, each shot may be analyzed. For each shot in block 206 and for each frame of each shot in block 208, the frame may be analyzed for each face in block 210. The analysis...
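The nested iteration of blocks 206, 208, and 210 can be written directly; the Haar-cascade detector in the sketch below is only an assumed stand-in for the static face detection the text refers to.

```python
# Per-shot, per-frame face detection (blocks 206-210). The Haar cascade is an
# assumed example detector; any static face detector could fill block 210.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces_in_shot(shot_frames):
    per_frame = []
    for frame in shot_frames:                                  # block 208: each frame
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # Block 210: record the analyzed information for each detected face
        # (here just the bounding box).
        per_frame.append([{"x": int(x), "y": int(y), "w": int(w), "h": int(h)}
                          for (x, y, w, h) in boxes])
    return per_frame
```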

Embodiment 300

[0075] The method of embodiment 300 shows one example of how to divide a video sequence into discrete shots. Each shot may be a sequence of similar frames and may contain the same face images for the purposes of face tracking.

[0076] Video to be analyzed may be received in block 302. For each frame in the video in block 304, the current frame may be characterized in block 306 and the next frame may be characterized in block 308. The characterizations of the two frames may be compared in block 310 to determine whether the frames are statistically different. In block 310, if the frames are not significantly different, then in block 312 the metadata associated with the frames may be compared to determine whether the shot has changed. If not, the process returns to block 304 to process the next frame.
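One way to realize blocks 306 through 314 is to characterize each frame with a colour histogram and compare consecutive frames with a chi-square distance, keeping the metadata check of block 312 as a stub; the histogram characterization, the distance measure, and the threshold are all assumptions of this sketch.

```python
# Shot-boundary detection sketch (blocks 302-314). The histogram
# characterization, chi-square distance, and threshold are assumptions; the
# text only requires a statistical comparison plus a metadata check.
import cv2

def characterize(frame):
    # Blocks 306/308: characterize a frame as a normalized 8x8x8 colour histogram.
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

def metadata_changed(meta_a, meta_b):
    # Block 312 hook: compare frame metadata; stubbed here.
    return False

def split_into_shots(frames, metadata=None, threshold=1.0):
    shots, current = [], [frames[0]]
    prev = characterize(frames[0])
    for i in range(1, len(frames)):
        cur = characterize(frames[i])
        # Block 310: statistical comparison of consecutive frames.
        dist = cv2.compareHist(prev, cur, cv2.HISTCMP_CHISQR)
        changed = dist > threshold or (
            metadata is not None and metadata_changed(metadata[i - 1], metadata[i]))
        if changed:                       # block 314: a new shot is identified
            shots.append(current)
            current = []
        current.append(frames[i])
        prev = cur
    shots.append(current)
    return shots
```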

[0077] If the statistical analysis or the metadata analysis indicates that the shot has changed, either in block 310 or in block 312, then a new shot may be identified in block 314. The process may return to...



Abstract

The invention relates to aggregated facial tracking in a video. A facial detecting system may analyze a video by traversing the video forwards and backwards to create tracks of a person within the video. After separating the video into shots, the frames of each shot may be analyzed using a face detector algorithm to produce some analyzed information for each frame. A facial track may be generated by grouping the faces detected and by traversing the sequence of frames forwards and backwards. Facial tracks may be joined together within a shot to generate a single track for a person's face within the shot, even when the tracks are discontinuous.
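As a hedged illustration of the grouping and joining described in this abstract, the sketch below links per-frame detections into tracks with a simple bounding-box overlap rule and then bridges short gaps between tracks; the IoU heuristic, the gap tolerance, and the forward-only pass are simplifications for the sketch, not the claimed method (the patent also traverses the frame sequence backwards).

```python
# Linking per-frame face detections into tracks and joining discontinuous
# tracks within a shot. The IoU matching rule and gap tolerance are
# assumptions; only the forward pass is shown, whereas the patent traverses
# the frame sequence both forwards and backwards.
def iou(a, b):
    # Intersection-over-union of two boxes given as dicts with x, y, w, h.
    x1, y1 = max(a["x"], b["x"]), max(a["y"], b["y"])
    x2 = min(a["x"] + a["w"], b["x"] + b["w"])
    y2 = min(a["y"] + a["h"], b["y"] + b["h"])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = a["w"] * a["h"] + b["w"] * b["h"] - inter
    return inter / union if union else 0.0

def link_detections(per_frame, min_iou=0.3):
    # per_frame: one list of detections per frame; returns tracks as lists of
    # (frame_index, box) pairs.
    tracks = []
    for idx, boxes in enumerate(per_frame):
        for box in boxes:
            match = None
            for track in tracks:
                last_idx, last_box = track[-1]
                if last_idx == idx - 1 and iou(last_box, box) >= min_iou:
                    match = track
                    break
            if match is None:
                match = []
                tracks.append(match)
            match.append((idx, box))
    return tracks

def join_tracks(tracks, max_gap=5, min_iou=0.3):
    # Join tracks of the same face that are separated by a short gap of missed
    # detections, producing a single track per face within the shot.
    tracks = sorted(tracks, key=lambda t: t[0][0])
    joined = []
    for track in tracks:
        if joined:
            prev = joined[-1]
            gap = track[0][0] - prev[-1][0]
            if 0 < gap <= max_gap and iou(prev[-1][1], track[0][1]) >= min_iou:
                prev.extend(track)
                continue
        joined.append(track)
    return joined
```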

Description

Technical Field

[0001] The present invention relates to image processing technology, and in particular to face tracking in video.

Background

[0002] Face tracking in video can be difficult. Many face tracker algorithms can detect faces when a person is facing the camera, but may be less accurate when viewing the person from the side. As the person turns away from the camera, the face detector algorithm may not be able to detect the face at all.

Summary of the Invention

[0003] A face detection system can analyze a video by traversing the video forward and backward in order to create a track of a person's face within the video. After the video is divided into shots, the frames of each shot can be analyzed using a face detection algorithm to produce certain analyzed information for each frame. A face track can be generated by grouping the detected faces and by traversing the sequence of frames forward and backward. Even when the face tracks are discontinuous, th...


Application Information

IPC(8): G06K9/00
CPC: G06T7/20; G06T2207/10016; G06T2207/30201; G06T2207/30241; G06K9/00
Inventors: I. Leichter, E. Krupka, I. Abramovski, I. Kviatkovsky
Owner: MICROSOFT TECH LICENSING LLC