Video image sequence segmentation system and method

A video image sequence segmentation technology, applied in character and pattern recognition, instruments, biological neural network models, and similar fields. It addresses problems such as difficult equipment construction and setup, lengthy calculation processes, and cameras that are not as easy to operate and set up as an ordinary 2D camera, achieving the effects of easy operation, easy setup, and easy self-testing.

Pending Publication Date: 2018-07-06
TSINGHUA UNIV

AI Technical Summary

Problems solved by technology

This method is able to obtain relatively accurate test results, but for non-professionals the construction and setup of the device is difficult and requires a lengthy calculation process; moreover, attaching the device to the subject may cause discomfort, especially for subjects with limited mobility.
In the second type of method, environmental sensors such as force sensors and laser ranging scanners are used to take measurements, and motion state recognition is performed on the measurement results. This approach requires no equipment attached to the subject, but the installation and calibration of such environmental sensors remain challenging for non-professionals.
[0004] The third type of method performs motion state recognition based on video images: it usually first obtains the outline of the subject, and then identifies the subject's motion state from features such as height, width, and aspect ratio. This requires capturing video of the subject under controlled conditions, for example with a consistent background and a fixed camera angle, and owing to the limitations of such video analysis methods the results are often unsatisfactory.
In the fourth type of method, a depth camera is used to capture additional 3D information. Such devices are not common in daily life and are not as easy for non-professionals to operate and set up as an ordinary 2D camera.



Examples


Embodiment Construction

[0031] A specific implementation of the present invention will be described below with reference to the TUG test. Those skilled in the art should understand, however, that this is not limiting: the video image sequence segmentation system and method of the present invention can be used to segment video images of any scene involving different motion states of a subject, and in particular during the various tests performed to monitor a subject's mobility.

[0032] Figure 1 shows the motion state transitions of a subject during a typical TUG test. Throughout the test, a camera 1 is set directly in front of the subject so as to collect video images of the subject during the entire test. At the beginning of the test, the subject sits on a chair and then, as instructed, performs the actions of sitting, getting up, walking forward, turning around, walking back, and sitting down in sequence. Each action corresponds to a different motion state...
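The sequence of motion states described above can be recovered from per-frame labels by grouping consecutive identical labels into contiguous phases. The following is a minimal illustrative sketch of such run-length segmentation; the label names and frame counts are assumptions for the example, not values from the patent.

```python
# Illustrative sketch: collapse a per-frame motion-state label sequence
# into contiguous (state, start_frame, end_frame) segments.
from itertools import groupby

def segment_states(labels):
    """Group consecutive identical labels into (state, start, end) runs."""
    segments = []
    idx = 0
    for state, run in groupby(labels):
        length = sum(1 for _ in run)  # number of consecutive frames in this state
        segments.append((state, idx, idx + length - 1))
        idx += length
    return segments

# Hypothetical label sequence for the start of a TUG test.
labels = ["sit"] * 3 + ["sit-to-stand"] * 2 + ["walk"] * 4
print(segment_states(labels))
# [('sit', 0, 2), ('sit-to-stand', 3, 4), ('walk', 5, 8)]
```

In practice the per-frame labels would come from the classifier described in the abstract; this sketch only shows the final grouping step.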



Abstract

The present invention provides a video image sequence segmentation system. The system includes a position estimator, a classifier, and a motion state segmenter. The position estimator is configured to receive the video image sequence of a subject, wherein the video image sequence contains a plurality of consecutive frames indicating different motion states of the subject; the position estimator is further configured to determine the positions of a plurality of key body parts of the subject in each frame of the video image sequence on the basis of a deep learning algorithm, wherein the deep learning algorithm includes an iterative error feedback algorithm or a part affinity field algorithm. The classifier is configured to determine, from the positions of the key body parts, parameters indicating the type of the subject's motion state in each frame. The motion state segmenter is configured to segment the video image sequence on the basis of those parameters, so as to obtain the frames corresponding to each motion state of the subject. With this system, a video image sequence acquired in a non-strictly-controlled environment can be segmented, thereby enabling remote monitoring of the subject's mobility.
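The abstract's middle stage, the classifier, maps keypoint positions to a per-frame state parameter. The patent does not specify the classifier's internals, so the following is only a toy rule-based sketch; the keypoint name, coordinate convention, and threshold are all assumptions made for illustration.

```python
# Toy sketch of the per-frame classification stage: keypoints -> state label.
# A real system would use the deep-learning position estimator's output and
# a trained classifier; here a single hand-picked feature stands in for both.

def frame_state(keypoints, sitting_hip_y=0.5):
    """Classify one frame from 2D keypoints given as {name: (x, y)}.

    Image coordinates are assumed: y grows downward, normalized to [0, 1].
    A hip below the illustrative threshold (larger y) is read as sitting.
    """
    hip_y = keypoints["hip"][1]
    return "sit" if hip_y > sitting_hip_y else "stand/walk"

print(frame_state({"hip": (0.4, 0.8)}))   # sit
print(frame_state({"hip": (0.4, 0.45)}))  # stand/walk
```

Feeding such per-frame labels to the motion state segmenter then yields the frame ranges for each state.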

Description

technical field

[0001] The present invention relates to video image sequence segmentation, and in particular to a system and method for segmenting video image sequences corresponding to different motion states of a subject in the course of evaluating the subject's mobility.

Background technique

[0002] Mobility monitoring is of great interest for people at high risk of falls, such as the elderly or those with Parkinson's disease. The Timed Up-and-Go (TUG) test is a widely accepted test for assessing the mobility of subjects such as elderly people. It typically covers the whole process of the subject sitting in a chair, getting up, walking forward, turning around, walking back, and sitting back down in the chair, and specifically involves the movement stages of several different motion states: "sit", "sit to stand", "walk", "turn around", "walk back", and "sit back". The duration of each motor phase is a very meaningful parameter for evaluating the mobility of the subject...
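Since the background section singles out per-phase duration as the meaningful parameter, converting segmented frame ranges into seconds is straightforward once the frame rate is known. A small sketch, with an assumed frame rate and illustrative segments:

```python
# Sketch: convert (state, start_frame, end_frame) segments into durations
# in seconds. The 30 fps frame rate and the segment list are assumptions.

def phase_durations(segments, fps=30.0):
    """Return a list of (state, seconds) pairs, one per segment.

    A list (not a dict) is used because states such as "walk" can
    legitimately occur more than once in a TUG test.
    """
    return [(state, (end - start + 1) / fps) for state, start, end in segments]

segs = [("sit", 0, 29), ("sit-to-stand", 30, 74), ("walk", 75, 194)]
print(phase_durations(segs))
# [('sit', 1.0), ('sit-to-stand', 1.5), ('walk', 4.0)]
```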

Claims


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04
CPC: G06V20/49, G06N3/045, G06F18/214, G06F18/24
Inventor: 胡春华, 陈健生, 李天鹏, 李路明
Owner TSINGHUA UNIV