Traffic environment pedestrian multi-dimensional motion feature visual extraction method

This technology concerns motion features and their extraction methods, applied to instruments, character and pattern recognition, computer components, etc. It addresses problems in the prior art such as complex calculations, large data volumes, and the tendency of clustering algorithms to converge to local optima.

Active Publication Date: 2018-11-16
CENT SOUTH UNIV

AI Technical Summary

Problems solved by technology

The prior-art patent has the following problems: 1. when extracting the motion vector of each pixel, the pixels are not effectively screened, so the amount of data is large and the calculation is complicated; 2. the clustering algorithm it uses is prone to converging to local optima.

Method used



Embodiment Construction

[0069] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0070] As shown in figure 1, a method for visual extraction of multi-dimensional motion features of pedestrians in a traffic environment includes the following steps:

[0071] Step 1: Build pedestrian motion database;

[0072] Collect videos of pedestrians in various motion postures and at various road positions from each shooting direction of the depth camera; the shooting directions include three directions, and the postures include walking, running and standing;

[0073] Step 2: Extract images from videos in the pedestrian motion database, and preprocess the extracted images to obtain the pedestrian detection frame of each frame of image, and then extract the pedestrian detection frame images of the same pedestrian in consecutive image frames;
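The patent does not specify how detection frames of the same pedestrian are linked across consecutive frames. A common approach, sketched here purely as an illustrative assumption (the function names and the IoU threshold are hypothetical, not from the patent), is to associate each box in the previous frame with the current-frame box of highest overlap:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def link_same_pedestrian(prev_boxes, cur_boxes, thresh=0.3):
    """Greedily match each previous-frame box to its best-overlapping
    current-frame box, keeping only matches whose IoU exceeds `thresh`.
    Returns {prev_index: cur_index}."""
    links, used = {}, set()
    for i, pb in enumerate(prev_boxes):
        best_j, best_v = None, thresh
        for j, cb in enumerate(cur_boxes):
            if j in used:
                continue
            v = iou(pb, cb)
            if v > best_v:
                best_j, best_v = j, v
        if best_j is not None:
            links[i] = best_j
            used.add(best_j)
    return links
```

A real system would typically combine such overlap matching with appearance cues, but the greedy IoU form above is enough to chain one pedestrian's detection frames through consecutive images.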

[0074] Step 3: Perform grayscale processing on each pedestrian detection frame image, synthesize the motion energy map of the grayscale image correspo...
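The patent truncates before defining how the motion energy map is synthesized. One standard construction, shown here only as a hedged sketch (the difference threshold and the binary-union formulation are assumptions, not the patent's stated method), accumulates thresholded differences of consecutive grayscale frames:

```python
def motion_energy_map(gray_frames, diff_thresh=15):
    """Synthesize a binary motion energy map from a sequence of
    grayscale frames (each a list of rows of 0-255 ints): a pixel
    is marked 255 if it changed by more than `diff_thresh` between
    any pair of consecutive frames, else 0."""
    h, w = len(gray_frames[0]), len(gray_frames[0][0])
    energy = [[0] * w for _ in range(h)]
    for prev, cur in zip(gray_frames, gray_frames[1:]):
        for y in range(h):
            for x in range(w):
                if abs(cur[y][x] - prev[y][x]) > diff_thresh:
                    energy[y][x] = 255
    return energy
```

HOG features would then be computed on this map rather than on the raw frames, which suppresses static background and keeps only the pedestrian's motion silhouette.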


Abstract

The invention discloses a traffic environment pedestrian multi-dimensional motion feature visual extraction method comprising a step 1 of constructing a pedestrian motion database; a step 2 of extracting the pedestrian detection frame images of the same pedestrian in consecutive image frames; a step 3 of extracting the HOG feature of the same pedestrian's motion energy map; a step 4 of constructing a pedestrian motion pose recognition model based on an Elman neural network; a step 5 of determining the pedestrian pose in the current video by using the Elman-network-based pedestrian motion pose recognition model; a step 6 of calculating the instantaneous speed sequences of the pedestrian in the X-axis and Y-axis directions to obtain the real-time speed of the pedestrian; and a step 7 of, according to a three-dimensional scene in an intersection environment, obtaining the position information of the pedestrian in the image in real time, and obtaining the real-time motion feature of the pedestrian in combination with the pedestrian pose and the real-time speed. The method has high recognition accuracy and good robustness, is convenient to use, and has good prospects for application and promotion.
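Step 6's instantaneous speed sequences follow directly from finite differences of the pedestrian's tracked positions. The sketch below is an assumption about that calculation (the patent abstract does not give formulas; the `metres_per_pixel` scale factor and function name are hypothetical):

```python
def instantaneous_speeds(positions, fps, metres_per_pixel):
    """Per-axis instantaneous speed sequences from a list of (x, y)
    pedestrian positions sampled at `fps` frames per second, plus
    the combined real-time speed magnitude at each step."""
    dt = 1.0 / fps
    vx = [(x1 - x0) * metres_per_pixel / dt
          for (x0, _), (x1, _) in zip(positions, positions[1:])]
    vy = [(y1 - y0) * metres_per_pixel / dt
          for (_, y0), (_, y1) in zip(positions, positions[1:])]
    speed = [(a * a + b * b) ** 0.5 for a, b in zip(vx, vy)]
    return vx, vy, speed
```

In practice the position sequence would come from the 3D intersection scene of step 7, so that pixel displacements map to real-world distances before differencing.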

Description

Technical field

[0001] The invention belongs to the field of traffic monitoring, and in particular relates to a method for visually extracting multi-dimensional motion features of pedestrians in a traffic environment.

Background technique

[0002] In recent years, with the rapid development of science and technology, more and more intelligent methods have been applied to transportation, especially in the field of intelligent driving. Traffic safety is a perennial concern, and collisions between vehicles and pedestrians account for a large proportion of collision accidents. Timely detection and posture recognition of pedestrians is the key to current intelligent traffic active protection systems, and accurate identification depends above all on pedestrian motion feature extraction.

[0003] Pedestrian pose recognition methods include global feature methods and local feature methods. Global feature methods mostly adopt motion history images, that is, the frame...

Claims


Application Information

IPC (IPC8): G06K9/00; G06K9/46; G06K9/62
CPC: G06V40/20; G06V10/50; G06F18/214
Inventor: 刘辉, 李燕飞, 韩宇阳
Owner CENT SOUTH UNIV