Pedestrian re-identification method based on CNN and convolutional LSTM network

A pedestrian re-identification and network technology, applied to character and pattern recognition, instruments, computer parts, etc. It addresses the problems that appearance features are difficult to extract, that features cannot be learned automatically, and that learned features are not closely tied to pedestrian appearance, achieving the effect of a close tie between features and appearance.

Active Publication Date: 2016-11-09
TONGJI UNIV

AI Technical Summary

Problems solved by technology

The traditional approach to video-based pedestrian re-identification is to select the frame that best represents the features, or to manually adjust the time series, and then perform low-level feature extraction. The biggest disadvantage of this approach is that it cannot learn features from the data itself.




Example Embodiment

[0029] Method of the present invention: given a series of consecutive pedestrian images from a video, first use the frame-level convolutional layers of a CNN to extract CNN features that capture complex appearance changes. The extracted features are then fed into a convolutional LSTM encoder-decoder framework: the encoder uses locally adaptive convolutional kernels to capture pedestrian motion within the sequence, thereby encoding the input sequence into a hidden representation, and the decoder decodes that hidden representation back into a sequence. After the LSTM encoding and decoding, a frame-level deep spatio-temporal appearance descriptor is obtained. Finally, Fisher vector coding is applied so that the descriptor can describe video-level features.
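The encoder step above can be sketched as a single-channel ConvLSTM cell in NumPy. This is a minimal illustration, not the patent's implementation: the kernel size, random initialisation, single-channel restriction, and toy 8x8 "feature maps" are all assumptions made for brevity.

```python
import numpy as np

def conv2d_same(x, k):
    """'Same'-padded 2-D convolution of a single-channel map x with kernel k."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Single-channel ConvLSTM cell: the gates are computed with convolutions,
    so the hidden state H and cell state C keep the spatial layout of the input."""
    def __init__(self, hw, ksize=3, seed=0):
        rng = np.random.default_rng(seed)
        # one (input, hidden) kernel pair per gate: input, forget, output, candidate
        self.kx = {g: rng.normal(0, 0.1, (ksize, ksize)) for g in "ifog"}
        self.kh = {g: rng.normal(0, 0.1, (ksize, ksize)) for g in "ifog"}
        self.H = np.zeros(hw)
        self.C = np.zeros(hw)

    def step(self, X):
        pre = {g: conv2d_same(X, self.kx[g]) + conv2d_same(self.H, self.kh[g])
               for g in "ifog"}
        i, f, o = sigmoid(pre["i"]), sigmoid(pre["f"]), sigmoid(pre["o"])
        g = np.tanh(pre["g"])
        self.C = f * self.C + i * g       # cell update
        self.H = o * np.tanh(self.C)      # hidden state keeps the H x W shape
        return self.H

# Encode a toy sequence of 8x8 "CNN feature maps"; the final H is the hidden
# representation that a decoder ConvLSTM would then unroll back into a sequence.
cell = ConvLSTMCell(hw=(8, 8))
seq = [np.random.default_rng(t).normal(size=(8, 8)) for t in range(4)]
hidden = None
for frame in seq:
    hidden = cell.step(frame)
print(hidden.shape)  # (8, 8)
```

Because the gates are convolutions rather than full matrix products, the hidden representation preserves spatial structure, which is the property the abstract refers to as "the space information is maintained".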

[0030] In order to make the pedestrian re-identification method based on CNN and convolutional LSTM network proposed in the present invention clearer, the following takes the use of ...



Abstract

The invention provides a pedestrian re-identification method based on a CNN and a convolutional LSTM network, and belongs to the technical field of image processing. First, the spatial information encoded within frames is extracted with a group of CNNs; then a frame-level deep spatio-temporal appearance descriptor is obtained with an encoder-decoder framework formed by convolutional LSTMs; finally, Fisher vector coding is applied so that the descriptor describes video-level features. With this approach, feature representations can be extracted that treat the videos as ordered sequences, spatial information is preserved, and an accurate model is established.
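The final Fisher-vector step can be sketched with NumPy under a diagonal-covariance GMM. The GMM parameters below are toy values rather than ones fitted to real descriptors, and the power plus L2 normalisation at the end is common Fisher-vector practice, not something the abstract specifies.

```python
import numpy as np

def fisher_vector(X, w, mu, sigma2):
    """Fisher vector of T local descriptors X (T x D) under a diagonal-covariance
    GMM with weights w (K,), means mu (K x D), variances sigma2 (K x D).
    Returns the 2*K*D vector of mean and variance gradient statistics."""
    T, D = X.shape
    # soft-assignment posteriors gamma[t, k] proportional to w_k N(x_t | mu_k, sigma2_k)
    log_p = (np.log(w)[None, :]
             - 0.5 * np.sum(np.log(2 * np.pi * sigma2), axis=1)[None, :]
             - 0.5 * np.sum((X[:, None, :] - mu[None]) ** 2 / sigma2[None], axis=2))
    log_p -= log_p.max(axis=1, keepdims=True)
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)

    diff = (X[:, None, :] - mu[None]) / np.sqrt(sigma2)[None]      # (T, K, D)
    g_mu = np.einsum("tk,tkd->kd", gamma, diff) / (T * np.sqrt(w)[:, None])
    g_sig = np.einsum("tk,tkd->kd", gamma, diff ** 2 - 1) / (T * np.sqrt(2 * w)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_sig.ravel()])
    # power + L2 normalisation, as is standard for Fisher vectors
    fv = np.sign(fv) * np.sqrt(np.abs(fv))
    return fv / (np.linalg.norm(fv) + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))          # 50 frame-level descriptors of dimension 4
w = np.array([0.5, 0.5])              # toy 2-component GMM
mu = rng.normal(size=(2, 4))
sigma2 = np.ones((2, 4))
fv = fisher_vector(X, w, mu, sigma2)
print(fv.shape)  # (16,) = 2 * K * D
```

This is how a variable-length sequence of frame-level descriptors is pooled into one fixed-length video-level vector, which can then be compared across cameras.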

Description

technical field

[0001] The invention relates to the field of video image processing, and in particular to a pedestrian re-identification method based on CNN and convolutional LSTM networks.

Background technique

[0002] Pedestrian re-identification refers to identifying a single pedestrian across non-overlapping camera views, that is, confirming whether cameras at different locations capture the same pedestrian at different times. The problem has important practical value in the field of video surveillance.

[0003] Person re-identification is usually performed by matching spatial appearance features. A typical matching method takes a pair of single-frame pedestrian images and matches their color and intensity-gradient histograms. However, the appearance features of a single frame are inherently changeable, since differences in illumination, position, pose, and viewing angle can all lead to large changes in a person's appearance. In addition, matching spatial appeara...
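The single-frame histogram matching mentioned in [0003] can be illustrated with a toy NumPy baseline. The bin count, the Bhattacharyya distance, and the synthetic image crops are illustrative assumptions, not details from the patent.

```python
import numpy as np

def color_hist(img, bins=8):
    """Concatenated per-channel intensity histograms of an H x W x 3 image,
    L1-normalised into a single appearance descriptor."""
    h = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(h).astype(float)
    return h / h.sum()

def bhattacharyya(p, q):
    """Bhattacharyya distance between normalised histograms (close to 0 for identical)."""
    return -np.log(np.sum(np.sqrt(p * q)) + 1e-12)

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(64, 32, 3))   # query pedestrian crop
b = a.copy()                                 # same person, same appearance
c = rng.integers(0, 128, size=(64, 32, 3))   # darker-clad pedestrian
d_same = bhattacharyya(color_hist(a), color_hist(b))
d_diff = bhattacharyya(color_hist(a), color_hist(c))
print(d_same < d_diff)  # True: identical crops match far better
```

The weakness the background section describes is visible here: the descriptor depends entirely on pixel intensities, so any illumination or pose change shifts the histograms and breaks the match, which motivates the learned spatio-temporal descriptor of the invention.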

Claims


Application Information

IPC(8): G06K9/00, G06K9/62
CPC: G06V40/103, G06V20/46, G06F18/214
Inventor: 尤鸣宇, 沈春华, 徐杨柳
Owner TONGJI UNIV