Video pedestrian re-identification method and system based on self-learning local feature representation

A pedestrian re-identification technique based on local features, applied in the field of pedestrian re-identification. It addresses the problems of inaccurately aligned feature representations and the resulting inability to determine accurately and unambiguously whether two pedestrians are the same person, with the effect of enhancing spatio-temporal feature fusion and producing clearly contrastable features.

Active Publication Date: 2020-07-10
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

However, the pedestrian features extracted by these models are based either on global features of the entire image or on local features from fixed image crops and rigid part blocks of the pedestrian model. Such features cannot accurately align and represent the distinctive local features of each pedestrian, so it is impossible to judge accurately and unambiguously whether two pedestrians are the same person.

Method used



Examples


Embodiment 1

[0033] In one or more embodiments, a video pedestrian re-identification method based on self-learning deep local feature representation is disclosed. Built on a two-input Siamese network, the method first extracts the pedestrian's low-level spatial features with a convolutional neural network, then extracts the pedestrian's temporal features with a residual-recurrent neural network. The local features learned during training (i.e., the clustering centers) are used to represent the pedestrian's spatio-temporal features, yielding a global feature representation of the pedestrian in the video; the similarity of the pedestrian features is then measured to judge whether the pedestrians contained in two videos are the same person.
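The clustering-center representation and similarity comparison described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's exact formulation: the soft-assignment (NetVLAD-style) aggregation, the feature shapes, and the similarity threshold are all assumptions standing in for details the text does not spell out.

```python
import numpy as np

def local_feature_representation(frame_feats, centers):
    """Represent per-frame spatio-temporal features against learned
    cluster centers (a hypothetical soft-assignment aggregation).
    frame_feats: (T, D) features, one row per frame.
    centers:     (K, D) clustering centers learned during training."""
    sims = frame_feats @ centers.T                         # (T, K) similarities
    sims = sims - sims.max(axis=1, keepdims=True)          # numerical stability
    assign = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)
    resid = frame_feats[:, None, :] - centers[None, :, :]  # (T, K, D) residuals
    vlad = (assign[:, :, None] * resid).sum(axis=0)        # (K, D) aggregate
    vlad = vlad.ravel()                                    # global descriptor
    return vlad / (np.linalg.norm(vlad) + 1e-8)            # unit-normalized

def same_person(feats_a, feats_b, centers, threshold=0.5):
    """Judge whether the pedestrians in two videos match by comparing
    their global descriptors (threshold is an illustrative value)."""
    va = local_feature_representation(feats_a, centers)
    vb = local_feature_representation(feats_b, centers)
    cos = float(va @ vb)        # cosine similarity: both are unit-norm
    return cos >= threshold, cos
```

Since both descriptors are unit-normalized, the comparison reduces to a dot product; any other metric mentioned in the full description could be substituted in `same_person`.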

[0034] Referring to Figure 2, in this embodiment the video pedestrian re-identification method based on self-learning deep local feature representation includes the following steps:

[0035] Obtain two pieces of video information co...

Embodiment 2

[0066] In one or more implementations, a video pedestrian re-identification system based on self-learning local feature representation is disclosed, including:

[0067] A device for respectively acquiring video information containing continuously changing images of pedestrians to be identified within two set time periods;

[0068] A device for separately processing the acquired two pieces of video information by adopting a twin network structure to obtain aligned vectors representing the temporal and spatial characteristics of pedestrians;

[0069] A device for judging, by comparing the obtained vector information, whether the pedestrians in the two pieces of continuous image information are the same person, thereby realizing pedestrian re-identification.

[0070] Here, the twin network structure consists of two networks with identical structure and shared parameters; each of the two networks includes:

[0071] A three-layer convolutional neural network for extracting spati...
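The parameter sharing of the twin structure in [0070] can be sketched as follows. In this hypothetical NumPy illustration, a single ReLU linear layer stands in for the three-layer CNN and the residual-recurrent stack; the point is only that both video segments pass through the same weights, so the resulting vectors are comparable.

```python
import numpy as np

class TwinBranch:
    """One branch of the twin network. Passing both videos through the
    SAME instance realizes 'identical structure and shared parameters'.
    (Illustrative stand-in for the CNN + residual-RNN described above.)"""
    def __init__(self, d_in, d_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)

    def __call__(self, frames):                    # frames: (T, d_in)
        feats = np.maximum(frames @ self.W, 0.0)   # per-frame features (ReLU)
        pooled = feats.mean(axis=0)                # pool over time
        return pooled / (np.linalg.norm(pooled) + 1e-8)  # aligned unit vector

# One set of weights applied to both inputs = parameter sharing.
branch = TwinBranch(d_in=16, d_out=8)
rng = np.random.default_rng(1)
video_a = rng.standard_normal((12, 16))
video_b = rng.standard_normal((12, 16))
va, vb = branch(video_a), branch(video_b)
distance = float(np.linalg.norm(va - vb))          # compare the two pedestrians
```

Because the branch is deterministic given its weights, re-encoding the same video always yields the same vector, which is what makes the two branches' outputs directly comparable.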

Embodiment 3

[0076] In one or more embodiments, a terminal device is disclosed that includes a server. The server includes a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the program, it implements the video pedestrian re-identification method based on self-learning local feature representation disclosed in Embodiment 1. For brevity, the details are not repeated here.



Abstract

The invention discloses a video pedestrian re-identification method and system based on self-learning local feature representation. The method comprises: acquiring video information containing continuously changing images of pedestrians to be identified in two set time periods; processing the two acquired video segments separately with a twin network structure to obtain aligned vectors representing the spatial and temporal features of the pedestrians; and judging whether the pedestrians in the two segments of continuous image information are the same person by comparing the obtained vector information, thereby realizing pedestrian re-identification. The residual-recurrent neural network provided by the invention can extract the correlation between sequences; structurally it forms a residual network, which alleviates the gradient-vanishing problem of recurrent neural networks and enhances the fusion of spatial and temporal features.
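The residual-recurrent idea the abstract credits with easing gradient vanishing can be sketched in a few lines. This is a sketch of the general technique, not the patent's exact cell; the update rule and all weight shapes are assumptions.

```python
import numpy as np

def residual_rnn(x_seq, W, U, b):
    """Residual recurrent update: h_t = h_{t-1} + tanh(W x_t + U h_{t-1} + b).
    The identity term h_{t-1} gives gradients a skip path across time
    steps: the per-step Jacobian is I + diag(1 - tanh^2) @ U, so even
    when the tanh branch saturates, the identity keeps gradients flowing."""
    h = np.zeros(U.shape[0])
    for x in x_seq:
        h = h + np.tanh(W @ x + U @ h + b)   # residual connection around the update
    return h
```

Compared with a plain recurrent cell `h = tanh(W @ x + U @ h + b)`, whose Jacobian product across many steps shrinks multiplicatively, the added identity path is what makes the structure behave like a residual network unrolled over time.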

Description

Technical Field

[0001] The invention relates to the technical field of pedestrian re-identification, in particular to a video pedestrian re-identification method and system based on self-learning local feature representation.

Background Art

[0002] The statements in this section merely provide background information related to the present invention and do not necessarily constitute prior art.

[0003] Over the past few years, with the development of society and the advancement of technology, people have paid more and more attention to the safety of public places. A large number of surveillance cameras have been placed in various public places to effectively identify specific targets or people. In many public security criminal investigations, a large number of surveillance videos are used to find criminal suspects; in public places with heavy traffic, such as large shopping malls, missing elderly people and children can be found through surveillance cameras.

[000...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC(8): G06K9/00; G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/103; G06V10/44; G06N3/045; G06F18/23; G06F18/241; Y02T10/40
Inventor 梁姣张伟张倩宋然顾建军
Owner SHANDONG UNIV