Video pedestrian re-identification method based on a Siamese dual-stream 3D convolutional neural network

A technology combining convolutional neural networks and pedestrian re-identification, applied to biological neural network models, neural learning methods, neural architectures, etc. It addresses the problem of low recognition accuracy, achieving high recognition accuracy and accurate feature extraction, and helping to realize urban intelligence.

Pending Publication Date: 2020-05-15
SHANGHAI UNIV OF ENG SCI

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to provide a video pedestrian re-identification method based on a Siamese dual-stream 3D convolutional neural network, in order to overcome the defect of the above-mentioned prior art, namely its low recognition accuracy in busy environments.




Detailed Description of the Embodiments

[0033] The present invention will be described in detail below in conjunction with the accompanying drawings and specific embodiments. This embodiment is carried out on the premise of the technical solution of the present invention, and a detailed implementation and a specific operation process are given, but the protection scope of the present invention is not limited to the following embodiments.

[0034] As shown in Figure 1, a video pedestrian re-identification method based on a Siamese dual-stream 3D convolutional neural network includes:

[0035] Step S1: Extract, from each frame of pedestrian video 1 and pedestrian video 2, an optical-flow-x feature map, an optical-flow-y feature map, a grayscale feature map, a horizontal-coordinate gradient feature map and a vertical-coordinate gradient feature map;
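To make step S1 concrete, the following is a minimal Python sketch of how the five per-frame feature maps could be computed. It is an illustration only, not the patent's implementation: the use of OpenCV, Farneback optical flow, Sobel gradients and the function name `extract_frame_maps` are all assumptions.

```python
# Illustrative sketch only: one possible way to derive the five per-frame maps
# of step S1 (optical-flow-x/y, grayscale, horizontal and vertical gradients).
# The patent does not prescribe these particular OpenCV calls.
import cv2
import numpy as np

def extract_frame_maps(prev_frame: np.ndarray, frame: np.ndarray):
    """Return (flow_x, flow_y, gray, grad_x, grad_y) for one video frame."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow between consecutive frames (Farneback is an assumption).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_x, flow_y = flow[..., 0], flow[..., 1]

    # Horizontal- and vertical-coordinate gradient maps (Sobel is an assumption).
    grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    grad_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)

    return flow_x, flow_y, gray.astype(np.float32), grad_x, grad_y
```

In this reading, the optical flow is computed between consecutive grayscale frames, while the grayscale and gradient maps are taken within the current frame.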

[0036] Step S2: Use the optical-flow-x feature map and optical-flow-y feature map extracted in step S1 as the input of the action branch to extract pedestrian action information, and the grayscale ...
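Per the abstract, the grayscale and coordinate-gradient maps serve as the input of the appearance branch. As a hedged continuation of the sketch above (the tensor shapes and stacking order are assumptions, not specified by the source), the per-frame maps could be assembled into the two branch inputs like this:

```python
# Illustrative sketch: stack the per-frame maps from step S1 into the two clip
# tensors consumed by the action and appearance branches. Shapes follow the
# common 3D-CNN convention (channels, frames, height, width).
import numpy as np

def build_branch_inputs(per_frame_maps):
    """per_frame_maps: list of (flow_x, flow_y, gray, grad_x, grad_y) tuples."""
    flow_x, flow_y, gray, grad_x, grad_y = (np.stack(m) for m in zip(*per_frame_maps))

    # Action branch: optical-flow-x and optical-flow-y channels.
    action_clip = np.stack([flow_x, flow_y])            # (2, T, H, W)

    # Appearance branch: grayscale plus the two coordinate-gradient channels.
    appearance_clip = np.stack([gray, grad_x, grad_y])  # (3, T, H, W)
    return action_clip, appearance_clip
```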



Abstract

The invention relates to a video pedestrian re-identification method based on a Siamese dual-stream 3D convolutional neural network. The method comprises the steps of: extracting each frame of a pedestrian video, through the hard-wired layer of the Siamese dual-stream 3D convolutional neural network, into an optical-flow-x feature map, an optical-flow-y feature map, a grayscale feature map, a horizontal-coordinate gradient feature map and a vertical-coordinate gradient feature map; taking the optical flow feature maps as the input of an action branch to extract action information of the pedestrian, and taking the other feature maps as the input of an appearance branch to extract appearance information of the pedestrian; fusing the pedestrian action information into the extracted pedestrian appearance information; performing metric comparison learning on the fused action and appearance information; updating the network parameters and training a new convolutional neural network; and associating the target pedestrian image with the to-be-identified pedestrian image ranked first in similarity. Compared with the prior art, the method has the advantage of being closer to real scenes, among others.
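As an illustrative sketch of how the pipeline summarized above might be realized (not the patent's own network: the layer sizes, the concatenation-based fusion and the Euclidean distance used for the metric comparison are all assumptions), a minimal PyTorch model with one shared Siamese tower could look like this:

```python
# Minimal PyTorch sketch of a Siamese dual-stream 3D CNN for video re-ID.
# The action branch sees the 2-channel optical-flow clip, the appearance branch
# sees the 3-channel grayscale/gradient clip; the two branch features are fused
# and compared with a distance-based metric. All layer sizes and the
# concatenation-based fusion are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv3d_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(kernel_size=2),
    )

class DualStream3DCNN(nn.Module):
    """One tower of the Siamese network: action branch plus appearance branch."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.action = nn.Sequential(conv3d_block(2, 16), conv3d_block(16, 32))
        self.appearance = nn.Sequential(conv3d_block(3, 16), conv3d_block(16, 32))
        self.fc = nn.LazyLinear(embed_dim)  # fuse by concatenation, then project

    def forward(self, action_clip, appearance_clip):
        a = self.action(action_clip).flatten(1)
        p = self.appearance(appearance_clip).flatten(1)
        return F.normalize(self.fc(torch.cat([a, p], dim=1)), dim=1)

# Metric comparison between the two videos of the Siamese pair: the same tower
# (shared weights) embeds both, and a small distance suggests the same pedestrian.
tower = DualStream3DCNN()
emb1 = tower(torch.randn(1, 2, 16, 64, 32), torch.randn(1, 3, 16, 64, 32))
emb2 = tower(torch.randn(1, 2, 16, 64, 32), torch.randn(1, 3, 16, 64, 32))
distance = F.pairwise_distance(emb1, emb2)  # lower = more likely same identity
```

During training, such a distance would drive a metric-comparison loss and the parameter update mentioned in the abstract; at test time, gallery videos would be ranked by this distance and the top-ranked match associated with the target pedestrian.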

Description

Technical Field

[0001] The invention relates to the field of machine vision based on image processing, in particular to a video pedestrian re-identification method based on a Siamese dual-stream 3D convolutional neural network.

Background Art

[0002] Person re-identification, the problem of matching persons across non-overlapping cameras, has received increasing attention in recent years due to its importance in automated surveillance systems. Video-based pedestrian re-identification is closer to real scenes. The invention helps to realize urban intelligence, contributes to the safety and tracing of people in large public places such as airports, enables the automatic search for lost elderly people and children through cameras, and assists public security organs in the automatic identification and tracking of criminals.

[0003] In many applications, such as cross-camera tracking and pedestrian search, it is desirable to recognize a person from a group of peo...


Application Information

IPC (8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/103; G06N3/045; G06F18/241; Y02T10/40
Inventors: 魏丹, 王子阳, 胡晓强, 罗一平
Owner: SHANGHAI UNIV OF ENG SCI