Pedestrian Re-Identification Method Based on Enhanced Deep Convolutional Neural Network

A convolutional neural network technology for pedestrian re-identification, applied in the field of computer-vision pedestrian re-identification, which achieves a reasonable design, good performance, and improved recognition accuracy.

Active Publication Date: 2021-04-23
ACADEMY OF BROADCASTING SCIENCE, STATE ADMINISTRATION OF PRESS, PUBLICATION, RADIO, FILM & TELEVISION +1

AI Technical Summary

Problems solved by technology

Pedestrian re-identification is a challenging task because the visual appearance of pedestrians can vary significantly from camera to camera due to changes in pedestrian pose, different camera viewpoints, lighting differences, occlusions, and background clutter.




Description of Embodiments

[0035] Embodiments of the present invention will be described in further detail below in conjunction with the accompanying drawings.

[0036] As shown in Figure 1, the pedestrian re-identification method based on an enhanced deep convolutional neural network first extracts the basic deep features of an image with the ResNet50 deep convolutional neural network architecture; these basic deep features are 2048-dimensional. A specific handcrafted feature extraction method extracts the manual features of the image, which are reduced by PCA to 2048 dimensions to match the basic deep features. The two are then fused into enhanced deep features of 4096 dimensions. For a pair of input images, after the enhanced deep features of the two images are extracted, the contrast feature of the image pair is obtained through a feature comparison layer. Finally, the classification loss and the verification loss function are jointly used to supervise training of the network.
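The feature pipeline above can be illustrated with the following minimal sketch. It is not the patent's actual implementation: it assumes a color histogram as the handcrafted descriptor (the patent only specifies "a specific manual feature extraction method") and plain concatenation as the fusion performed by the feature reconstruction module.

```python
# Sketch: 2048-d ResNet50 deep features + handcrafted features reduced to 2048-d
# by PCA, concatenated into a 4096-d enhanced deep feature.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA

# Backbone: ResNet50 with the classification head removed -> 2048-d vector.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((256, 128)),   # typical person-crop size (assumption)
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_feature(pil_image):
    """2048-d basic deep feature from the ResNet50 backbone."""
    with torch.no_grad():
        x = preprocess(pil_image).unsqueeze(0)
        return backbone(x).squeeze(0).numpy()            # shape: (2048,)

def handcrafted_feature(pil_image, bins=16):
    """Stand-in handcrafted descriptor: a joint RGB color histogram."""
    arr = np.asarray(pil_image.convert("RGB")).reshape(-1, 3)
    hist, _ = np.histogramdd(arr, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist.flatten() / hist.sum()                   # shape: (bins**3,)

def fit_pca(handcrafted_matrix, dim=2048):
    """Fit PCA mapping handcrafted features to the deep-feature dimension.
    Requires at least `dim` training images (rows) in the matrix."""
    return PCA(n_components=dim).fit(handcrafted_matrix)

def enhanced_feature(pil_image, pca):
    """Concatenate deep and PCA-reduced handcrafted features -> 4096-d."""
    d = deep_feature(pil_image)
    h = pca.transform(handcrafted_feature(pil_image)[None, :])[0]
    return np.concatenate([d, h])                        # shape: (4096,)
```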



Abstract

The invention relates to a pedestrian re-identification method based on an enhanced deep convolutional neural network. A basic deep convolutional neural network model extracts the basic deep features of pedestrian images, while a traditional handcrafted feature extraction method extracts manual features of the pedestrian images, which are then reduced in dimensionality. A feature reconstruction module fuses the basic deep features and the manual features into enhanced deep features. Whether the pedestrians in two images are the same person is predicted through feature comparison, and a classification loss function and a verification loss function are jointly used for input-image classification and same/different verification; the network is trained to minimize the joint loss, so that it generates more discriminative pedestrian image features. The invention makes full use of the complementarity between handcrafted features and deep features, and proposes a strategy of jointly using the classification and verification loss functions for supervised network training, which achieves good performance and effectively improves the accuracy of pedestrian re-identification.
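The joint supervision described above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the patent's code: the contrast feature is taken as the squared element-wise difference of the two enhanced features, the verification branch is a binary same/different classifier, and the 751-identity label space of Market-1501 is used purely as an example.

```python
# Sketch: joint classification + verification loss over 4096-d enhanced features.
import torch
import torch.nn as nn

class JointHead(nn.Module):
    def __init__(self, feat_dim=4096, num_identities=751):
        super().__init__()
        self.id_classifier = nn.Linear(feat_dim, num_identities)  # identity branch
        self.verifier = nn.Linear(feat_dim, 2)                     # same/different branch
        self.ce = nn.CrossEntropyLoss()

    def forward(self, feat_a, feat_b, id_a, id_b):
        # Classification loss: each image of the pair predicts its identity label.
        cls_loss = self.ce(self.id_classifier(feat_a), id_a) + \
                   self.ce(self.id_classifier(feat_b), id_b)
        # Contrast feature of the pair (squared difference is an assumption here).
        contrast = (feat_a - feat_b) ** 2
        same = (id_a == id_b).long()                               # 1 = same person
        ver_loss = self.ce(self.verifier(contrast), same)
        return cls_loss + ver_loss                                 # joint loss to minimize

# Usage with dummy 4096-d enhanced features for a batch of 8 image pairs:
head = JointHead()
fa, fb = torch.randn(8, 4096), torch.randn(8, 4096)
ia, ib = torch.randint(0, 751, (8,)), torch.randint(0, 751, (8,))
loss = head(fa, fb, ia, ib)
loss.backward()
```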

Description

Technical Field

[0001] The invention belongs to the technical field of computer-vision pedestrian re-identification, and in particular relates to a pedestrian re-identification method based on an enhanced deep convolutional neural network.

Background

[0002] With the needs of public security and the development of science and technology, public places such as airports, stations, shopping malls, and schools have deployed large camera networks. These cameras, which span a large geographical area and have non-overlapping monitoring regions, provide a large amount of video data to subsequent processing systems. In this context, relying on manual processing of these data becomes inefficient and infeasible, and advanced machine algorithms must be relied upon for intelligent processing. Automatically analyzing this video data with machine algorithms can not only improve efficiency but also significantly improve the quality of monitoring. Pedestrian re-identification is an important...

Claims


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G06K9/00G06K9/62
CPCG06V40/10G06F18/2135G06F18/2413
Inventor 郭天生郭晓强王强姜竹青门爱东
Owner ACADEMY OF BROADCASTING SCI STATE ADMINISTATION OF PRESS PUBLICATION RADIO FILM & TELEVISION