Video pedestrian re-identification method based on Transformer spatio-temporal modeling

A video-based pedestrian re-identification technology, applied in the field of pedestrian re-identification, which improves performance and reduces training difficulty

Active Publication Date: 2021-11-09
WUHAN UNIV

AI Technical Summary

Problems solved by technology

In prior work on the temporal modeling part, methods such as pooling, recurrent neural networks, and temporal attention networks have been tried. The results show that the pooling method, which discards temporal ordering information, is the most prominent...

Method used



Embodiment Construction

[0053] The present invention provides a video pedestrian re-identification method based on Transformer spatio-temporal modeling. First, frame-level features are extracted using the image-level feature network ResNet50; position information is then added to the frame-level features through a positional encoding layer to preserve the sequence information of the video frames to the greatest extent. The re-encoded features are then passed through the Transformer network to complete the spatio-temporal modeling and extract more discriminative spatio-temporal features.
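The pipeline of paragraph [0053] can be sketched end to end. The following is a minimal NumPy illustration under stated assumptions, not the patented implementation: random vectors stand in for ResNet50 frame-level features, the Transformer network is reduced to one single-head self-attention layer, a sinusoidal positional encoding is assumed (the patent does not specify the encoding layer's form here), and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(seq_len, d):
    """Sinusoidal positional encoding (an assumed, common choice)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))  # (seq_len, d)

def self_attention(x):
    """Single-head scaled dot-product attention: inputs are mapped to
    three spaces (query, key, value) and fused by attention weights."""
    d = x.shape[-1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy clip: T=8 frames, each with a D=64-dim "frame-level feature"
# (standing in for ResNet50 output; sizes are illustrative).
T, D = 8, 64
frame_feats = rng.standard_normal((T, D))

x = frame_feats + positional_encoding(T, D)  # inject frame-order information
y = self_attention(x)                        # spatio-temporal modeling
clip_feat = y.mean(axis=0)                   # clip-level descriptor

print(clip_feat.shape)  # → (64,)
```

Mean pooling over the attended frames is used here only as a simple read-out; the patent's actual clip-level aggregation may differ.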

[0054] The technical solutions of the present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0055] As shown in figure 1, the process of the embodiment of the present invention includes the following steps:

[0056] Step 1. Perform video preprocessing on the pedestrian re-identification video dataset to obtain video clips that are convenient ...
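The preprocessing details of Step 1 are truncated above. A common scheme in video re-identification is chunked ("restricted random") sampling: divide each tracklet into a fixed number of chunks and draw one frame per chunk, so every clip has the same length while covering the whole tracklet. The sketch below shows that assumed scheme; it is not necessarily the patent's exact procedure.

```python
import numpy as np

def sample_clip(num_frames, clip_len=8, seed=0):
    """Pick clip_len frame indices from a tracklet of num_frames frames,
    one per equal-width chunk (restricted random sampling, assumed here)."""
    rng = np.random.default_rng(seed)
    if num_frames < clip_len:
        # Loop short tracklets until the clip is full.
        return np.sort(np.resize(np.arange(num_frames), clip_len))
    bounds = np.linspace(0, num_frames, clip_len + 1, dtype=int)
    return np.array([rng.integers(lo, hi)
                     for lo, hi in zip(bounds[:-1], bounds[1:])])

print(sample_clip(40))  # 8 indices, one from each 5-frame chunk
```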



Abstract

The invention relates to a video pedestrian re-identification method based on Transformer spatio-temporal modeling. The method comprises the following steps: first, frame-level features are extracted using the image-level feature network ResNet50; position information is then added to the frame-level features through a positional encoding layer to preserve the sequence information of the video frames to the greatest extent; spatio-temporal modeling of the re-encoded features is then completed through a Transformer network, and spatio-temporal features with higher discriminative power are extracted. By adding positional encodings to the frame-level features, the method can fully exploit the temporal ordering information of video clips. The Transformer structure maps the input features to three spaces (query, key, value) for feature fusion, so that more robust spatio-temporal features are extracted and network performance is improved. An end-to-end network model is provided, realizing the complete input-to-model-to-output application process and reducing the training difficulty of the video-based pedestrian re-identification network model.
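The abstract's two key claims, that the Transformer maps inputs to three spaces (query, key, value) for fusion, and that positional encoding lets the model use temporal ordering, can be checked with a toy experiment. Everything below is an illustrative assumption (random features, a single attention head, a simple sinusoidal encoding), not the patented network: without positional encoding, a mean-pooled attention output is invariant to shuffling the frames; with it, frame order changes the descriptor.

```python
import numpy as np

rng = np.random.default_rng(1)
T, D = 6, 16

def attend(x, Wq, Wk, Wv):
    # Map input to query/key/value spaces, fuse by attention, mean-pool.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    s = q @ k.T / np.sqrt(D)
    w = np.exp(s - s.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return (w @ v).mean(axis=0)

Wq, Wk, Wv = (rng.standard_normal((D, D)) for _ in range(3))
frames = rng.standard_normal((T, D))
perm = rng.permutation(T)
pe = np.sin(np.arange(T)[:, None] / 10.0 ** (np.arange(D)[None, :] / D))

# Without positional encoding, the pooled descriptor ignores frame order...
no_pe_same = np.allclose(attend(frames, Wq, Wk, Wv),
                         attend(frames[perm], Wq, Wk, Wv))
# ...with positional encoding, shuffling the frames changes the descriptor.
with_pe_diff = not np.allclose(attend(frames + pe, Wq, Wk, Wv),
                               attend(frames[perm] + pe, Wq, Wk, Wv))
print(no_pe_same, with_pe_diff)  # → True True
```

This is why the abstract stresses the positional encoding layer: plain attention plus pooling is permutation-invariant and would discard exactly the sequence information the method aims to exploit.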

Description

Technical field

[0001] The invention belongs to the field of pedestrian re-identification, and in particular relates to a video pedestrian re-identification method based on Transformer spatio-temporal modeling.

Background technique

[0002] Pedestrian re-identification is a hot topic in computer vision. Its main task is to use image-processing techniques to retrieve a specific pedestrian from image or video data captured by different cameras. In recent years, growing demands on public security and surveillance networks have raised both the attention paid to and the requirements for pedestrian re-identification. However, in practical application scenarios centered on surveillance networks, the current mainstream approach is to analyze video data streams manually to extract target information; this approach is limited in efficiency and accuracy when faced with massive data sets. Therefore, research on pedestrian re-identification techno...

Claims


Application Information

IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/084, G06N3/047, G06F18/2415, G06F18/253, Y02T10/40
Inventors: 种衍文, 陈梦成, 潘少明
Owner WUHAN UNIV