
Global-local temporal representation method for video-based pedestrian re-identification

A video and re-identification technology in the field of artificial intelligence, which addresses the problem that existing models are complex to train on long video sequences

Active Publication Date: 2020-08-21
PEKING UNIV

AI Technical Summary

Problems solved by technology

For example, RNN models are complex to train on long video sequences

Method used




Detailed Description of the Embodiments

[0057] The following description and drawings illustrate specific embodiments of the invention sufficiently to enable those skilled in the art to practice them.

[0058] 1 Basic introduction

[0059] We test our method on a newly proposed large-scale video dataset for person ReID (LS-VID) and four widely used video ReID datasets: PRID, iLIDS-VID, MARS, and DukeMTMC-VideoReID. Experimental results show that GLTR has consistent performance advantages on these datasets. It achieves 87.02% rank-1 accuracy on the MARS dataset without re-ranking, about 2% better than the recent PBR, which uses additional body-part cues for video feature learning. It achieves 94.48% and 96.29% rank-1 accuracy on PRID and DukeMTMC-VideoReID, respectively, which also exceeds the current state of the art.

[0060] The GLTR representation is obtained by aggregating a sequence of frame features with simple DTP and TSA models. Although computationally simple and efficient, this solution outperforms ...
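To make the short-term aggregation concrete, the sketch below shows what a single dilated temporal convolution branch, and a small pyramid of such branches, might compute over per-frame features. It is a simplified NumPy illustration, not the patent's implementation: the function names are invented here, and a single scalar kernel shared across feature dimensions stands in for the learned per-channel filters.

```python
import numpy as np

def dilated_temporal_conv(frames, weights, dilation):
    """Dilated 1-D convolution along the time axis.

    frames:   (T, D) array of per-frame feature vectors
    weights:  (K,) kernel taps, shared across feature dims (a simplification)
    dilation: gap between the frames each tap samples
    Zero-padding keeps the output length equal to T.
    """
    T, _ = frames.shape
    K = len(weights)
    pad = dilation * (K - 1) // 2
    padded = np.pad(frames, ((pad, pad), (0, 0)))
    out = np.zeros_like(frames, dtype=float)
    for t in range(T):
        for k in range(K):
            # Tap k looks k * dilation frames ahead in the padded sequence.
            out[t] += weights[k] * padded[t + k * dilation]
    return out

def dtp(frames, weights, dilations=(1, 2, 4)):
    """Dilated Temporal Pyramid sketch: parallel branches with growing
    dilation rates capture short-term cues at several temporal ranges;
    their outputs are concatenated along the feature axis."""
    return np.concatenate(
        [dilated_temporal_conv(frames, weights, d) for d in dilations],
        axis=1)
```

With `T` frames of dimension `D` and three branches, the concatenated output has shape `(T, 3 * D)`; each branch sees the same frames but at a different temporal receptive field, which is the point of the pyramid.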


Abstract

The invention provides a global-local temporal representation (GLTR) method for video-based pedestrian re-identification. The proposed network is composed of a dilated temporal pyramid (DTP) convolution model and a temporal self-attention (TSA) model. The DTP consists of parallel dilated temporal convolutions that model short-term temporal cues between adjacent frames. The TSA captures global temporal cues using relationships between non-adjacent frames. Experimental results on five benchmark datasets show that the proposed GLTR method outperforms the current state-of-the-art methods.
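The global half of the representation rests on self-attention across all frames. The following is a minimal single-head sketch of that idea: every frame attends to every other frame, including non-adjacent ones, so long-range cues are captured in one step. The projection matrices `wq`, `wk`, `wv` are assumed learned parameters; this illustrates the generic self-attention mechanism, not the patent's exact TSA architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_self_attention(frames, wq, wk, wv):
    """Single-head self-attention over a sequence of frame features.

    frames:       (T, D) per-frame feature vectors
    wq, wk, wv:   (D, D') projection matrices (assumed learned)
    Returns (T, D') features in which each frame is a weighted mix of
    all frames, with weights given by pairwise affinities.
    """
    q, k, v = frames @ wq, frames @ wk, frames @ wv
    # (T, T) affinity between every pair of frames, adjacent or not.
    scores = q @ k.T / np.sqrt(k.shape[1])
    return softmax(scores, axis=1) @ v
```

Because the `(T, T)` affinity matrix relates every pair of frames directly, no recurrence over the sequence is needed, which is one reason attention-style aggregation avoids the long-sequence training difficulty noted for RNNs above.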

Description

technical field

[0001] The invention relates to the technical field of artificial intelligence, and in particular to a video recognition and representation method and system.

background technique

[0002] Person re-identification (ReID) refers to identifying pedestrians across a camera network by matching pedestrian images or video sequences. It has many practical applications, such as intelligent surveillance and criminal investigation. Significant progress has been made in image-based person ReID, both in methods and in the construction of large benchmark datasets. In recent years, research on video-based person re-identification (video person ReID) has received much attention, because video data is more readily available than ever and provides richer information than image data. Video-based person ReID can exploit a large number of spatio-temporal cues, which has the potential to address some of the challenges faced by image-based person ReID, distingu...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/20; G06V20/40; G06N3/045
Inventors: ZHANG SHILIANG; LI JIANING; GAO WEN
Owner: PEKING UNIV