
A pedestrian re-identification method based on joint learning of shared and unique dictionary

A technology of pedestrian re-identification and dictionary learning, applied in the fields of character and pattern recognition, instruments, and computer components.

Active Publication Date: 2019-03-01
KUNMING UNIV OF SCI & TECH
Cites: 6 · Cited by: 13

AI Technical Summary

Problems solved by technology

However, this method only considers the similarity of the same pedestrian across different viewpoints, and ignores the influence that the similarity between different pedestrians has on the recognition algorithm.



Examples


Embodiment 1

[0077] Example 1: The components shared by images of the same pedestrian across different viewing angles do not lower the recognition rate in the similarity measurement. The fundamental cause of a reduced recognition rate is the similarity exhibited by different pedestrians across different viewing angles, and this similarity is usually carried by the components shared between images of different pedestrians. According to low-rank sparse representation theory, the components shared between different pedestrians tend to be strongly correlated and therefore exhibit a strongly low-rank structure. Based on this idea, the present invention proposes a framework for jointly learning a pedestrian-specific dictionary and a shared dictionary, and on this basis separates pedestrian-specific components from shared components, so as to resolve the problem caused by similar appearance components in pedestrian images from different perspectives. The problem of ambiguity in appearance charac...
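To make the framework concrete, the following is a minimal sketch of a joint objective consistent with the description above; the symbols D_s (shared dictionary), D_u (pedestrian-specific dictionary), Z_s and Z_u (their coding coefficients), and the weights λ1, λ2, λ3 are illustrative assumptions rather than the exact formulation claimed by the invention.

$$
\min_{D_s,\,D_u,\,Z_s,\,Z_u}\ \|X - D_s Z_s - D_u Z_u\|_F^2 \;+\; \lambda_1 \|D_s Z_s\|_{*} \;+\; \lambda_2 \|Z_u\|_1 \;+\; \lambda_3 \sum_{(i,j):\,y_i = y_j} \|z_i^u - z_j^u\|_2^2
$$

Here X stacks the pedestrian features from both camera views, the nuclear norm \|\cdot\|_{*} pushes the shared components D_s Z_s toward a low-rank structure, the \ell_1 term keeps the pedestrian-specific codes sparse, and the last term pulls together the specific-dictionary codes z_i^u and z_j^u of the same pedestrian (y_i = y_j) observed from different views.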

Example 1

[0142] Example 1: VIPeR dataset

[0143] The images in this dataset come from 632 pedestrians captured by two non-overlapping camera views; each pedestrian has only one image per view, for a total of 1264 images. During the experiments, each pedestrian image is resized to 128 × 48. Figure 2 shows sample pairs of pedestrian images from this dataset. The images in each row come from the same view, and the images in the same column show the same pedestrian under different views. It can be seen that the appearance features of the same pedestrian differ considerably across views because of changes in posture and differences in background. This dataset can therefore be used to measure how well the algorithm mitigates the effects of pedestrian pose changes and complex backgrounds.
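As a minimal sketch of the preprocessing step mentioned above, the snippet below resizes the VIPeR images to 128 × 48 and pairs the two camera views by pedestrian identity; the directory layout (cam_a/ and cam_b/), the file-name convention, and the use of Pillow and NumPy are assumptions about a typical local copy of the dataset, not part of the patent.

# Minimal VIPeR preprocessing sketch. Assumed layout: VIPeR/cam_a and VIPeR/cam_b
# each hold one .bmp image per pedestrian, with the file name starting with the
# pedestrian ID (e.g. 000_45.bmp). Illustrative only, not the patent's code.
from pathlib import Path
import numpy as np
from PIL import Image

def load_view(view_dir, size=(48, 128)):             # PIL size is (width, height)
    images = {}
    for path in sorted(Path(view_dir).glob("*.bmp")):
        pid = path.stem.split("_")[0]                 # identity prefix of the file name
        img = Image.open(path).convert("RGB").resize(size)
        images[pid] = np.asarray(img, dtype=np.float32) / 255.0
    return images

cam_a = load_view("VIPeR/cam_a")                      # one view per camera
cam_b = load_view("VIPeR/cam_b")
ids = sorted(set(cam_a) & set(cam_b))                 # the 632 shared identities
print(len(ids), "identities, image shape:", cam_a[ids[0]].shape)   # (128, 48, 3)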

[0144] In order to prove the effectiveness of the algorit...

Example 2

[0148] Example 2: CUHK01 dataset

[0149] This dataset consists of 3884 images of 971 pedestrians captured by two non-overlapping cameras on a campus; each pedestrian has 2 images per view. During the experiments, the image size is adjusted to 128 × 60. Figure 3 shows pairs of images of the same pedestrian under different views. It can be seen that images of the same pedestrian taken from different views differ greatly due to variations in posture, viewpoint, illumination, and background. Achieving correct matching of pedestrian images on this dataset is therefore extremely challenging.
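Results on datasets such as CUHK01 and VIPeR are conventionally reported as cumulative matching characteristic (CMC) rank-k rates; the sketch below computes a CMC curve from a probe-gallery distance matrix in the single-shot setting. The function name cmc and the random example data are illustrative assumptions, not the patent's evaluation code.

# Generic single-shot CMC computation from a distance matrix (illustrative only).
# dist[i, j] is the distance between probe i and gallery j; the true match of
# probe i is gallery entry gt[i].
import numpy as np

def cmc(dist: np.ndarray, gt: np.ndarray, max_rank: int = 20) -> np.ndarray:
    order = np.argsort(dist, axis=1)                    # gallery sorted by distance
    ranks = np.argmax(order == gt[:, None], axis=1)     # position of the true match
    return np.array([(ranks < k).mean() for k in range(1, max_rank + 1)])

# Usage with a random 100-identity split (probe i truly matches gallery entry i):
rng = np.random.default_rng(0)
curve = cmc(rng.random((100, 100)), gt=np.arange(100))
print("rank-1 / rank-5 / rank-10:", curve[0], curve[4], curve[9])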



Abstract

The invention provides a pedestrian re-identification method based on joint learning of a shared and a unique dictionary, belonging to the technical field of digital image processing. Because the appearance of each pedestrian is composed of shared elements reflecting similarity to other pedestrians and unique elements reflecting identity, the method reduces the visual ambiguity between pedestrians by removing the shared components of the features. A distance constraint and a coherence constraint on the coding coefficients of the unique components under the unique dictionary are introduced, so that the same pedestrian is forced to have similar coding coefficients while different pedestrians exhibit weaker coherence. In addition, low-rank and sparse constraints are introduced to enhance the expressive power of the shared dictionary and the discriminability of the unique-component dictionary, respectively. Experimental results show that the proposed method achieves higher recognition performance than traditional methods.
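The effect the abstract relies on, namely that subtracting the component two different pedestrians have in common makes their descriptors easier to tell apart, can be seen in a small NumPy toy example; the feature dimension, the weight 3.0, and the vectors themselves are fabricated purely for illustration and are not the patented decomposition.

# Toy illustration (not the patented algorithm): two different pedestrians whose
# descriptors are dominated by a shared component look very similar, and removing
# that shared component makes them easy to distinguish.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
shared = rng.normal(size=256)              # e.g. background / illumination component
unique_a = rng.normal(size=256)            # identity-specific part of pedestrian A
unique_b = rng.normal(size=256)            # identity-specific part of pedestrian B
feat_a = 3.0 * shared + unique_a           # observed descriptors are dominated by
feat_b = 3.0 * shared + unique_b           # the component they share

print("similarity with shared part:  ", round(cosine(feat_a, feat_b), 3))
print("similarity after removing it: ", round(cosine(feat_a - 3.0 * shared,
                                                     feat_b - 3.0 * shared), 3))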

Description

Technical Field

[0001] The invention relates to a pedestrian re-identification method based on joint learning of a shared and a unique dictionary, and belongs to the technical field of digital image processing.

Background

[0002] As one of the key tasks of video analysis, pedestrian re-identification automatically matches pedestrian images across multiple camera views. In practice, however, for economic reasons the areas monitored by different cameras are often non-overlapping and discontinuous. In addition, affected by camera viewpoint, illumination changes, complex backgrounds, and occlusion, the appearance features of pedestrian images usually show great ambiguity, which poses great challenges for pedestrian re-identification technology.

[0003] In order to reduce the ambiguity between pedestrian visual features and improve re-identification performance, researchers have done a great deal of work and proposed a se...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/46; G06K9/62
CPC: G06V40/23; G06V10/40; G06F18/2136; G06F18/22; G06F18/214
Inventors: 李华锋, 许佳佳, 周维燕
Owner: KUNMING UNIV OF SCI & TECH