Cross-view character recognition method based on shapes and postures under wearable equipment

A wearable-device person recognition technology, applied in the field of cross-view person identification (CVPI). It addresses problems such as inaccurate pose estimation and neglect of pose consistency across consecutive video frames, both of which reduce CVPI accuracy, and achieves the effects of improved accuracy, robustness to occlusion, and better overall performance.

Active Publication Date: 2020-08-25
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

Because the pose estimated by this method is inaccurate, and because the detection set is built from consecutive video frames while the method ignores the pose consistency that exists across those frames, the accuracy of CVPI is reduced to a certain extent.




Embodiment Construction

[0032] To overcome the shortcomings of the existing technology, a higher-accuracy CVPI method is proposed that uses human body posture information to achieve more accurate person re-identification. To this end, the invention adopts a cross-view person recognition method based on re-optimization of shape and posture. The overall process of cross-view person recognition is: given the video frame images of the pedestrian to be detected from the first camera and the video obtained by the second camera, detect, for every video frame, the SMPL human body parameterized model and the 2D joint point positions corresponding to that frame. The 2D joint point positions are used to optimize the SMPL human body parameterized model (a 3D joint point reprojection optimization operation); the final human body parameterized model is then obtained through this 3D joint point reprojection optimization operation together with a discrete cosine transform (DCT) time-domain optimization operation. Finally, the final model of the target to be detected is compared with the model in each video frame from the second camera to locate the target.
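The reprojection step above can be sketched in code. This is a hedged illustration, not the patented implementation: a toy weak-perspective camera stands in for the real projection model, and only a 2D translation is optimized, whereas the actual method refines the full SMPL shape and pose parameters against the detected 2D joints.

```python
import numpy as np

def project(joints_3d, scale, trans):
    """Weak-perspective projection: drop depth, then scale and translate."""
    return scale * joints_3d[:, :2] + trans

def reprojection_mse(joints_3d, joints_2d, scale, trans):
    """Mean squared 2D distance between projected joints and detections."""
    diff = project(joints_3d, scale, trans) - joints_2d
    return float(np.mean(np.sum(diff ** 2, axis=1)))

def refine_translation(joints_3d, joints_2d, scale, lr=0.4, steps=300):
    """Minimize the reprojection error over the 2D translation by gradient descent.

    Illustrative stand-in for optimizing the body-model parameters so that the
    projected 3D joints agree with the 2D joint detections.
    """
    trans = np.zeros(2)
    for _ in range(steps):
        residual = project(joints_3d, scale, trans) - joints_2d  # shape (J, 2)
        grad = 2.0 * residual.mean(axis=0)  # gradient of the per-joint MSE
        trans -= lr * grad
    return trans
```

In the full method this inner loop would run over all SMPL parameters per frame, with the 2D detections acting as the supervision signal.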



Abstract

The invention belongs to the field of computer vision and relates to person recognition; it aims to achieve more accurate person re-identification through human body posture information. To this end, the invention provides a cross-view person recognition method based on shapes and postures under wearable equipment. The method includes: taking a video frame image of the pedestrian to be detected from the first camera and the video obtained by the second camera; detecting, for every video frame, the human body parameterized model and the corresponding 2D joint point positions, where the 2D joint point positions are used to optimize the SMPL human body parameterized model; obtaining the final human body parameterized model through a 3D joint point reprojection optimization operation and a discrete cosine transform (DCT) time-domain optimization operation; and comparing the final human body parameterized model of the target to be detected with the human body parameterized model in each video frame from the second camera to find the target. The method is mainly applied to automatic person recognition scenarios.
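The DCT time-domain optimization mentioned in the abstract can be illustrated with a small sketch. This is a hedged interpretation, assuming the step exploits pose consistency across consecutive frames by low-pass filtering each joint trajectory; the number of coefficients kept here is a free choice, not a value from the patent.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_smooth(trajectory, keep):
    """Low-pass filter a 1-D trajectory by truncating its DCT spectrum.

    Applied independently to each coordinate of each joint across the frames
    of a tracklet, this suppresses frame-to-frame jitter while keeping the
    slow, consistent motion of the pose.
    """
    coeffs = dct(np.asarray(trajectory, dtype=float), norm="ortho")
    coeffs[keep:] = 0.0  # discard high-frequency (jittery) components
    return idct(coeffs, norm="ortho")
```

Because human motion between consecutive frames is smooth, most of a joint trajectory's energy lies in the low-frequency DCT coefficients, so truncation removes noise while preserving the underlying movement.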

Description

technical field [0001] The invention belongs to the fields of computer vision, human body pose estimation, and model optimization, and relates to a cross-viewpoint person recognition method based on 2D / 3D human body joint points, a parametric human body model, and optimization. Background technique [0002] As a sub-problem of image retrieval, cross-view person identification (CVPI) is an important research topic with a wide range of application scenarios, such as video surveillance in dense outdoor areas, intelligent human-computer interaction, and military reconnaissance. Early research can be traced back to the problem of cross-camera multi-target tracking. Traditional surveillance videos are mostly obtained from fixed-position cameras, which can only cover a limited area from a pre-fixed perspective. Because the cameras are fixed, problems such as occlusion and the target pedestrian leaving the camera's field of view arise, so early...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00
CPC: G06V40/20, G06V20/40, Y02T10/40
Inventor: 李坤 (Li Kun), 李万鹏 (Li Wanpeng), 刘幸子 (Liu Xingzi), 王松 (Wang Song)
Owner: TIANJIN UNIV