
Video human body attribute identification method based on deep adversarial network

An attribute recognition and neural network technology, applied in character and pattern recognition, biological neural network models, instruments, etc.

Status: Inactive. Publication Date: 2018-01-26
SHENZHEN WEITESHI TECH
Cites: 3 · Cited by: 25

AI Technical Summary

Problems solved by technology

[0002] Surveillance cameras are spreading rapidly in most cities around the world. In surveillance tasks, computer vision focuses on tracking and detecting targets described by their visual appearance, and capturing as many characteristics of a person as possible is especially important for users. Attribute recognition detects people's attributes (such as age and gender) and carried items (backpacks, bags, etc.) through security cameras, and is applied to surveillance videos of suspicious people's behavior in public places such as cargo planes, ATM machines, shopping malls, and stations. Although the attribute recognition of people in video has made great progress, most existing methods are based on the face of the person (identifying gender, age, and race) and rarely identify attributes from the entire body; moreover, most assume that the surveillance video is full-resolution and the person is not occluded. In practice, surveillance cameras with far-field views are usually seriously affected by low resolution and person occlusion, so commonly used methods still have certain limitations for person attribute recognition in surveillance video.

Method used


Examples


Embodiment Construction

[0034] It should be noted that the embodiments in the application and the features in the embodiments can be combined with each other if there is no conflict. The present invention will be further described in detail below with reference to the drawings and specific embodiments.

[0035] Figure 1 is a system framework diagram of the video human body attribute recognition method based on a deep adversarial network of the present invention. It mainly includes an attribute classification network, a reconstruction network, and a super-resolution network.
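As a rough illustration of how these three components could be wired together, here is a minimal PyTorch-style sketch. The patent text does not specify layer configurations, so every module name, layer size, and the number of attributes below are assumptions made purely for illustration, not the patented architecture.

```python
# Illustrative sketch (not the patented architecture): three cooperating networks
# as described in the framework diagram -- an attribute classifier, a reconstruction
# network for occluded regions, and a super-resolution network for low-resolution input.
# The adversarial discriminator used during training is omitted here.
import torch
import torch.nn as nn

class SuperResolutionNet(nn.Module):
    """Upscales a low-resolution pedestrian crop before attribute classification."""
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )
    def forward(self, x):
        return self.body(x)

class ReconstructionNet(nn.Module):
    """Encoder-decoder that fills in occluded body regions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.decoder(self.encoder(x))

class AttributeClassifier(nn.Module):
    """CNN backbone followed by per-attribute sigmoid scores."""
    def __init__(self, num_attributes=35):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_attributes)
    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

class VideoAttributePipeline(nn.Module):
    """Chains super-resolution, reconstruction, and attribute classification."""
    def __init__(self, num_attributes=35):
        super().__init__()
        self.sr = SuperResolutionNet()
        self.recon = ReconstructionNet()
        self.classifier = AttributeClassifier(num_attributes)
    def forward(self, low_res_crop):
        x = self.sr(low_res_crop)   # improve resolution of far-field frames
        x = self.recon(x)           # remove/fill occlusions
        return self.classifier(x)   # per-attribute prediction scores
```

The ordering (super-resolution, then reconstruction, then classification) in this sketch follows the processing flow described in the abstract: low-resolution frames are upscaled, occlusions are removed, and attribute scores are then predicted.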

[0036] The video human attribute recognition method based on the deep adversarial network, together with the human attribute classification method, can learn pose-normalized deep feature representations of the different parts of the object.

[0037] The attribute classification network, by combining the capability of a hybrid neural network with a part-based method, when faced with an unconstrained image dominated by the influence of pose and viewpoint, i...
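Since the paragraph above is truncated, the following is only a speculative sketch of one common way to realize a part-based attribute classifier: the body crop is split into fixed horizontal strips as stand-in parts, a shared CNN trunk extracts per-part features, and a fusion layer predicts one score per attribute. The strip decomposition, `num_parts`, and feature sizes are assumptions, not details taken from the patent.

```python
# Minimal sketch of a part-based attribute classifier; the actual part
# decomposition and pose handling of the patented method are not specified here.
import torch
import torch.nn as nn

class PartBasedAttributeClassifier(nn.Module):
    def __init__(self, num_attributes=35, num_parts=3, feat_dim=64):
        super().__init__()
        # One shared convolutional trunk applied to every part crop.
        self.part_cnn = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Fuse per-part features and predict one score per attribute.
        self.fusion = nn.Linear(num_parts * feat_dim, num_attributes)
        self.num_parts = num_parts

    def forward(self, image):
        # image: (B, 3, H, W); split into equal horizontal strips as stand-in parts.
        strips = torch.chunk(image, self.num_parts, dim=2)
        feats = [self.part_cnn(s).flatten(1) for s in strips]
        return torch.sigmoid(self.fusion(torch.cat(feats, dim=1)))
```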



Abstract

The invention provides a deep architecture for detecting attributes (such as gender, race, and clothing) of a human body in a surveillance video. The method combines the capability of a hybrid neural network with a part-based approach to carry out decomposed prediction of an object in an image, providing robustness; training is carried out with a weighted loss to obtain attribute prediction scores; a reconstruction network makes the architecture robust to occlusion by eliminating obstacles, and a discriminator network classifies occluded images; finally, a super-resolution network improves the resolution of the images, yielding the attribute recognition result for the human body in the video. The method can improve the resolution of low-resolution images and handle occlusion, and can effectively extract attributes even under conditions of poor resolution and strong occlusion, so that recognition efficiency is greatly improved; the method is suitable for many application fields.
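The abstract mentions training with a weighted loss to obtain attribute prediction scores but does not state the exact formulation. A commonly used choice for imbalanced pedestrian-attribute labels is a weighted binary cross-entropy whose per-attribute weights depend on the positive-label rate; the sketch below assumes that formulation (the exponential weighting and all tensor shapes are assumptions, not taken from the patent).

```python
# Hedged sketch of a weighted attribute loss: rare attributes get larger positive
# weights so they are not swamped by frequent ones during training.
import torch

def weighted_attribute_loss(scores, labels, pos_rate, eps=1e-7):
    """scores, labels: (B, num_attributes) in [0, 1]; pos_rate: (num_attributes,)."""
    w_pos = torch.exp(1.0 - pos_rate)   # up-weight positives of rare attributes
    w_neg = torch.exp(pos_rate)         # up-weight negatives of very common attributes
    scores = scores.clamp(eps, 1.0 - eps)
    loss = -(w_pos * labels * torch.log(scores)
             + w_neg * (1.0 - labels) * torch.log(1.0 - scores))
    return loss.mean()

# Example: 4 samples, 3 attributes, positive rates estimated from the training set.
scores = torch.rand(4, 3)
labels = torch.randint(0, 2, (4, 3)).float()
pos_rate = torch.tensor([0.1, 0.5, 0.8])
print(weighted_attribute_loss(scores, labels, pos_rate))
```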

Description

Technical field
[0001] The invention relates to the field of human body attribute recognition, and in particular to a video human body attribute recognition method based on a deep adversarial network.
Background technique
[0002] Surveillance cameras are spreading rapidly in most cities around the world. In surveillance tasks, the focus of computer vision is to track and detect targets described by their visual appearance, and capturing as many characteristics of a person as possible is especially important for users. Attribute recognition uses security cameras to detect people's attributes (such as age and gender) and carried items (backpacks, bags, etc.), and is often used for video retrieval, dangerous behavior warning, traffic flow video monitoring, industrial automation monitoring, security, and automatic sales, as well as for monitoring the behavior of suspicious persons in surveillance videos of public places such as cargo planes, ATM machines, shopping malls, and stations. Although...

Claims


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04
Inventor 夏春秋
Owner SHENZHEN WEITESHI TECH