
Facial expression capturing method and device in weak light environment

A facial expression capture technology for weak-light environments, applied in the field of computer vision. It addresses the problem that existing facial expression recognition methods perform poorly in low light, and achieves strong model generalization while avoiding overfitting.

Pending Publication Date: 2022-05-20
FUDAN UNIV
Cites: 0, Cited by: 0

AI Technical Summary

Problems solved by technology

However, existing facial expression recognition methods do not achieve good results under low-light conditions, so the existing methods need to be improved.

Embodiment Construction

[0018] In order to make the technical means, creative features, objectives, and effects of the present invention easy to understand, the method and device for capturing human facial expressions in a low-light environment of the present invention are described in detail below in conjunction with the embodiments and the accompanying drawings.

[0019]

[0020] In this embodiment, the method and device for capturing human facial expressions in a low-light environment run on a computer, which needs at least one graphics card for GPU acceleration to complete the model training process. The trained theater seat positioning model, the expression recognition classification model, and the image recognition procedure are stored on the computer in the form of executable code. Through this code, the computer can call these models, process the image data frames of multiple scenes in batches at the same time, and obtain and output, for the image data in each scene, the facial ex...
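
The batch-inference workflow in paragraph [0020] can be illustrated as follows. This is a minimal sketch assuming PyTorch and torchvision; seat_model, expr_model, the per-frame box format, and the 96x96 crop size are illustrative placeholders, not details taken from the patent.

    import torch
    from torchvision.ops import roi_align

    @torch.no_grad()
    def capture_expressions(frame_batches, seat_model, expr_model, device="cuda"):
        """frame_batches: dict mapping a scene id to a float tensor of shape (N, 3, H, W)."""
        seat_model.eval().to(device)
        expr_model.eval().to(device)
        results = {}
        for scene_id, frames in frame_batches.items():
            frames = frames.to(device)
            # Assumed interface: the seat positioning model returns one (K_i, 4) box
            # tensor (x1, y1, x2, y2) per input frame.
            seat_boxes = seat_model(frames)
            # Crop every detected seat region and resize it for the expression classifier.
            crops = roi_align(frames, seat_boxes, output_size=(96, 96))
            # The classifier outputs one expression class index per seat crop.
            results[scene_id] = expr_model(crops).argmax(dim=1)
        return results

Loading both trained models once and batching the frames per scene mirrors the embodiment's description of processing multiple scenes simultaneously with GPU acceleration.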

Abstract

The invention provides a method and a device for capturing facial expressions in a weak-light environment, which can accurately frame seats of different scales in a video stream in a complex environment, and detect and extract facial features for expression recognition based on the framing result. The method comprises the following steps: S1, acquiring a video stream from an overhead angle in a cinema to be detected; S2, detecting the video stream with a cinema seat positioning model to obtain labeling boxes for the latest positions of all seats in the cinema to be detected; and S3, performing face positioning and expression recognition on all the labeling boxes with an expression recognition classification model to obtain the facial expression categories of the audience in the cinema to be detected. The cinema seat positioning model takes a network combining ResNet50 and FPN as its Backbone and adds a CBAM module to construct an initial cinema seat positioning network, which is trained to obtain the cinema seat positioning model; the expression recognition classification model is obtained by constructing and training a facial expression recognition algorithm that combines an improved VGGNet with an improved Focal Loss.
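
The abstract pairs an improved VGGNet classifier with an improved Focal Loss; the exact modifications are not given on this page, so the sketch below, assuming PyTorch, implements only the standard multi-class focal loss FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t) as a reference point for how hard examples are emphasized.

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
        """logits: (N, C) raw class scores; targets: (N,) integer class labels."""
        log_probs = F.log_softmax(logits, dim=1)
        log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t per sample
        pt = log_pt.exp()
        # The (1 - p_t)^gamma factor down-weights well-classified samples, focusing
        # training on hard examples such as faces captured under weak light.
        return (-alpha * (1.0 - pt) ** gamma * log_pt).mean()

A per-class alpha is a common variant for imbalanced expression categories, but whether the patent's "improved" loss takes that form is not stated here.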

Description

Technical field

[0001] The invention belongs to the field of computer vision, and in particular relates to a method and device for capturing human facial expressions in low-light environments.

Background technique

[0002] Scientific statistics on movie viewing data and intelligent maintenance of viewing order are indispensable links in the modernization of the film industry. Box office data is an important criterion for movie revenue statistics and an intuitive indication of whether a movie is successful. The real attendance rate and the state of the audience during the screening, such as facial expressions and the number of times viewers leave their seats, are an important basis for evaluating the viewing experience from more dimensions; they provide an important reference for the film producer to control the rhythm of the film and for the screening party to provide a better viewing environment. Maintaining a good viewing order is the resp...

Application Information

Patent Type & Authority: Application (China)
IPC (8): G06V40/16, G06V20/40, G06V10/764, G06V10/774, G06V10/82, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06N3/048, G06N3/045, G06F18/24, G06F18/214
Inventor: 张峰, 赵瑞玮, 冯瑞
Owner: FUDAN UNIV