
Intelligent recognition shooting method and system for multi-person scene and storage medium

A technology for intelligent recognition and shooting in multi-person scenes, applied to components of television systems and color televisions. It addresses problems such as insufficient clarity, insufficient flexibility, and the inability to independently complete person shooting and person recognition, achieving the effect of improving recognition efficiency and flexibility.

Active Publication Date: 2019-12-13
GUANGZHOU AVA ELECTRONICS TECH CO LTD

AI Technical Summary

Problems solved by technology

In order to solve the above-mentioned technical problems of being unable to independently complete person shooting and person recognition due to insufficient flexibility and clarity, the present invention provides a method, system and storage medium for intelligent recognition and shooting of multi-person scenes. The specific technical solutions are as follows.



Examples


Embodiment 1

[0077] Referring to Figure 1, the embodiment of the present invention provides a method for intelligent recognition and shooting of a multi-person scene, comprising the following steps:

[0078] S101. Capture an image of a multi-person scene to obtain a main picture image, wherein the main picture image includes a plurality of human-shaped images.

[0079] As shown in Figure 2, the images of the multi-person scene are captured by a camera.

[0080] It should be pointed out that, in the embodiment of the present invention, the human-shaped images in the main picture image may be collected by a camera device, where the camera device includes a camera, an intelligent terminal with an image capture function, and the like.
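
The patent only requires that the main picture image be collected by a camera device. As a minimal sketch, assuming an ordinary camera readable through OpenCV (an API choice not made in the original text), step S101 could look like:

```python
# Sketch of step S101 (assumption: the camera device is reachable through OpenCV).
import cv2

def capture_main_picture(camera_index=0):
    """Capture one still of the multi-person scene as the main picture image."""
    cap = cv2.VideoCapture(camera_index)
    try:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("failed to capture the main picture image")
        return frame  # BGR array containing the human-shaped images
    finally:
        cap.release()
```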

[0081] S102. Execute an image analysis algorithm on the image. The image analysis algorithm divides the captured image into multiple human-shaped image regions and determines a weighting coefficient and coordinate information for each human-shaped image region.
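
The patent does not fix a particular detector or weighting formula for S102. The sketch below assumes an OpenCV HOG person detector and a weighting coefficient built from region sharpness and relative size; both choices are illustrative assumptions, not the claimed algorithm.

```python
# Sketch of step S102 (assumptions: HOG person detector; weight mixes sharpness and size).
import cv2

def analyze_main_picture(image):
    """Divide the main picture image into human-shaped regions, each with a
    weighting coefficient and coordinate information (x, y, w, h)."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _ = hog.detectMultiScale(image, winStride=(8, 8))

    regions = []
    img_area = float(image.shape[0] * image.shape[1])
    for (x, y, w, h) in rects:
        roi = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(roi, cv2.CV_64F).var()   # clarity proxy
        size_ratio = (w * h) / img_area                    # relative size proxy
        weight = 0.5 * min(sharpness / 100.0, 1.0) + 0.5 * min(size_ratio * 10.0, 1.0)
        regions.append({"coords": (int(x), int(y), int(w), int(h)), "weight": weight})
    return regions
```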

...

Embodiment 2

[0103] Embodiment 2 is basically the same as Embodiment 1, but a step S205 is added between steps S104 and S105 of Embodiment 1. The resulting process is shown in Figure 5, and the added step S205 is as follows:

[0104] S205. Repeat steps S202, S203 and S204 for the supplementary captured human figure image area that has not yet been subjected to the image analysis algorithm.

[0105] Although the embodiment performs supplementary shooting of the human figure image areas that do not meet the weighting coefficient threshold, it cannot be known in advance whether the supplementary picture will contain too many people or lack sufficient clarity, which may prevent the later feature recognition algorithm from executing. In order to ensure that the retaken pictures meet the requirements and can be processed by the feature recognition algorithm, steps S202, S203 and S204 need to be repeated. It should be pointed out that step S205 is a...
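
As a minimal sketch of this loop, assuming the analyze_main_picture() helper from the earlier sketch, a hypothetical capture_supplementary_region() helper, and an illustrative threshold and retry limit (none of these specifics appear in the original text):

```python
# Sketch of the Embodiment 2 loop around step S205: supplementary shots are
# re-analysed and re-judged (S202-S204) until the region passes the threshold
# or a retry limit is reached. Threshold, retry limit and helpers are assumptions.
def ensure_regions_meet_threshold(regions, threshold=0.6, max_retries=3):
    ready = []
    for region in regions:
        attempts = 0
        while region["weight"] < threshold and attempts < max_retries:
            reshot = capture_supplementary_region(region["coords"])  # hypothetical close-up re-shoot
            reanalysed = analyze_main_picture(reshot)                # repeat S202 on the new picture
            if reanalysed:
                region = reanalysed[0]                               # re-judged on the next loop pass
            attempts += 1
        ready.append(region)
    return ready
```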

Embodiment 3

[0113] Embodiment 3 is basically the same as Embodiment 1, but a step S306 is added after step S105 of Embodiment 1. The resulting process is shown in Figure 7, and the added step S306 is as follows.

[0114] S306. Match the coordinate information of each human figure image area with its feature label, and composite the feature label at the corresponding position on the main picture image according to the coordinate information.

[0115] In this way, the specific feature labels of each human figure can be seen intuitively on the screen. For example, in actual remote teaching, the teacher can know the names of all the students attending the class, which is convenient for roll call and teaching interaction. (Because the relevant information of irrelevant personnel has not been entered in advance, their features will not be in the feature database, so the comparison operation wi...
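
A minimal sketch of step S306, assuming OpenCV drawing calls and regions that carry a recognised label string (the drawing style and field names are illustrative, not specified by the patent):

```python
# Sketch of step S306: overlay each region's feature label onto the main picture
# at the region's coordinates (drawing style is an assumption).
import cv2

def composite_labels(main_image, labelled_regions):
    """labelled_regions: dicts with 'coords' = (x, y, w, h) and 'label' = text."""
    annotated = main_image.copy()
    for region in labelled_regions:
        x, y, w, h = region["coords"]
        cv2.rectangle(annotated, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(annotated, region["label"], (x, max(y - 8, 12)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return annotated
```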


Abstract

The invention relates to an intelligent recognition shooting method and system for a multi-person scene and a storage medium. The method comprises the following steps: shooting images of a multi-person scene; dividing the shot image into a plurality of human-shaped image areas, and determining a weighting coefficient and coordinate information for each human-shaped image area; carrying out a weighting coefficient judgment operation on each human-shaped image area; judging, according to the relationship between the weighting coefficient and the threshold value, whether the human-shaped image area enters a feature-to-be-extracted state or supplemental shooting; and executing a feature recognition algorithm on the human-shaped image areas in the feature-to-be-extracted state and the supplementary shooting human-shaped image areas to obtain a feature label. The method has extremely high adaptability to a multi-person scene, can accurately realize character feature recognition, meets the intelligent analysis shooting requirements of multiple scenes such as remote teaching, conference sign-in, teaching attendance and behavior analysis, and greatly improves the information presentation capability and the analysis capability of the main picture.
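
The core decision described above, judging each region against a weighting-coefficient threshold, can be sketched as follows, assuming the region dictionaries from the earlier sketches and an illustrative threshold value (the state names and threshold are assumptions):

```python
# Sketch of the weighting-coefficient judgment: a region either enters the
# feature-to-be-extracted state or is marked for supplemental shooting.
TO_BE_EXTRACTED = "feature_to_be_extracted"
SUPPLEMENTAL_SHOOTING = "supplemental_shooting"

def judge_regions(regions, threshold=0.6):
    for region in regions:
        if region["weight"] >= threshold:
            region["state"] = TO_BE_EXTRACTED        # clear enough for feature recognition
        else:
            region["state"] = SUPPLEMENTAL_SHOOTING  # needs a close-up re-shot
    return regions
```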

Description

Technical field

[0001] The present invention relates to the technical fields of video monitoring, video recording and broadcasting, and video interaction, and more specifically to a method, system and storage medium for intelligent recognition and shooting of a multi-person scene.

Background technique

[0002] In current recorded-broadcast teaching and remote interactive video teaching, the recording screens and remote video interaction screens that can be provided are limited to simple presentations of the shot picture, so the overall application lacks scalability. Against the background of rapid technological development, more video and audio intelligent analysis technologies are available to produce structured data output from video and audio, and by fusing the structured data with the video and audio data, a more user-friendly application experience can be provided. However, image and audio intelligent recognition...


Application Information

Patent Type & Authority: Applications (China)
IPC(8): H04N5/232
CPC: H04N23/611
Inventors: 欧俊文, 关本立, 詹建勋
Owner: GUANGZHOU AVA ELECTRONICS TECH CO LTD