Face recognition image fusion method

An image fusion and face recognition technology, applied in the field of face recognition image fusion, which can solve the problems of low recognition efficiency and few available facial features, and achieves the effects of low time cost, a small amount of calculation, and low susceptibility to external interference.

Pending Publication Date: 2022-07-29
SHANGHAI INST OF TECH

AI Technical Summary

Problems solved by technology

[0004] The technical purpose of the present invention is to provide a face recognition image fusion method to solve the problem of low recognition efficiency caused by few available facial features.



Examples


Embodiment 1

[0042] Referring to Figure 1 and Figure 2, this embodiment provides a face recognition image fusion method, including the following steps:

[0043] First, in step S1, several images to be recognized are acquired as low spatial resolution images, and an image is acquired from a pre-stored image library as a high-resolution image.

[0044] Then, in step S2, PCA transformation is performed on the low spatial resolution image to obtain a principal component image group. Specifically, the RGB values of the low spatial resolution image are read and a corresponding three-dimensional column vector is constructed for each pixel. The three-dimensional column vector comprises the pixel position of the low spatial resolution image and the corresponding RGB values, in the form [low_R(i,j), low_G(i,j), low_B(i,j)], where i, j are the coordinates of a pixel in the low spatial resolution image and low_R, low_G, and low_B refer to the red, green, and blue channel values. The three-dimensional column vectors of all the pixels are then combined and...
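The paragraph above is truncated, but it appears to describe a standard PCA over the per-pixel RGB vectors. As a minimal sketch, assuming the elided steps are the usual covariance computation, eigendecomposition, and projection, the forward transform of step S2 could look like the following (function and variable names such as pca_transform are illustrative, not taken from the patent):

import numpy as np

def pca_transform(low_res_rgb):
    """Forward PCA over the RGB channels of a low spatial resolution image.

    low_res_rgb: H x W x 3 array. Each pixel (i, j) contributes a
    three-dimensional vector [low_R(i, j), low_G(i, j), low_B(i, j)].
    Returns the principal component image group (H x W x 3), the channel
    means, and the eigenvector matrix needed for the inverse transform.
    """
    h, w, _ = low_res_rgb.shape
    # Stack all pixel vectors into an N x 3 data matrix (N = H * W).
    data = low_res_rgb.reshape(-1, 3).astype(np.float64)
    mean = data.mean(axis=0)
    centered = data - mean
    # 3 x 3 covariance of the R, G, B channels.
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Sort eigenvectors by descending eigenvalue so that column 0 is the first
    # principal component (the one later replaced by the high-resolution image).
    order = np.argsort(eigvals)[::-1]
    eigvecs = eigvecs[:, order]
    # Project the centered data onto the principal axes.
    components = centered @ eigvecs
    return components.reshape(h, w, 3), mean, eigvecs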

Embodiment 2

[0081] Based on the same inventive concept as Embodiment 1, this embodiment also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it implements the face recognition image fusion method of Embodiment 1.

[0082] The computer-readable storage medium in this embodiment stores a computer program that can be executed by the processor. When the computer program is executed, it first acquires several images to be recognized as low spatial resolution images and acquires an image from a pre-stored image library as a high-resolution image. Next, PCA transformation is performed on the low spatial resolution image to obtain a principal component image group. Then, grayscale stretching is performed on the high-resolution image, and the first component image in the principal component image group is replaced with the grayscale-stretched high-resolution image to obtain a replacement image group. Finally, PCA inverse transformation is performed on the replacement image group to obtain a fused image.
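The grayscale stretching and the replacement-plus-inverse-PCA steps described above could be sketched as follows. Note the hedge: the excerpt only states that the gray-level mean of the stretched high-resolution image matches that of the principal component image group; matching the standard deviation as well is a common convention in PCA-based fusion and is an assumption here, as are the function and parameter names.

import numpy as np

def grayscale_stretch_to_match(high_res_gray, first_component):
    """Linearly stretch the high-resolution grayscale image so that its
    gray-level mean (and, as an assumption, its spread) match the first
    principal component image it will replace."""
    hr = high_res_gray.astype(np.float64)
    standardized = (hr - hr.mean()) / (hr.std() + 1e-12)
    return standardized * first_component.std() + first_component.mean()

def fuse_images(pc_group, mean, eigvecs, stretched_high_res):
    """Replace the first principal component with the grayscale-stretched
    high-resolution image (giving the replacement image group), then apply
    the inverse PCA transformation to obtain the fused image."""
    h, w, _ = pc_group.shape
    replaced = pc_group.copy()
    replaced[:, :, 0] = stretched_high_res  # substitute the first component image
    # Inverse PCA: rotate back with the (orthogonal) eigenvector matrix and
    # restore the per-channel means removed in the forward transform.
    data = replaced.reshape(-1, 3) @ eigvecs.T + mean
    return np.clip(data, 0, 255).reshape(h, w, 3).astype(np.uint8)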

Embodiment 3

[0085] Based on the same inventive concept as Embodiment 1, this embodiment also provides a computer device, including a memory, a processor, and a computer program stored in the memory and callable by the processor. When the processor executes the computer program, it implements the face recognition image fusion method of Embodiment 1.

[0086] In executing the face recognition image fusion method, the processor of the computer device in this embodiment first acquires several images to be recognized as low spatial resolution images and acquires an image from a pre-stored image library as a high-resolution image. Next, PCA transformation is performed on the low spatial resolution image to obtain a principal component image group. Then, grayscale stretching is performed on the high-resolution image, and the first component image in the principal component image group is replaced with the grayscale-stretched high-resolution image to obtain a replacement image group. Finally, PCA inverse transformation is performed on the replacement image group to obtain a fused image.



Abstract

The invention discloses a face recognition image fusion method, which comprises the following steps: S1, acquiring a plurality of to-be-recognized images as low spatial resolution images, and acquiring an image from a pre-stored image library as a high-resolution image; S2, performing PCA (Principal Component Analysis) transformation on the low spatial resolution image to obtain a principal component image group; S3, performing grayscale stretching on the high-resolution image, and replacing the first component image in the principal component image group with the grayscale-stretched high-resolution image to obtain a replacement image group, wherein the gray-level mean value of the grayscale-stretched high-resolution image is the same as that of the principal component image group; and S4, performing PCA inverse transformation on the replacement image group to obtain a fused image. Because the amount of information during image fusion is compared only by means of the variance, the method is not easily disturbed by external interference and has high accuracy. The principal components of the image data are orthogonal to one another, so mutual interference among the original data can be cancelled out. The method is simple to operate, requires little computation, has a low time cost, and enables fast image fusion for subsequent face recognition.
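Tying steps S1-S4 together, an end-to-end sketch might look like the following, reusing the illustrative pca_transform, grayscale_stretch_to_match, and fuse_images helpers sketched above. The file names are placeholders, and resizing the library image to the size of the image to be recognized is an assumption the excerpt does not spell out (the replacement in S3 requires matching dimensions).

import numpy as np
from PIL import Image

# S1: image to be recognized (low spatial resolution) and library image (high resolution).
low_res = np.asarray(Image.open("to_be_recognized.png").convert("RGB"))
high_res_img = Image.open("library_reference.png").convert("L")
# Assumed: resize the library image to the low-resolution image's dimensions.
high_res = np.asarray(high_res_img.resize((low_res.shape[1], low_res.shape[0])))

pc_group, mean, eigvecs = pca_transform(low_res)                     # S2: PCA transformation
stretched = grayscale_stretch_to_match(high_res, pc_group[:, :, 0])  # S3: grayscale stretching
fused = fuse_images(pc_group, mean, eigvecs, stretched)              # S3/S4: replacement + inverse PCA

Image.fromarray(fused).save("fused.png")                             # fused image for subsequent recognition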

Description

Technical field

[0001] The invention belongs to the field of image recognition, and in particular relates to a face recognition image fusion method.

Background technique

[0002] Existing face recognition systems mainly use big data and artificial intelligence to compare the collected still face images, or the faces in a video, with face data in a database. Such systems place high requirements on the face image, which cannot be too heavily occluded. Especially during the epidemic, since everyone wears a mask, the recognition ability of the face recognition systems in the existing technology is limited.

[0003] It is therefore urgent to solve the problem that, after a mask is worn, most of the facial features are missing and little data can be provided to the computer: more than half of the face is blocked by the mask, less than half of the facial features are retained, and the execution efficiency of machine vision is consequently low. ...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06V40/16G06V10/77G06V10/80G06K9/62
CPCG06F18/2135G06F18/251
Inventor 王文峰王玉莹张晶晶
Owner SHANGHAI INST OF TECH