Feature extraction method and apparatus, computer program, storage medium and electronic device

A feature extraction and global feature technology, applied in the field of computer vision, which addresses the problems that existing object detection does not consider the importance of the information of the area where the target object is located and therefore has low detection accuracy, and achieves the effect of improving the accuracy of image processing.

Status: Inactive; Publication Date: 2018-06-29
SHENZHEN SENSETIME TECH CO LTD

AI Technical Summary

Problems solved by technology

However, the current object detection technology does not consider the importance of the information of the area where the target object is located, so the detection accuracy is low.

Method used

Figure 1 is a flowchart of the steps of the feature extraction method of Embodiment 1; Figure 2 is a flowchart of the steps of the feature extraction method of Embodiment 2; Figure 3 is a structural block diagram of the feature extraction device of Embodiment 3.


Examples


Embodiment 1

[0038] Referring to Figure 1, a flowchart of the steps of a feature extraction method according to Embodiment 1 of the present invention is shown.

[0039] The feature extraction method of the present embodiment comprises the following steps:

[0040] Step S102: Perform target detection on the image to be processed, and obtain the overall area information and at least one piece of local area information of the target object in the image to be processed.

[0041] The image to be processed may be an image captured, drawn, or cropped from any scene. The image to be processed contains a target object, and the target object may be any object such as a person, an animal, or a vehicle. The overall area information of the target object is information such as the position and size of the area that contains the entire target object in the image to be processed, and the local area information is information such as the position and size of at least one area that contains a part of the target object in the image to be processed.
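To make the structure of this information concrete, the following is a minimal sketch, not taken from the patent, of how the overall area information and local area information produced by step S102 could be represented in Python. The field names and the example coordinates are assumptions made only for illustration.

```python
# Minimal sketch (assumed representation) of the output of step S102.
from dataclasses import dataclass
from typing import List

@dataclass
class RegionInfo:
    """Position and size of one area in the image to be processed."""
    x: int        # left coordinate of the area
    y: int        # top coordinate of the area
    width: int    # width of the area in pixels
    height: int   # height of the area in pixels

@dataclass
class TargetDetectionResult:
    """One overall area plus at least one local area of the target object."""
    overall_area: RegionInfo       # area containing the entire target object
    local_areas: List[RegionInfo]  # areas containing parts of the target object

# Hypothetical example for a person detected in a 640x480 image:
# the overall area covers the whole body, the local areas cover head and torso.
example = TargetDetectionResult(
    overall_area=RegionInfo(x=120, y=40, width=200, height=400),
    local_areas=[RegionInfo(x=170, y=40, width=90, height=90),     # head
                 RegionInfo(x=150, y=140, width=140, height=180)]  # torso
)
```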

Embodiment 2

[0055] Referring to Figure 2, a flowchart of the steps of a feature extraction method according to Embodiment 2 of the present invention is shown.

[0056] The feature extraction method of the present embodiment comprises the following steps:

[0057] Step S202: Perform target detection on the image to be processed by using a first neural network model for target detection, and obtain the overall area information and at least one piece of local area information of the target object in the image to be processed.

[0058] In this embodiment, after the image to be processed is acquired, it is input into the first neural network model for target detection, and target detection is performed on the image to be processed through the first neural network model, so as to obtain the overall area information and at least one piece of local area information of the target object in the image to be processed.

[0059] Step S204: Obtain global feature data of the image to be processed.

[006...
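To tie steps S202 and S204 together with the fusion described in the abstract, the following is a hedged PyTorch sketch of one possible implementation. The patent does not specify concrete network architectures: here a torchvision Faster R-CNN stands in for the first neural network model used for target detection, a ResNet-50 trunk stands in for the network that produces the global, overall, and local feature data, and the helper extract_region_feature and the concatenation-based fusion are hypothetical choices made only for illustration.

```python
# A sketch under stated assumptions, not the patent's reference implementation.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn().eval()   # stand-in "first neural network model"
backbone = torchvision.models.resnet50()
feature_net = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()  # ResNet-50 trunk without the classifier

def extract_region_feature(image, box):
    """Hypothetical helper: crop one area (overall or local) and compute its feature data."""
    x1, y1, x2, y2 = [int(v) for v in box]
    crop = image[:, y1:y2, x1:x2].unsqueeze(0)                      # 1 x C x h x w
    crop = torch.nn.functional.interpolate(crop, size=(224, 224))   # resize for the feature network
    return feature_net(crop).flatten(1)                             # 1 x 2048 feature vector

image = torch.rand(3, 480, 640)  # stand-in for the image to be processed

with torch.no_grad():
    # Step S202: target detection -> overall area information and local area information.
    detections = detector([image])[0]                               # dict with "boxes", "labels", "scores"
    boxes = [b for b in detections["boxes"] if b[2] - b[0] > 1 and b[3] - b[1] > 1]

    # Step S204: global feature data of the whole image to be processed.
    resized = torch.nn.functional.interpolate(image.unsqueeze(0), size=(224, 224))
    global_feature = feature_net(resized).flatten(1)

    # Overall and local feature data from the detected areas (here the first box is
    # treated as the overall area and the remaining boxes as local areas).
    region_features = [extract_region_feature(image, b) for b in boxes]

    # Fusion of the global, overall and local feature data; concatenation is only
    # one possible fusion strategy and is not mandated by the patent text.
    fused_feature = torch.cat([global_feature, *region_features], dim=1)
```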

Embodiment 3

[0083] Referring to Figure 3, a structural block diagram of a feature extraction device according to Embodiment 3 of the present invention is shown.

[0084] The feature extraction device in the embodiment of the present invention includes: a first acquisition module 302, configured to perform target detection on the image to be processed and acquire the overall area information and at least one piece of local area information of the target object in the image to be processed; a second acquisition module 304, configured to acquire the global feature data of the image to be processed; a third acquisition module 306, configured to acquire the overall feature data and at least one piece of local feature data of the target object according to the overall area information and the at least one piece of local area information; and a fusion module 308, configured to fuse the global feature data, the overall feature data, and the at least one piece of local feature data to obtain fused feature data.
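The module arrangement of Embodiment 3 can be sketched as a small Python class whose methods mirror the first acquisition module 302, the second acquisition module 304, the third acquisition module 306, and the fusion module 308. This is an assumed structure for illustration; the detector, feature_extractor, and fusion callables are hypothetical placeholders supplied by the caller, not interfaces defined in the patent.

```python
# Assumed structural sketch mirroring modules 302-308 of Embodiment 3.
class FeatureExtractionDevice:
    def __init__(self, detector, feature_extractor, fusion):
        self.detector = detector                    # hypothetical detection callable
        self.feature_extractor = feature_extractor  # hypothetical feature callable (optional region argument)
        self.fusion = fusion                        # hypothetical fusion callable, e.g. concatenation

    def first_acquisition(self, image):
        """Module 302: detect the target and return overall + local area information."""
        overall_area, local_areas = self.detector(image)
        return overall_area, local_areas

    def second_acquisition(self, image):
        """Module 304: global feature data of the whole image to be processed."""
        return self.feature_extractor(image)

    def third_acquisition(self, image, overall_area, local_areas):
        """Module 306: overall feature data and local feature data from the areas."""
        overall_feature = self.feature_extractor(image, region=overall_area)
        local_features = [self.feature_extractor(image, region=r) for r in local_areas]
        return overall_feature, local_features

    def fuse(self, global_feature, overall_feature, local_features):
        """Module 308: fuse global, overall and local feature data."""
        return self.fusion([global_feature, overall_feature, *local_features])
```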

[0085] Optionally, the first ac...



Abstract

Embodiments of the invention provide a feature extraction method and apparatus, a computer program, a storage medium and an electronic device. The feature extraction method comprises the steps of: performing target detection on a to-be-processed image, and obtaining overall area information and at least one piece of local area information of a target object in the to-be-processed image; obtaining global feature data of the to-be-processed image; obtaining overall feature data and at least one piece of local feature data of the target object according to the overall area information and the at least one piece of local area information; and fusing the global feature data, the overall feature data and the at least one piece of local feature data to obtain fused feature data. With this technical scheme, the fused feature data carries the global information of the image together with the overall information and the local information of the target object, so that the importance of the information of the area where the target object is located is highlighted in image processing and the accuracy of image processing is improved.
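As a short numerical illustration of the fusion step highlighted in the abstract, the sketch below concatenates assumed 256-dimensional global, overall, and local feature vectors into one fused vector. The dimensions are invented for the example, and concatenation is only one plausible fusion strategy; the abstract does not restrict fusion to this choice.

```python
# Numerical illustration (assumed shapes) of fusing global, overall and local feature data.
import numpy as np

global_feature  = np.random.rand(256)   # global feature data of the image
overall_feature = np.random.rand(256)   # overall feature data of the target object
local_features  = [np.random.rand(256), # e.g. feature data of a head area
                   np.random.rand(256)] # e.g. feature data of a torso area

fused_feature = np.concatenate([global_feature, overall_feature, *local_features])
print(fused_feature.shape)              # (1024,) -> passed to downstream image processing
```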

Description

Technical field

[0001] Embodiments of the present invention relate to the technical field of computer vision, and in particular, to a feature extraction method and apparatus, a computer program, a storage medium, and an electronic device.

Background technique

[0002] Object detection technology usually considers the information of the entire image and ignores the importance of the information of local areas in the image. Although the accuracy of the detected target object information can meet the needs of some image processing tasks, the accuracy of the target object information detected by current object detection technology cannot meet the needs of many application scenarios. For example, in the pornography detection performed on webcasts or social software, differences in factors such as the gender of the persons in the image, the exposed parts, and the degree of exposure directly affect the detection accuracy. However, pornography detection performed by current object detection technology does not consider the importance of the information of the area where the target object is located, so the detection accuracy is low.


Application Information

IPC(8): G06K9/00; G06K9/46; G06K9/62
CPC: G06V40/10; G06V10/40; G06F18/2155; G06F18/24
Inventors: 朱允全, 旷章辉, 张伟
Owner: SHENZHEN SENSETIME TECH CO LTD