
An image advanced semantic recognition method based on multi-feature fusion of a deep network

A multi-feature fusion and deep network technology, applied in the field of computer image emotional semantic recognition. It addresses problems such as the small number of emotional data sets, which cannot fully meet the training requirements of deep learning, and the inability of existing methods to fully reflect the high-level semantic information of images, and achieves more scientific selection of experimental images and improved recognition accuracy.

Active Publication Date: 2018-12-11
TAIYUAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0005] However, the current research does not fully reflect the high-level semantic information contained in the image. In addition, the number of high-quality emotional data sets is still small, which cannot fully meet the training requirements of deep learning.



Examples


Embodiment Construction

[0055] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0056] Emotion recognition in images is a complex task, fundamentally different from object detection or image classification. In the present invention, pre-extracted low-level features such as color and texture statistics are combined with the deep features extracted by a trained deep network model, so as to obtain a set of features that reflects the complexity of emotion as fully as possible. Finally, a phrase with e...
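
As an illustration of the fusion described in paragraph [0056], the sketch below concatenates L2-normalized low-level descriptors (a global color histogram and an LBP histogram) with deep features taken from a trained network, then trains a simple classifier on the fused vectors. The feature dimensions, the number of emotion categories, the random stand-in data, and the use of scikit-learn's LogisticRegression are illustrative assumptions, not the patent's exact configuration.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fuse_features(color_hist, lbp_hist, deep_feat):
    # Concatenate L2-normalized feature blocks into a single fused vector.
    blocks = []
    for f in (color_hist, lbp_hist, deep_feat):
        f = np.asarray(f, dtype=np.float64).ravel()
        norm = np.linalg.norm(f)
        blocks.append(f / norm if norm > 0 else f)
    return np.concatenate(blocks)

# Toy usage with random stand-ins for per-image features and emotion labels.
rng = np.random.default_rng(0)
n_images = 200
X = np.stack([
    fuse_features(rng.random(24),   # global color histogram (assumed 8 bins per RGB channel)
                  rng.random(10),   # uniform LBP histogram (assumed P=8, giving 10 bins)
                  rng.random(512))  # deep features from a trained CNN (assumed 512-D)
    for _ in range(n_images)
])
y = rng.integers(0, 8, size=n_images)  # e.g. 8 emotion categories (assumption)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted emotions for first 5 images:", clf.predict(X[:5]))

In practice the deep block would come from the deep object and deep emotion networks mentioned in the abstract, and the classifier here stands in for whatever fusion head the trained model actually uses.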



Abstract

The invention provides an image high-level semantic recognition method based on multi-feature fusion of a deep network. A global color histogram extracts the color features of an image, the LBP algorithm extracts its texture features, a deep object network extracts its object features, and a deep emotion network extracts its deep emotion features; these features are fused to identify the composite emotion of the image and the main objects it contains. Finally, for an input image, the network model can generate descriptive phrases carrying high-level semantic information, including both emotional semantics and object semantics. Addressing the difficulty of applying deep learning to small data sets, the invention proposes a new method of data expansion, combines pre-extracted low-level features such as color and texture statistics, and proposes a multi-feature fusion method to identify the emotion of images and the high-level semantic information of objects. This method improves the accuracy of the experimental results and makes the selection of experimental images more scientific.
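
The abstract names two low-level descriptors, a global color histogram and LBP texture features. The following minimal sketch shows one way such descriptors could be computed with NumPy and scikit-image; the 8 bins per color channel, the LBP parameters (P=8, R=1, uniform patterns), and the normalization scheme are assumptions for illustration, not values stated in the patent.

import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def global_color_histogram(rgb_image, bins_per_channel=8):
    # Per-channel histograms over the whole image, concatenated and normalized.
    hists = []
    for c in range(3):
        h, _ = np.histogram(rgb_image[..., c], bins=bins_per_channel, range=(0, 256))
        hists.append(h)
    h = np.concatenate(hists).astype(np.float64)
    return h / max(h.sum(), 1.0)

def lbp_histogram(rgb_image, P=8, R=1.0):
    # Uniform LBP codes on the grayscale image, summarized as a normalized histogram.
    gray = rgb2gray(rgb_image)                     # float image in [0, 1]
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2                                 # "uniform" LBP yields P + 2 distinct codes
    h, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    h = h.astype(np.float64)
    return h / max(h.sum(), 1.0)

# Toy usage on a random 8-bit RGB image.
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(global_color_histogram(img).shape)  # (24,)
print(lbp_histogram(img).shape)           # (10,)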

Description

Technical field

[0001] The invention relates to the technical field of computer image emotional semantic recognition, and more specifically to an image advanced semantic recognition method based on deep network multi-feature fusion.

Background technique

[0002] Images are an important tool for conveying emotions, and different kinds of images give people different intuitive emotional experiences. Psychological research has shown that human emotions vary in response to different visual stimuli. With the development of deep learning technology, computers have made breakthroughs in many visual recognition tasks such as image classification, image segmentation, object detection, and scene recognition. But can the emotion an image evokes also be judged, in a way similar to humans, through deep learning methods? In fact, due to the subjectivity and complexity of emotions, identifying evoked emotions from ima...


Application Information

IPC(8): G06K9/62
CPC: G06F18/2413; G06F18/24147; G06F18/25
Inventor: 李海芳, 王哲, 邓红霞, 杨晓峰, 姚蓉, 阴桂梅
Owner: TAIYUAN UNIV OF TECH