Music generation method based on facial expression recognition and recurrent neural network

A recurrent neural network and expression recognition technology, applied to character and pattern recognition, acquisition/recognition of facial features, and computer components. It addresses the problem that existing music cannot meet the individual needs of the listener, and achieves a good listening experience with low hardware dependence.

Inactive Publication Date: 2019-10-08
ZHEJIANG UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] To make up for the defect that existing music cannot meet the individual needs of listeners on specific occasions, the present invention combines facial expression emotion recognition with recurrent-neural-network music generation, and proposes a music generation method based on these two technologies. Facial expression emotion recognition adopts the Visual Geometry Group 19-layer convolutional neural network model (VGG19 for short), and music generation adopts the recurrent neural network-restricted Boltzmann machine (RNN-RBM for short) algorithm.
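The overall pipeline can be sketched as follows: a recognized expression selects the emotional conditioning under which music is generated. The expression class list and the mapping to musical parameters below are illustrative assumptions, not taken from the patent text.

```python
# Sketch of the pipeline: recognized emotion -> generation settings.
# The seven classes and the tempo/mode mapping are hypothetical examples.
EXPRESSION_CLASSES = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

EMOTION_TO_MUSIC = {
    "happy":   {"tempo": 140, "mode": "major"},
    "sad":     {"tempo": 70,  "mode": "minor"},
    "neutral": {"tempo": 100, "mode": "major"},
}

def select_generation_settings(emotion: str) -> dict:
    """Return conditioning parameters for the music generator,
    falling back to 'neutral' for unmapped emotions."""
    return EMOTION_TO_MUSIC.get(emotion, EMOTION_TO_MUSIC["neutral"])

settings = select_generation_settings("sad")
```

In a full system these settings would seed or condition the trained RNN-RBM described below; here they only illustrate the division of labor between the recognition and generation stages.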




Embodiment Construction

[0050] The present invention will be further described below in conjunction with the accompanying drawings.

[0051] Referring to Figures 1-4, a music generation method based on facial expression recognition and a recurrent neural network comprises the following steps:

[0052] S1: Obtain music audio data and facial expression data;

[0053] S2: Classify and label the data;

[0054] S3: Process the audio and image data;

[0055] S4: Initialize the RNN-RBM neural network;

[0056] S5: Train the RNN-RBM neural network;

[0057] S6: Recognize facial expressions using VGG19;

[0058] S7: Input the recognized emotion information into the trained RNN-RBM network to generate the final music.
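Steps S4-S5 center on the RNN-RBM, whose per-timestep sampling relies on block Gibbs sampling in a restricted Boltzmann machine. Below is a minimal sketch of one Gibbs step over binary units, with toy dimensions and no recurrent state; in a full RNN-RBM the recurrent network would supply the biases `b_h` and `b_v` at each time step. All weights and sizes here are illustrative.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    """Draw a Bernoulli sample with success probability p."""
    return 1 if random.random() < p else 0

def gibbs_step(v, W, b_h, b_v):
    """One visible -> hidden -> visible Gibbs step of a binary RBM.
    v is a visible vector (e.g. one piano-roll time slice); W[j][i]
    connects hidden unit j to visible unit i."""
    h = [sample(sigmoid(b + sum(W[j][i] * v[i] for i in range(len(v)))))
         for j, b in enumerate(b_h)]
    v_next = [sample(sigmoid(b + sum(W[j][i] * h[j] for j in range(len(h)))))
              for i, b in enumerate(b_v)]
    return h, v_next

# Toy demonstration with 3 hidden and 4 visible units.
n_h, n_v = 3, 4
W = [[0.1 * (j - i) for i in range(n_v)] for j in range(n_h)]
b_h, b_v = [0.0] * n_h, [0.0] * n_v
random.seed(0)
h, v_next = gibbs_step([1, 0, 1, 0], W, b_h, b_v)
```

During generation, repeating such Gibbs steps at each time step (with RNN-provided biases) yields the next slice of notes, which is the mechanism the patent's RNN-RBM algorithm builds on.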

[0059] In this embodiment, music is generated from self-collected image and audio data, and the method includes the following steps:

[0060] S1: Obtain audio data and image data:

[0061] Part of the audio data comes from the Classical Piano Midi dataset. The m...
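MIDI data such as the Classical Piano Midi dataset is commonly converted into binary piano-roll time slices before being fed to an RNN-RBM, which is one plausible reading of the audio processing in step S3. The pitch range and time step below are assumptions; the patent's exact encoding is not shown in this excerpt.

```python
# Hypothetical preprocessing: note events -> binary piano-roll matrix.
PITCH_LO, PITCH_HI = 21, 108   # the 88 piano keys (MIDI pitch numbers)

def notes_to_pianoroll(notes, n_steps, step=0.25):
    """notes: list of (pitch, start_beat, end_beat) tuples.
    Returns n_steps rows, each a binary vector over the 88 pitches,
    with a 1 wherever a note sounds at that time step."""
    n_pitches = PITCH_HI - PITCH_LO + 1
    roll = [[0] * n_pitches for _ in range(n_steps)]
    for pitch, start, end in notes:
        if not (PITCH_LO <= pitch <= PITCH_HI):
            continue
        for t in range(n_steps):
            beat = t * step
            if start <= beat < end:
                roll[t][pitch - PITCH_LO] = 1
    return roll

# Middle C (MIDI 60) held for one beat, sampled at 16th-note resolution.
roll = notes_to_pianoroll([(60, 0.0, 1.0)], n_steps=8)
```

Each row of the resulting matrix is exactly the kind of binary visible vector the RBM in the generation model consumes.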



Abstract

The invention discloses a music generation method based on expression recognition and a recurrent neural network. The method comprises the following steps: 1) obtaining music audio data and facial expression data; 2) classifying and labeling the data; 3) processing the audio and image data; 4) initializing an RNN-RBM neural network; 5) training the RNN-RBM neural network; 6) recognizing facial expressions using VGG19 + dropout + 10-crop + softmax; and 7) inputting the recognized emotion information into the trained RNN-RBM network to obtain the final generated music. The method combines facial emotion recognition with AI music generation, can generate music according to a person's emotion, achieves the purpose of emotion regulation, and has high practical application value.
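The "10crop" in the abstract's recognition stack (VGG19 + dropout + 10-crop + softmax) refers to standard ten-crop test-time augmentation: scoring the four corner crops, the center crop, and their horizontal mirrors, then averaging the outputs. The sketch below shows the crop geometry only, on a plain 2-D grid standing in for an image; the CNN itself is omitted.

```python
# Ten-crop geometry: 4 corners + center, plus horizontal mirrors of each.
def ten_crop(img, ch, cw):
    """img: H x W grid (list of lists); returns the 10 ch x cw crops."""
    h, w = len(img), len(img[0])
    corners = [(0, 0), (0, w - cw), (h - ch, 0), (h - ch, w - cw),
               ((h - ch) // 2, (w - cw) // 2)]
    crops = [[row[x:x + cw] for row in img[y:y + ch]] for y, x in corners]
    mirrored = [[row[::-1] for row in c] for c in crops]
    return crops + mirrored

# 4x4 toy "image" with distinct pixel values, cropped to 2x2.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
crops = ten_crop(img, 2, 2)
```

At inference time, each of the ten crops would be passed through the network and the ten softmax vectors averaged, which typically makes the final expression label more robust than a single center crop.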

Description

Technical Field

[0001] The invention relates to the fields of computer technology and digital music generation, and in particular to a music generation method based on facial expression emotion recognition and a recurrent neural network.

Background

[0002] Music has a subtle influence on people's body and mind. With the development of the Internet and cloud music services, music occupies an increasing share of people's daily lives and quietly regulates their physical and mental health. Its role is easy to observe: programmers work more efficiently while listening to music, bodybuilders use music to pace their workouts, drivers use music to stay focused while driving, and so on. Listening to the right music on a suitable occasion can also greatly relax the body and mind. For example, listening to a passionate symphony can release people's low mood when they ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06F16/65; G06F16/635; G06K9/00; G06K9/62
CPC: G06F16/65; G06F16/636; G06V40/174; G06F18/241
Inventor: 傅晨波, 夏镒楠, 李一帆, 岳昕晨, 宣琦
Owner ZHEJIANG UNIV OF TECH