
Emotional music generation method based on deep neural network and music element driving

A deep neural network and music-generation technology, applied to neural learning methods, biological neural network models, neural architectures, etc., achieving the effect of enhanced artistic appeal and emotional rendering.

Inactive Publication Date: 2021-08-24
INST OF ACOUSTICS CHINESE ACAD OF SCI
Cites: 5 | Cited by: 2

AI Technical Summary

Problems solved by technology

[0006] No previous related patent or paper has used this method to build a deep neural network model for emotional music generation.



Examples


Embodiment 1

[0057] As shown in Figure 1, Embodiment 1 of the present invention provides a method for generating emotional music based on a deep neural network and music elements. The music dataset is read, preprocessed, and encoded; music element features are extracted; and the music sequences, together with the music element features, are used as the input for training the deep neural network. After training is complete, a music sequence containing a specified emotion can be generated according to the emotion specified by the user, and the music containing that emotion is then output through decoding.
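The patent does not disclose a concrete architecture at this point, so the following is only a minimal sketch of the Embodiment 1 training setup, assuming PyTorch, a next-token LSTM generator, and illustrative names (`MusicGenerator`, `feat_dim`, a 128-token vocabulary); none of these details are taken from the patent text.

```python
# Minimal sketch: music element features condition a next-token sequence model.
# Architecture, dimensions, and names are illustrative assumptions.
import torch
import torch.nn as nn

class MusicGenerator(nn.Module):
    def __init__(self, vocab_size=128, feat_dim=8, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Music element features (e.g. tempo, mode) are concatenated with
        # each token embedding so they drive the generated sequence.
        self.lstm = nn.LSTM(hidden + feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, features):
        x = self.embed(tokens)                            # (B, T, H)
        f = features.unsqueeze(1).expand(-1, x.size(1), -1)
        out, _ = self.lstm(torch.cat([x, f], dim=-1))
        return self.head(out)                             # next-token logits

model = MusicGenerator()
tokens = torch.randint(0, 128, (4, 64))   # encoded music sequences (dummy)
feats = torch.randn(4, 8)                 # extracted music element features
logits = model(tokens[:, :-1], feats)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 128), tokens[:, 1:].reshape(-1))
loss.backward()                           # trained with backpropagation
```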

[0058] Step 1: Prepare a music dataset in MIDI format as training data. This embodiment uses 329 piano pieces by 23 classical piano composers. These pieces are composed in various styles, covering different rhythms and modes, and are suitable for training a generative model for emotional music.

[0059] Step 2: Use Python's pretty-midi ...
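Since this step names Python's pretty-midi library, a short sketch of how such preprocessing and encoding might look is given below; the note-level (pitch, duration) token scheme, the quantization factor, and the file name "example.mid" are illustrative assumptions, as the patent does not disclose its actual encoding.

```python
# Sketch of MIDI preprocessing with pretty_midi; the token format is assumed.
import pretty_midi

def encode_midi(path):
    pm = pretty_midi.PrettyMIDI(path)        # parse the MIDI file
    notes = []
    for inst in pm.instruments:
        if inst.is_drum:                     # skip percussion tracks
            continue
        notes.extend(inst.notes)
    notes.sort(key=lambda n: (n.start, n.pitch))
    # Encode each note as a (pitch, quantized-duration) token pair.
    return [(n.pitch, round((n.end - n.start) * 16)) for n in notes]

tokens = encode_midi("example.mid")          # placeholder file name
print(tokens[:8])
```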

Embodiment 2

[0088] As shown in Figure 3, Embodiment 2 of the present invention proposes a further method for generating emotional music. A music dataset with emotion labels is preprocessed and encoded, and music element features and the corresponding emotion labels are extracted. The network is trained by feeding the music sequences, music element features, and emotion labels as input to the deep neural network. After training is complete, a music sequence containing a specified emotion can be generated according to the emotion specified by the user, and the music containing that emotion is then output through decoding.

[0089] Step 1: Prepare a manually annotated emotional music dataset in MIDI format as training data. This embodiment uses piano pieces covering 4 different emotions: 56 pieces expressing happiness, 58 expressing calm, 40 expressing sadness, and 47 expressing tension. These piano compos...
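As a hedged illustration of how the four emotion labels could enter the network as a third input alongside the music element features, the sketch below embeds each label and concatenates it with the feature vector; the module name and dimensions are assumptions, not details from the patent.

```python
# Sketch of emotion-label conditioning; names and sizes are assumptions.
import torch
import torch.nn as nn

EMOTIONS = {"happy": 0, "calm": 1, "sad": 2, "tense": 3}

class EmotionConditioner(nn.Module):
    def __init__(self, feat_dim=8, emo_dim=4):
        super().__init__()
        self.emo_embed = nn.Embedding(len(EMOTIONS), emo_dim)

    def forward(self, features, emotion_ids):
        # Concatenate music element features with the learned emotion
        # embedding; the combined vector conditions the sequence model.
        return torch.cat([features, self.emo_embed(emotion_ids)], dim=-1)

cond = EmotionConditioner()
feats = torch.randn(2, 8)
emo = torch.tensor([EMOTIONS["happy"], EMOTIONS["sad"]])
print(cond(feats, emo).shape)   # torch.Size([2, 12])
```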



Abstract

The invention relates to the technical field of intelligent music generation, in particular to an emotional music generation method driven by a deep neural network and music elements. The method comprises the following steps: acquiring an emotion type specified by a user and converting it into corresponding music element features; inputting the music element features into a pre-established and trained emotional music generation model to obtain a corresponding emotional music sequence; and decoding and synthesizing the emotional music sequence to obtain emotional music. The invention generates music with an artificial intelligence algorithm and fuses emotional factors into the intelligent music generation system, improving the artistic appeal and emotional rendering ability of the generated music; moreover, emotional music generation does not depend on a large amount of manually annotated music data.
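As a rough illustration of the inference flow the abstract describes, the sketch below maps a user-specified emotion type to music element features and passes them through stand-in generation and decoding functions; the feature table and all its values are assumptions, since the abstract does not list the actual music elements used.

```python
# Sketch of the abstract's inference flow; the emotion-to-feature mapping
# and all values below are illustrative assumptions, not patent content.
EMOTION_TO_FEATURES = {
    "happy": {"tempo": 140, "mode": "major"},
    "calm":  {"tempo": 80,  "mode": "major"},
    "sad":   {"tempo": 60,  "mode": "minor"},
    "tense": {"tempo": 150, "mode": "minor"},
}

def generate_emotional_music(emotion, model, decode):
    features = EMOTION_TO_FEATURES[emotion]  # emotion -> music elements
    sequence = model(features)               # trained generation model
    return decode(sequence)                  # decode/synthesize to music

model = lambda f: [60, 64, 67]               # stand-in for the trained model
decode = lambda s: f"audio({s})"             # stand-in for decoding/synthesis
print(generate_emotional_music("happy", model, decode))
```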

Description

Technical field

[0001] The invention relates to the technical field of intelligent music generation, and in particular to an emotional music generation method driven by a deep neural network and music elements.

Background technique

[0002] Intelligent music generation uses artificial intelligence methods to compose music by machine. By simulating the creative thinking of composers, it improves the efficiency of music generation and broadens access to music creation, and it promotes interdisciplinary development between music and fields such as computer science, neuroscience, and psychology. Abroad, the field is developing rapidly, and major artificial intelligence companies have carried out in-depth research on intelligent music generation technology. In China, intelligent music generation is still in its infancy, and systems and works remain relatively sporadic. A complete system h...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10H 1/00; G06N 3/04; G06N 3/08
CPC: G10H 1/0025; G10H 1/0066; G06N 3/084; G10H 2250/311; G06N 3/045
Inventors: Zheng Kaitong (郑凯桐), Sang Jinqiu (桑晋秋), Meng Ruijie (孟瑞洁), Zheng Chengshi (郑成诗), Li Xiaodong (李晓东), Cai Juanjuan (蔡娟娟), Wang Jie (王杰)
Owner: INST OF ACOUSTICS CHINESE ACAD OF SCI