Model generation method, music synthesis method, system, equipment and medium

A model generation technology applied to neural learning methods, biological neural network models, electroacoustic musical instruments, etc. It solves problems such as poor-quality music fragments, complex model structure, and unsmooth notes, thereby enhancing fluency, increasing training speed, and reducing training time.

Pending Publication Date: 2021-06-01
携程旅游信息技术(上海)有限公司

AI Technical Summary

Problems solved by technology

[0004] The technical problem to be solved by the present invention is to provide a model generation method, a music synthesis method, a system, a device and a medium, in order to overcome the defects of the prior art that, when a generative model is used to synthesize music, the model structure is complex, training is difficult, the synthesized notes are not smooth, and the resulting music fragments are of poor quality.



Examples


Embodiment 1

[0057] This embodiment provides a method for generating a model. As shown in figure 1, the generation method includes:

[0058] Step S11, splitting the music segment into a plurality of sequential notes.

[0059] Step S12, processing the music segment to obtain a fundamental frequency matrix and a note density matrix corresponding to each note. Wherein, the note density matrix is used to represent the trigger and end time of the corresponding note. Specifically, the processing includes embedding processing.
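The text does not disclose how the embedding processing is realized. Below is a minimal PyTorch sketch, assuming each note is described by an integer pitch index plus quantized trigger/end time indices, with learned `nn.Embedding` tables producing the two matrices; all names, dimensions, and the sum-of-embeddings choice are illustrative assumptions, not the claimed implementation.

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary sizes and embedding width; the patent does not
# disclose these numbers, so they are placeholders for illustration.
NUM_PITCHES = 128      # e.g. the MIDI pitch range
NUM_TIME_BINS = 256    # quantized trigger/end time positions
EMBED_DIM = 64

pitch_embedding = nn.Embedding(NUM_PITCHES, EMBED_DIM)
density_embedding = nn.Embedding(NUM_TIME_BINS, EMBED_DIM)

def process_segment(pitch_ids, onset_ids, offset_ids):
    """Step S12, sketched: map per-note indices to a fundamental
    frequency matrix and a note density matrix via embedding."""
    # Fundamental frequency matrix: one embedding row per note,
    # in time-sequence order.
    f0_matrix = pitch_embedding(pitch_ids)            # (T, EMBED_DIM)
    # Note density matrix: encodes the trigger and end time of each
    # note; here the sum of the two time embeddings (an assumption).
    density_matrix = density_embedding(onset_ids) + density_embedding(offset_ids)
    return f0_matrix, density_matrix

# Hypothetical usage: two notes (MIDI 60 and 62) with quantized times.
f0, density = process_segment(
    torch.tensor([60, 62]), torch.tensor([0, 4]), torch.tensor([3, 7])
)
```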

[0060] Step S13, splicing the fundamental frequency matrix and the note density matrix to generate a first splicing matrix.

[0061] Step S14, inputting the first splicing matrix into a model comprising a plurality of recurrent neural network layers and a plurality of linear layers for training, so as to generate a note prediction model. Among them, the linear layers are used to extract the musical features corresponding to the notes, and the recurrent neu...
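Continuing the sketch above, the following equally hypothetical code illustrates the claimed shape of Steps S13 and S14: the first splicing matrix is the feature-axis concatenation of the two matrices, and it flows through stacked recurrent layers and linear layers. The choice of GRU, the layer sizes, and the next-note cross-entropy objective are assumptions; the text only states "a plurality of recurrent neural network layers and a plurality of linear layers", so an LSTM would fit the claim equally well.

```python
class NotePredictionModel(nn.Module):
    """Sketch of Step S14; all hyperparameters are illustrative."""
    def __init__(self, in_dim=2 * EMBED_DIM, hidden=256, num_notes=NUM_PITCHES):
        super().__init__()
        # "A plurality of recurrent neural network layers": two stacked GRUs.
        self.rnn = nn.GRU(in_dim, hidden, num_layers=2, batch_first=True)
        # "A plurality of linear layers" extracting musical features.
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_notes)
        )

    def forward(self, first_splicing_matrix):        # (B, T, in_dim)
        out, _ = self.rnn(first_splicing_matrix)     # (B, T, hidden)
        return self.head(out)                        # (B, T, num_notes)

model = NotePredictionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(f0_matrix, density_matrix, target_note_ids):
    """One assumed training step: splice (Step S13), predict, backprop."""
    # Step S13: splice along the feature axis to form the first splicing matrix.
    first_splicing = torch.cat([f0_matrix, density_matrix], dim=-1).unsqueeze(0)
    logits = model(first_splicing)
    # Assumed objective: predict each note's identity (not from the patent).
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), target_note_ids.reshape(-1)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```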

Embodiment 2

[0080] In this embodiment, a system for generating a model is provided. As shown in figure 3, it specifically includes: a splitting module 110, a processing module 120, a first stitching module 130 and a training module 140.

[0081] Wherein, the splitting module 110 is configured to split the music segment into a plurality of consecutive notes in time sequence.

[0082] The processing module 120 is configured to process the music segment to obtain a fundamental frequency matrix and a note density matrix corresponding to each note. Wherein, the note density matrix is used to represent the trigger and end time of the corresponding note. Specifically, the processing includes embedding processing.

[0083] The first stitching module 130 is configured to stitch the fundamental frequency matrix and the note density matrix to generate a first stitching matrix.

[0084] The training module 140 is configured to input the first concatenated matrix into a model compri...
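Paragraph [0084] is truncated here, but Embodiment 1 implies how the four modules compose. A rough sketch with invented class and parameter names:

```python
class ModelGenerationSystem:
    """Illustrative wiring of modules 110-140; every name here is
    hypothetical and only mirrors Steps S11-S14 of Embodiment 1."""
    def __init__(self, splitter, processor, stitcher, trainer):
        self.splitter = splitter      # splitting module 110
        self.processor = processor    # processing module 120
        self.stitcher = stitcher      # first stitching module 130
        self.trainer = trainer        # training module 140

    def run(self, music_segment):
        notes = self.splitter(music_segment)            # Step S11
        f0, density = self.processor(notes)             # Step S12
        first_splicing = self.stitcher(f0, density)     # Step S13
        return self.trainer(first_splicing)             # Step S14
```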

Embodiment 3

[0102] In this embodiment, a method for music synthesis is provided. As shown in Figure 4, the method includes:

[0103] Step S21, using the method for generating a model as in Embodiment 1 to train and generate a musical note prediction model.

[0104] Step S22, obtaining the number of preset notes contained in the music to be synthesized and the target fundamental frequency matrix and target note density matrix corresponding to each preset note; wherein, each preset note has a note position label.

[0105] Step S23, splicing the target fundamental frequency matrix and the target note density matrix to generate a target splicing matrix.

[0106] Step S24, inputting the target splicing matrices corresponding to all the preset notes into the note prediction model in sequence, according to the arrangement order of the preset notes in time sequence, so as to obtain the target note corresponding to each preset note.

[0107] Step S25, splicing all the target notes according to ...
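Paragraph [0107] is truncated, but Steps S23 to S25 suggest the following inference routine. This reuses the hypothetical model sketched in Embodiment 1 and assumes greedy argmax decoding; the text does not specify the decoding rule.

```python
@torch.no_grad()
def synthesize(model, target_f0, target_density):
    """Steps S23-S25, sketched. Each row of target_f0 / target_density
    corresponds to one preset note, already ordered by its note position
    label (Step S22)."""
    # Step S23: splice the target fundamental frequency matrix and the
    # target note density matrix to form the target splicing matrix.
    target_splicing = torch.cat([target_f0, target_density], dim=-1).unsqueeze(0)
    # Step S24: run the preset notes through the note prediction model
    # in their time-sequence order.
    logits = model(target_splicing)                     # (1, T, num_notes)
    target_note_ids = logits.argmax(dim=-1).squeeze(0)  # assumed greedy pick
    # Step S25: splice all target notes into the synthesized sequence.
    return target_note_ids.tolist()
```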



Abstract

The invention discloses a model generation method, a music synthesis method, a system, equipment and a medium. The model generation method comprises the steps of: splitting a music segment into a plurality of notes which are consecutive in time sequence; processing the music segment to obtain a fundamental frequency matrix and a note density matrix corresponding to each note; splicing the fundamental frequency matrix and the note density matrix to generate a first splicing matrix; and inputting the first splicing matrix into a model comprising a plurality of recurrent neural network layers and a plurality of linear layers for training, so as to generate a note prediction model. According to the model generation method provided by the invention, the fundamental frequency matrix and the note density matrix are used as feature data and input into a model constructed from recurrent neural network layers and linear layers for training; the model is simple in structure and low in training difficulty, so the training speed of the model is increased and the training time is shortened.

Description

Technical field

[0001] The invention relates to the field of computer music synthesis, and in particular to a model generation method, a music synthesis method, a system, equipment and a medium.

Background technique

[0002] With the continuous development of deep learning, it has been widely applied to images, text and language. In recent years, with the rapid rise of the live streaming industry, businesses in every sector are selling goods through live streams, and Internet companies are also vigorously developing the live streaming business. Internet companies need background music for hotel introductions, but existing background music is created by musicians; the number of musicians is limited, the number of works created each year is small and cannot meet market demand, and copyrights must be purchased at considerable cost.

[0003] Therefore, the use of artificial intelligence to generate music has received extensive attention. In...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10H1/00, G10H1/32, G06N3/04, G06N3/08
CPC: G10H1/0008, G10H1/32, G06N3/049, G06N3/08, G06N3/045
Inventors: 周明康, 罗超, 邹宇, 胡泓
Owner: 携程旅游信息技术(上海)有限公司