
288 results about "Lyrics" patented technology

Lyrics are words that make up a song usually consisting of verses and choruses. The writer of lyrics is a lyricist. The words to an extended musical composition such as an opera are, however, usually known as a "libretto" and their writer, as a "librettist". The meaning of lyrics can either be explicit or implicit. Some lyrics are abstract, almost unintelligible, and, in such cases, their explication emphasizes form, articulation, meter, and symmetry of expression. Rappers can also create lyrics (often with a variation of rhyming words) that are meant to be spoken rhythmically rather than sung.

Chinese song emotion classification method based on multi-modal fusion

The invention discloses a Chinese song emotion classification method based on multi-modal fusion. The method comprises the steps of: first obtaining a spectrogram from the audio signal and extracting audio low-level features, then carrying out audio feature learning with an LLD-CRNN model to obtain the audio features of a Chinese song; for lyrics and comment information, first constructing a music emotion dictionary, then constructing emotion vectors based on emotion intensity and part-of-speech on top of that dictionary, so that the text features of the Chinese song are obtained; and finally performing multi-modal fusion with a decision fusion method and a feature fusion method to obtain the emotion categories of Chinese songs. The method is built on an LLD-CRNN music emotion classification model that uses the spectrogram and the audio low-level features as its input sequence. The LLDs are concentrated in either the time domain or the frequency domain, whereas the spectrogram is a two-dimensional representation of the audio signal across time and frequency with little loss of information, so for audio signals whose time and frequency characteristics vary jointly, the LLDs and the spectrogram provide complementary information.
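The decision-fusion step described above can be sketched as a weighted combination of per-modality class probabilities. The weight value and the three-class example below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def decision_fusion(audio_probs, text_probs, audio_weight=0.6):
    """Late (decision-level) fusion: weighted average of the emotion-class
    probabilities from the audio model and the text model, followed by argmax.
    `audio_weight` is a hypothetical tuning parameter."""
    audio_probs = np.asarray(audio_probs, dtype=float)
    text_probs = np.asarray(text_probs, dtype=float)
    fused = audio_weight * audio_probs + (1.0 - audio_weight) * text_probs
    return int(np.argmax(fused)), fused

# e.g. three emotion classes; each model outputs a probability per class
label, fused = decision_fusion([0.1, 0.7, 0.2], [0.2, 0.5, 0.3])
```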
Owner:BEIJING UNIV OF TECH

Song synthesis method, device and equipment and storage medium

The invention relates to artificial intelligence and discloses a song synthesis method comprising the steps of: obtaining lyric recitation audio and music score information; performing duration labeling on the lyric recitation audio through a preset voice recognition model and a lyric pinyin text to obtain the recitation duration; analyzing initial acoustic parameters from the lyric recitation audio through a preset vocoder; extracting the singing duration from the lyric pinyin text according to a preset initial-consonant variable-speed dictionary and the rhythm and beat information of the score; performing speed-change processing on the initial acoustic parameters according to a preset speed-change algorithm, the recitation duration and the singing duration; performing formant enhancement on the speed-changed spectrum envelope to obtain an enhanced spectrum envelope; performing correction based on the pitch information, the singing duration and the speed-changed fundamental frequency to obtain a corrected fundamental frequency; and performing song synthesis on the processed acoustic parameters through the preset vocoder. The invention also relates to blockchain: the synthesized song is stored on a blockchain.
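The speed-change step — stretching acoustic parameters from the recitation duration to the singing duration — can be sketched as resampling a parameter contour to a new frame count. Linear interpolation is an assumed stand-in here; the patent's actual speed-change algorithm is not specified in this abstract:

```python
import numpy as np

def stretch_contour(contour, target_frames):
    """Resample an acoustic-parameter contour (e.g. an F0 track in Hz) to a
    new frame count by linear interpolation, as a simple stand-in for the
    variable-speed processing step."""
    contour = np.asarray(contour, dtype=float)
    src = np.linspace(0.0, 1.0, num=len(contour))   # original time axis
    dst = np.linspace(0.0, 1.0, num=target_frames)  # stretched time axis
    return np.interp(dst, src, contour)

# stretch a 3-frame recitation F0 track to a 5-frame singing duration
stretched = stretch_contour([220.0, 230.0, 240.0], target_frames=5)
```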
Owner:PING AN TECH (SHENZHEN) CO LTD

Music copyright recognition authentication method and authentication system based on block chain

The invention relates to a music copyright identification and authentication method and system based on a blockchain. The authentication method comprises the following steps: obtaining musical works from one or more network nodes of the blockchain network; distinguishing musical works by their song notes and lyric characters; drawing the song notes into a note curve, extracting characteristic parameters from the note curve, and calculating a hash value of the note curve; translating the lyric characters into a character code, extracting characteristic parameters from the character code, and calculating a character-code hash value; broadcasting among the other network nodes an export request for exporting the note-curve hash value or the character-code hash value to a specified account on the blockchain network; and, in response to receiving acknowledgements of the export request from the other network nodes, writing the note-curve or character-code hash value to the designated blockchain account. By broadcasting and submitting the hash values of a musical work to the network nodes of a blockchain network, the invention protects the copyright of the musical work from the time of submission.
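The hashing step can be sketched as computing a digest over a canonical encoding of the extracted characteristic parameters. The feature names below are hypothetical; the patent does not specify the parameter set or the hash function, so SHA-256 is an assumption:

```python
import hashlib
import json

def feature_hash(features):
    """SHA-256 over a canonical JSON encoding of extracted features, so the
    same features always yield the same hash. The note-curve / character-code
    feature extraction itself is outside this sketch."""
    canonical = json.dumps(features, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# hypothetical characteristic parameters for the two fingerprints
note_curve_hash = feature_hash({"peaks": [3, 7, 12], "mean_interval": 2.5})
lyric_hash = feature_hash({"codes": [27468, 35789]})
```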
Owner:北京创声者文化传媒有限公司

Interactive lyric generation method and system based on neural network

The invention discloses an interactive lyric generation method and system based on a neural network; the system applies the method, which comprises the steps of: pre-training a lyric generation model by inputting preprocessed lyric training data into a basic training model; acquiring a lyric label set by the user together with a user-provided first line of lyrics; encoding the lyric data; completing the first line by inputting the data codes of the user-provided lyrics into the lyric generation model to automatically complete the lyric sentence; inputting the completed lyric sentence into the lyric generation model to generate candidate sentences for the next line; selecting a candidate sentence as the second line of lyrics and merging it with the first line to serve as the prediction input for the next line; inputting the merged lyric sentences into the lyric generation model to generate candidate sentences for the following line; and repeating the above steps until one verse or the whole lyric is completed. Because several candidate sentences are generated for the user to choose from at each step, the interactivity of the lyric generation process is improved.
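The generate-select-merge loop above can be sketched with two hypothetical callables standing in for the trained model and the user's choice in the UI:

```python
def generate_lyrics(model, first_line, pick, max_lines=8):
    """Iteratively extend lyrics: `model` proposes candidate next lines from
    the lines so far, and `pick` (the user) selects one, which is merged into
    the context for the next prediction. Both callables are hypothetical
    stand-ins for the trained network and the interactive selection step."""
    lines = [first_line]
    while len(lines) < max_lines:
        candidates = model(lines)       # candidate sentences for the next line
        lines.append(pick(candidates))  # the user's choice joins the context
    return lines

# toy stand-ins: an echo-based "model" and a picker taking the first candidate
toy_model = lambda ctx: [ctx[-1] + " again", ctx[-1] + " once more"]
song = generate_lyrics(toy_model, "the night is young", lambda c: c[0], max_lines=3)
```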
Owner:成都潜在人工智能科技有限公司

Singing scoring method based on lyric and voice alignment

Status: Inactive · Publication: CN110660383A · Tags: accurate score, reduced feature-matching similarity, speech recognition, noise removal
The invention discloses a singing scoring method based on lyric-and-voice alignment, comprising the following steps in sequence: song recording; separation of the voice from the accompaniment and noise removal; extraction of the fundamental frequency and amplitude; alignment of the lyrics with the voice sentence by sentence; segmentation of the fundamental frequency of each character in the aligned voice; calculation of a fundamental-frequency similarity score; calculation of a rhythm score from the duration of each sentence of the user's voice and the standard voice and the start and end time of each character; normalization of the amplitudes of the user's voice and the standard voice; calculation of an amplitude similarity score; and multiplication of the fundamental-frequency score, the rhythm score and the amplitude score by weight coefficients, summing them into a comprehensive score for the song. The method reduces the influence of accompaniment and noise on voice evaluation; it makes reasonable use of the label information in the lyrics, so evaluation of the user's pitch and rhythm is more accurate; and it evaluates the user's singing in multiple respects, making the scoring results more objective and comprehensive.
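The final weighted combination can be sketched as follows; the weight values are illustrative assumptions, not the patent's coefficients:

```python
def composite_score(pitch, rhythm, amplitude, weights=(0.5, 0.3, 0.2)):
    """Weighted sum of the three sub-scores (fundamental-frequency similarity,
    rhythm, amplitude similarity). The default weights are hypothetical."""
    w_pitch, w_rhythm, w_amp = weights
    return w_pitch * pitch + w_rhythm * rhythm + w_amp * amplitude

# e.g. sub-scores on a 0-100 scale
total = composite_score(pitch=90, rhythm=80, amplitude=70)
```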
Owner:SOUTH CHINA UNIV OF TECH

Multi-singer singing synthesis method and device

The invention discloses a multi-singer singing synthesis method belonging to the technical field of voice synthesis. The synthesis method comprises two stages, model training and model inference, and the model inference part is finally deployed in the device. Model training comprises the steps of obtaining the singing data of multiple singers and extracting musical-sentence features, phoneme pronunciation durations and audio spectrum features, where the musical-sentence features and the phoneme pronunciation durations are arranged according to the phoneme sequence expanded from the lyrics, their lengths and phoneme counts are kept consistent, and the total frame count of the pronunciation durations matches the total frame count of the corresponding spectrum; generating a singer vector for each singer's database; and jointly training the model with the musical-sentence features and singer vectors as inputs and the spectrum features and pronunciation durations as fitting targets. The model adopts generative adversarial network techniques to distinguish the timbres and pronunciation characteristics of different singers, and keeps the quality of the synthesized song close to the original voice.
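Feeding the singer vector alongside the musical-sentence features can be sketched as broadcasting the vector across every frame and concatenating. The shapes and the concatenation scheme are illustrative assumptions; the patent does not specify how the conditioning is wired:

```python
import numpy as np

def condition_on_singer(sentence_feats, singer_vec):
    """Tile a learned singer embedding across every frame of the
    musical-sentence features and concatenate along the feature axis,
    producing the model's conditioned input (shapes are illustrative)."""
    frames = sentence_feats.shape[0]
    tiled = np.tile(singer_vec, (frames, 1))  # (frames, singer_dim)
    return np.concatenate([sentence_feats, tiled], axis=1)

feats = np.zeros((4, 6))   # 4 frames, 6 hypothetical sentence features
singer = np.ones(3)        # hypothetical 3-dim singer embedding
conditioned = condition_on_singer(feats, singer)
```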
Owner:SICHUAN CHANGHONG ELECTRIC CO LTD

Human voice melody extraction method and system based on numbered musical notation recognition and fundamental frequency extraction

The invention discloses a human voice melody extraction method and system based on numbered musical notation recognition and fundamental frequency extraction; the system applies the method, which comprises the steps of: binarizing the numbered-musical-notation file corresponding to the song to be processed, processing the song's original audio file into downsampled single-track audio, and separating the human voice waveform from the single-track audio; identifying the note and lyric pairs in the numbered musical notation to obtain a list of lyrics and notes; retrieving from the list of lyrics and notes, according to the libretto file, a matching-result sequence of libretto and notes; selecting a note, calculating its fundamental frequency from the separated human voice waveform, calculating the frequency of every note from that fundamental frequency and the relative relation of the notes, and converting each note's frequency into a MIDI pitch; and translating the matching-result sequence of lyric lines and notes to obtain a matching-result sequence whose pitches match the MIDI pitches of the notes. A human voice melody whose pitch matches the melody can thus be extracted.
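The frequency-to-MIDI conversion in the step above follows the standard equal-temperament mapping (A4 = 440 Hz = MIDI note 69); rounding to the nearest semitone is an assumption, as the abstract does not state how fractional pitches are handled:

```python
import math

def freq_to_midi(freq_hz):
    """Convert a frequency in Hz to the nearest MIDI note number using the
    standard mapping: midi = 69 + 12 * log2(f / 440)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

freq_to_midi(440.0)   # A4
freq_to_midi(261.63)  # middle C
```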
Owner:成都潜在人工智能科技有限公司