
Band conducting action generation method based on self-supervised cross-modal perceptual loss

A cross-modal action-generation technology, applied in neural learning methods, biological neural network models, speech analysis, etc.; it addresses problems such as unnatural image details and achieves the effect of accelerating convergence.

Active Publication Date: 2021-12-17
HOHAI UNIV

AI Technical Summary

Problems solved by technology

However, existing perceptual-loss networks have their own limitations. Some scholars have pointed out that using the traditional perceptual loss based on an ImageNet-pretrained VGGNet for image super-resolution leads to unnatural image details.



Examples


Embodiment Construction

[0046] Embodiments of the invention are described in detail below, and examples of them are illustrated in the accompanying drawings. The embodiments described below with reference to the figures are exemplary; they serve only to explain the present invention and should not be construed as limiting it.

[0047] In recent years, many scholars have recognized the great value of the multimodal data that exists widely on the Internet and have proposed many cross-modal self-supervised learning methods. Unlike single-modal self-supervised learning, the feature representations of the two modalities in cross-modal self-supervised learning guide each other's learning and can mine richer information from the data. Perceptual loss was proposed by Johnson et al. in 2016 as a loss function for generation tasks. Unlike a traditional Euclidean-distance loss computed in the sample space, the perceptual loss measures the distance between the generated sample and the real sample ...
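As a rough illustration of this idea (a minimal sketch, not the patented implementation), a perceptual loss can be computed by passing both the generated and the real samples through a fixed, pretrained feature network and comparing the resulting features; the `feature_extractor` module below is a hypothetical stand-in for whichever pretrained network supplies that feature space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerceptualLoss(nn.Module):
    """Distance measured in the feature space of a fixed, pretrained network
    (a minimal sketch of the idea introduced by Johnson et al., 2016)."""

    def __init__(self, feature_extractor: nn.Module):
        super().__init__()
        self.features = feature_extractor
        # The feature network stays frozen; only the generator is trained.
        for p in self.features.parameters():
            p.requires_grad = False

    def forward(self, generated: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
        feat_gen = self.features(generated)
        feat_real = self.features(real).detach()
        # Compare in feature space instead of the raw sample space.
        return F.mse_loss(feat_gen, feat_real)
```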



Abstract

The invention relates to the technical field of artificial intelligence and discloses a band conducting action generation method based on self-supervised cross-modal perceptual loss. The method comprises the following steps: firstly, carrying out self-supervised audio-action synchronization learning, automatically sampling positive and negative sample pairs to train the parameters of a two-branch network model; secondly, using the two trained branches to extract semantic music control signals and to calculate the perceptual loss, respectively, then using a discriminator to calculate the adversarial loss, determining the optimal weight ratio of the perceptual loss to the adversarial loss according to the standard deviation of the output actions, and training the model; and finally, inputting a test audio into the model, generating a conducting action sequence synchronized with the music, and visualizing it. The significance of the method is that a cross-modal self-supervised learning task is used as the pre-training task of the perceptual-loss network, which avoids the over-smoothing problem of traditional regression losses, so that natural, attractive and diversified conducting actions highly synchronized with the music are generated.
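To make the three stages of the abstract concrete, the sketch below shows one plausible arrangement in PyTorch. The encoder architectures, the pairwise synchronization loss, and the weighting constant `lambda_adv` are illustrative assumptions rather than the configuration claimed in the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    """Branch 1: maps an audio feature sequence (e.g. MFCC frames) to embeddings."""
    def __init__(self, audio_dim=13, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(audio_dim, hidden, batch_first=True)
    def forward(self, audio):                  # (B, T, audio_dim)
        out, _ = self.rnn(audio)
        return out                             # (B, T, hidden)

class MotionEncoder(nn.Module):
    """Branch 2: maps a conducting-pose sequence to embeddings."""
    def __init__(self, pose_dim=26, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden, batch_first=True)
    def forward(self, motion):                 # (B, T, pose_dim)
        out, _ = self.rnn(motion)
        return out                             # (B, T, hidden)

def sync_loss(audio_emb, motion_emb, labels):
    """Stage 1: self-supervised synchronization. `labels` is 1 for aligned
    (positive) audio/motion pairs and 0 for shifted (negative) pairs."""
    sim = F.cosine_similarity(audio_emb.mean(1), motion_emb.mean(1))
    return F.binary_cross_entropy_with_logits(sim, labels.float())

def generator_losses(generated, real, audio, audio_enc, motion_enc, disc,
                     lambda_adv=0.1):
    """Stage 2: perceptual loss in the pretrained motion branch's feature
    space plus an adversarial loss from a discriminator `disc`."""
    perc = F.mse_loss(motion_enc(generated), motion_enc(real).detach())
    adv = -disc(generated, audio_enc(audio)).mean()   # simple sketch; real adversarial losses vary
    return perc + lambda_adv * adv
```

At test time only the audio branch and the generator would be needed: encode the music, decode a pose sequence, and visualize it.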

Description

Technical field

[0001] The invention relates to a method for generating band conducting actions based on self-supervised cross-modal perceptual loss, and in particular to a method that uses music as a conditional control signal to generate rhythm-synchronized and semantically related conducting actions; it belongs to the field of conditional human action generation.

Background technique

[0002] The conductor is the soul of the symphony orchestra. From the medieval European church choir to the modern music of the 21st century, conducting technique and art have developed continuously and have become a discipline with rich content. The conductor's body language is complex and changeable; it must convey information such as tempo, strength, emotion, and playing method in real time while the orchestra is performing, all while maintaining a certain style and aesthetic feeling. In recent years, with the development of de...


Application Information

IPC(8): G06F16/64; G06F16/74; G06N3/04; G06N3/08; G10L25/24; G10L25/30
CPC: G06F16/64; G06F16/74; G06N3/08; G10L25/24; G10L25/30; G06N3/048; G06N3/045
Inventor: 刘凡, 陈德龙, 潘艳玲, 周睿志, 许峰
Owner: HOHAI UNIV