Method for synthesizing video MV by music based on self-supervised learning

A self-supervised learning and music technology, applied in the field of media asset management, which addresses the problems that music alone cannot provide users with picture information and that manual MV production is time-consuming and labor-intensive, achieving an intuitive visual impact and a vivid auditory experience

Pending Publication Date: 2020-06-26
TENCENT TECH (SHENZHEN) CO LTD

AI Technical Summary

Problems solved by technology

[0003] Both music and short video MVs have entertainment value, but because music is purely auditory, it cannot by itself provide users with intuitive, rich picture information, and the traditional method of manually producing MVs is time-consuming and labor-intensive.



Embodiment Construction

[0019] The present invention will be further described below.

[0020] The technical scheme adopted by this embodiment comprises the following steps:

[0021] 1. Separate the audio stream and the video stream from the existing material library (see the sketches after these steps);

[0022] 2. Use deep learning technology to extract character, action, expression, and scene information from the videos based on video understanding; the specific method is to use a deep 3D convolutional neural network to extract the spatiotemporal information of the video for scene recognition, motion capture, and emotion analysis, extracting the video's scene information, object information, character expressions, and motion information (sketched after these steps);

[0023] 3. Automatically classify the music according to its rhythm and voiceprint information; the specific method is to use a GRU (Gated Recurrent Unit) network (sketched below) to identify the melody rhythm, emotion, genre, and voiceprint characteristics of the music and classify them according to different...
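The patent does not name a specific tool for step 1; as one possible realization, the following minimal sketch demultiplexes a material-library file into separate audio and video streams with ffmpeg invoked from Python. The file paths are illustrative assumptions.

```python
# Sketch of step 1 (illustrative, not the patent's code): split a source file
# into its audio and video streams without re-encoding, using ffmpeg.
import subprocess
from pathlib import Path

def split_streams(src: str, out_dir: str = "separated") -> tuple[Path, Path]:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    audio = out / (Path(src).stem + "_audio.aac")
    video = out / (Path(src).stem + "_video.mp4")
    # -vn drops the video stream, -an drops the audio stream; "copy" avoids re-encoding.
    subprocess.run(["ffmpeg", "-y", "-i", src, "-vn", "-acodec", "copy", str(audio)], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", src, "-an", "-vcodec", "copy", str(video)], check=True)
    return audio, video

if __name__ == "__main__":
    print(split_streams("material_library/clip_0001.mp4"))  # hypothetical path
```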
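Step 2 calls for a deep 3D convolutional neural network over the video's spatiotemporal information. A minimal sketch of that idea, assuming PyTorch and the pretrained r3d_18 model from torchvision as a stand-in for whatever network the method actually uses; the clip shape and the 512-dimensional embedding are illustrative.

```python
# Sketch of step 2: spatiotemporal feature extraction with a pretrained 3D CNN.
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

def build_video_encoder() -> nn.Module:
    """3D ResNet with the classification head removed, leaving a 512-d embedding."""
    model = r3d_18(weights=R3D_18_Weights.DEFAULT)
    model.fc = nn.Identity()
    return model.eval()

@torch.no_grad()
def extract_clip_features(encoder: nn.Module, clips: torch.Tensor) -> torch.Tensor:
    """clips: (batch, 3, frames, height, width), e.g. (8, 3, 16, 112, 112)."""
    return encoder(clips)  # (batch, 512) features for scene/action/emotion heads

if __name__ == "__main__":
    encoder = build_video_encoder()
    dummy_clips = torch.randn(2, 3, 16, 112, 112)            # two random 16-frame clips
    print(extract_clip_features(encoder, dummy_clips).shape)  # torch.Size([2, 512])
```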
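Step 3 relies on a GRU network over music features. A minimal sketch of a GRU classifier over a sequence of audio frames (e.g. mel-spectrogram columns), assuming PyTorch; the feature dimension, number of layers, and number of rhythm/genre classes are illustrative assumptions rather than the patent's configuration.

```python
# Sketch of step 3: GRU-based classification of music by rhythm/emotion/genre.
import torch
import torch.nn as nn

class MusicGRUClassifier(nn.Module):
    def __init__(self, n_features: int = 64, hidden: int = 128, n_classes: int = 8):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, time, n_features) sequence of audio frames."""
        _, h_n = self.gru(x)        # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])   # class logits from the last layer's final state

if __name__ == "__main__":
    frames = torch.randn(4, 300, 64)              # 4 tracks, 300 frames, 64 mel bins
    print(MusicGRUClassifier()(frames).shape)     # torch.Size([4, 8])
```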



Abstract

The invention discloses a method for synthesizing a video MV from music based on self-supervised learning, comprising the following steps: 1, separating an audio stream and a video stream from an existing material library; 2, extracting people, actions, expressions, and scene information from the video by using deep learning technology based on video understanding; 3, automatically classifying the music according to its rhythm and voiceprint information; 4, separating voices, musical instruments, accompaniments, and lyrics from the music; 5, synchronizing the related audio and video feature information by the timestamps in the video file; 6, learning the corresponding video information according to the music features to form a mapping relationship between music and videos; 7, inputting any piece of music and synthesizing the corresponding video MV. According to the invention, suitable video clips can be automatically matched and selected from massive existing video data, and the music is mapped to generate a corresponding short video MV, providing the user with a more intuitive visual impact and a more vivid auditory experience.
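Steps 6 and 7 of the abstract amount to learning a mapping from music features to video features and then retrieving the best-matching clips for a new piece of music. The sketch below shows one way such a mapping could be used at synthesis time, assuming NumPy, precomputed embeddings, and a learned linear projection W; these specifics are illustrative assumptions, not the patent's implementation. Presumably, such a mapping would be learned from the timestamp-aligned audio/video feature pairs produced in step 5.

```python
# Sketch of steps 6-7: project music features into the video-feature space and
# retrieve the closest clips to assemble an MV. Purely illustrative; the
# projection matrix W stands in for whatever mapping the method learns.
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-8)

def rank_clips(music_feat: np.ndarray, clip_feats: np.ndarray, W: np.ndarray, top_k: int = 5):
    """music_feat: (d_m,), clip_feats: (n_clips, d_v), W: (d_m, d_v) learned projection."""
    query = l2_normalize(music_feat @ W)       # project music into the video space
    sims = l2_normalize(clip_feats) @ query    # cosine similarity to every clip
    return np.argsort(-sims)[:top_k]           # indices of the best-matching clips

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    music = rng.normal(size=128)               # e.g. a GRU music embedding
    clips = rng.normal(size=(1000, 512))       # e.g. 3D-CNN clip embeddings
    W = rng.normal(size=(128, 512))            # placeholder for a learned mapping
    print(rank_clips(music, clips, W))
```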

Description

technical field

[0001] The invention relates to the technical field of media asset management, in particular to a method for synthesizing a video MV from music based on self-supervised learning.

Background technique

[0002] Driven by mobile Internet, big data, and AI technology, short videos are breaking the traditional thinking of the content industry with their own advantages. A short video platform can achieve precise matching and intelligent diversion based on users' interests and preferences, and distribute short video content through multiple channels to accurately reach multi-level users, enabling users to understand video topics at low cost, resonate with them, and generate more endorsements and retweets. With the development of 5G technology, the operating cost of the platform is reduced, the speed of the mobile terminal is greatly improved, short video traffic is booming, and the outstanding marketing...


Application Information

IPC(8): G06K9/00; G06N3/04; G10L15/08; G11B27/02; H04N21/234; H04N21/439; H04N21/44
CPC: G10L15/08; G11B27/02; H04N21/23418; H04N21/4394; H04N21/44; G06V20/40; G06N3/045
Inventor 康洪文
Owner TENCENT TECH (SHENZHEN) CO LTD