
Voice adaptive completion system based on multi-modal knowledge graph

A knowledge-graph and multi-modal technology applied in the field of speech adaptive completion systems. It can solve problems such as cognitive limitations, low interpretability, and packet reordering or loss, and achieves high accuracy, high interpretability, and semantically appropriate results.

Pending Publication Date: 2022-01-14
SHANGHAI JIAO TONG UNIV
Cites: 0 | Cited by: 9

AI Technical Summary

Problems solved by technology

[0002] Real-time audio and video technology is widely used in real-time video chat, video conferencing, distance education, smart homes, and similar applications. In practice, however, data packets arriving out of order or being lost during network transmission cause call jitter and a significant drop in call quality. The receiving end therefore reconstructs audio and video data through a packet-loss repair system to fill the audio gaps caused by packet loss or network delay.
Audio data completion still faces the following difficulties in mobile audio and video communication. First, audio generation is dominated by deep learning methods, and the opacity of their reasoning process gives such methods low interpretability, making targeted improvement difficult. Second, current technology mainly relies on single-modal data as the basis for model reasoning and ignores the mobile terminal's ability to perceive multiple modalities, so the system's perception of data and information is incomplete and it suffers from cognitive limitations.




Embodiment Construction

[0040] As shown in Figure 1, this embodiment relates to a speech adaptive completion system based on a multi-modal knowledge graph, comprising: a speech preprocessing module, a speech analysis module, a video preprocessing module, a spatiotemporal image analysis module, a multi-modal data aggregation module, a multi-modal information fusion module, a semantic text reasoning module, and a speech completion module. The speech preprocessing module collects and preprocesses speech packets at the receiving end: taking low-quality real-time audio with packet loss as input, it performs preliminary processing of the speech-modality data through speech packet detection, speech framing, audio windowing, and endpoint detection, and outputs the resulting preprocessed waveform to the speech analysis module. The video preprocessing module collects and preprocesses video packets at the receiving end: taking continuous video images as input, it performs preliminary processing on the video-modality...
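
The following is a minimal sketch of the speech preprocessing stage described above (framing, windowing, and endpoint detection), assuming 16 kHz PCM input. The frame length, hop size, energy threshold, and the function name preprocess_speech are illustrative assumptions; the patent does not disclose concrete parameters or an implementation.

import numpy as np

def preprocess_speech(pcm, sr=16000, frame_ms=25.0, hop_ms=10.0, energy_ratio=0.05):
    """Split audio into windowed frames and flag frames likely to contain speech."""
    frame_len = int(sr * frame_ms / 1000)          # samples per frame
    hop_len = int(sr * hop_ms / 1000)              # samples between frame starts
    window = np.hamming(frame_len)

    n_frames = 1 + (len(pcm) - frame_len) // hop_len
    frames = np.stack([pcm[i * hop_len:i * hop_len + frame_len] * window
                       for i in range(n_frames)])

    # Simple energy-based endpoint detection: a frame counts as speech if its
    # short-time energy exceeds a fraction of the loudest frame's energy.
    energy = (frames ** 2).sum(axis=1)
    is_speech = energy > energy_ratio * energy.max()
    return frames, is_speech

# Toy usage: 0.3 s of silence followed by a 220 Hz tone.
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
audio = np.sin(2 * np.pi * 220 * t) * (t > 0.3)
frames, is_speech = preprocess_speech(audio)
print(frames.shape, int(is_speech.sum()), "frames flagged as speech")

In the full system, the windowed frames and speech flags produced at this stage would be passed on to the speech analysis module, as the embodiment describes.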



Abstract

The invention discloses a voice adaptive completion system based on a multi-modal knowledge graph. The system comprises a data receiver, a data analyzer, and a data inference device. The data receiver preprocesses the received audio and video data and outputs them to the data analyzer. The data analyzer analyzes the voice and the images to extract waveform time-sequence features and lip-track features, and obtains a phoneme sequence through multi-modal joint representation. The data inference device performs domain session modeling and candidate text prediction from the historical text, carries out text inference in combination with the phoneme sequence to obtain semantically meaningful sentences, and synthesizes the completed voice according to the waveform features. Through a phoneme reasoning model, phoneme recognition is carried out when the voice modality is lost; domain session modeling is performed on the historical text generated from the existing voice according to the semantic relationships between entities in the multi-modal knowledge graph, so that text with semantics is generated by reasoning; and the voice is synthesized in combination with the waveform characteristics of the user's voice to form the completed audio.
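
The phoneme-recognition step can be pictured with a toy model that fuses the two feature streams described in the abstract. The sketch below is a hypothetical illustration, assuming the audio waveform features and lip-track features are already aligned to a common frame rate; the encoder sizes, feature dimensions, phoneme inventory size, and the class name PhonemeFusionModel are all assumptions, not the network disclosed in the patent.

import torch
import torch.nn as nn

class PhonemeFusionModel(nn.Module):
    """Toy multi-modal joint representation: fuse audio and lip features per frame."""
    def __init__(self, audio_dim=40, lip_dim=20, hidden=128, n_phonemes=60):
        super().__init__()
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True)
        self.lip_enc = nn.GRU(lip_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_phonemes)

    def forward(self, audio_feats, lip_feats):
        # audio_feats: (batch, T, audio_dim), e.g. filterbank features
        # lip_feats:   (batch, T, lip_dim),   e.g. lip landmark trajectories
        a, _ = self.audio_enc(audio_feats)
        v, _ = self.lip_enc(lip_feats)
        fused = torch.cat([a, v], dim=-1)      # per-frame joint representation
        return self.classifier(fused)           # per-frame phoneme logits

model = PhonemeFusionModel()
audio = torch.randn(2, 100, 40)   # 2 utterances, 100 frames each
lips = torch.randn(2, 100, 20)
logits = model(audio, lips)
print(logits.shape)                # torch.Size([2, 100, 60])

In the described system, such per-frame phoneme predictions would feed the text reasoning step, which combines them with the domain session model built from the historical text before the completed voice is synthesized.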

Description

Technical field
[0001] The present invention relates to a technology in the field of speech processing, in particular to a speech adaptive completion system for mobile terminals based on a multi-modal knowledge graph.
Background technique
[0002] Real-time audio and video technology is widely used in real-time video chat, video conferencing, distance education, smart homes, and similar applications. In practice, however, data packets arriving out of order or being lost during network transmission cause call jitter and a significant drop in call quality. The receiving end therefore reconstructs audio and video data through a packet-loss repair system to fill the audio gaps caused by packet loss or network delay. Audio data completion still faces the following difficulties in mobile audio and video communication: first, audio generation is dominated by deep learning methods, and the opacity of their reasoning process gives such methods low interpretability, making targeted improvement difficult; second, current technology mainly relies on single-modal data for model reasoning and ignores the mobile terminal's ability to perceive multiple modalities, so the system's perception of data and information is incomplete and it suffers from cognitive limitations.

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10L13/02; G10L13/06; G10L13/08; G06F16/36; G06F40/35; G06V40/16; G06V10/80; G06V10/82; G06K9/62; G06N3/04; G06N3/08; G06N5/04
CPC: G10L13/02; G10L13/06; G10L13/08; G06F16/367; G06N5/04; G06F40/35; G06N3/08; G06N3/045; G06N3/044; G06F18/253
Inventors: 蔡鸿明, 李琥, 于晗, 姜丽红
Owner: SHANGHAI JIAO TONG UNIV