
Multi-modal video Chinese subtitle recognition method based on dense connection convolutional network

A recognition method and multi-modal technology, applied in character and pattern recognition, biological neural network models, neural learning methods, and related fields, that addresses problems such as the loss of sequence information during feature extraction, the inability to align audio with images, and the ineffective fusion of multi-modal data.

Inactive Publication Date: 2021-08-06
SHANGHAI MARITIME UNIVERSITY
Cites: 0 | Cited by: 1

AI Technical Summary

Problems solved by technology

Solves the problems of current text-line detection networks used for video subtitles: audio and images cannot be aligned, sequence information is lost during feature extraction, and multi-modal data cannot be effectively fused.



Embodiment Construction

[0034]The technical solutions of the present invention will be clearly and completely described below in conjunction with the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative efforts fall within the protection scope of the present invention.

[0035] The overall implementation process of the multimodal video Chinese subtitle text recognition method provided by the present invention is shown in Figure 1 and described in detail below:

[0036] As shown in Figure 1, the model is divided into three parts: a feature compression and extraction part, a modal data fusion part, and a multimodal feature classification part. The feature compression and extraction part is further divided into an image feature compression part and an audio feature extraction part. ...
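A minimal PyTorch sketch of this three-part layout follows. It is an illustration under stated assumptions, not the patent's exact network: the layer choices and feature sizes are invented for the example, the fusion step is a naive resample-and-concatenate, and the dense-connection and residual-attention details of the real model are omitted.

```python
import torch
import torch.nn as nn

class ImageBranch(nn.Module):
    """Image feature compression: conv stack over a subtitle text-line image."""
    def __init__(self, dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),  # collapse height; width becomes the sequence axis
        )

    def forward(self, x):                     # x: (B, 3, H, W)
        return self.conv(x).squeeze(2).transpose(1, 2)  # (B, W', dim)

class AudioBranch(nn.Module):
    """Audio feature extraction: recurrent encoder over acoustic frames."""
    def __init__(self, in_dim=40, dim=256):
        super().__init__()
        self.rnn = nn.GRU(in_dim, dim // 2, batch_first=True, bidirectional=True)

    def forward(self, a):                     # a: (B, T, in_dim)
        h, _ = self.rnn(a)
        return h                              # (B, T, dim)

class SubtitleRecognizer(nn.Module):
    """Fuses both branches and emits per-step class scores for the classifier part."""
    def __init__(self, num_classes=5000, dim=256):
        super().__init__()
        self.img = ImageBranch(dim)
        self.aud = AudioBranch(dim=dim)
        self.cls = nn.Linear(2 * dim, num_classes)

    def forward(self, x, a):
        fi, fa = self.img(x), self.aud(a)
        # naive fusion: resample the audio sequence to the image sequence length
        fa = nn.functional.interpolate(fa.transpose(1, 2), size=fi.size(1),
                                       mode="linear", align_corners=False).transpose(1, 2)
        return self.cls(torch.cat([fi, fa], dim=-1))  # (B, W', num_classes)
```

Per-step scores of shape (B, W', num_classes) are then in the form expected by a CTC-style sequence classifier, as the abstract describes.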



Abstract

The invention provides a multi-modal video Chinese subtitle recognition method based on a densely connected convolutional network. The method combines several techniques, including multi-modal data fusion, a recurrent autoencoder, and connectionist temporal classification (CTC), and introduces a forward-backward bidirectional residual attention mechanism on top of DenseNet. The method preserves the sequence information of the audio and the text images in a video, effectively fuses audio and text-image data whose dimensions are completely mismatched, and reduces feature loss. By fusing the multi-modal data, it provides more comprehensive and detailed feature information for text-line classification and improves text recognition accuracy. Building on the dense convolutional network, the method markedly reduces model parameters and training time at the cost of only a slight drop in recognition accuracy, and offers greater flexibility and adaptability.
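For concreteness, here is a minimal sketch of training with connectionist temporal classification (CTC), which lets a model learn from whole text lines without per-frame character alignment. The vocabulary size, sequence lengths, and random tensors are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

vocab_size = 5000                          # assumed character set size; index 0 = CTC blank
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

B, T, C = 4, 75, vocab_size                # batch, fused time steps, classes
logits = torch.randn(B, T, C, requires_grad=True)   # stand-in for the model's output
log_probs = logits.log_softmax(2).transpose(0, 1)   # CTC expects (T, B, C)

targets = torch.randint(1, C, (B, 20))              # dummy label sequences (no blanks)
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 20, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # CTC marginalizes over all alignments of labels to time steps
```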

Description

Technical field

[0001] The present invention relates to text detection technology, multi-modal data fusion technology, and deep learning technology, and in particular to a text-line detection method for multi-modal video Chinese subtitles.

Background technique

[0002] In today's society, with the rise of short videos on social networks, the scale of video resources has grown enormously, even exceeding the scale of image data. As data that combines the audio and image modalities, video contains far more information than independent audio or image data. However, in the face of massive video data, making effective use of these two modalities becomes more difficult. Video subtitle recognition differs from single-modal text recognition: although both the audio sequence and the text sequence carry the information of a sentence, the audio is expressed as a time sequence while the text is expressed as a spatial sequence, so the two modalities' representations differ in dimension ...
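One plausible way to reconcile a time-sequence modality with a spatial-sequence modality of mismatched length, in the spirit of the recurrent autoencoder named in the abstract, is to encode each variable-length sequence into a fixed-size latent vector before fusion. The sketch below illustrates only that idea; the GRU choice and the feature and latent sizes are assumptions, not the patent's design.

```python
import torch
import torch.nn as nn

class RecurrentAutoencoder(nn.Module):
    """Compresses a variable-length sequence to a fixed-size latent and reconstructs it."""
    def __init__(self, feat_dim=40, latent_dim=256):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, latent_dim, batch_first=True)
        self.decoder = nn.GRU(latent_dim, feat_dim, batch_first=True)

    def forward(self, x):                  # x: (B, T, feat_dim), T may vary per clip
        _, h = self.encoder(x)             # h: (1, B, latent_dim), independent of T
        z = h.transpose(0, 1)              # (B, 1, latent_dim)
        recon, _ = self.decoder(z.expand(-1, x.size(1), -1))
        return recon, h.squeeze(0)         # reconstruction and fixed-size latent

ae = RecurrentAutoencoder()
audio = torch.randn(2, 300, 40)            # e.g., 300 acoustic frames per clip
recon, latent = ae(audio)
print(latent.shape)                        # torch.Size([2, 256]) regardless of 300
```

Training such an autoencoder with a reconstruction loss (e.g., MSE between recon and x) forces the fixed-size latent to retain the sequence information, so two modalities of unequal length can be fused in a common space.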


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/34, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V30/153, G06N3/045, G06F18/253, G06F18/214
Inventor: 唐震宇, 刘晋
Owner: SHANGHAI MARITIME UNIVERSITY