
Method for retrieving specific action video fragments in Japanese online video corpus

A video clip retrieval technology that addresses the problems that existing corpora lack action retrieval and feature extraction functions and therefore lose the semantic information carried by actions, achieving the effect of enriching contextual information.

Active Publication Date: 2021-07-09
DALIAN UNIV OF TECH


Problems solved by technology

[0003] At present, although Japanese video corpora exist, such as the patent of Jiang Guohai et al. (ZL201310216448.2, "Video Segment Retrieval Method for Japanese Online Video Corpus"), none of them provide action retrieval or feature extraction functions. The rich semantic information carried by actions is therefore lost, making tasks such as context classification, pragmatic research, and cultural analysis difficult to carry out.




Embodiment Construction

[0023] The specific implementation of the present invention is described in detail below in conjunction with the accompanying drawings and technical solutions.

[0024] The network system adopted in this embodiment consists of client computers, a video corpus, a corpus server, and an action-analysis and data-processing server; the two servers communicate over the HTTP protocol. To query the video corpus, users interact with the corpus server through the Internet, and operations such as uploading serve as the basic modules of the video corpus. On top of this, an action-analysis and data-processing server independent of the corpus server is added. The servers run Windows 10 x64; the action-analysis and data-processing server is implemented in Python, while the video action-segment retrieval function is implemented with Java EE technology, based on JDK 1.8 and the Spring Boot open-source framework; the retrieval terminal is wr...
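The paragraph above states only that the two servers communicate over HTTP, with the analysis side implemented in Python. As a minimal sketch of what such an exchange could look like, the snippet below mocks a request/response round trip; the payload shape, field names, and the "bow" action label are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the HTTP exchange between the corpus server and the
# action-analysis server. Field names and the JSON shape are assumptions.
import json

def build_analysis_request(video_id, sample_fps=1):
    """Body the corpus server might POST to the analysis server."""
    return json.dumps({"video_id": video_id, "sample_fps": sample_fps})

def parse_analysis_response(body):
    """Turn the analysis server's JSON reply into (start, end, action) tuples."""
    reply = json.loads(body)
    return [(s["start"], s["end"], s["action"]) for s in reply["segments"]]

# Example round trip with a mocked server reply:
req = build_analysis_request("video_001")
mock_reply = '{"segments": [{"start": 3.2, "end": 5.0, "action": "bow"}]}'
print(parse_analysis_response(mock_reply))  # prints [(3.2, 5.0, 'bow')]
```

In a real deployment the request would be sent with an HTTP client and the reply produced by the analysis server's inference code; the sketch shows only the data contract between the two sides.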



Abstract

The invention discloses a method for retrieving specific-action video fragments in a Japanese online video corpus. It belongs to the field of video fragment retrieval and concerns the fast, deep-learning-based retrieval of video fragments containing a specific action in an online Japanese multimodal corpus. The retrieval method combines deep learning and statistical learning, action-subtitle-video matching and localization, caching and indexing, and data visualization technologies. It comprises four steps: uploading the video corpora, frame-by-frame action analysis and feature extraction, index establishment, and action retrieval. After uploading is finished, an action-analysis and data-processing server analyzes the video corpora and extracts features, the final result is indexed, and users retrieve fragments through a WEB application. The method achieves fast querying, accurate localization, and downloading of specific-action video fragments in Japanese video corpora, providing a convenient retrieval service for Japanese learning and research.
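The index-establishment and action-retrieval steps described above can be illustrated with a minimal sketch: once per-segment action labels exist, an inverted index from action label to video fragments makes lookup immediate. The data model (tuples of video ID, start, end, label) and the sample labels are assumptions for illustration, not the patent's actual index format.

```python
# Minimal sketch (assumed data model) of the "index establishment" and
# "action retrieval" steps: segments are indexed by action label so a
# query returns matching (video_id, start_sec, end_sec) fragments.
from collections import defaultdict

def build_action_index(segments):
    """segments: iterable of (video_id, start_sec, end_sec, action_label)."""
    index = defaultdict(list)
    for video_id, start, end, action in segments:
        index[action].append((video_id, start, end))
    return index

def retrieve(index, action_label):
    """Return all video fragments containing the queried action."""
    return index.get(action_label, [])

segs = [
    ("v1", 3.2, 5.0, "bow"),
    ("v2", 10.0, 12.5, "bow"),
    ("v1", 20.0, 22.0, "wave"),
]
idx = build_action_index(segs)
print(retrieve(idx, "bow"))  # prints [('v1', 3.2, 5.0), ('v2', 10.0, 12.5)]
```

The abstract also mentions caching and a WEB application front end; those layers would sit on top of a lookup structure like this one.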

Description

Technical field

[0001] The invention belongs to the field of video clip retrieval and relates to a deep-learning-based method for quickly retrieving video clips containing specific actions in an online Japanese multimodal corpus.

Background technique

[0002] In recent years, with the development of Internet technology, more and more foreign-language learners try to learn foreign languages through video corpora. Video corpora have received high attention in foreign-language learning because they make up for the shortcomings of pure text corpora and provide a more realistic context. Actions play an important role in video corpora, because actions are often the embodiment of context and culture. Finding the actions that appear in videos is of great significance to both foreign-language learners and researchers. For Japanese learners, it helps in understanding the specific context and culture of the language, and deepens the impression of vocabulary, gra...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/732, G06F16/78, G06N3/04, G06N3/08
CPC: G06F16/732, G06F16/78, G06N3/08, G06N3/045
Inventors: 黄万鸿, 韩兰灵, 江波, 刘玉琴
Owner: DALIAN UNIV OF TECH