Weakly supervised deep station logo detection method

A weakly supervised station logo detection technology in the field of deep learning. It addresses the manpower and time consumed by large-scale manual annotation, and achieves the effects of improving precision and recall, improving data processing efficiency, and improving station logo detection performance.

Active Publication Date: 2017-11-07
Applicant: Institute of Information Engineering, Chinese Academy of Sciences

Problems solved by technology

However, most classic neural-network-based detection models require large numbers of annotations, which consumes considerable manpower and time.

Detailed Description of the Embodiments

[0045] In order to make the above-mentioned features and advantages of the present invention more comprehensible, the following specific examples are given and described in detail in conjunction with the accompanying drawings.

[0046] The present invention provides a weakly supervised deep station logo detection method. The training process of its station logo detection model is shown in Figure 1, and the flow chart of the method is shown in Figure 2. The method includes a training phase and a detection phase, and the training phase mainly includes the following steps:

[0047] (1) Deduplicate the massive network video data files according to their MD5 codes and retain only the valid data, to facilitate later data processing and ensure effective training (an illustrative code sketch of steps (1) and (2) is given below).

[0048] (2) Use a key frame extraction method to extract several key frames from the above de-duplicated network videos, divide each key frame into an M-cell grid, and keep only four 1/M p...
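
Below is a minimal Python sketch of steps (1) and (2), offered as an illustration rather than the patent's implementation. The file pattern, the reading of M as the side of an M × M grid, and the choice to retain the four corner cells (where station logos typically appear) are assumptions not stated in the excerpt; all helper names (md5_of_file, deduplicate_videos, corner_cells) are hypothetical.

```python
import hashlib
from pathlib import Path


def md5_of_file(path, chunk_size=1 << 20):
    """Compute the MD5 code of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def deduplicate_videos(video_dir, pattern="*.mp4"):
    """Step (1): keep one video file per unique MD5 code.

    The file pattern is an assumption; any container format would do."""
    seen, unique_paths = set(), []
    for path in sorted(Path(video_dir).glob(pattern)):
        code = md5_of_file(path)
        if code not in seen:
            seen.add(code)
            unique_paths.append(path)
    return unique_paths


def corner_cells(frame, m=3):
    """Step (2), sketched: treat a key frame as an m x m grid and keep the
    four corner cells. `frame` is an H x W (x C) array, e.g. a key frame
    decoded with OpenCV. Retaining the corners is an assumption based on
    where station logos usually appear; the excerpt only states that four
    1/M parts of each frame are kept."""
    h, w = frame.shape[:2]
    ch, cw = h // m, w // m
    return [
        frame[:ch, :cw],          # top-left cell
        frame[:ch, w - cw:],      # top-right cell
        frame[h - ch:, :cw],      # bottom-left cell
        frame[h - ch:, w - cw:],  # bottom-right cell
    ]
```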

Abstract

The invention provides a weakly supervised deep station logo detection method. The method comprises the steps of: preprocessing massive online video data files to obtain a large data set annotated only with station logo classes and a small data set annotated only with station logo positions; training a station logo localization network on the small data set to obtain a localization network capable of predicting station logo regions; inputting the large data set into the trained localization network to obtain several predicted station logo regions for each picture, and inputting these predicted regions into a station logo classification network for training, to obtain a classification network capable of classifying station logos; applying the same preprocessing, in relevant part, to the videos to be detected, and inputting the preprocessed pictures into the trained localization network to obtain their predicted station logo regions; and inputting these predicted regions into the trained classification network to obtain the station logo positions and classes of the pictures.
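
To make the data flow of the abstract concrete, here is a hedged PyTorch-style sketch. The tiny architectures, the single-box output, the smooth-L1 and cross-entropy losses, and all names (LogoLocalizationNet, LogoClassificationNet, crop_box, train_pipeline, detect) are illustrative assumptions; the abstract specifies only that a localization network is trained on the small position-labeled set, its predicted regions on the large class-labeled set are used to train a classification network, and the two networks are chained at detection time.

```python
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF


class LogoLocalizationNet(nn.Module):
    """Placeholder localization network: frame -> one (x, y, w, h) logo box."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.box_head = nn.Linear(8, 4)

    def forward(self, frames):                       # frames: (N, 3, H, W)
        return self.box_head(self.features(frames).flatten(1))


class LogoClassificationNet(nn.Module):
    """Placeholder classification network: cropped logo region -> logo class."""
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.cls_head = nn.Linear(8, num_classes)

    def forward(self, regions):                      # regions: (N, 3, h, w)
        return self.cls_head(self.features(regions).flatten(1))


def crop_box(frame, box, out_size=64):
    """Crop a predicted (x, y, w, h) box from a (3, H, W) frame and resize it."""
    x, y, w, h = [max(1, int(v)) for v in box.tolist()]
    return TF.resized_crop(frame, top=y, left=x, height=h, width=w,
                           size=[out_size, out_size])


def train_pipeline(small_set, large_set, num_classes):
    """Training phase, sketched: (a) fit the localization network on the small
    position-labeled set, (b) use it to predict logo regions for the large
    class-labeled set, (c) fit the classification network on those regions."""
    loc_net, cls_net = LogoLocalizationNet(), LogoClassificationNet(num_classes)
    loc_opt = torch.optim.Adam(loc_net.parameters())
    cls_opt = torch.optim.Adam(cls_net.parameters())

    for frame, box in small_set:                     # (3, H, W) frame, (4,) box
        loc_opt.zero_grad()
        loss = nn.functional.smooth_l1_loss(loc_net(frame.unsqueeze(0))[0], box)
        loss.backward()
        loc_opt.step()

    for frame, label in large_set:                   # (3, H, W) frame, class id
        with torch.no_grad():                        # pseudo-localized region
            region = crop_box(frame, loc_net(frame.unsqueeze(0))[0])
        cls_opt.zero_grad()
        loss = nn.functional.cross_entropy(
            cls_net(region.unsqueeze(0)), torch.tensor([label]))
        loss.backward()
        cls_opt.step()
    return loc_net, cls_net


def detect(frame, loc_net, cls_net):
    """Detection phase: localization network first, then classification."""
    with torch.no_grad():
        box = loc_net(frame.unsqueeze(0))[0]         # predicted logo region
        logits = cls_net(crop_box(frame, box).unsqueeze(0))[0]
        return box, int(logits.argmax())             # logo position and class
```

In the second training loop the localization network acts as a fixed region proposer, which is what lets the large data set be used with only class-level labels, i.e. the weak supervision described in the abstract.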

Description

Technical Field

[0001] The invention relates to the field of deep learning, and in particular to a weakly supervised deep station logo detection method.

Background Technique

[0002] With the development of the Internet and the rise of multimedia technology, online video carries more and more content and has become a major content carrier in the era of big data. Different video sources tend to present different video content information. By detecting video station logos, network video data can be managed more effectively, video source and content information can be grasped in advance, and videos containing harmful information can be supervised. Therefore, video station logo detection has strong practical significance and research value.

[0003] Station logo data widely exists in network video, and station logo detection is performed on several key frames extracted from the network video. Compared with general object detection, station logo detection has its particularities. The detection target appea...

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F17/30; G06K9/62; G06K9/66
CPC: G06F16/7335; G06F16/743; G06V30/194; G06F18/23213; G06F18/24
Inventors: 操晓春, 张月莹, 伍蹈
Owner: Institute of Information Engineering, Chinese Academy of Sciences