
Feature-extraction model training method, device and storage medium

A feature-extraction model training technique in the field of video processing. It addresses problems such as poor anti-noise performance, which reduces the accuracy of extracted video features, and achieves improved anti-noise performance, a uniform sample-feature distribution, and high accuracy and recall.

Active Publication Date: 2018-12-25
TENCENT TECH (SHENZHEN) CO LTD
Cites: 6 · Cited by: 13
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0004] The above scheme does not account for changes of the video data along the time dimension when selecting sample images. As a result, the feature extraction model has poor anti-noise performance in the time dimension, which reduces the accuracy of the extracted video features.


Embodiment Construction

[0027] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.

[0028] In the related art, when training the feature extraction model, multiple images are typically obtained from at least one sample video and enhanced, and the processed images are used as sample images; the feature extraction model is then trained on the determined sample images. Image enhancement can improve the anti-noise performance of the feature extraction model in the spatial dimension.
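As an illustration only, the spatial enhancement mentioned above (scaling, translation) can be sketched as follows. The function `augment_spatial` and its `scale`/`shift` parameters are hypothetical helpers for this sketch, not names from the patent:

```python
import numpy as np

def augment_spatial(image, scale=1.2, shift=(2, 3)):
    """Simple spatial augmentation sketch: nearest-neighbour zoom,
    then a translation with zero padding (hypothetical helper)."""
    h, w = image.shape[:2]
    # Zoom by resampling back to the original resolution.
    ys = np.clip((np.arange(h) / scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(w) / scale).astype(int), 0, w - 1)
    scaled = image[np.ix_(ys, xs)]
    # Translate by (dy, dx); vacated border pixels become zero.
    dy, dx = shift
    shifted = np.zeros_like(scaled)
    shifted[dy:, dx:] = scaled[:h - dy, :w - dx]
    return shifted

frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
aug = augment_spatial(frame)
print(aug.shape)  # → (8, 8)
```

Such transforms perturb each frame independently, which is why they help only in the spatial dimension: they never model how an object changes from one frame to the next.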


Abstract

The embodiments of the invention disclose a feature-extraction model training method, device, and storage medium, belonging to the technical field of video processing. The method comprises: for each sample video in at least one sample video, detecting a plurality of images in the sample video and obtaining at least two images containing the same object; determining the at least two images containing the same object as sample images; and training the feature extraction model on the determined sample images, the feature extraction model being used to extract video features. Since at least two images containing the same object describe changes of that object along the time dimension, training the feature extraction model on such sample images takes the changes of the video data in the time dimension into account, improving the anti-noise performance of the model in the time dimension and, in turn, the accuracy and robustness of the extracted video features.
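The sample-selection step described in the abstract can be sketched as follows. This is a minimal illustration, assuming a hypothetical `detect` callback that returns the set of object identifiers found in a frame; the patent does not specify the detector:

```python
from collections import defaultdict

def select_sample_images(frames, detect):
    """Group frame indices by detected object id and keep only objects
    that appear in at least two frames (sketch; `detect` is assumed)."""
    by_object = defaultdict(list)
    for idx, frame in enumerate(frames):
        for obj_id in detect(frame):
            by_object[obj_id].append(idx)
    # At least two images of the same object are needed to describe
    # how the object changes along the time dimension.
    return {obj: idxs for obj, idxs in by_object.items() if len(idxs) >= 2}

# Toy example: each "frame" is just the set of object ids it contains.
frames = [{"cat"}, {"cat", "dog"}, {"dog"}, {"bird"}]
samples = select_sample_images(frames, detect=lambda f: f)
print(samples)  # → {'cat': [0, 1], 'dog': [1, 2]}
```

Objects seen in only one frame ("bird" above) are discarded, since a single image carries no temporal variation to learn from.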

Description

Technical Field

[0001] Embodiments of the present invention relate to the technical field of video processing, and in particular to a method, device, and storage medium for training a feature extraction model.

Background

[0002] With the rapid development of Internet technology and the vigorous rise of Internet video, video recognition has been widely applied in fields such as video recommendation, copyright detection, object tracking, and video surveillance. Extracting video features is a key step in video recognition; to improve the accuracy of video features, a feature extraction model is usually trained first, and video features are then extracted based on that model.

[0003] In the stage of training the feature extraction model, multiple images in at least one sample video are obtained and enhanced, for example by image scaling and translation, and the processed images are determined as sample images.

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/62, G06F17/30, G06V10/25, G06V10/764, G06V10/774
CPC: G06V20/46, G06F18/214, G06F16/7837, G06V10/25, G06V10/82, G06V10/764, G06V10/774, G06F18/22, G06F18/211
Inventors: 龚国平, 徐敘遠, 吴韬
Owner: TENCENT TECH (SHENZHEN) CO LTD