Video processing method, video searching method and terminal equipment

A video processing and searching technology applied in the field of image processing; it addresses the problems of poor correlation between keywords and clips and inaccurate descriptions, achieving strong correlation, high description accuracy, and improved search accuracy.

Pending Publication Date: 2020-05-19
SHENZHEN TCL NEW-TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] The main purpose of the embodiments of the present invention is to provide a video processing method that solves the technical problems in the prior art of poor correlation between keywords and clips and inaccurate descriptions when searching by keyword for highlight clips.



Examples


Example 1

[0111] Referring to Figure 3, Figure 3 is a schematic flowchart of the second embodiment of the video processing method of the present invention, and also a detailed flowchart of step S200. Based on the first embodiment above, step S200 includes:

[0112] Step S210, extracting multiple image frames of the target video;

[0113] Step S220, acquiring multiple sub-feature parameters of the image frames;

[0114] Step S230, acquiring feature parameters of the target video according to the sub-feature parameters.

[0115] In this embodiment, multiple image frames are extracted from the target video at a predetermined frame rate, which can reduce the number of video frames processed by the terminal device, thereby improving the efficiency of acquiring the content of the target video.

[0116] The sub-feature parameters of each image frame can be identified one by one. Since extracted image frames carry no sound information, the sub-feature parameters include at le...
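Steps S210 to S230 above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the sampling stride, the stand-in `extract_sub_features` detector, and the set-union aggregation are all assumptions made for demonstration.

```python
# Hypothetical sketch of steps S210-S230: sample frames at a fixed
# stride, extract per-frame sub-feature parameters, then merge them
# into video-level feature parameters. All names are illustrative.

def sample_frames(frames, stride=10):
    """S210: keep every `stride`-th frame to cut per-frame work."""
    return frames[::stride]

def extract_sub_features(frame):
    """S220: stand-in detector; a real system would run scene/person/
    action recognizers on the pixel data here."""
    return set(frame.get("labels", []))

def video_features(frames, stride=10):
    """S230: union the per-frame sub-features into one set for the video."""
    merged = set()
    for frame in sample_frames(frames, stride):
        merged |= extract_sub_features(frame)
    return merged

clip = [{"labels": ["goal"]}] * 5 + [{"labels": ["crowd"]}] * 6
print(sorted(video_features(clip, stride=5)))  # ['crowd', 'goal']
```

Sampling at a predetermined stride mirrors the embodiment's point in [0115]: fewer frames pass through the detectors, so the terminal device acquires the video's content more efficiently.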

Example 4

[0141] Referring to Figure 6, Figure 6 is a schematic flowchart of the fifth embodiment of the video processing method of the present invention, and also a detailed flowchart of step S320 in Figure 5. Based on the fourth embodiment above, step S320 includes:

[0142] Step S321, comparing the human body features with preset human body features to obtain a comparison result;

[0143] Step S322, acquiring the preset human body features corresponding to the human body features according to the comparison result;

[0144] Step S323, acquiring the identity information according to the preset human body features corresponding to the human body features.

[0145] The human body features acquired from the person information in the target video may include one or more of facial features, iris features, and body shape features. The preset human body features correspond to these human body features. If the human body f...
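Steps S321 to S323 amount to matching an extracted feature vector against a gallery of preset features and mapping the best match to an identity. The sketch below is a hypothetical illustration only: the gallery entries, the Euclidean distance metric, and the acceptance threshold are assumptions, not details taken from the patent.

```python
# Hypothetical sketch of steps S321-S323: compare a body/face feature
# vector against preset features (S321), pick the matching preset
# (S322), and return the associated identity (S323).
import math

PRESET = {                      # assumed gallery of preset human body features
    "Alice": [0.1, 0.9, 0.3],
    "Bob":   [0.8, 0.2, 0.5],
}

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(feature, gallery=PRESET, threshold=0.5):
    """S321: compare against every preset; S322: keep the closest;
    S323: map it to identity information, or None if nothing is close."""
    name, dist = min(((n, euclidean(feature, f)) for n, f in gallery.items()),
                     key=lambda t: t[1])
    return name if dist <= threshold else None

print(identify([0.12, 0.88, 0.31]))  # 'Alice'
print(identify([0.5, 0.5, 0.9]))     # None: no preset within the threshold
```

The threshold keeps an unknown person from being forced onto the nearest gallery entry; a production system would tune it per feature type (face, iris, body shape).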

Example 6

[0163] Referring to Figure 8, Figure 8 is a schematic flowchart of the seventh embodiment of the video processing method of the present invention, and also a detailed flowchart of step S130 in Figure 7. Based on the sixth embodiment above, step S130 includes:

[0164] Step S131, extracting image blocks from the grayscale images corresponding to adjacent image frames, where the blocks extracted from adjacent frames have the same position and size;

[0165] In this embodiment, image blocks are respectively extracted from grayscale images corresponding to adjacent image frames, wherein the coordinates of the upper left corner of the image block are randomly generated, and the size of the image block is also randomly generated. It can be understood that the positions and sizes of the image blocks extracted in adjacent image frames are the same, which is beneficial for subsequent comparison.
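The block extraction in [0165] can be sketched as follows. This is a minimal illustration under assumptions: the images are plain nested lists standing in for grayscale frames, and the random geometry is drawn once and reused for both adjacent frames, as the embodiment requires.

```python
# Hypothetical sketch of step S131: draw a random top-left corner and
# size once, then cut the block at that identical position and size
# from the grayscale images of two adjacent frames so the blocks can
# be compared pixel by pixel.
import random

def random_block_geometry(height, width, rng):
    """Randomly pick a block that fits entirely inside the image."""
    h = rng.randint(1, height)          # random block height
    w = rng.randint(1, width)           # random block width
    top = rng.randint(0, height - h)    # random upper-left corner
    left = rng.randint(0, width - w)
    return top, left, h, w

def crop(gray, top, left, h, w):
    """Cut an h x w block out of a grayscale image (list of rows)."""
    return [row[left:left + w] for row in gray[top:top + h]]

rng = random.Random(0)                  # seeded only for reproducibility
frame_a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
frame_b = [[1, 2, 3], [4, 0, 6], [7, 8, 9]]
geom = random_block_geometry(3, 3, rng)
block_a = crop(frame_a, *geom)          # same geometry for both frames,
block_b = crop(frame_b, *geom)          # which makes them comparable
assert len(block_a) == len(block_b)
```

Sharing one geometry across both frames is what makes the subsequent pixel-count comparison of step S132 meaningful: each pixel in one block has a counterpart at the same coordinates in the other.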

[0166] Step S132, obtaining the number of pixels in e...



Abstract

The invention discloses a video processing method comprising the steps of: editing a video to be edited according to scene changes to obtain a target video; obtaining feature parameters of the target video; generating keywords for the target video according to the feature parameters; and storing the keywords and the target video in an associated manner. The invention further discloses a video searching method, a terminal device, and a computer-readable storage medium. Because editing is carried out at scene changes, the target video is guaranteed to belong to a single scene, which effectively improves the accuracy of identifying feature parameters in the target video; generating the corresponding keywords from those feature parameters gives the target video and the keywords strong relevance and high description accuracy.
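The abstract's pipeline can be sketched end to end as follows. Every function body is an illustrative stand-in under assumptions: frames are dicts carrying a scene id and label list, scene detection and keyword generation are trivialized, and the keyword index is a plain in-memory dict.

```python
# Hypothetical end-to-end sketch of the abstract's pipeline: split a
# video at scene changes, derive keywords from each clip's feature
# parameters, and store keyword -> clip associations for later search.

def split_by_scene(video):
    """Cut the source video wherever the scene id changes."""
    clips, current = [], [video[0]]
    for frame in video[1:]:
        if frame["scene"] != current[-1]["scene"]:
            clips.append(current)
            current = []
        current.append(frame)
    clips.append(current)
    return clips

def keywords_for(clip):
    """Derive keywords from the clip's feature parameters (here: labels)."""
    return {label for frame in clip for label in frame["labels"]}

def build_index(video):
    """Associate every generated keyword with the clips it describes."""
    index = {}
    for clip_id, clip in enumerate(split_by_scene(video)):
        for kw in keywords_for(clip):
            index.setdefault(kw, []).append(clip_id)
    return index

video = [
    {"scene": 1, "labels": ["goal"]},
    {"scene": 1, "labels": ["goal", "crowd"]},
    {"scene": 2, "labels": ["interview"]},
]
print(sorted(build_index(video)))  # ['crowd', 'goal', 'interview']
```

A search then reduces to a dictionary lookup: the keyword retrieves exactly the clips whose own feature parameters produced it, which is the source of the claimed strong relevance.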

Description

Technical Field

[0001] The present invention relates to the technical field of image processing, and in particular to a video processing method, a video searching method, and a terminal device.

Background

[0002] With the popularization of the Internet, it is becoming easier for viewers to obtain movies and TV series. Because movies and TV series run long, audiences sometimes only want to watch a few exciting clips. When searching by keyword to obtain highlight clips, the correlation between the keywords and the clips is often poor, and the description is inaccurate.

[0003] The above content is only used to assist in understanding the technical solution of the present invention, and does not constitute an admission that it is prior art.

Summary of the Invention

[0004] The main purpose of the embodiments of the present invention is to provide a video processing method, which aims to solve the technical problems of poor correlation between keyw...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06F16/783; G06F16/78
CPC: G06F16/7867; G06F16/784; G06V20/41; G06V20/46; G06V10/761; G06V40/10; G06V10/507; G06V40/20
Inventors: 薛凯文, 赖长明, 徐永泽
Owner: SHENZHEN TCL NEW-TECH CO LTD