Method for automatically tagging animation scenes for matching through comprehensively utilizing overall color feature and local invariant features

A technology combining local invariant features and color features, applied to special data processing applications, editing/combining graphics or text, and image data processing, which addresses the lack of accurate and fast image matching methods.

Inactive Publication Date: 2011-04-13
NAT UNIV OF DEFENSE TECH


Problems solved by technology

At present, there is still a lack of accurate and fast description features in image matching, ...



Examples


Embodiment Construction

[0057] Figure 1 is a flow chart of the animation scene image tagging method of the present invention based on color and local invariant features. The specific steps are as follows:

[0058] In the first step, the animation image to be tagged is preprocessed to highlight the grayscale details of the image and to unify the size of all images;
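The excerpt does not specify how preprocessing is done. A minimal sketch, assuming histogram equalization to highlight grayscale detail and nearest-neighbor resampling to unify image size (both are illustrative choices, not taken from the patent):

```python
import numpy as np

def equalize_gray(img):
    """Histogram-equalize an 8-bit grayscale image to highlight detail."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # Map each gray level through the normalized cumulative histogram.
    lut = np.clip(np.round((cdf - cdf_min) / (img.size - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]

def resize_nearest(img, h, w):
    """Nearest-neighbor resize so all images share one common size."""
    rows = (np.arange(h) * img.shape[0] / h).astype(int)
    cols = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[rows[:, None], cols]

img = np.arange(64, dtype=np.uint8).reshape(8, 8) * 3  # toy grayscale image
pre = resize_nearest(equalize_gray(img), 16, 16)
```

Any interpolation scheme (bilinear, bicubic) could replace nearest-neighbor here; the point is only that every library image and the target image end up at one fixed resolution before similarity computation.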

[0059] In the second step, the global color similarity between the target image and each image in the animation scene material library is calculated, and color feature filtering is performed to retain the most similar candidate images;
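The excerpt does not define the global color feature itself. One common realization, shown here purely as an assumption, is a quantized joint RGB histogram compared by histogram intersection, with the top-N scores kept as candidates:

```python
import numpy as np

def color_histogram(img_rgb, bins=8):
    """Normalized joint RGB histogram as a global color feature."""
    # Quantize each channel to `bins` levels, then build a joint histogram.
    q = img_rgb.astype(int) * bins // 256
    idx = (q[..., 0] * bins + q[..., 1]) * bins + q[..., 2]
    hist = np.bincount(idx.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def color_similarity(h1, h2):
    """Histogram intersection: 1.0 for identical color distributions."""
    return float(np.minimum(h1, h2).sum())

def filter_by_color(target_hist, library_hists, top_n=5):
    """Keep the top-N most color-similar library images as candidates."""
    sims = [color_similarity(target_hist, h) for h in library_hists]
    order = np.argsort(sims)[::-1][:top_n]
    return order, [sims[i] for i in order]
```

Because the histogram is cheap to compute and compare, this filtering stage discards most of the material library before the more expensive local-feature step.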

[0060] In the third step, the global color similarity and the CSIFT-based local feature similarity between each candidate image and the target image are calculated;
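CSIFT extraction itself (SIFT descriptors computed on color-invariant channels) is beyond this excerpt. Assuming the descriptors have already been extracted, a hedged sketch of turning two descriptor sets into a single local-feature similarity score via Lowe's ratio test; the ratio value and the match-fraction scoring are assumptions, not the patent's formula:

```python
import numpy as np

def match_ratio_similarity(desc_a, desc_b, ratio=0.8):
    """Local-feature similarity between two descriptor sets (rows = descriptors).

    For each descriptor in A, find its two nearest neighbors in B and accept
    the match only if best < ratio * second-best (Lowe's ratio test).
    Similarity = accepted matches / number of descriptors in A.
    """
    # Pairwise Euclidean distances between the two descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    good = 0
    for row in d:
        best, second = np.sort(row)[:2]
        if best < ratio * second:
            good += 1
    return good / len(desc_a)
```

The ratio test rejects ambiguous matches whose nearest and second-nearest neighbors are nearly equidistant, which is what makes the match count a usable similarity signal.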

[0061] In the fourth step, the global color similarity and the local invariant feature similarity of the two images are fused to obtain the final total similarity, and the candidates are sorted by total similarity; a candidate whose total similarity is greater than the threshold is considered a match.
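The fusion weights and the decision threshold are not given in this excerpt. A sketch assuming a simple equal-weight linear fusion and an illustrative threshold:

```python
import numpy as np

def fuse_and_rank(color_sims, local_sims, alpha=0.5, threshold=0.6):
    """Fuse the two similarity lists, then sort and threshold.

    `alpha` and `threshold` are illustrative values, not taken from the patent.
    Returns the indices of accepted matches (best first) and all total scores.
    """
    total = alpha * np.asarray(color_sims) + (1 - alpha) * np.asarray(local_sims)
    order = np.argsort(total)[::-1]  # best match first
    matches = [int(i) for i in order if total[i] > threshold]
    return matches, total
```

The tagging text of the accepted matches would then be merged (the "text processing and combination" of the abstract) to produce the target image's final tags.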



Abstract

The invention discloses a method for automatically tagging animation scenes for matching through comprehensively utilizing an overall color feature and local invariant features, which aims to improve the tagging accuracy and tagging speed of animation scenes by comprehensively utilizing overall color features and color-invariance-based local invariant features. The technical scheme is as follows: preprocess the target image (namely, the image to be tagged), calculate the overall color similarity between the target image and the images in the animation scene image library, and carry out color feature filtering on the result; after color feature filtering, extract the colored scale invariant feature transform (CSIFT) features of the filtered candidate images and of the target image, and calculate the overall color similarity and the local feature similarity between each candidate and the target; fuse the overall color similarity and the local feature similarities to obtain a final total similarity; and carry out text processing and combination on the tagging information of the images in the matching result to obtain the final tagging information of the target image. By using the method provided by the invention, the matching accuracy and matching speed of an animation scene can be improved.

Description

technical field

[0001] The present invention relates to the technical field of multimedia information processing, and in particular to an image tagging method based on image matching. In essence, it is a method that obtains matching objects by comparing image similarities, merges the tagging information of the matching objects, and uses the clustered result as the tagging information of the target object. The method matches animation scene images by comprehensively considering the color information of the image and local invariant feature information based on color invariance, so as to realize image annotation.

Background technique

[0002] Traditional cartoons are completed mainly by hand from design through production, and a large number of animation scene images are generated in the process. Since these images carry no labeling information, a large amount of scene material is often wasted. If animation scene images can be labeled automatically, ...


Application Information

IPC(8): G06F17/30, G06T11/60, G06T7/00
Inventors: 谢毓湘, 杨征, 邓莉琼, 吴玲达, 魏迎梅, 蒋杰, 黄紫藤
Owner NAT UNIV OF DEFENSE TECH