Method and system for changing player skin

A player-skin technology, applied in the field of player skin transformation, that solves problems such as the skin remaining unchanged during playback and an unsuitable viewing experience, and achieves the effects of meeting the user's experience needs and improving convenience.

Active Publication Date: 2012-05-16
TENCENT TECH (SHENZHEN) CO LTD
Cites: 4 · Cited by: 9

AI-Extracted Technical Summary

Problems solved by technology

However, with this traditional player, once the user sets a skin, it does not change while a movie plays; the player's appearance remains the same for the whole playback.
In fact, users have different moods when watching different videos, and a single fixed skin cannot suit all of them.

Abstract

The invention provides a method for changing a player skin, comprising the following steps: obtaining the hash value of a video being played; finding the attribute information of the video according to the hash value; obtaining a preset mapping table that records the mapping relationship between video attributes and skin attributes and the matching values between them; constructing a complete skin according to the mapping table; and displaying the complete skin on the player. The invention also provides a system for changing a player skin. With the method and system, the skin can be changed dynamically while a video plays, creating a viewing atmosphere appropriate to the video and fully meeting the user's experience needs.

Application Domain

Specific program execution arrangements

Technology Topic

Computer vision · Computer graphics (images)


Examples


Example Embodiment

[0044] Figure 1 shows the flow of a method for changing a player skin in one embodiment. The method includes the following steps:
[0045] In step S102, the hash value of the video being played is obtained. A hash value can uniquely identify a file. When the player plays a video, the hash value of that video can be computed, for example by performing a logical operation on the content data of the played video.
[0046] In step S104, the attribute information of the video is found according to the hash value. The attribute information is defined in advance and stored in a background database, and video attributes can include main attributes and additional attributes. The main attributes describe the type of the video or the emotion it conveys; for example, the emotion of a video can be defined as anger, fear, warmth, romance, sadness, etc., and the type can be defined as action, comedy, science fiction, war, etc. The additional attributes carry supplementary information about the video, for example that it is suitable for children, lovers, students, or teachers, or that its protagonists include Zhou Xingchi, Zhou Runfa, or Ge You. If desired, only main attributes may be defined without additional attributes, and the content of the defined main or additional attributes can be adjusted according to the content of the video.
[0047] Each main attribute category (such as the video emotion and video type above) can be assigned a unique code; these codes are called video main attribute category key codes. For example, video emotion may be assigned the code EM and video type the code TP, where EM and TP are video main attribute category key codes. Correspondingly, each additional attribute category can also be given a unique code, called a video additional attribute category key code. Each attribute within a main or additional attribute category can likewise be assigned a unique code. For example, within video emotion (EM), anger is AG, fear is SC, warmth is LV, romance is RM, and sadness is SA; within video type (TP), action is AT, comedy is CM, science fiction is SF, and war is WA. These codes are called video main attribute value codes, and the corresponding codes assigned to attributes in additional attribute categories are called video additional attribute value codes.
[0048] In the background database, every main attribute category of a video is represented by its main attribute category key code, every additional attribute category by its additional attribute category key code, and every specific attribute within a category by its unique code, namely the main attribute value code or the additional attribute value code.
[0049] After the main and additional attributes of a video are defined, this attribute information is stored and maintained in the background database. Since a video often has several main attribute categories, each containing several main attributes, a weight can be assigned to each main attribute according to the video content. For example, for the movie "Avatar", the main attribute categories may be defined as EM and TP, where the attributes in EM and their weights are: warmth (LV) 50%, romance (RM) 30%, anger (AG) 20%; and the attributes in TP and their weights are: science fiction (SF) 80%, war (WA) 20%. Weights can be assigned to additional attributes in the same way; of course, if no additional attributes are defined for the video, no weights need to be assigned to them.
[0050] The main attributes, additional attributes, and the weight assigned to each attribute are maintained in the background database, together with the correspondence between the hash value of a video and its attributes and attribute weights. Because the hash value uniquely identifies a file, once the hash value of the currently playing video is obtained, the video's attribute information, including its defined main attributes, additional attributes, and their weights, can be found in the background database.
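The hash-to-attribute correspondence can be pictured as a simple keyed store. A sketch, where the key is a placeholder and the weights are the "Avatar" figures quoted above:

```python
# Hypothetical backend records: video hash -> weighted attribute value codes.
VIDEO_DB = {
    "hash-of-avatar": {                          # placeholder key, not a real hash
        "EM": {"LV": 0.5, "RM": 0.3, "AG": 0.2},  # emotion weights
        "TP": {"SF": 0.8, "WA": 0.2},             # type weights
    },
}

def lookup_attributes(video_hash):
    """Return the weighted attributes recorded for a hash, or None if unknown."""
    return VIDEO_DB.get(video_hash)
```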
[0051] In step S106, a preset mapping table is obtained that records the mapping relationship between video attributes and skin attributes and the matching values between them. Skin attributes are likewise predefined in the background database and include main skin attributes and additional skin attributes. The main skin attributes represent the skin type; for example, they may be defined as solemn, fresh, cartoon, technology, etc. The additional skin attributes help describe the elements of a skin, such as its color and shading. Each skin element has corresponding skin attributes, and each defined skin attribute can be given a unique code that identifies it.
[0052] In this embodiment, a mapping table is maintained in the background database. It records the mapping relationship between video attributes and skin attributes and the matching value between them, where the matching value is a number expressing how well a video attribute and a skin attribute that map to each other match: the larger the value, the better the match. Table 1 shows a preset mapping table in one embodiment.
[0053] Table 1

  Video attribute    Solemn (ST)    Fresh    Cartoon
  Anger (AG)             10           6         0
  Warmth (LV)             6          10         —
  Romance (RM)            6           8         —
[0055] What the table actually records are the main attribute value codes under the corresponding video main attribute key code and, if the video has additional attributes, the additional attribute value codes. The skin attributes in the mapping table are likewise represented by codes; "ST" in Table 1 means solemn. For clarity, the meanings of the main attribute value codes and skin attribute codes are also spelled out. As Table 1 shows, the matching value of "anger" and "solemn" is 10, indicating that a "solemn" skin suits a video in the "angry" category, while the matching value of "anger" and "cartoon" is 0, indicating that a "cartoon" skin is inappropriate for an "angry" video. Such a mapping table must be maintained in the background database for all video attributes and skin attributes, and it can be adjusted whenever video attributes or skin attributes are added or modified.
[0056] In step S108, a complete skin is constructed according to the mapping table. In one embodiment, step S108 comprises the following sub-steps:
[0057] (1) According to the mapping table, the matching value between the played video and each skin attribute is calculated. Using the mapping relationships and matching values in the table, the matching value between the played video and a skin attribute is computed as the sum, over all video attributes, of the product of that video attribute's weight and its matching value with the skin attribute. For example, for the movie "Avatar", the attributes and weights in EM are warmth (LV) 50%, romance (RM) 30%, and anger (AG) 20%; combining these with Table 1, the matching value between the video and the "solemn" skin is 6*50% + 6*30% + 10*20% = 6.8, while the matching value between the video and the "fresh" skin is 10*50% + 8*30% + 6*20% = 8.6. Note that this way of calculating the matching value is only one embodiment; the formula can be adjusted to actual needs.
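This weighted sum can be sketched as follows, using the Table 1 values quoted above (the code "FR" for the "fresh" skin is an assumption; the text only names "ST" for "solemn"):

```python
# Mapping-table fragment: video attribute code -> skin attribute code -> match value.
MATCH = {
    "AG": {"ST": 10, "FR": 6},   # anger
    "LV": {"ST": 6,  "FR": 10},  # warmth
    "RM": {"ST": 6,  "FR": 8},   # romance
}

def skin_match(weights, skin_code):
    """Matching value of a video against one skin attribute: the sum of
    (match value * attribute weight) over the video's attributes."""
    return sum(MATCH[attr][skin_code] * w for attr, w in weights.items())

avatar_em = {"LV": 0.5, "RM": 0.3, "AG": 0.2}  # "Avatar" emotion weights
# skin_match(avatar_em, "ST") ≈ 6.8 and skin_match(avatar_em, "FR") ≈ 8.6,
# matching the figures worked out in the text.
```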
[0058] (2) The skin attribute with the largest of the calculated matching values is selected, the skin elements corresponding to that attribute are obtained, and a complete skin is constructed from them. The largest matching value indicates the skin elements best suited to the currently playing video; for example, "Avatar" has its largest matching value with the "fresh" skin, so the corresponding "fresh" skin elements should be obtained. A large number of skin elements, including skin frames, pictures, colors, and shading, are stored in the background database, and a complete skin can be constructed from them.
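Selecting the best skin is then a straightforward arg-max over the computed matching values; the numbers below are the ones derived in the text:

```python
def best_skin(match_values):
    """Return the skin attribute code whose matching value is largest."""
    return max(match_values, key=match_values.get)

# For "Avatar": best_skin({"ST": 6.8, "FR": 8.6}) -> "FR" (the "fresh" skin wins).
```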
[0059] In step S110, the complete skin is displayed on the player. Once constructed, the complete skin is displayed directly on the player. The skin can thus change dynamically during playback without the user setting it manually, which improves convenience; and because the displayed skin elements are obtained through a series of matching calculations, they suit the currently playing video, create a better viewing atmosphere, and fully meet the user's experience needs.
[0060] In one embodiment, the method further includes performing video content recognition on the played video and determining the skin-change moments and skin-change parameters from the recognition result. As shown in Figure 2, in this embodiment the method for changing a player skin further includes the following steps:
[0061] In step S202, temporal segmentation and key-frame extraction are performed on the video to obtain the skin-change moments. A video has a hierarchical structure: it consists of a series of scenes, a scene comprises several shots, and a shot contains many image frames. The video structure is therefore analyzed, temporal boundaries are detected, a representative sequence of video content is extracted, and key frames are extracted from it. Because the foreground and background of a video differ, sudden changes in visual features (such as color, region shape, and texture) produce large changes in feature descriptors such as the image histogram, the absolute inter-frame difference, and image edges. A threshold is set over this information; for example, when the inter-frame change at a key frame exceeds the threshold, it is determined that the skin should change at that frame's moment, yielding the skin-change moments. In practice, the skin-change moments can also be determined by combining the video duration, the temporal distribution, and manually set factors.
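The threshold test on inter-frame histogram differences can be sketched as below; the histograms and the threshold value are illustrative, not specified by the text:

```python
def skin_change_frames(histograms, threshold):
    """Return indices of frames whose absolute histogram difference from the
    previous frame exceeds the threshold -- a simplified cut detector."""
    changes = []
    for i in range(1, len(histograms)):
        diff = sum(abs(a - b) for a, b in zip(histograms[i], histograms[i - 1]))
        if diff > threshold:
            changes.append(i)
    return changes

# Two steady shots with an abrupt change between frames 1 and 2:
frames = [[8, 0, 2], [8, 0, 2], [1, 9, 0], [1, 9, 0]]
# skin_change_frames(frames, 5) flags frame 2 as a skin-change moment.
```

A production detector would also weigh edge and texture descriptors, as the text notes, rather than histograms alone.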
[0062] In step S204, the image data in the key frames is analyzed to obtain image features, and the skin-change parameters are determined from those features. This includes extracting the color information and object outline information in the key frames, which can be done with conventional video recognition techniques. The skin-change parameters are then determined from the acquired image information; for example, the color value is determined from the extracted color information, and the shading to use is determined from the extracted outline information.
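As a toy stand-in for the color analysis (a real system would decode actual frames), the color value of a key frame could be approximated by averaging its RGB pixels:

```python
def average_color(pixels):
    """Mean RGB over a key frame's pixels, as a crude color-value estimate.

    `pixels` is a list of (r, g, b) tuples; integer division keeps the result
    a valid 0-255 channel value.
    """
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

# average_color([(10, 20, 30), (30, 40, 50)]) -> (20, 30, 40)
```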
[0063] In step S206, at each skin-change moment, the complete skin is displayed on the player according to the skin-change parameters. Once the skin-change moments are determined, the complete skin displayed on the player can be adjusted at those moments, including changing its color according to the color value and changing its shading according to the determined shading. For example, if a video of a cartoon cat is playing, analyzing the key-frame image data yields the cat's outline; according to this information, a skin element stored in the background database, such as a picture of a cartoon cat, is retrieved, and the displayed skin is changed to use that picture as its shading.
[0064] In one embodiment, the method further includes recording, in the background database, the correspondence between the hash value of a video and the skin elements used by the complete skin displayed when it was played. The next time the same video is played, the corresponding skin elements can be looked up directly by the video's hash value, and a complete skin can be constructed from them and displayed on the player without repeating the video recognition and matching calculations, which improves efficiency.
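The recorded correspondence behaves like a cache keyed on the video hash. A minimal sketch, with hypothetical names and the full pipeline abstracted as a callable:

```python
SKIN_CACHE = {}  # video hash -> skin elements used last time

def skin_elements_for(video_hash, build):
    """Reuse previously recorded skin elements for a hash; otherwise run the
    full recognition/matching pipeline (abstracted here as `build`) once and
    record its result."""
    if video_hash not in SKIN_CACHE:
        SKIN_CACHE[video_hash] = build(video_hash)
    return SKIN_CACHE[video_hash]
```

On a repeat play the expensive `build` step is skipped entirely, which is the efficiency gain the paragraph describes.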
[0065] Figure 3 shows the structure of a system for changing a player skin in one embodiment. The system comprises a player client 10 and a server 20 that interacts with it, wherein the player client 10 includes:
[0066] The hash value calculation module 102, configured to calculate the hash value of the played video. The hash value uniquely identifies a video file; the module computes it for the currently playing video, for example by performing a logical operation on the video's content data.
[0067] The first transceiver module 104, used for sending data to the server 20 and receiving the data it returns, including sending the hash value computed by the hash value calculation module 102 to the server 20 and receiving the complete skin data returned by the server 20.
[0068] The skin display module 106 obtains the complete skin from the server 20 and displays the complete skin on the player.
[0069] Server 20 includes:
[0070] The second transceiver module 202, configured to receive the data sent by the player client 10 and return data to it, including receiving the video hash value sent by the first transceiver module 104 and sending the generated complete skin to the player client 10.
[0071] The video tag matching module 204, configured to obtain the hash value of the played video and find the video's attribute information according to it. Video attributes include main attributes and additional attributes. The main attributes describe the type of the video or the emotion it conveys; for example, the emotion can be defined as anger, fear, warmth, romance, sadness, etc., and the type as action, comedy, science fiction, war, etc. The additional attributes carry supplementary information, for example that the video is suitable for children, lovers, students, or teachers, or that its protagonists include Zhou Xingchi, Zhou Runfa, or Ge You. As before, only main attributes may be defined, without additional attributes.
[0072] Each video main attribute category or additional attribute category can be assigned a unique code; as described above, these are called the video main attribute category key code and the video additional attribute category key code, respectively. Each main attribute within a main attribute category is also assigned a unique code, the video main attribute value code, and each additional attribute within an additional attribute category is likewise assigned a video additional attribute value code.
[0073] In the server 20, every main attribute category of a video is represented by its main attribute category key code, every additional attribute category by its additional attribute category key code, and every specific attribute within a category by its unique code, namely the main attribute value code or the additional attribute value code.
[0074] The video database 206, used to record the predefined video attributes and to establish the correspondence between the hash value of a video, its attributes, and the weight assigned to each attribute. Since a video often has several main attribute categories, each with several main attributes, a weight can be assigned to each main attribute according to the video content; for example, for the movie "Avatar", the attributes and weights in EM are warmth (LV) 50%, romance (RM) 30%, and anger (AG) 20%. Weights can be assigned to additional attributes in the same way; of course, if no additional attributes are defined, no weights need to be assigned to them. After the video tag matching module 204 obtains the hash value of the currently playing video, it can look up the video's attribute information through the correspondence recorded in the video database 206, including the video's defined main attributes, additional attributes, and the weight assigned to each attribute.
[0075] The skin generation module 212, which generates a complete skin according to the mapping table. In this embodiment, the server 20 further includes a dynamic matching module 208, configured to calculate, according to the mapping table, the matching value between the played video and each skin attribute. Skin attributes include main skin attributes, which represent the skin type (for example solemn, fresh, cartoon, technology), and additional skin attributes, which help describe the elements of a skin, such as its color and background pattern. Each skin element has corresponding skin attributes, and each defined skin attribute can be given a unique identifying code.
[0076] The matching rule storage module 210, used to store the mapping table, which records the mapping relationship between video attributes and skin attributes and the matching value between them; the matching value expresses how well a video attribute and a skin attribute that map to each other match, and the larger the value, the better the match. Table 1 shows a preset mapping table in one embodiment. When video attributes or skin attributes are added or modified, the mapping table stored in the matching rule storage module 210 can be adjusted.
[0077] In one embodiment, the dynamic matching module 208 calculates the matching value between the played video and each skin attribute as the sum, over all video attributes, of the product of each video attribute's weight and its matching value with the skin attribute.
[0078] The skin generation module 212 is configured to select the skin attribute with the largest of the matching values calculated by the dynamic matching module 208, obtain the skin elements corresponding to that attribute, and construct a complete skin from them.
[0079] The skin database 214 is used to store skin elements and record the correspondence between skin elements and skin attributes. A large number of skin elements stored in the skin database 214 have corresponding skin attributes. These skin elements include skin frames, pictures, colors, shading, etc. The skin generation module 212 can construct a complete skin according to these skin elements.
[0080] After the skin generation module 212 constructs a complete skin, the second transceiver module 202 sends it to the player client 10, where the skin display module 106 displays it. The skin can thus change dynamically during playback without the user setting it manually, which improves convenience; and because the displayed skin elements are obtained through a series of matching steps, they suit the currently playing video, create a better viewing atmosphere, and fully meet the user's experience needs.
[0081] Figure 2 shows the player client 10 in one embodiment. In addition to the hash value calculation module 102, the first transceiver module 104, and the skin display module 106 described above, the player client 10 further includes a video recognition module 108 and a skin adjustment module 110, wherein:
[0082] The video recognition module 108, used for performing video content recognition on the played video and determining the skin-change moments and skin-change parameters from the recognition result. In this embodiment, the video recognition module 108 includes:
[0083] The temporal segmentation and key-frame extraction module 1080, used to perform temporal segmentation and key-frame extraction on the video to obtain the skin-change moments. The video structure is analyzed, temporal boundaries are detected, a representative sequence of video content is extracted, and key frames are extracted from it. Because the foreground and background of a video differ, sudden changes in visual features (such as color, region shape, and texture) produce large changes in feature descriptors such as the image histogram, the absolute inter-frame difference, and image edges. A threshold is set over this information; for example, when the inter-frame change at a key frame exceeds the threshold, it is determined that the skin should change at that frame's moment, yielding the skin-change moments. In practice, the skin-change moments can also be determined by combining the video duration, the temporal distribution, and manually set factors.
[0084] The image analysis module 1082, configured to analyze the image data in the key frames, obtain image features, and determine the skin-change parameters from those features. This includes extracting the color information and object outline information in the key frames, which can be done with conventional video recognition techniques; for example, the color value is determined from the extracted color information, and the shading to use is determined from the extracted outline information.
[0085] The skin adjustment module 110, configured to transform the complete skin according to the skin-change parameters at each skin-change moment and display it on the player. For example, the skin adjustment module 110 changes the skin's color according to the color value and its shading according to the determined shading, and displays the transformed skin on the player through the skin display module 106.
[0086] In one embodiment, the video database 206 is further configured to record the correspondence between the hash value of a video and the skin elements used by the complete skin displayed when it was played. The next time the same video is played, the corresponding skin elements can be looked up directly in the video database 206 by the video's hash value, and a complete skin can be constructed from them and displayed on the player without repeating the video recognition and matching calculations, which improves efficiency.
[0087] The above-mentioned embodiments only represent several embodiments of the present invention, and the descriptions thereof are specific and detailed, but should not be construed as limiting the scope of the patent of the present invention. It should be pointed out that for those of ordinary skill in the art, without departing from the concept of the present invention, several modifications and improvements can also be made, which all belong to the protection scope of the present invention. Therefore, the protection scope of the patent of the present invention should be subject to the appended claims.
