Equipment and method for generating emoticon based on shot image
A technology relating to emoticons and equipment for generating them, applied in the fields of image communication, color television components, and television system components. It addresses problems such as high data consumption, cumbersome editing, and large data traffic, and achieves effects such as enriched content and savings in traffic and time.
Active Publication Date: 2015-02-04
SAMSUNG GUANGZHOU MOBILE R&D CENT +1
Cites: 6 | Cited by: 17
AI-Extracted Technical Summary
Problems solved by technology
In addition, although some chat software supports image capture during video calls, this is limited to capturing images to obtain corresponding image files...
Method used
According to the above description of the present invention with reference to Figures 1 to 12, an image (for example, an expression image of the user) can be captured in real time according to the user's needs and converted into an emoticon, so that it can be edited and sent together with the text as part of the content input by the user. This not only allows the user to add personalized emoticons in real time, but also saves the time and traffic of transmitting emoticons and improves the user's chat experience. In addition, dedicated shooting, preview, confirmation (for example, clicking any position to confirm the shooting preview effect), and emoticon generation processes are provided, which further enrich the user experience and facilitate user operation.
As an example, in step S30, the expression effect picture confirmed by the user can be obtained, an emoticon for output can be generated based on the obtained expression effect picture, and the generated emoticon can be added to the message input by the user (for example, at the current cursor position). Here, the expression effect picture confirmed by the user is converted into the emoticon format (conforming to a predetermined length and width), so that the converted emoticon can be inserted into the message input by the user and arranged together with the text, requiring less data traffic and time than transmitting the picture itself.
As an example, the emoticon generating unit 30 can obtain the expression effect picture confirmed by the user, generate an emoticon for output based on the obtained expression effect picture, and add the generated emoticon to the message input by the user (for example, at the current cursor position). Here, the emoticon generating unit...
Abstract
The invention provides equipment and a method for generating an emoticon based on a captured image. The equipment comprises an image acquisition unit for acquiring an image captured by a photographing device, a preview unit for generating an expression effect picture for preview based on the acquired image and displaying it to the user, and an emoticon generating unit for generating an emoticon for output based on the expression effect picture and adding the generated emoticon to a message input by the user.
Application Domain
Technology Topic
Image
Examples
- Experimental program(1)
Example Embodiment
[0063] Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings, in which the same reference numerals denote the same components throughout.
[0064] Figure 1 is a block diagram of an apparatus for generating emoticons based on captured images while a user inputs a message, according to an exemplary embodiment of the present invention. As an example, while a user inputs a message on an electronic product such as a personal computer, smartphone, or tablet computer, the emoticon generating device shown in Figure 1 can be used to generate emoticons based on captured images.
[0065] As shown in Figure 1, the emoticon generating device includes: an image acquisition unit 10 for acquiring an image captured by the photographing device 5; a preview unit 20 for generating an expression effect picture for preview based on the acquired image and displaying the expression effect picture to the user; and an emoticon generating unit 30 for generating an emoticon for output based on the expression effect picture and adding the generated emoticon to the message input by the user. Here, the photographing device 5 may be included in the emoticon generating device, or may be a peripheral device connected to it. The image acquisition unit 10, preview unit 20, and emoticon generating unit 30 may be implemented by general-purpose hardware processors such as digital signal processors or field programmable gate arrays, by dedicated hardware processors such as dedicated chips, or entirely in software by a computer program, for example as modules of chat software or social media software installed on an electronic product.
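As an illustration only (not code from the patent), the cooperation of the three units can be sketched as a simple pipeline; the class and function names below are assumptions chosen for readability.

```python
# Minimal structural sketch of the device in Figure 1, assuming each unit is
# supplied as a callable. Names are illustrative, not taken from the patent.
class EmoticonPipeline:
    def __init__(self, acquire_image, make_preview, make_emoticon):
        self.acquire_image = acquire_image    # image acquisition unit 10
        self.make_preview = make_preview      # preview unit 20
        self.make_emoticon = make_emoticon    # emoticon generating unit 30

    def run(self):
        image = self.acquire_image()              # e.g. a frame from the camera
        effect_picture = self.make_preview(image)
        # In the patent the effect picture is shown to the user for confirmation
        # before this step; that interaction is omitted in this sketch.
        return self.make_emoticon(effect_picture)
```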
[0066] The emoticon generating device shown in Figure 1 can convert a captured image into the emoticon format while the user is inputting a message and insert it into the message being input, so that the emoticon is arranged together with the text, saving transmission traffic and time.
[0067] Specifically, the image acquisition unit 10 is used to acquire an image captured by the photographing device 5. Here, as a preferred manner, the acquired image may show the user's expression, and may be a static image or a dynamic image. After the photographing device 5 is turned on and starts capturing images, the image acquisition unit 10 may automatically acquire the captured image in real time, or acquire it according to the user's control. For example, the photographing device 5 may capture continuous moving images (that is, video). Accordingly, the image acquisition unit 10 may acquire the captured video automatically or under the user's control, may extract single static frames from it at predetermined intervals, or may combine several static images captured continuously within a predetermined time into a dynamic image such as a GIF. The photographing device 5 itself may also be set to capture a single static image at predetermined intervals, or to capture several static images continuously within a predetermined time so that they can be synthesized into a dynamic image.
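As a concrete illustration of the last option, the sketch below synthesizes a handful of captured still frames into an animated GIF using Pillow; the file names, frame duration, and use of Pillow are assumptions, not details from the patent.

```python
# Hedged sketch: combine still frames captured within a predetermined time
# window into a dynamic (GIF) image, one possible behaviour of the image
# acquisition unit. Assumes Pillow is available.
from PIL import Image

def frames_to_gif(frame_paths, out_path="expression.gif", frame_ms=200):
    frames = [Image.open(p).convert("RGB") for p in frame_paths]
    # save_all + append_images is Pillow's way of writing an animated GIF
    frames[0].save(out_path, save_all=True, append_images=frames[1:],
                   duration=frame_ms, loop=0)
    return out_path
```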
[0068] The photographing device 5, as an additional component, may adopt an expression tracking method when photographing the user's expression so as to capture it accurately. Here, as another additional component, the emoticon generating device shown in Figure 1 may further include a prompting unit (not shown) for analyzing the message input by the user and prompting the user to make a corresponding expression according to the analysis result. Specifically, the prompting unit can analyze the meaning of the input message using semantic analysis technology and, after the photographing device 5 is turned on, prompt the user to make an expression corresponding to the analysis result. For example, when the message input by the user is "I'm so happy I could die", the prompting unit can determine that the sentiment of the input message is joy and then prompt the user, for example by voice, to make a joyful expression, such as outputting "please smile". Or, when the message input by the user is "I am sad", the prompting unit can determine that the sentiment of the input message is sadness and prompt the user by voice to make a sad expression, for example outputting "please make a sad expression". Alternatively, the prompting unit may have no semantic analysis function and may simply prompt the user by voice, after the photographing device 5 is turned on, that expression shooting is about to start.
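A minimal sketch of such a prompting unit is given below, with a simple keyword lookup standing in for the semantic analysis the patent refers to; the keywords and prompt strings are purely illustrative.

```python
# Hypothetical stand-in for the prompting unit's semantic analysis: map a few
# sentiment keywords to voice prompts. A real implementation would use a
# proper semantic/sentiment analyzer.
PROMPTS = {
    "happy": "Please smile for the camera.",
    "glad": "Please smile for the camera.",
    "sad": "Please make a sad expression.",
}

def expression_prompt(message: str) -> str:
    text = message.lower()
    for keyword, prompt in PROMPTS.items():
        if keyword in text:
            return prompt
    # Fallback used when no sentiment is recognized (or when semantic analysis
    # is absent): just announce that expression shooting is about to start.
    return "Expression shooting is about to start."
```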
[0069] The photographing device 5 may include a front camera or a rear camera. In this case, the image acquisition unit 10 acquires an image captured by the front camera or by the rear camera. The photographing device 5 may also include both a front camera and a rear camera, in which case the image acquisition unit 10 acquires a composite of the images captured by the two cameras. Figure 2 shows an example of a composite image acquired by the image acquisition unit 10 according to an exemplary embodiment of the present invention. As shown in Figure 2, the portrait of a lady taken by the front camera and the image of a child taken by the rear camera are combined into a single image by the image acquisition unit 10.
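The following is a hedged sketch of compositing a front-camera frame and a rear-camera frame into one image, as in Figure 2; the side-by-side layout and the use of Pillow are illustrative assumptions, since the patent does not fix a layout.

```python
# Sketch: paste a front-camera image and a rear-camera image side by side on
# a white canvas to form a single composite image.
from PIL import Image

def composite_front_rear(front_path: str, rear_path: str) -> Image.Image:
    front = Image.open(front_path).convert("RGB")
    rear = Image.open(rear_path).convert("RGB")
    height = max(front.height, rear.height)
    canvas = Image.new("RGB", (front.width + rear.width, height), "white")
    canvas.paste(front, (0, 0))
    canvas.paste(rear, (front.width, 0))
    return canvas
```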
[0070] As an example, the photographing device 5 may be turned on according to the user's expression shooting instruction and start capturing images. Specifically, while inputting a message, when the user wants to insert an emoticon based on a captured image, the user may input an expression shooting instruction. For example, the user may input the instruction through at least one of the following operations: double-clicking any position in the message input box by touch or with a mouse; clicking a shooting menu item or shooting button in the input box by touch or with a mouse; performing a sliding operation in the area of the input box (for example, with a finger or a stylus); or issuing the expression shooting command by voice. In this case, the image acquisition unit 10 can automatically acquire, in real time, the image captured by the photographing device 5. After the emoticon based on the captured image has been generated and added to the input message, or when the user inputs an instruction to abandon the emoticon, the photographing device 5 may be turned off.
[0071] As another example, the photographing device 5 may be turned on and start capturing images when the user enters the interface for inputting a message or begins to input a message. In this case, the photographing device 5 may remain on for the whole time the user is inputting messages (for example, while chat software is in use). The image acquisition unit 10 may then automatically acquire the captured image in real time or acquire it according to the user's control.
[0072] The preview unit 20 is configured to generate an expression effect picture for preview based on the acquired image and to display it to the user. For example, the preview unit 20 may generate the expression effect picture by embedding the acquired image in a predetermined expression frame and display it in a predetermined area of the screen. As an example, the predetermined expression frame may be a hollow circle; however, the hollow circle does not limit the scope of the present invention. Figure 3 shows an example of embedding a captured image in a predetermined expression frame according to an exemplary embodiment of the present invention. As shown in Figure 3, expression frames of various shapes can be used to generate expression effect pictures. Here, the specifications of the expression frame (for example, its size, length, and width) can serve as constraints for converting the acquired image into an emoticon. In addition, the predetermined area may be located in the message input box, for example at or around the cursor position, or it may be a preview window set separately from the input box. In a preferred mode, the size and position of the preview window can be set and adjusted by the user.
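As a sketch of the hollow-circle case, the snippet below crops the acquired image into a circular preview using a Pillow mask; the 128-pixel size and the use of Pillow are assumptions.

```python
# Hedged sketch of the preview unit embedding the acquired image in a
# hollow-circle expression frame: the circular mask keeps only the pixels
# inside the circle, and everything outside becomes transparent.
from PIL import Image, ImageDraw

def circular_effect_picture(image_path: str, size: int = 128) -> Image.Image:
    img = Image.open(image_path).convert("RGB").resize((size, size))
    mask = Image.new("L", (size, size), 0)
    ImageDraw.Draw(mask).ellipse((0, 0, size - 1, size - 1), fill=255)
    framed = Image.new("RGBA", (size, size), (0, 0, 0, 0))
    framed.paste(img, (0, 0), mask)   # paste through the circular mask
    return framed
```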
[0073] In addition, the emoticon generating unit 30 is used to generate an emoticon for output based on the expression effect picture and to add the generated emoticon to the message input by the user.
[0074] As an example, the emoticon generating unit 30 may obtain the expression effect picture confirmed by the user, generate an emoticon for output based on it, and add the generated emoticon to the message input by the user (for example, at the position where the cursor is currently located). Here, the emoticon generating unit 30 converts the confirmed expression effect picture into the emoticon format (conforming to a predetermined length and width), so that the converted emoticon can be inserted into the message and arranged together with the text, requiring less data traffic and time than transmitting the picture itself.
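A minimal sketch of that format conversion follows, assuming the "predetermined length and width" is a fixed square size such as 64×64 pixels (an assumption; the patent does not state a value).

```python
# Hedged sketch: scale a confirmed effect picture down to a predetermined
# emoticon size so it can be laid out inline with text.
from PIL import Image

EMOTICON_SIZE = (64, 64)  # assumed "predetermined length and width"

def to_emoticon(effect_picture: Image.Image) -> Image.Image:
    return effect_picture.resize(EMOTICON_SIZE)
```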
[0075] For example, the emoticon generating unit 30 may obtain the expression effect picture confirmed by the user through at least one of the following operations: clicking the expression effect picture by touch or with a mouse; clicking a confirmation menu item or confirmation button set on the screen; pressing a confirmation key or a side key; issuing a confirmation command by voice; or clicking anywhere on the screen by touch or with a mouse. By setting appropriate confirmation operations for different situations, the user can conveniently confirm, while the expression is being photographed, the expression preview effect on which the final emoticon will be based.
[0076] As another example, in the case where the photographing device 5 is turned on according to the user's expression shooting instruction to start capturing images, the emoticon generating unit 30 may directly select a standard emoticon as the output emoticon based on the degree of similarity between the expression effect picture and pre-stored standard emoticons, without needing the user to confirm the expression effect picture.
[0077] Specifically, in this case, the emoticon generating device shown in Figure 1 may further include an emoticon storage unit (not shown) for storing at least one standard emoticon generated in advance based on a captured image. As an example, the standard emoticons may be typical emoticons representing different emotions, the user's favorite emoticons, and so on, and they can be continuously updated, that is, subsequently generated emoticons are added to the emoticon storage unit as new standard emoticons or replace the original ones. For example, a captured image can be processed (by extracting the facial expression in the image, scaling the extracted part, and so on) to generate a standard emoticon conforming to the emoticon format. These standard emoticons can be stored in a dedicated standard emoticon library or in the default emoticon library. Figure 12 shows an example of adding generated standard emoticons to the existing default emoticon library; as can be seen there, the last two emoticons are standard emoticons newly added in this custom manner.
[0078] Correspondingly, the emoticon generating unit 30 may compare the previewed expression effect picture with at least one standard emoticon in the emoticon storage unit, and when it determines that the similarity between the expression effect picture and one or more of the standard emoticons exceeds a threshold (for example, a similarity of 80% or more), it may add the most similar standard emoticon, as the emoticon for output, to the message input by the user. Here, the emoticon generating unit 30 may compare the expression effect picture and a standard emoticon based on texture features, color features, or brightness features, and obtain a similarity value reflecting their degree of similarity.
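As one illustrative way to obtain such a similarity value, the sketch below uses a normalized color-histogram intersection, i.e. a color-feature comparison; the feature choice, the 64×64 working size, and the 0.8 threshold are assumptions that mirror, but do not reproduce, the patent's unspecified method.

```python
# Hedged sketch of matching the effect picture against stored standard
# emoticons by a color-feature similarity (histogram intersection). Texture
# or brightness features could be used instead, as the patent notes.
from PIL import Image

def color_similarity(a: Image.Image, b: Image.Image) -> float:
    a = a.convert("RGB").resize((64, 64))
    b = b.convert("RGB").resize((64, 64))
    ha, hb = a.histogram(), b.histogram()        # 256 bins per RGB channel
    intersection = sum(min(x, y) for x, y in zip(ha, hb))
    return intersection / (64 * 64 * 3)          # normalize by total counts

def pick_standard_emoticon(effect, standards, threshold=0.8):
    if not standards:
        return None
    scored = [(color_similarity(effect, s), s) for s in standards]
    best_score, best = max(scored, key=lambda pair: pair[0])
    return best if best_score >= threshold else None
```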
[0079] In this way, the corresponding emoticon can be generated without a confirmation operation by the user, which not only reflects the specific expression that was captured but also speeds up emoticon generation and simplifies the user's operations.
[0080] Below, a method for generating an emoticon based on a captured image while a user inputs a message, according to an exemplary embodiment of the present invention, will be described with reference to Figures 4 to 11. The method may be performed by the emoticon generating device shown in Figure 1, or implemented by a computer program; for example, it may be executed by a message-input application installed on an electronic product.
[0081] Figure 4 is a flowchart of a method for generating an emoticon based on a captured image while a user inputs a message, according to an exemplary embodiment of the present invention. As an example, with the method shown in Figure 4, an emoticon can be generated based on a captured image while the user inputs a message on an electronic product such as a personal computer, smartphone, or tablet computer.
[0082] Referring to Figure 4, in step S10 the captured image is acquired. Here, as a preferred manner, the acquired image may show the user's expression and may be a static image or a dynamic image. Specifically, after the photographing device is turned on and starts capturing images, the captured image can be acquired automatically in real time or according to the user's control. For example, the photographing device may capture continuous moving images (that is, video). Accordingly, in step S10 the captured video may be acquired automatically or under the user's control, single static frames may be extracted from it at predetermined intervals, or several static images captured continuously within a predetermined time may be synthesized into a dynamic image such as a GIF. The photographing device itself may also be set to capture a single static image at predetermined intervals or to capture several static images continuously within a predetermined time for synthesis into a dynamic image.
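For illustration, here is a sketch of step S10 under the assumption that the camera is read with OpenCV; the device index, interval, and frame count are illustrative parameters, not values from the patent.

```python
# Hedged sketch of step S10: grab one or more still frames from the camera at
# a fixed interval; the frames can later be kept as single images or combined
# into a dynamic image. Assumes OpenCV (cv2) is available.
import time
import cv2

def capture_frames(count: int = 1, interval_s: float = 0.5, device: int = 0):
    cap = cv2.VideoCapture(device)
    frames = []
    try:
        for _ in range(count):
            ok, frame = cap.read()
            if ok:
                frames.append(frame)
            time.sleep(interval_s)
    finally:
        cap.release()
    return frames
```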
[0083] As an additional step before step S10, the method of Figure 4 may further include capturing an image. Here, a photographing device provided as a built-in unit or as a peripheral device may be used to capture the image (for example, a static or dynamic expression image of the user). As an example, an expression tracking method may be used when photographing the user's expression so as to capture it accurately.
[0084] As another additional step, the emoticon generating method of Figure 4 may further include analyzing the message input by the user and prompting the user to make a corresponding expression according to the analysis result. Specifically, the meaning of the input message can be analyzed using semantic analysis technology, and after the photographing device is turned on the user is prompted to make an expression corresponding to the analysis result. For example, when the message input by the user is "I'm so happy I could die", it can be determined that the sentiment of the message is joy, and the user is then prompted by voice to make a joyful expression, for example by outputting "please smile". Or, when the message input by the user is "I am sad", it can be determined that the sentiment is sadness, and the user is prompted by voice to make a sad expression, for example by outputting "please make a sad expression". Alternatively, the prompting step may omit semantic analysis and simply prompt the user by voice, after the photographing device is turned on, that expression shooting is about to start.
[0085] According to an exemplary embodiment of the present invention, the photographing device may include a front camera or a rear camera, in which case the image acquired in step S10 is an image taken by the front camera or by the rear camera. The photographing device may also include both a front camera and a rear camera, in which case the image acquired in step S10 is a composite of the images captured by the two cameras.
[0086] As an example, in step S10 the photographing device may be turned on according to the user's expression shooting instruction and start capturing images. Specifically, while inputting a message, when the user wants to insert an emoticon based on a captured image, the user may input an expression shooting instruction, for example through at least one of the following operations: double-clicking any position in the message input box by touch or with a mouse; clicking a shooting menu item or shooting button in the input box by touch or with a mouse; performing a sliding operation in the area of the input box; or issuing the expression shooting command by voice. In this case, the image captured by the photographing device can be acquired automatically in real time. Accordingly, after the emoticon based on the captured image has been generated and added to the input message, or when the user inputs an instruction to abandon the emoticon, the photographing device may be turned off.
[0087] As another example, in step S10 the photographing device may be turned on and start capturing images when the user enters the interface for inputting a message or begins to input a message. In this case, the photographing device may remain on while the user is inputting messages (for example, while chat software is in use), and the captured image can be acquired automatically in real time or according to the user's control.
[0088] Next, in step S20, an expression effect picture for preview is generated based on the acquired image and displayed to the user. For example, the expression effect picture can be generated by embedding the acquired image in a predetermined expression frame and displayed in a predetermined area of the screen. As an example, the predetermined expression frame may be a hollow circle; however, the hollow circle does not limit the scope of the present invention, and expression frames of various shapes may be used. Here, the specifications of the expression frame (for example, its size, length, and width) can serve as constraints for converting the acquired image into an emoticon. In addition, the predetermined area may be located in the message input box, for example at or around the cursor position, or it may be a preview window set separately from the input box; in a preferred mode, the size and position of the preview window can be set and adjusted by the user.
[0089] Then, in step S30, an emoticon for output is generated based on the expression effect picture, and the generated emoticon is added to the message input by the user.
[0090] As an example, in step S30 the expression effect picture confirmed by the user may be obtained, an emoticon for output generated based on it, and the generated emoticon added to the message input by the user (for example, at the current cursor position). Here, the confirmed expression effect picture is converted into the emoticon format (conforming to a predetermined length and width), so that the converted emoticon can be inserted into the message and arranged together with the text, requiring less data traffic and time than transmitting the picture itself.
[0091] For example, the expression effect picture confirmed by the user can be obtained through at least one of the following operations: clicking the expression effect picture by touch or with a mouse; clicking a confirmation menu item or confirmation button set on the screen; pressing a confirmation key or a side key; issuing a confirmation command by voice; or clicking anywhere on the screen by touch or with a mouse. By setting appropriate confirmation operations for different situations, the user can conveniently confirm, while the expression is being photographed, the expression preview effect on which the final emoticon will be based.
[0092] As another example, when the photographing device is turned on according to the user's expression shooting instruction to start capturing images, it may not be necessary to receive a confirmed expression effect picture from the user; instead, a standard emoticon can be selected directly as the output emoticon based on the degree of similarity between the expression effect picture and the pre-stored standard emoticons.
[0093] Specifically, in this case, the emoticon generating method of Figure 4 may further include a step of storing at least one standard emoticon generated in advance based on a captured image. As an example, the standard emoticons may be typical emoticons representing different emotions, the user's favorite emoticons, and so on, and they can be continuously updated, that is, subsequently generated emoticons are stored as new standard emoticons or replace the original ones.
[0094] Correspondingly, in step S30, the previewed expression effect picture may be compared with the at least one stored standard emoticon, and when it is determined that the similarity between the expression effect picture and one or more of the standard emoticons exceeds a threshold (for example, a similarity of 80% or more), the most similar standard emoticon can be added to the message input by the user as the emoticon for output. Here, the similarity between the expression effect picture and a standard emoticon can be evaluated based on texture features, color features, or brightness features, yielding a similarity value that reflects their degree of similarity.
[0095] In this way, the corresponding emoticon can be generated without a confirmation operation by the user, which not only reflects the specific expression that was captured but also speeds up emoticon generation and simplifies the user's operations.
[0096] In the emoticon generating method of Figure 4, the captured image can be converted into the emoticon format while the user is inputting a message and inserted into the message being input, so that the emoticon is arranged together with the text, saving transmission traffic and time.
[0097] Figure 5 is a flowchart of a process of generating an emoticon by the emoticon generating device according to an exemplary embodiment of the present invention.
[0098] Referring to Figure 5, in step S101, while the user is inputting a message, the photographing device 5 receives the user's expression shooting instruction. As an example, the expression shooting instruction may be given by at least one of the following: double-clicking any position in the message input box by touch or with a mouse; clicking a shooting menu item or shooting button in the input box by touch or with a mouse; performing a sliding operation in the area of the input box; or issuing the expression shooting command by voice.
[0099] After the photographing device 5 receives the user's expression shooting instruction, in step S102 the photographing device 5 is turned on and starts capturing images. As an example, the photographing device 5 may capture an expression image of the user.
[0100] Then, in step S103, the image acquisition unit 10 acquires an image captured by the photographing device 5; the acquired image may be a video of the user's expression, a single static image, or a dynamic image synthesized from multiple images.
[0101] In step S104, the preview unit 20 generates an expression effect picture for preview based on the acquired image, for example by embedding the acquired image in a predetermined expression frame (for example, a hollow circle). In step S105, the preview unit 20 displays the generated expression effect picture in place of the cursor in the input message. As an example, the hollow circle can be converted from the current cursor (that is, the cursor position at the moment the user inputs the expression shooting instruction): the current cursor becomes a hollow circle, the acquired image is embedded in it by the preview unit 20, and the display of the expression effect picture thus replaces the display of the cursor. Examples are shown in Figure 8, which illustrates the expression effect pictures generated when the cursor is at the end of the message and in the middle of the message, respectively.
[0102] Next, in step S106, the emoticon generating unit 30 generates an emoticon for output based on the expression effect picture and adds it to the message input by the user (that is, at the position the cursor occupied in the message when the user input the expression shooting instruction). Here, as an example, the emoticon generating unit 30 may obtain the expression effect picture confirmed by the user and generate the output emoticon from it. Specifically, after the preview unit 20 displays the expression effect picture, when the user clicks any position on the screen by touch or with a mouse, the emoticon generating unit 30 converts the expression effect picture at that moment into an output emoticon and adds it to the message at the position the cursor occupied before it was converted into the hollow circle. As another example, the emoticon generating unit 30 may compare the expression effect picture with at least one stored standard emoticon, and when it determines that the similarity between the expression effect picture and one or more of the standard emoticons exceeds a threshold, it adds the most similar standard emoticon to the message as the emoticon for output.
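A toy sketch of that insertion step is given below, modelling the message under composition as a list of tokens (text fragments and emoticons); this data model is an assumption for illustration only.

```python
# Hedged sketch: place the generated emoticon at the position the cursor
# occupied when the expression shooting instruction was given. The token-list
# representation of the message is an illustrative assumption.
def insert_emoticon(message_tokens: list, cursor_index: int, emoticon) -> list:
    return message_tokens[:cursor_index] + [emoticon] + message_tokens[cursor_index:]

# Example: insert_emoticon(["Hi ", "there"], 1, "<emoticon>")
# -> ["Hi ", "<emoticon>", "there"]
```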
[0103] After the emoticon is generated and added to the message, the photographing device 5 is turned off in step S107, and then, in step S108, the cursor is displayed again at the position it occupied when the user input the expression shooting instruction.
[0104] In the process shown in Figure 5, the photographing device 5 is turned on only when the user wants to input an emoticon, and is turned off after the emoticon has been generated and added to the message. In addition, in this process, after the photographing device 5 has been turned on, whenever the user inputs an instruction to abandon the emoticon, the photographing device 5 is immediately turned off, the emoticon generation process is terminated, and the display of the cursor is restored.
[0105] Figure 6 is a flowchart of a process of generating an emoticon by the emoticon generating device according to another exemplary embodiment of the present invention.
[0106] Referring to Figure 6, in step S111, the user enters the interface for inputting a message or starts to input a message.
[0107] Next, in step S112, the photographing device 5 is turned on to start photographing an image. As an example, the photographing device 5 may photograph an expression image of the user.
[0108] Then, in step S113, the image acquisition unit 10 acquires an image captured by the photographing device 5; the acquired image may be a video of the user's expression, a single static image, or a dynamic image synthesized from multiple images.
[0109] In step S114, the preview unit 20 generates an expression effect picture for preview based on the acquired image, for example by embedding the acquired image in a predetermined expression frame (for example, a hollow circle).
[0110] In step S115, the preview unit 20 determines whether the cursor is at the end of the input message. If it is, then in step S116 the preview unit 20 displays the generated expression effect picture after the cursor. If it is not, then in step S117 the preview unit 20 displays the expression effect picture on the line below the cursor. Examples are shown in Figure 9, which illustrates the expression effect pictures generated when the cursor is at the end of the message and in the middle of the message, respectively.
[0111] Next, in step S118, the emoticon generating unit 30 generates an emoticon for output based on the expression effect picture and adds it to the message input by the user (that is, at the current position of the cursor in the input message). Here, as an example, the emoticon generating unit 30 may obtain the expression effect picture confirmed by the user and generate the output emoticon from it. Specifically, after the preview unit 20 displays the expression effect picture, when the user clicks any position on the screen by touch or with a mouse, the emoticon generating unit 30 converts the expression effect picture at that moment into an output emoticon and adds it to the message at the current cursor position.
[0112] In the processing shown in Figure 6, as soon as the user enters the message input interface or starts to input a message, the photographing device 5 is turned on and keeps capturing images. When the cursor is not at the end of the input message, the expression preview is displayed on the line below the cursor.
[0113] In practice, however, when the user moves the cursor from the end of the message to the middle, it is usually to edit the text rather than to insert an emoticon. In that case, the expression preview may not need to be displayed.
[0114] To this end, Figure 7 is a flowchart of a process of generating an emoticon by the emoticon generating device according to yet another exemplary embodiment of the present invention.
[0116] Referring to Figure 7, in step S121, the user enters the interface for inputting a message or starts to input a message.
[0117] Next, in step S122, the photographing device 5 is turned on to start photographing an image. As an example, the photographing device 5 may photograph an expression image of the user.
[0118] Then, in step S123, the image acquisition unit 10 acquires an image captured by the photographing device 5; the acquired image may be a video of the user's expression, a single static image, or a dynamic image synthesized from multiple images.
[0119] In step S124, the preview unit 20 generates an expression effect picture for preview based on the acquired image, for example by embedding the acquired image in a predetermined expression frame (for example, a hollow circle).
[0120] In step S125, the preview unit 20 determines whether the cursor is at the end of the input message. If it is, then in step S126 the preview unit 20 displays the generated expression effect picture after the cursor. If it is not, then in step S127 the preview unit 20 determines whether the user has input a preview instruction; when it determines that the user has input a preview instruction, the preview unit 20, in step S128, displays the expression effect picture on the line below the cursor. Examples are shown in Figure 10, which illustrates the expression effect picture generated when the cursor is at the end of the message, and the case where the cursor is in the middle of the message and the user has not input a preview instruction, respectively.
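The decision logic of steps S125 to S128 can be summarized in the small sketch below; the return values are illustrative labels, not terms from the patent.

```python
# Hedged sketch of the preview-placement decision in Figure 7: show the preview
# after the cursor when the cursor is at the end of the message, show it on the
# next line only if a preview instruction was given, and otherwise hide it.
def decide_preview(cursor_at_end: bool, preview_requested: bool) -> str:
    if cursor_at_end:
        return "show_after_cursor"   # step S126
    if preview_requested:
        return "show_on_next_line"   # step S128
    return "hide_preview"            # cursor was moved only to edit text
```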
[0121] Next, in step S129, the emoticon generating unit 30 generates an emoticon for output based on the expression effect picture and adds it to the message input by the user (that is, at the current position of the cursor in the input message). Here, as an example, the emoticon generating unit 30 may obtain the expression effect picture confirmed by the user and generate the output emoticon from it. Specifically, after the preview unit 20 displays the expression effect picture, when the user clicks any position on the screen by touch or with a mouse, the emoticon generating unit 30 converts the expression effect picture at that moment into an output emoticon and adds it to the message at the current cursor position.
[0122] In the processing shown in Figure 7, as soon as the user enters the message input interface or starts to input a message, the photographing device 5 is turned on and keeps capturing images. When the cursor is not at the end of the input message, a preview instruction from the user is required before the expression preview is displayed on the line below the cursor.
[0123] The above examples only explain the emoticon generating device and method according to the exemplary embodiments of the present invention and do not limit the present invention. As mentioned above, those skilled in the art can implement the present invention with various instruction input methods, image confirmation methods, and so on. For example, in addition to displaying the expression effect picture as shown in Figures 8 to 10, the expression effect picture can also be displayed, regardless of the cursor position, in a separate preview window independent of the message input box. As shown in Figure 11, the expression effect picture is displayed in such a preview window, whose size and position can be set or adjusted by the user. Furthermore, after the user confirms a particular expression effect picture by clicking any position in the preview window, the emoticon generating unit 30 can convert that expression effect picture according to parameters conforming to the emoticon format (such as size and shape), thereby generating the emoticon for output, and add the generated emoticon to the message at the most recent cursor position.
[0124] From the above description of the present invention with reference to Figures 1 to 12, it can be seen that, in the emoticon generating device and method according to the exemplary embodiments of the present invention, an image (for example, an expression image of the user) can be captured in real time according to the user's needs and converted into an emoticon that is edited and sent together with the text as part of the user's input. This not only allows the user to add personalized emoticons in real time, but also saves the time and traffic of transmitting emoticons and improves the user's chat experience. In addition, dedicated shooting, preview, confirmation (for example, clicking any position to confirm the shooting preview effect), and emoticon generation processes are provided, which further enrich the user experience and facilitate user operation.
[0125] It should be pointed out that, according to implementation needs, each step described in this application can be split into more steps, and two or more steps, or parts of their operations, can be combined into new steps to achieve the purpose of the present invention.
[0126] The above-described method according to the present invention can be implemented in hardware or firmware, or as software or computer code that can be stored in a recording medium (such as a CD-ROM, RAM, floppy disk, hard disk, or magneto-optical disk), or as computer code originally stored in a remote recording medium or a non-transitory machine-readable medium and downloaded over a network to be stored in a local recording medium, so that the method described here can be rendered as software, stored on such a recording medium, that is processed by a general-purpose computer, a dedicated processor, or programmable or dedicated hardware (such as an ASIC or FPGA). It will be understood that a computer, a processor, a microprocessor controller, or programmable hardware includes a storage component (for example, RAM, ROM, or flash memory) that can store or receive software or computer code, and that when the software or computer code is accessed and executed by the computer, processor, or hardware, the processing method described here is implemented. Furthermore, when a general-purpose computer accesses code for implementing the processing shown here, execution of that code converts the general-purpose computer into a dedicated computer for executing that processing.
[0127] Although the present invention has been shown and described with reference to preferred embodiments, those skilled in the art will understand that various modifications and changes may be made to these embodiments without departing from the spirit and scope of the present invention as defined by the claims.