Video call special effect control method, terminal and computer readable storage medium

A technology relating to video calls and their control, applied in the field of communications, which can solve problems such as cumbersome operation of smart devices and low user-experience satisfaction

Inactive Publication Date: 2018-03-30
NUBIA TECHNOLOGY CO LTD
Cites: 10 | Cited by: 21

AI-Extracted Technical Summary

Problems solved by technology

[0004] The technical problem to be solved by the present invention lies in: the existing contro...

Method used

Insert new image content in the collected image information, including but not limited to inserting new text or picture, inserting background or scene, increasing dynamic effect, increasing dynamic expression;
[0058] The A/V input unit 104 is used to receive audio or video signals. The A/V input unit 104 may include a graphics processing unit (Graphics Processing Unit, GPU) 1041 and a microphone 1042; the graphics processing unit 1041 processes image data of still pictures or video. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage media) or sent via the radio frequency unit 101 or the WiFi module 102. The microphone 10...

Abstract

The invention discloses a video call special effect control method, a terminal and a computer-readable storage medium. The video call special effect control method comprises the following steps: during a video call between a terminal and an opposite-end terminal, obtaining call information of the current video call; automatically generating special effect information matching the obtained call information; and sending the generated special effect information to the opposite-end terminal. The invention also discloses the terminal and a computer-readable storage medium. According to the above scheme, during terminal communication, matching special effect information is generated automatically according to the current call information and sent to the opposite-end terminal for display, so that what the opposite-end terminal receives is no longer merely plain image content and voice messages. The content of the video call can therefore be greatly enriched, the enjoyment of video calls increased, and user-experience satisfaction improved.

Application Domain

Substation equipment · Two-way working systems (+1)

Technology Topic

Special effects · Speech sound (+2)


Examples

  • Experimental program(3)

Example

[0076] First embodiment
[0077] The video call special effect control method provided in this embodiment is applicable to various electronic devices capable of video calls, including but not limited to smart phones, computers, tablets, and e-readers. The video call special effect control method in this embodiment is shown in Figure 3 and includes:
[0078] S301: During a video call between the terminal and the opposite terminal, obtain call information of the current video call.
[0079] In this embodiment, after detecting a video call between the terminal and the opposite terminal, the call information can be collected immediately, or the call information can be collected after the relevant special effect activation condition is triggered. For example, in this embodiment, at least one of the following conditions may be used as the special effect activation condition:
[0080] Condition 1: A special effect opening instruction is received; the special effect opening instruction can be issued by the user manually, by voice, or by other interactive methods;
[0081] Condition 2: The opposite contact of the video call is in a preset special effect processing contact list. In this case, the user can pre-set a special effect processing contact list, where each contact in the list is a contact for whom special effects processing is required during a video call. During a video call it can be determined whether the peer contact is in the list; if so, special effects processing can be enabled; otherwise, no special effects processing is performed during the video call;
[0082] Condition 3: The current time is within a preset special effect processing time period. In this case, when the video call starts, it can be judged whether the current time is within the preset special effect processing time period; if so, special effect processing can be turned on; otherwise, no special effect processing is performed during the video call. The preset special effect processing time range in this embodiment can support user settings;
[0083] Condition 4: The current position of the terminal is within a preset special effect processing position range. In this case, the current position of the terminal can be obtained when the video call starts, and it is judged whether the current position is within the preset special effect processing position range; if so, special effect processing can be enabled; otherwise, no special effect processing is performed during the video call. The preset special effect processing position range in this embodiment can also support user settings.
[0084] It should be understood that the above conditions in this embodiment can be used alone or in combination. For example, conditions 3 and 4 can be combined: during a video call it is then necessary to determine both whether the current time is within the preset special effect processing time period and whether the current position is within the preset special effect processing position range, and only when both are satisfied is special effect processing turned on and the call information obtained; otherwise, no special effect processing is performed. The foregoing conditions can be flexibly used in any combination, and the trigger conditions in this embodiment are not limited to the four conditions in the foregoing example.
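As an illustration of how the activation conditions above might be combined, the following minimal Python sketch checks conditions 1 and 2 individually and conditions 3 and 4 jointly; the type names, field names and combination policy are hypothetical assumptions, not anything prescribed by the patent.

```python
from dataclasses import dataclass, field
from datetime import time, datetime
from typing import Optional, Set, Tuple

# Hypothetical container for the user-configurable activation settings
# described above (conditions 1-4); field names are illustrative only.
@dataclass
class EffectActivationSettings:
    effect_contacts: Set[str] = field(default_factory=set)               # condition 2
    time_window: Optional[Tuple[time, time]] = None                      # condition 3
    location_bounds: Optional[Tuple[float, float, float, float]] = None  # condition 4: (min_lat, min_lon, max_lat, max_lon)

def effects_enabled(settings: EffectActivationSettings,
                    open_instruction_received: bool,
                    peer_contact: str,
                    now: datetime,
                    position: Tuple[float, float]) -> bool:
    """Return True if special-effect processing should be switched on.

    The conditions may be used alone or combined; here conditions 3 and 4
    are combined (both must hold) while conditions 1 and 2 each suffice
    on their own - one of many possible policies.
    """
    if open_instruction_received:                       # condition 1
        return True
    if peer_contact in settings.effect_contacts:        # condition 2
        return True
    in_window = (settings.time_window is not None and
                 settings.time_window[0] <= now.time() <= settings.time_window[1])
    lat, lon = position
    in_bounds = (settings.location_bounds is not None and
                 settings.location_bounds[0] <= lat <= settings.location_bounds[2] and
                 settings.location_bounds[1] <= lon <= settings.location_bounds[3])
    return in_window and in_bounds                      # conditions 3 + 4 combined
```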
[0085] The terminal in this embodiment may be the initiator of the video call or the called party. There may be one opposite terminal (a one-to-one two-party video call) or multiple opposite terminals (a multi-party video call). In addition, the display interface during a video call in this embodiment can be flexibly set. For example, in the two-party video call shown in Figure 4, the video display interface on one party's terminal can display that party's video picture and the other party's video picture at the same time, and the two pictures can be switched back and forth between a large-window display and a small-window display. In this embodiment, the image information of the local terminal may also not be displayed on the local terminal, with the image information of the opposite terminal displayed in full screen instead. In the multi-party video call shown in Figure 5, the video display interface on one party's terminal can display the image information of all the other participants in the video call; of course, it can also selectively display the image information of only some of the other parties, and display the image information of the local end on the local terminal. It can be seen that the specific display mode of the video call display interface in this embodiment can be flexibly set and adjusted according to requirements.
[0086] The content contained in the call information obtained in this embodiment can be flexibly set; for example, it may include but is not limited to at least one of time information, location information, contact information, call content, and image content information, and the generated special effect information matches the obtained call information.
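A minimal sketch of one possible call-information record covering the items just listed (time, location, contact, call content, image content); the structure and field names are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

# Illustrative structure only; the patent does not prescribe a concrete format.
@dataclass
class CallInfo:
    timestamp: Optional[datetime] = None                    # time information
    local_position: Optional[Tuple[float, float]] = None    # location of the local terminal
    peer_position: Optional[Tuple[float, float]] = None     # location of the opposite terminal
    contact_id: Optional[str] = None                        # contact information of the peer
    voice_content: Optional[bytes] = None                   # captured voice content
    image_frame: Optional[bytes] = None                     # captured image content
```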
[0087] S302: Generate special effect information matching the acquired call information.
[0088] The special effect information generated in this embodiment matches the currently acquired call information, and it may change correspondingly as the call information changes. A change of the call information may be an increase or decrease in content or an update of the content.
[0089] The special effect information generated in this embodiment may be prompt information, or display information after special processing, which can be flexibly set according to factors such as the current video call scene and user needs.
[0090] S303: Send the generated special effect information to the opposite terminal.
[0091] In this embodiment, the generated special effect information can be sent to the opposite terminal along with the video stream or the audio stream, or separately. For example, when the current video call sends the video stream and the audio stream to the opposite terminal as separate streams, audio-related special effect information can be sent along with the audio stream, and video-related special effect information can be sent along with the video stream; special effect information related to neither can be sent along with either the audio stream or the video stream as desired, or, of course, in a manner independent of the audio and video streams.
[0092] When the current video call sends the audio stream and the video stream to the opposite terminal as a combined stream, the generated special effect information can be sent to the opposite terminal together with the combined audio and video stream, or, of course, in a manner independent of it.
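The transport choice described in the two paragraphs above could be expressed roughly as follows; this is an illustrative sketch only, and the stream labels and routing policy are assumptions rather than anything prescribed by the patent.

```python
from enum import Enum, auto

class EffectKind(Enum):
    AUDIO = auto()
    VIDEO = auto()
    OTHER = auto()

def route_effect(effect_kind: EffectKind, streams_split: bool) -> str:
    """Pick a transport for the generated special-effect information.

    Illustrative policy only: when audio and video are sent as separate
    streams, audio-related effects ride on the audio stream and
    video-related effects on the video stream; anything else (or a
    combined audio/video stream) may use either stream or an
    independent channel.
    """
    if not streams_split:
        return "combined_av_stream"          # or "independent_channel"
    if effect_kind is EffectKind.AUDIO:
        return "audio_stream"
    if effect_kind is EffectKind.VIDEO:
        return "video_stream"
    return "independent_channel"             # unrelated effects: free choice
```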
[0093] In this embodiment, the way in which the generated special effect information is presented on the display interface can also be flexibly set; for example, a sinking display, a floating display, or a superimposed display can be used, depending on the specific content of the special effect information.
[0094] In this embodiment, after the special effect information is generated, it can be displayed to the user and sent only after the user previews and confirms it. When a preview picture is generated, it can include the special effect information alone, or the combined effect of the special effect information and the picture information.
[0095] In this embodiment, the generated special effect information can be sent to the opposite terminal for display by the opposite terminal. When there are multiple opposite terminals, the special effect information can also be sent to only some of them or to all of them, as specified. The generated special effect information can also be synchronously displayed on the display interface of the local terminal.
[0096] Through the video call special effect control method provided in this embodiment, the corresponding call information can be captured automatically during a video call, special effect information matching the captured call information can be generated, and the generated special effect information can be sent to the call peer, thereby enriching the content and forms of video calls and, to a large extent, improving the fun of video calls and the satisfaction of the user experience.

Example

[0097] Second embodiment
[0098] In order to facilitate the understanding of the present invention, this embodiment uses several specific call information as examples to further illustrate the present invention.
[0099] In an example, the call information obtained from the current video call includes time information of the current video call. At this time, generating special effect information matching the call information includes:
[0100] According to the acquired time information, a matching time-characteristic special effect is looked up in a preset correspondence table of time information and time-characteristic special effects. In this embodiment, the correspondence table of time information and time-characteristic special effects may be preset; for an example correspondence table, see Table 1 below:
[0101] Table 1 (correspondence between time periods and time-characteristic special effects; the table body is not reproduced in this extract)
[0103] It should be understood that the temporal special effects generated according to the time information in this embodiment are not limited to the several situations in the foregoing table examples, and can be flexibly adjusted.
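Because the body of Table 1 is not reproduced here, the following sketch uses invented example periods and effects purely to illustrate the lookup against a preset time-period/effect correspondence table.

```python
from datetime import time
from typing import Optional

# Invented example table; the actual correspondence in Table 1 is preset/user-configurable.
TIME_EFFECT_TABLE = [
    (time(6, 0), time(11, 0), "good_morning_banner"),
    (time(11, 0), time(14, 0), "lunch_time_sticker"),
    (time(18, 0), time(23, 0), "good_evening_overlay"),
]

def match_time_effect(now: time) -> Optional[str]:
    """Return the time-characteristic effect whose period contains `now`, if any."""
    for start, end, effect in TIME_EFFECT_TABLE:
        if start <= now < end:
            return effect
    return None
```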
[0104] In an example, the call information obtained from the current video call includes location information of the current video call. At this time, generating special effect information matching the call information includes:
[0105] According to the location information, search for location characteristic special effect information matching the location information from the terminal locally or from the cloud.
[0106] The location information in this embodiment includes at least one of the location information of the local terminal and the location information of the opposite terminal. For example, in one example, local-characteristic special effect information may be generated according to the current location of the opposite terminal, and may include the weather, landmark buildings, cultural features, specialty products, and so on of that location. In another example, the local-characteristic special effect information can be generated according to the location of the local terminal.
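A minimal sketch of the local-first, cloud-fallback lookup described above; the cache contents and the cloud query are placeholders, not real APIs.

```python
from typing import Optional

# Hypothetical local cache of location-characteristic effects.
LOCAL_LOCATION_EFFECTS = {
    "paris": "eiffel_tower_sticker",
}

def fetch_from_cloud(place: str) -> Optional[str]:
    """Placeholder for a cloud query (weather, landmarks, local specialties, ...)."""
    return None  # a real terminal would issue a network request here

def match_location_effect(place: str) -> Optional[str]:
    """Look up a location-characteristic effect locally first, then from the cloud."""
    return LOCAL_LOCATION_EFFECTS.get(place) or fetch_from_cloud(place)
```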
[0107] In an example, the call information obtained from the current video call includes contact information of the current video call. At this time, generating special effect information matching the call information includes:
[0108] According to the current contact information, special effect information matching the contact is generated, for example a cartoon of the contact's avatar, the contact's catchphrase, or other characteristic information of the contact.
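As a small illustration, contact-matched effect material could be looked up from a preset table such as the invented one below; the entries and field names are hypothetical.

```python
from typing import Dict, Optional

# Invented example data; in practice this could come from the contact list or user settings.
CONTACT_EFFECTS: Dict[str, Dict[str, str]] = {
    "alice": {"avatar_cartoon": "alice_toon.png", "catchphrase": "Let's go!"},
}

def match_contact_effect(contact_id: str) -> Optional[Dict[str, str]]:
    """Return effect material (cartoon avatar, catchphrase, ...) for the current contact."""
    return CONTACT_EFFECTS.get(contact_id)
```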
[0109] In an example, the acquired call information includes the call content of the current video call. At this time, special effect information matching the call information is generated as shown in Figure 6, which includes:
[0110] S601: Analyze the acquired call content.
[0111] The call content acquired in this embodiment includes at least one of voice content and image content, and includes at least one of the call content sent by the local terminal to the opposite terminal and the call content sent by the opposite terminal to the local terminal; the details can be flexibly set according to requirements. In this embodiment, the call content scene of the current call can be obtained according to the analysis result. The analysis methods in this embodiment include, but are not limited to, various speech recognition and analysis methods.
[0112] S602: Generate content special effect information matching the call content according to the analysis result.
[0113] At this time, content special effect information matching the call content is generated according to the analysis result using at least one of the following methods (a simple sketch of this dispatch follows the list):
[0114] Perform voice change processing on the acquired voice content;
[0115] Insert new voice content into the acquired voice content, such as inserting music or other voice information;
[0116] Perform deformation processing on the collected image information, including but not limited to stretching and compression;
[0117] Insert new image content into the collected image information, including but not limited to inserting new text or pictures, inserting backgrounds or scenes, adding dynamic effects, and adding dynamic expressions;
[0118] Obtain information related to the analysis result from the cloud to generate special effect prompt information;
[0119] Convert the acquired voice content into text;
[0120] Convert the acquired voice content into text and add text special effects, which can produce various easter-egg special effects similar to those in a text chat;
[0121] Match the expressions corresponding to the analysis results from the terminal or the cloud, including but not limited to various static expressions and dynamic expressions.
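A minimal sketch of dispatching the generation methods listed above according to the analysis result; the stub functions, labels and mapping are invented for illustration and do not reflect any concrete implementation in the patent.

```python
from typing import Callable, Dict, List

# Stubs standing in for the generation methods listed above; the real
# implementations (voice changing, image deformation, speech-to-text, ...)
# are not specified here.
def change_voice(content): return {"type": "voice_changed", "data": content}
def insert_music(content): return {"type": "voice_with_music", "data": content}
def deform_image(content): return {"type": "image_deformed", "data": content}
def add_dynamic_expression(content): return {"type": "dynamic_expression", "data": content}
def speech_to_text(content): return {"type": "subtitle", "data": content}

# Illustrative mapping from an analysis label to the methods to apply.
GENERATORS: Dict[str, List[Callable]] = {
    "happy": [add_dynamic_expression, insert_music],
    "quiet_scene": [speech_to_text],
    "playful": [change_voice, deform_image],
}

def generate_content_effects(analysis_label: str, call_content) -> List[dict]:
    """Apply every generation method registered for the analysis result."""
    return [generate(call_content) for generate in GENERATORS.get(analysis_label, [])]
```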
[0122] It can be seen that the video call special effect control method provided in this embodiment can obtain various kinds of call information related to the video call and generate various kinds of matching special effect information to be displayed on the opposite terminal, or on both the opposite terminal and the local terminal, thereby enriching the content of the video call and enhancing its fun.

Example

[0123] The third embodiment
[0124] This embodiment provides a terminal, which may be any of various electronic terminals capable of controlling smart terminals. As shown in Figure 7, the terminal in this embodiment may include a processor 701, a memory 702, and a communication bus 703;
[0125] The communication bus 703 is used to implement a communication connection between the processor 701 and the memory 702;
[0126] The processor 701 is configured to execute one or more programs stored in the memory 702 to implement the steps of the video call special effect control method exemplified in the above embodiments.
[0127] This embodiment also provides a computer-readable storage medium, which can be used in various electronic terminals and stores one or more programs. The one or more programs can be executed by one or more processors to implement the steps of the video call special effect control method exemplified in the above embodiments.
[0128] In order to facilitate the understanding of the present invention, an example of this embodiment takes the call content as the call information and describes a complete video call special effect control method, shown in Figure 8, which includes:
[0129] S801: It is detected that the terminal has a video call with the opposite terminal.
[0130] In this embodiment, after detecting a video call between the terminal and the opposite terminal, the call information can be collected immediately, or the call information can be collected after the relevant special effect activation condition is triggered.
[0131] The terminal in this embodiment may be the initiator of the video call or the called party of the video call. In this embodiment, there may be one opposite terminal, which is a one-to-one two-party video call, or multiple opposite terminals, which is a multi-party video call.
[0132] S802: Obtain the video content of the current call.
[0133] The acquired video content includes voice content and screen content; of course, it may also include only one of them.
[0134] S803: Analyze the acquired call content.
[0135] S804: Generate special effect information matching the analysis result.
[0136] In this example, any of the following methods can be used to generate special effect information matching the analysis result:
[0137] Perform voice change processing on the acquired voice content;
[0138] Insert new voice content into the acquired voice content, such as inserting music or other voice information;
[0139] Perform deformation processing on the collected image information, including but not limited to stretching and compression;
[0140] Insert new image content into the collected image information, including but not limited to inserting new text or pictures, inserting background or scenes, adding dynamic effects, and adding dynamic expressions;
[0141] Obtain information related to the analysis result from the cloud to generate special effect prompt information;
[0142] Convert the acquired voice content into text;
[0143] Convert the acquired voice content into text and add text special effects, which can produce various easter-egg special effects similar to those in a text chat;
[0144] Match the expressions corresponding to the analysis results from the terminal or the cloud, including but not limited to various static expressions and dynamic expressions.
[0145] S805: Send the generated special effect information to the opposite terminal of the video call.
[0146] The generated special effect information can be sent to the opposite terminal along with the video stream or the audio stream, or separately. When the current video call sends the video stream and the audio stream to the opposite terminal as separate streams, audio-related special effect information can be sent along with the audio stream, and video-related special effect information can be sent along with the video stream; special effect information related to neither can be sent along with either stream as desired, or in a manner independent of the audio and video streams. When the current video call sends the audio stream and the video stream to the opposite terminal as a combined stream, the generated special effect information can be sent to the opposite terminal together with the combined audio and video stream, or, again, in a manner independent of it.
[0147] In this embodiment, the way in which the generated special effect information is presented on the display interface can also be flexibly set; for example, a sinking display, a floating display, or a superimposed display can be used, depending on the specific content of the special effect information. In this embodiment, after the special effect information is generated, it can be displayed to the user and sent only after the user previews and confirms it. When a preview picture is generated, it can include the special effect information alone, or the combined effect of the special effect information and the picture information.
[0148] In this embodiment, the generated special effect information can be sent to the opposite terminal for display by the opposite terminal. When there are multiple opposite terminals, the special effect information can also be sent to only some of them or to all of them, as specified. The generated special effect information can also be synchronously displayed on the display interface of the local terminal.
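Tying steps S801 to S805 together, a hedged end-to-end sketch might look as follows; the CallSession type and all of its members are invented stand-ins for whatever the terminal actually provides.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CallSession:
    """Minimal stand-in for an active video call; all members are illustrative."""
    active: bool
    capture_voice: Callable[[], bytes]
    capture_frame: Callable[[], bytes]
    analyze: Callable[[bytes, bytes], str]
    send_to_peers: Callable[[dict], None]

def run_effect_pipeline(session: CallSession,
                        generators: Dict[str, List[Callable]]) -> None:
    """Illustrative end-to-end flow of S801-S805 for one capture cycle."""
    if not session.active:                                  # S801: video call detected
        return
    voice = session.capture_voice()                         # S802: obtain voice content
    frame = session.capture_frame()                         #       and screen content
    label = session.analyze(voice, frame)                   # S803: analyze the call content
    for generate in generators.get(label, []):              # S804: generate matching effects
        session.send_to_peers(generate({"voice": voice, "frame": frame}))  # S805: send to peer(s)
```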
[0149] For example, see Figure 9, which shows an example of a two-party video call. Assuming that analysis of the currently acquired picture content and voice content shows that both parties, or one of them, are in a good mood, a dynamic expression representing that mood can be generated, as shown at 91 in Figure 9.
[0150] For another example, see Figure 10, which shows an example of a multi-party video call. Assuming that analysis of the currently obtained video call content shows that the caller currently in the lower right is shy, a feather-like dynamic special effect can be generated, as shown at 1001 in Figure 10.
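The two examples above amount to mapping a detected emotional state to a dynamic effect asset, roughly as in this invented-table sketch; the mood labels and asset names are assumptions for illustration.

```python
from typing import Optional

# Invented mapping; actual assets and mood labels are implementation details.
MOOD_EFFECTS = {
    "happy": "dancing_emoji_overlay",
    "shy": "falling_feathers_animation",
}

def effect_for_mood(mood: Optional[str]) -> Optional[str]:
    """Return the dynamic effect asset for a detected mood, if one is configured."""
    return MOOD_EFFECTS.get(mood) if mood else None
```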
[0151] The terminal provided in this embodiment can obtain various kinds of call information related to the video call during the video call and generate various kinds of matching special effect information to be displayed on the opposite terminal, or on both the opposite terminal and the local terminal, thereby enriching the content of the video call, enhancing its fun, and improving the satisfaction of the user experience.

PUM

no PUM
