
Action processing method and device for virtual object, and medium

A virtual object action-processing technology in the field of communication. It addresses problems of the related art: requiring the user to supply human-body key point information increases the user's time cost and operation difficulty, action processing for virtual objects is inefficient, and generating action videos is time-consuming. Effects: improved action-processing efficiency, reduced time cost and operation difficulty, and improved coherence of the resulting video.

Pending Publication Date: 2021-11-12
BEIJING SOGOU TECHNOLOGY DEVELOPMENT CO LTD
Cites: 0 · Cited by: 3

AI Technical Summary

Problems solved by technology

[0004] In implementing the embodiments of the present invention, the inventors found that in the related art, key point information of the human body is provided by the user, which not only increases the user's time cost and operation difficulty but also lowers action-processing efficiency. Moreover, in the related art, all of the video frames contained in the action video are generated by a generative adversarial network, which makes generating the action video time-consuming and in turn further lowers the action-processing efficiency for the virtual object.



Examples


Embodiment 1

[0053] Referring to figure 1, which shows a flowchart of the steps of Embodiment 1 of a virtual object action processing method according to the present invention, the method may specifically include the following steps:

[0054] Step 101: Receive an action command input by the user; the action command may specifically include an action identifier and time-related information;

[0055] Step 102: Determine the action video frame sequence corresponding to the action identifier;

[0056] Step 103: Determine the corresponding action state image from the action video frame sequence according to the preset state image of the virtual object at the target time; the target time may be determined according to the time-related information;

[0057] Step 104: Generate a connecting video frame sequence according to the preset state image and the action state image; the connecting video frame sequence is used to connect the ...
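Steps 101–104 above can be sketched as a single pipeline. This is a minimal illustration, not the patent's implementation: the `ActionCommand` type, the pixel-difference matching, and the linear interpolation used for the connecting frames are all assumptions standing in for the embodiment's actual matching and generation techniques.

```python
from dataclasses import dataclass

@dataclass
class ActionCommand:
    action_id: str    # "action identifier" from the action command
    start_time: float # "time-related information" (hypothetical: seconds)

def frame_distance(a, b):
    # Toy visual distance between two frames (sum of absolute differences);
    # a stand-in for the embodiment's visual-feature matching.
    return sum(abs(x - y) for x, y in zip(a, b))

def interpolate_frames(src, dst, steps=2):
    # Toy connecting-frame generation: linear interpolation from the
    # preset state image toward the matched action state image.
    return [[x + (y - x) * (i + 1) / (steps + 1) for x, y in zip(src, dst)]
            for i in range(steps)]

def process_action(command, video_library, preset_state_image):
    """Sketch of steps 102-104 after receiving the command (step 101)."""
    # Step 102: determine the action video frame sequence for the identifier
    action_frames = video_library[command.action_id]
    # Step 103: pick the action state image closest to the preset state image
    action_state = min(action_frames,
                       key=lambda f: frame_distance(f, preset_state_image))
    # Step 104: generate the connecting sequence, then splice it with the
    # action video frame sequence
    connection = interpolate_frames(preset_state_image, action_state)
    return connection + action_frames
```

In this sketch the spliced output plays the connecting frames first, so the virtual object moves coherently from its preset state into the stored action video rather than jumping to it.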

Embodiment 2

[0124] Embodiment 2 of a virtual object action processing method of the present invention may specifically include: a preprocessing link, a matching link, and a generating link.

[0125] 1) Preprocessing link.

[0126] The preprocessing link preprocesses the action state images in the action video to obtain the corresponding action-state visual features.

[0127] In a specific implementation, action videos may be collected in advance according to action identifiers; and the collected action videos and their corresponding action identifiers may be stored in an action video library.
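The action video library of paragraph [0127] can be sketched as a simple mapping from action identifiers to frame sequences. The dict layout and function names below are assumptions for illustration, not the patent's API.

```python
# Action video library: action identifier -> action video frame sequence.
action_video_library = {}

def collect_action_video(action_id, frames):
    """Store a pre-collected action video under its action identifier."""
    action_video_library[action_id] = list(frames)

def find_action_video(action_id):
    """Look up the action video frame sequence for a received identifier."""
    return action_video_library.get(action_id)
```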

[0128] Referring to figure 2, which shows a schematic flowchart of preprocessing an action video according to an embodiment of the present invention: M action state images, such as action state image 1, action state image 2, ..., action state image M, may be extracted from the action video, and the M action state images are respectively input into the correspon...
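The preprocessing of figure 2 can be sketched as mapping each of the M action state images to a visual feature. The embodiment presumably uses a neural network for feature extraction; the mean-intensity feature below is purely an illustrative stand-in.

```python
def extract_feature(image):
    # Placeholder "action-state visual feature": mean pixel intensity.
    # A stand-in for the embodiment's feature network, which is not
    # specified in this excerpt.
    return sum(image) / len(image)

def preprocess_action_video(action_state_images):
    """Compute one visual feature per extracted action state image."""
    return [extract_feature(img) for img in action_state_images]
```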



Abstract

The embodiments of the invention provide an action processing method and device for a virtual object, and a medium. The method specifically comprises: receiving an action instruction input by a user, wherein the action instruction comprises an action identifier and time-related information; determining an action video frame sequence corresponding to the action identifier; determining a corresponding action state image from the action video frame sequence according to a preset state image of a virtual object at a target time, wherein the target time is determined according to the time-related information; generating a connecting video frame sequence according to the preset state image and the action state image, wherein the connecting video frame sequence is used for connecting the preset state image and the action video frame sequence; and splicing the connecting video frame sequence and the action video frame sequence. The embodiments of the invention can improve the action-processing efficiency of the virtual object.

Description

Technical field

[0001] The present invention relates to the field of communication technology, and in particular to an action processing method, device, and medium for virtual objects.

Background technique

[0002] With the development of communication technology, virtual objects are widely used in broadcasting, teaching, medical, customer service, and other scenarios. Taking the broadcasting scenario as an example, virtual objects can replace media workers for news broadcasting or game commentary.

[0003] In practical applications, virtual objects usually need to perform some actions. At present, the action processing process for virtual objects in the related art usually includes: first, the user provides key point information of the human body in time sequence; then, the key point information is input into a GAN (Generative Adversarial Network) to generate the action video frames in the action video; then, according to the time...

Claims


Application Information

IPC(8): G06K9/00
CPC: H04N21/440236; G11B27/031; H04N5/265; G06V10/82; G06V20/49; G06V10/54; G06V10/761; G06V10/24; G06V10/44; G06T17/00
Inventors: Tian Kai (田凯), Chen Wei (陈伟), Su Xuefeng (苏雪峰)
Owner BEIJING SOGOU TECHNOLOGY DEVELOPMENT CO LTD