
Virtual image real-time interaction method and terminal

A virtual image real-time interaction technology in the field of interactive entertainment. It addresses problems such as monotonous playback of preset actions and poor interactivity with the user, achieving the effects of enhanced interactivity and an improved user experience.

Pending Publication Date: 2021-10-22
福建凯米网络科技有限公司

AI Technical Summary

Problems solved by technology

However, in existing implementations of virtual image interaction, the virtual image usually plays fixed preset actions; the playback is relatively monotonous and lacks interactivity with the user.
To improve interactivity, the avatar is often driven by recognized user audio. However, once no user audio is recognized, the avatar stops moving and appears sluggish, so interactivity remains poor and the user experience suffers.



Examples


Embodiment 1

[0072] Please refer to Figure 1. A virtual image real-time interaction method comprises the steps of:

[0073] S1. Collect, in real time, accompaniment data together with the audio data and motion data of the object to be identified;

[0074] Here, the user's behavior can be captured in real time through devices such as cameras, handheld controllers, and microphones, obtaining the user's audio data and action data in real time;

[0075] S2. Determine the scene mode in which the object to be identified is located according to the accompaniment data, the audio data, and the action data;

[0076] Specifically, determine a first matching degree between the audio of the object to be identified and the accompaniment according to the accompaniment data and the audio data;

[0077] Singing can be scored in real time, and the real-time singing score, i.e., the first score, is taken as the first matching degree;

[0078] determining a standard rhythm and a standard...
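The first matching degree in [0076]-[0077] can be sketched as a frame-level pitch comparison whose score doubles as the match value. This is a minimal sketch: the function names, the MIDI-pitch representation, and the one-semitone tolerance are illustrative assumptions, not details from the patent.

```python
def first_matching_degree(accompaniment_pitches, user_pitches, tolerance=1.0):
    """Score (0-100) how closely the user's sung pitch tracks the
    accompaniment's reference melody, frame by frame; per [0077] this
    real-time singing score serves as the first matching degree."""
    if not accompaniment_pitches:
        return 0.0
    hits = sum(
        1 for ref, sung in zip(accompaniment_pitches, user_pitches)
        if sung is not None and abs(ref - sung) <= tolerance  # within 1 semitone
    )
    return 100.0 * hits / len(accompaniment_pitches)

# Example: the user matches 3 of 4 reference frames within tolerance
score = first_matching_degree([60, 62, 64, 65], [60, 62, 63.5, 70])  # 75.0
```

Any real implementation would first need pitch extraction from the microphone stream; the patent text does not specify how the score is computed, so the per-frame comparison above is only one plausible reading.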

Embodiment 2

[0102] The difference between this embodiment and Embodiment 1 is that it is further defined that determining the second matching degree between the object to be recognized and the standard rhythm according to the action data includes:

[0103] Determining the time point and duration of each rhythm in the standard rhythm as the first rhythm data set;

[0104] Determine the time point and duration of each rhythm in the shaking rhythm of the object to be identified according to the action data, as a second rhythm data set;

[0105] determining a second matching degree between the object to be recognized and a standard rhythm according to the matching degree between each corresponding data in the first rhythm data set and the second rhythm data set;

[0106] Take shaking the phone according to the rhythm as an example:

[0107] The second score=100*(the number of shakes whose deviation from the standard rhythm is less than a preset value (for example: 100 milliseconds)) / (total n...
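The truncated formula in [0107] can be sketched in code. The cut-off denominator is read here as the total number of standard rhythm points; that reading, and all names below, are assumptions for illustration only.

```python
def second_matching_degree(standard_times, shake_times, max_deviation_ms=100):
    """Second score per [0107]: 100 * (number of standard rhythm points
    hit by a shake within max_deviation_ms) / (total standard rhythm
    points). All times are in milliseconds."""
    if not standard_times:
        return 0.0
    on_beat = sum(
        1 for beat in standard_times
        if any(abs(shake - beat) < max_deviation_ms for shake in shake_times)
    )
    return 100.0 * on_beat / len(standard_times)

# Four beats at 500 ms intervals; the user's shakes land on three of them
score = second_matching_degree([0, 500, 1000, 1500], [20, 480, 1050, 1900])  # 75.0
```

The time points and durations of each rhythm ([0103]-[0104]) could be carried alongside the timestamps; only the time-point comparison is sketched here.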

Embodiment 3

[0125] The difference between this embodiment and Embodiment 1 or Embodiment 2 is that the S3 also includes:

[0126] Switch between the actions corresponding to different scene modes through a smooth transition;

[0127] The switching of actions corresponding to different scene modes by means of smooth transition includes:

[0128] Judging whether the avatar is to play a new action, and if so, stopping the current action;

[0129] determining the amplitude difference of the action according to the state of the current action and the state of the first frame of the new action;

[0130] computing the frame complement time according to the amplitude difference;

[0131] determining a frame complement action according to the amplitude difference and the frame complement time;

[0132] controlling the avatar to play the frame complement action, and after the frame complement action is completed, controlling the avatar to play the new action;

[0133] Wherein, determining the amplitude...
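The transition steps in [0128]-[0132] can be sketched as linear interpolation between the current pose and the first frame of the new action, with the frame-complement time scaled by the amplitude difference. The joint-angle pose representation, the scaling constant, and all names are illustrative assumptions; the patent's actual amplitude and timing computations are truncated in this record.

```python
def plan_transition(current_pose, new_first_pose, fps=30, ms_per_unit=10.0):
    """Build the frame-complement (tween) frames between the current
    action's pose and the first frame of the new action.

    Poses are lists of joint angles in degrees. The amplitude difference
    is taken as the largest per-joint gap; the frame-complement time
    grows with it, so larger jumps get longer, smoother transitions."""
    amplitude = max(abs(a - b) for a, b in zip(current_pose, new_first_pose))
    tween_ms = amplitude * ms_per_unit            # time proportional to the gap
    n_frames = max(1, round(tween_ms / 1000 * fps))
    frames = []
    for i in range(1, n_frames + 1):
        t = i / n_frames                          # 0 -> 1 across the tween
        frames.append([a + (b - a) * t
                       for a, b in zip(current_pose, new_first_pose)])
    return frames

# A 60-degree gap -> 600 ms -> 18 frames at 30 fps, ending exactly on the new pose
tween = plan_transition([0.0, 30.0], [60.0, 30.0])
```

Playing these frames before the new action avoids the visible pop that an instant cut between poses would produce.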



Abstract

The invention discloses a virtual image real-time interaction method and a terminal. The method determines the scene mode of the object to be recognized according to accompaniment data collected in real time together with the audio data and action data of the object to be recognized, and controls a virtual image to play, in real time, the action corresponding to that scene mode. The virtual image thus dynamically plays the corresponding action as the scene mode changes. Because the scene mode is determined by jointly considering the collected accompaniment data, audio data, and action data, the virtual image can play a corresponding action even when the object to be recognized performs no action. This enhances interaction, guides the user to participate when interaction is lacking, and improves the user experience.

Description

technical field

[0001] The invention relates to the field of interactive entertainment, in particular to a method and terminal for real-time interaction of virtual images.

Background technique

[0002] In existing entertainment venues such as KTV, virtual images are usually set up to interact with users in order to improve the user experience. However, in existing implementations of virtual image interaction, the virtual image usually plays fixed preset actions; the playback is relatively monotonous and lacks interactivity with the user. To improve interactivity, the avatar is often driven by recognized user audio. However, once no user audio is recognized, the avatar stops moving and appears sluggish, so interactivity remains poor and the user experience suffers.

Contents of the invention

[0003] The technical problem to be solved by the present invention is to provide a method and terminal for real-time interaction ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G10H1/36, G06F3/01
CPC: G10H1/368, G06F3/011
Inventor: 陈节省, 李中冬, 许荣峰, 郭天祈, 陈江煌, 林剑宇
Owner: 福建凯米网络科技有限公司