Sound processing method and apparatus in a virtual scene, device, and storage medium
A sound processing technology applied in the field of virtual scenes, which can solve problems such as large amounts of calculation, large resource consumption, and reduced sound processing efficiency, and achieves the effects of reducing resource consumption, simplifying the calculation process, and improving efficiency
Pending Publication Date: 2022-04-12
TENCENT TECH (SHENZHEN) CO LTD
Problems solved by technology
[0005] However, the acoustic regions in a virtual scene are usually complex and numerous, and simulation through geometric acoustics needs to track the propagation rout...
Method used
In summary, the scheme shown in the embodiments of the present application combines the closed attribute of each virtual space in the virtual scene with the position of the receiving point in the virtual scene to determine the space type of each virtual space in real time, and then, based on the space type of each virtual space together with the positions of the receiving point and the sound source, performs sound effect processing on the sound emitted by the sound source. The sound emitted by the sound source does not need to be tracked during this process, which greatly simplifies the calculation process and reduces resource usage, thereby improving the efficiency of sound processing in virtual scenes.
Abstract
The embodiments of the invention disclose a sound processing method and apparatus in a virtual scene, a device, and a storage medium, belonging to the technical field of virtual scenes. The method comprises the following steps: acquiring the closed attribute of each virtual space in a virtual scene and a receiving point position; obtaining the space type of each virtual space based on the closed attribute of each virtual space and the receiving point position; and, based on the space type of the virtual space where the receiving point position is located and the space type of the virtual space where a target sound source is located, adding a sound effect to the sound emitted by the target sound source to obtain a target sound. The scheme greatly simplifies the calculation process and reduces resource occupation, thereby improving the sound processing efficiency in the virtual scene.
Application Domain
Video games
Technology Topic
Virtual space, Computer graphics (images)
Examples
- Experimental program (1)
Example Embodiment
[0082] Exemplary embodiments will be described in detail herein, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
[0083] It should be understood that "several" mentioned herein refers to one or more, and "multiple" refers to two or more. "And/or" describes the association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent three cases: A alone, both A and B, and B alone. The character "/" generally indicates an "or" relationship between the associated objects.
[0084] FIG. 1 shows a schematic diagram of an implementation environment provided by an exemplary embodiment of the present application. The implementation environment may include: a first terminal 110, a server 120, and a second terminal 130.
[0085] The first terminal 110 has installed and runs an application 111 that supports a virtual environment, which may be a multiplayer online battle program. When the first terminal 110 runs the application 111, the user interface of the application 111 is displayed on the screen of the first terminal 110. The application 111 may be any of a multiplayer online battle arena (MOBA) game, a battle royale shooting game, a survival game, and a simulation game (SLG). In this embodiment, the application 111 is illustrated as a first-person shooter game. The first terminal 110 is a terminal used by a first user 112, and the first user 112 uses the first terminal 110 to control a first virtual object located in the virtual environment to perform activities; the first virtual object may be referred to as the virtual object of the first user 112. The activities of the first virtual object include, but are not limited to, at least one of: adjusting body posture, crawling, walking, running, riding, flying, jumping, driving, picking up items, shooting virtual projectiles, attacking, throwing, releasing skills, collecting materials/resources, and constructing buildings. Illustratively, the first virtual object is a first virtual character, such as a simulated character or an anime character.
[0086] Here, a first-person shooter game refers to a game in which the user can shoot virtual projectiles from a first-person viewing angle. The picture of the virtual environment in the game is a picture of the virtual environment observed from the viewing angle of the first virtual object. Each virtual object in the game can cooperate as a team with, or battle against, virtual objects controlled by other users. For example, in the battle mode of the game, at least two virtual objects fight one another in the virtual environment; a virtual object survives in the virtual environment by avoiding attacks launched by other virtual objects and dangers present in the virtual environment (such as toxic gas, swamps, etc.), and when the health of a virtual object in the virtual environment drops to zero, its life in the virtual environment ends. Optionally, a battle starts when the first client joins and ends when the last client exits, and each client can control one or more virtual objects in the virtual environment. Optionally, the competitive modes of the battle may include a single-player mode, a two-player team mode, or a multiplayer team mode; the battle mode is not limited in the embodiments of the present application.
[0087] The second terminal 130 has installed and runs an application 131 that supports the virtual environment, which may be a multiplayer online battle program. When the second terminal 130 runs the application 131, the user interface of the application 131 is displayed on the screen of the second terminal 130. The client may be any of a MOBA game, a battle royale shooting game, a survival game, and an SLG game; in this embodiment, the application 131 is illustrated as a first-person shooter game. The second terminal 130 is a terminal used by a second user 132, and the second user 132 uses the second terminal 130 to control a second virtual object located in the virtual environment to perform activities; the second virtual object may be referred to as the virtual object of the second user 132. Illustratively, the second virtual object is a second virtual character, such as a simulated character or an anime character.
[0088] Optionally, the first virtual object and the second virtual object are in the same virtual world. Optionally, the first virtual object and the second virtual object may belong to the same camp, the same team, or the same organization, or have a friend relationship or temporary communication permission. Optionally, the first virtual object and the second virtual object may belong to different camps, different teams, or different organizations, or have a hostile relationship.
[0089] Here, a virtual object refers to an active object in the virtual scene. The active object may be at least one of a virtual character, a virtual animal, and a virtual vehicle. Optionally, when the virtual scene is a three-dimensional virtual scene, the virtual object is a three-dimensional model created based on skeletal animation technology. Each virtual object has its own shape, volume, and orientation in the three-dimensional virtual scene and occupies a portion of the space in the three-dimensional virtual scene.
[0090] Optionally, the applications installed on the first terminal 110 and the second terminal 130 are the same, or the applications installed on the two terminals are the same type of application on different operating system platforms (Android or iOS). The first terminal 110 may generally refer to one of multiple terminals, and the second terminal 130 may generally refer to another of the multiple terminals; this embodiment is illustrated with only the first terminal 110 and the second terminal 130. The device types of the first terminal 110 and the second terminal 130 are the same or different, and include at least one of: a smartphone, a tablet, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, and a desktop computer.
[0091] FIG. 1 shows only two terminals, but in different embodiments there are multiple other terminals that can access the server 120. Optionally, there are one or more terminals corresponding to the developer, on which a development and editing platform for the application supporting the virtual environment is installed; the developer can edit and update the application on that terminal and transfer the updated application installation package to the server 120 through a wired or wireless network, and the first terminal 110 and the second terminal 130 can download the application installation package from the server 120 to update the application.
[0092] The first terminal 110, the second terminal 130, and other terminals are connected to the server 120 via a wireless network or a wired network.
[0093] The server 120 includes at least one of a single server, a server cluster consisting of multiple servers, a cloud computing platform, and a virtualization center. The server 120 is used to provide background services for applications that support three-dimensional virtual environments. Optionally, the server 120 undertakes the primary computing work and the terminal undertakes the secondary computing work; or the server 120 undertakes the secondary computing work and the terminal undertakes the primary computing work; or the server 120 and the terminal adopt a distributed computing architecture for collaborative computing.
[0094] In a schematic example, the server 120 includes a memory 121, a processor 122, a battle service module 124, and a user-facing input/output interface (I/O interface) 125. The processor 122 is used to load instructions stored in the server 120 and to process data in the battle service module 124; the battle service module 124 is used to provide multiple battle rooms for users, such as 1v1 battles, 3v3 battles, 5v5 battles, etc.; the user-facing I/O interface 125 is used to establish communication and exchange data with the first terminal 110 and/or the second terminal 130 through a wireless or wired network.
[0095] Here, the virtual scene may be a three-dimensional virtual scene, or the virtual scene may also be a two-dimensional virtual scene. A virtual scene is the virtual scene displayed (or provided) when the application runs on the terminal. The virtual scene may be a simulated environment of the real world, a semi-simulated semi-fictional three-dimensional environment, or a purely fictional three-dimensional environment. The virtual scene may be any of a two-dimensional virtual scene, a 2.5-dimensional virtual scene, and a three-dimensional virtual scene; the following embodiments are illustrated with a three-dimensional virtual scene, but this is not limiting. Optionally, the virtual scene may also be used for a virtual scene battle between at least two virtual characters. Optionally, the virtual scene may also be used for at least two virtual characters to fight with virtual props. Optionally, the virtual scene may also be used for at least two virtual characters to fight with virtual props within a target area, and the target area continuously shrinks over time in the virtual scene.
[0096] Virtual scenes are typically generated by an application in a computer device such as a terminal and displayed based on hardware (such as a screen) in the terminal. The terminal may be a mobile terminal such as a smartphone, a tablet, or an e-book reader; or the terminal may be a personal computer device such as a laptop or a stationary computer.
[0097] Please refer to FIG. 2, which shows a schematic diagram of a display interface of a virtual scene provided by an exemplary embodiment of the present application. As shown in FIG. 2, the display interface of the virtual scene includes a scene picture 200, which includes the currently controlled virtual object 210, an environment picture 220 of the three-dimensional virtual scene, and a virtual object 240. The virtual object 240 may be a virtual object controlled by a user of another terminal, or a virtual object controlled by the application.
[0098] In FIG. 2, the currently controlled virtual object 210 and the virtual object 240 are three-dimensional models in the three-dimensional virtual scene, and the environment picture of the three-dimensional virtual scene displayed in the scene picture 200 consists of objects observed from the viewing angle of the currently controlled virtual object 210. Exemplarily, as shown in FIG. 2, from the viewing angle of the currently controlled virtual object 210, the displayed environment picture 220 of the three-dimensional virtual scene includes the ground 224, the sky 225, the horizon 223, a hill 221, and a plant 222.
[0099] The currently controlled virtual object 210 can release skills, use virtual props, move, and execute specified actions under the control of the user, and the virtual objects in the virtual scene can display different three-dimensional models under the user's control. For example, if the screen of the terminal supports touch operations and the scene picture 200 of the virtual scene contains virtual controls, then when the user touches a virtual control, the currently controlled virtual object 210 can perform the specified action in the virtual scene and present the correspondingly changed three-dimensional model.
[0100] FIG. 3 shows a flowchart of a sound processing method in a virtual scene provided by an exemplary embodiment of the present application. The sound processing method in the virtual scene may be performed by a computer device, which may be a terminal or a server, or the computer device may include both the terminal and the server. As shown in FIG. 3, the sound processing method in the virtual scene includes:
[0101] Step 310: obtain the closed attribute of each virtual space in the virtual scene, and the receiving point position; the closed attribute is used to indicate whether the virtual space is a closed space; the receiving point position includes the location, in the virtual scene, of the virtual object controlled by the target terminal.
[0102] In the embodiments of the present application, a virtual space refers to an acoustic space preset in the virtual scene, and may also be referred to as a room (Room).
[0103] The location of the listener in the virtual scene may be referred to as the receiving point position. In the virtual scene, the receiving point may be the location, in the virtual scene, of the virtual object controlled by the target terminal. For example, when the virtual object is a virtual character controlled by the user through the terminal, the position of that virtual character in the virtual scene is the above-described receiving point position.
[0104] Step 320: obtain the space type of each virtual space based on the closed attribute of each virtual space and the receiving point position.
[0105] In the embodiments of the present application, each virtual space in the virtual scene may have a corresponding closed attribute, which is used to indicate whether the virtual space is a closed acoustic space.
[0106] Step 330: based on the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located, add a sound effect to the sound emitted by the target sound source, to obtain the target sound of the target sound source at the receiving point position.
[0107] In summary, the scheme shown in the present application combines the closed attribute of each virtual space in the virtual scene with the position of the receiving point in the virtual scene to determine the space type of each virtual space in real time, and then, based on the space type of each virtual space together with the positions of the receiving point and the sound source, performs sound effect processing on the sound emitted by the sound source. The sound emitted by the sound source does not need to be tracked during this process, which greatly simplifies the calculation process and reduces resource usage, thereby improving the efficiency of sound processing in the virtual scene.
[0108] FIG. 4 shows a flowchart of a sound processing method in a virtual scene provided by an exemplary embodiment of the present application. The sound processing method in the virtual scene may be performed by a computer device, which may be a terminal or a server, or the computer device may include both the terminal and the server. As shown in FIG. 4, the sound processing method in the virtual scene includes:
[0109] Step 410: obtain the closed attribute of each virtual space in the virtual scene; the closed attribute is used to indicate whether the virtual space is a closed space.
[0110] Here, the closed attribute of a virtual space may be preset by the developer. For example, if a virtual space is fully closed or semi-closed, the developer can set the closed attribute of the virtual space to closed space; correspondingly, if a virtual space is not closed, the developer can set the closed attribute of the virtual space to non-closed space (or open space).
[0111] Optionally, the closed attribute of a virtual space may also be set by the application. For example, when the virtual scene is a scene generated by the application, the application can spatially divide the virtual scene (for example, according to the terrain or spatial structure) and automatically set the closed attributes of the resulting virtual spaces. For example, if a virtual space obtained by the division is fully closed or semi-closed, the application can set the closed attribute of the virtual space to closed space; correspondingly, if a virtual space obtained by the division is not closed, the application can set the closed attribute of the virtual space to non-closed space.
[0112] Step 420: obtain the receiving point position; the receiving point position includes the location, in the virtual scene, of the virtual object controlled by the target terminal.
[0113] In the embodiments of the present application, a sound generated in the virtual scene typically needs to be transmitted to a terminal for playback, and for terminals that control different virtual objects, the sound effect played back depends on the location of the corresponding virtual object in the virtual scene. Therefore, for a given target terminal, the computer device can take the location, in the virtual scene, of the virtual object controlled by the target terminal as the receiving point position.
[0114] Step 430: obtain the space type of each virtual space based on the closed attribute of each virtual space and the receiving point position.
[0115] In the embodiments of the present application, the computer device can obtain the space type of each virtual space based on the closed attribute of each virtual space and the receiving point position (or rather, the relationship between the receiving point position and each virtual space).
[0116] In a possible implementation, obtaining the space type of each virtual space based on the closed attribute of each virtual space and the receiving point position includes the following (a sketch is given after this list):
[0117] in response to the closed attribute of a target virtual space indicating that the target virtual space is a closed space, and the receiving point position being within the target virtual space, obtaining the space type of the target virtual space as a first space type, which may be referred to as U space in the embodiments of the present application;
[0118] in response to the closed attribute of the target virtual space indicating that the target virtual space is a closed space, and the receiving point position being outside the target virtual space, obtaining the space type of the target virtual space as a second space type, which may be referred to as V space in the embodiments of the present application;
[0119] in response to the closed attribute of the target virtual space indicating that the target virtual space is a non-closed space, obtaining the space type of the target virtual space as a third space type, which may be referred to as W space in the embodiments of the present application;
[0120] where the target virtual space is any one of the virtual spaces.
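Purely as an illustration (the embodiments do not prescribe any data structure; the `Room` class, its axis-aligned `bounds` field, and the function names below are assumptions made for this sketch), the classification above can be expressed as follows:

```python
from dataclasses import dataclass
from enum import Enum

class SpaceType(Enum):
    U = "first space type"   # closed space containing the receiving point
    V = "second space type"  # closed space not containing the receiving point
    W = "third space type"   # non-closed space

@dataclass
class Room:
    room_id: int
    is_closed: bool  # the "closed attribute" of the virtual space
    bounds: tuple    # (min_xyz, max_xyz) axis-aligned box; a simplifying assumption

    def contains(self, pos) -> bool:
        lo, hi = self.bounds
        return all(a <= p <= b for a, p, b in zip(lo, pos, hi))

def classify_room(room: Room, receiving_point) -> SpaceType:
    """Derive a room's space type from its closed attribute and the receiving point."""
    if not room.is_closed:
        return SpaceType.W
    return SpaceType.U if room.contains(receiving_point) else SpaceType.V
```

For example, `classify_room(Room(1, True, ((0, 0, 0), (10, 3, 10))), (2, 1, 2))` yields `SpaceType.U`, since the closed room contains the receiving point.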
[0121] In the embodiments of the present application, the space type of each closed virtual space in the virtual scene is determined by the virtual space in which the receiving point is located. That is, as the receiving point moves, the space type of each closed virtual space changes dynamically. For a fixed closed virtual space, the receiving point position may be inside this virtual space at some times and outside it at others; correspondingly, the space type of the virtual space may also differ at different times.
[0122] Take the case where the above virtual scene is a game scene as an example. In the embodiments of the present application, a sound playback scenario in a game may be composed of three elements: sound sources, a receiver, and acoustic spaces:
[0123] 1) Sound source (Source): the object that plays a sound.
[0124] Each sound source can play multiple sounds at the same time, and a playback scenario generally contains multiple sound sources, which are independent of each other.
[0125] 2) Receiver (Listener): the object that receives the sounds played by all sound sources.
[0126] For a given terminal, a playback scenario typically has one receiver. The sounds played by all sound sources undergo various processes and are then mixed and output to the receiver.
[0127] 3) Acoustic space (Room): an abstract acoustic space, that is, an area with an independent acoustic effect.
[0128] Any closed or incompletely closed area, such as various warehouses, houses, and caves, can be abstracted as a Room. Meanwhile, to simulate the sound diffraction effect, Rooms are generally connected to one another by portals (Portal).
[0129] In the embodiments of the present application, each Room containing a Source in the virtual scene requires simulation calculation of the acoustic effects. When there are many Sources distributed across different Rooms, the overall spatial acoustic computation becomes particularly large. The computational complexity of spatial acoustics in this case can therefore be considered linear in the number of Rooms, denoted O(N).
[0130] According to the relative positional relationship between the Room of each Source and the Listener, and taking into account the spatial acoustic effect simulation during sound propagation, the entire scene area can be represented by U, V, and W spaces, defined as follows:
[0131] U space: the closed Room space where the Listener is located; audio played by a Source in the U space can have a reverberation effect;
[0132] V space: a closed Room space that does not contain the Listener; audio played by a Source in a V space can have reverberation and diffraction effects;
[0133] W space: non-closed space; when the Listener is located in the W space, audio played by a Source in the W space can have a blocking effect; when the Listener is located in a U space, audio played by a Source in the W space can have a diffraction effect.
[0134] After the virtual scene area is expressed in terms of the UVW spaces, the overall reverberation and reflection spatial effects are related only to the three Room types. The computational complexity of spatial acoustics is thereby reduced to O(1), effectively reducing the performance consumption of spatial acoustics.
[0135] Please refer to FIG. 5, which shows an example diagram of converting a virtual scene to UVW spaces according to an embodiment of the present application. As shown in the upper part of FIG. 5, the Listener is located in a non-closed space and there are four Room spaces in the scene, each of which would need to perform independent acoustic effect processing; after the UVW conversion, the entire scene contains only the two spaces V and W. In the lower part of FIG. 5, the Listener is located in a closed space, and after the UVW conversion the entire scene contains the three spaces U, V, and W. When the scene is particularly complex, the number of independent spaces is uncontrollable and the amount of calculation is particularly large, whereas the UVW conversion keeps the number of spaces at no more than three, ensuring that the cost of the acoustic effects is controllable.
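To illustrate the conversion described above (reusing the hypothetical `classify_room` and `SpaceType` from the earlier sketch), the N independent Rooms of a scene collapse into at most three UVW buckets, which is what reduces the per-frame effect cost from O(N) to O(1):

```python
from collections import defaultdict

def uvw_convert(rooms, receiving_point):
    # Group every Room of the scene under one of at most three space types,
    # so effect rendering depends on the bucket, not on the individual Room.
    buckets = defaultdict(list)
    for room in rooms:
        buckets[classify_room(room, receiving_point)].append(room.room_id)
    return buckets  # keys drawn from {SpaceType.U, SpaceType.V, SpaceType.W}
```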
[0136] Step 440: based on the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located, add a sound effect to the sound emitted by the target sound source, to obtain the target sound of the target sound source at the receiving point position.
[0137] In a possible implementation, adding a sound effect to the sound emitted by the target sound source based on the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located, to obtain the target sound of the target sound source at the receiving point position, includes:
[0138] in response to the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located satisfying a first condition, adding a first sound effect to the sound emitted by the target sound source to obtain the target sound;
[0139] where the first sound effect includes at least one of a blocking effect, a reflection effect, and a reverberation effect;
[0140] the first condition includes: the space type of the virtual space where the receiving point position is located is the first space type, and the space type of the virtual space where the target sound source is located is the first space type.
[0141] In the embodiments of the present application, when the receiving point position and the sound source are in the same closed virtual space, for example, when the virtual object controlled by the target terminal and the sound source are in the same room, the sound emitted by the sound source can be reflected by the walls of the room, and the reflected sounds may mix to form reverberation; at the same time, an obstacle between the sound source and the virtual object controlled by the target terminal weakens the sound emitted by the sound source. For this, in the scheme shown in the present application, the computer device can process the sound emitted by the sound source so as to simulate at least one of the blocking, reflection, and reverberation sound effects for the case where the sound source and the receiving point position are in the same closed virtual space.
[0142] In a possible implementation, adding the first sound effect to the sound emitted by the target sound source to obtain the target sound includes:
[0143] adding a blocking sound effect to the direct sound to obtain a blocked sound;
[0144] adding a late reverberation sound effect on the basis of the early reflection sound to obtain a reverberant sound;
[0145] obtaining the target sound based on the early reflection sound, the blocked sound, and the reverberant sound.
[0146] Here, the direct sound may be the sound that propagates directly to the receiving point position after being emitted by the sound source, and the early reflection sound may be the sound that reaches the receiving point position after being reflected by obstacles (such as walls).
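A minimal sketch of this chain follows, operating on audio buffers as NumPy arrays; the gain values, the delay length, and the helper names are assumptions of the sketch, not values prescribed by the embodiments:

```python
import numpy as np

def apply_blocking(direct: np.ndarray, blocking_gain: float) -> np.ndarray:
    # Blocking modeled as plain attenuation of the direct path.
    return direct * blocking_gain

def apply_late_reverb(early_reflections: np.ndarray, decay: float = 0.5,
                      delay: int = 1024) -> np.ndarray:
    # Crude late-reverb stand-in: a delayed, decayed tail built from the
    # early reflections (a real engine would use a proper reverb algorithm).
    tail = np.zeros_like(early_reflections)
    if len(early_reflections) > delay:
        tail[delay:] = early_reflections[:-delay] * decay
    return tail

def first_condition_target_sound(direct: np.ndarray,
                                 early_reflections: np.ndarray,
                                 blocking_gain: float = 0.6) -> np.ndarray:
    blocked = apply_blocking(direct, blocking_gain)    # blocked sound
    reverb = apply_late_reverb(early_reflections)      # reverberant sound
    # Target sound = early reflection sound + blocked sound + reverberant sound.
    return early_reflections + blocked + reverb
```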
[0147] In a possible implementation, adding a sound effect to the sound emitted by the target sound source based on the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located, to obtain the target sound of the target sound source at the receiving point position, includes:
[0148] in response to the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located satisfying a second condition, adding a second sound effect to the sound emitted by the target sound source to obtain the target sound;
[0149] where the second sound effect includes at least one of a transmission effect, a diffraction effect, and a reverberation effect;
[0150] the second condition includes:
[0151] the space type of the virtual space where the receiving point position is located is the first space type, and the space type of the virtual space where the target sound source is located is the second space type;
[0152] or, the space type of the virtual space where the receiving point position is located is the third space type, and the space type of the virtual space where the target sound source is located is the second space type.
[0153] In the embodiments of the present application, when the receiving point position and the sound source are in different closed virtual spaces, or when the receiving point position is in an open virtual space and the sound source position is in a closed virtual space, the direct/reflected sound emitted by the sound source may need to be transmitted through walls and diffracted through connecting openings/doors to reach the receiving point position. For this, in the scheme shown in the present application, the computer device can process the sound emitted by the sound source for the case where the sound source is in a closed virtual space and the receiving point position is in another closed or open virtual space, thereby simulating at least one of the transmission effect, diffraction effect, and reverberation effect.
[0154] In a possible implementation, adding the second sound effect to the sound emitted by the target sound source to obtain the target sound includes the following (a sketch follows the list):
[0155] generating the direct sound and the reverberant sound corresponding to the target sound source;
[0156] adding a transmission sound effect to the direct sound and the reverberant sound to obtain a transmitted sound;
[0157] adding a diffraction sound effect to the reverberant sound to obtain a diffracted sound;
[0158] obtaining the target sound based on the transmitted sound and the diffracted sound.
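Continuing the sketch above (same assumptions; the gain values are hypothetical), the second-condition chain might look like this:

```python
def second_condition_target_sound(direct: np.ndarray,
                                  reverb: np.ndarray,
                                  transmission_gain: float = 0.3,
                                  diffraction_gain: float = 0.5) -> np.ndarray:
    # Transmission: the direct sound and the reverberant sound are attenuated
    # as they pass through the walls of the closed space.
    transmitted = (direct + reverb) * transmission_gain
    # Diffraction: the reverberant sound leaks through the portal, further attenuated.
    diffracted = reverb * diffraction_gain
    return transmitted + diffracted
```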
[0159] In a possible implementation, adding a sound effect to the sound emitted by the target sound source based on the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located, to obtain the target sound of the target sound source at the receiving point position, includes:
[0160] in response to the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located satisfying a third condition, adding a third sound effect to the sound emitted by the target sound source to obtain the target sound;
[0161] where the third sound effect includes a blocking sound effect;
[0162] the third condition includes:
[0163] the space type of the virtual space where the receiving point position is located is the first space type, and the space type of the virtual space where the target sound source is located is the third space type;
[0164] or, the space type of the virtual space where the receiving point position is located is the third space type, and the space type of the virtual space where the target sound source is located is the third space type.
[0165] In the embodiments of the present application, when the sound source is in an open virtual space, regardless of whether the receiving point position is in an open or a closed virtual space, the direct sound emitted by the sound source may need to pass obstacles to reach the receiving point position. For this, in the scheme shown in the present application, the computer device can process the sound emitted by the sound source for the case where the sound source is in an open virtual space and the receiving point position is in a closed or open virtual space, thereby simulating the blocking effect.
[0166] In a possible implementation, adding the third sound effect to the sound emitted by the target sound source to obtain the target sound includes the following (a sketch follows the list):
[0167] generating the direct sound corresponding to the target sound source;
[0168] adding a blocking sound effect to the direct sound to obtain the target sound.
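The third-condition chain is the simplest, and together the three cases can be dispatched from the two space types. The sketch below reuses the hypothetical helpers and `SpaceType` defined earlier:

```python
def third_condition_target_sound(direct: np.ndarray,
                                 blocking_gain: float = 0.7) -> np.ndarray:
    # Only the blocking effect is applied to the direct sound.
    return direct * blocking_gain

def render_target_sound(listener_type, source_type, direct,
                        early_reflections=None, reverb=None):
    U, V, W = SpaceType.U, SpaceType.V, SpaceType.W
    if listener_type is U and source_type is U:       # first condition
        return first_condition_target_sound(direct, early_reflections)
    if listener_type in (U, W) and source_type is V:  # second condition
        return second_condition_target_sound(direct, reverb)
    if listener_type in (U, W) and source_type is W:  # third condition
        return third_condition_target_sound(direct)
    raise ValueError("combination not covered by the three conditions")
```

Note that the listener is never in a V space by definition (a V space is a closed Room that does not contain the listener), so the three conditions cover all reachable combinations.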
[0169] In the embodiments of the present application, the sound effects added in each of the above cases are described only as examples, and the specific sound effects in each case are not limited in the embodiments; that is, the sound effects in the above cases can be set by the developer according to requirements.
[0170] In a possible implementation, the computer device can also set a volume gain for the target sound source based on the space type of the virtual space where the target sound source is located.
[0171] The above adding of a sound effect to the sound emitted by the target sound source based on the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located, to obtain the target sound at the receiving point position, includes:
[0172] adding a sound effect to the sound emitted by the target sound source based on the space type of the virtual space where the receiving point position is located, the space type of the virtual space where the target sound source is located, and the volume gain of the target sound source, to obtain the target sound.
[0173] After the sound emitted by the sound source undergoes reflection, blocking, reverberation, diffraction, and other processes, its volume suffers a certain loss. In the embodiments of the present application, when the computer device adds a sound effect to the sound emitted by the target sound source, it can determine the volume gain of the target sound source in conjunction with the space types of the virtual spaces where the sound source and the receiving point position are located, as well as the positions of the sound source and the receiving point, and add the sound effect to the sound emitted by the target sound source based on the volume gain.
[0174] Here, the volume gain may be a volume coefficient applied to the sound corresponding to the target sound source when the sound effect is added. That is, when adding the sound effect to the sound emitted by the target sound source, the computer device can multiply the volume of the sound corresponding to the target sound source by the volume gain.
[0175] In a possible implementation, the volume gain includes at least one of the following gains:
[0176] the gain of the direct sound, the gain of the blocked sound, the gain of the reverberant sound, and the gain of the diffracted sound.
[0177] Here, for the different sound components in the sound corresponding to the sound source received at the receiving point position, the computer device can set corresponding volume gains respectively.
[0178] In a possible implementation, setting the volume gain for the target sound source based on the space type of the virtual space where the target sound source is located includes:
[0179] in response to the space type of the virtual space where the target sound source is located being the first space type, setting the volume gain of the direct sound of the target sound source to 1;
[0180] in response to the space type of the virtual space where the target sound source is located being the second space type, setting the volume gain of the direct sound of the target sound source to a, where 0 < a < 1;
[0181] in response to the space type of the virtual space where the target sound source is located being the third space type, setting the volume gain of the direct sound of the target sound source to b, where 0 < b < 1.
[0182] In the embodiments of the present application, for the direct sound of the sound source: when the sound source and the receiving point position are in the same closed virtual space, the distance between them is relatively short, and the blocking effect can be disregarded at this time, so the volume gain can be set to 1; when the sound source and the receiving point position are in different closed virtual spaces, or when the sound source is in an open virtual space, the distance between the sound source and the receiving point position may be relatively far, and the blocking effect needs to be considered, so the volume gain can be set to less than 1.
[0183] Here, the values of a and b may be the same or different.
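As a sketch (with `a` and `b` as the tuning constants of the embodiment, whose concrete values are left to the developer; the defaults below are arbitrary), the direct-sound gain rule reads:

```python
def direct_sound_gain(source_type: SpaceType, a: float = 0.5, b: float = 0.7) -> float:
    # First space type (U): source and receiving point share a closed space.
    if source_type is SpaceType.U:
        return 1.0
    # Second space type (V): a different closed space, 0 < a < 1.
    if source_type is SpaceType.V:
        return a
    # Third space type (W): open space, 0 < b < 1.
    return b
```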
[0184] In a possible implementation, setting the volume gain for the target sound source based on the space type of the virtual space where the target sound source is located includes:
[0185] in response to the space type of the virtual space where the target sound source is located being the second space type, and a connecting portal existing between the virtual space where the target sound source is located and the virtual space where the receiving point position is located, setting the volume gain of at least one of the reverberant sound and the diffracted sound of the target sound source.
[0186] In the embodiments of the present application, for the case where the sound source and the receiving point position are located in two different closed virtual spaces and a connecting portal exists between the two closed virtual spaces, reverberant sound and diffracted sound can be generated while the sound emitted by the sound source propagates to the receiving point position, and the computer device can set volume gains for the reverberant sound and the diffracted sound.
[0187] In a possible implementation, setting the volume gain of at least one of the reverberant sound and the diffracted sound of the target sound source includes:
[0188] setting the volume gain of at least one of the reverberant sound and the diffracted sound of the target sound source to a fixed value;
[0189] or, setting the volume gain of at least one of the reverberant sound and the diffracted sound of the target sound source based on the distance between the target sound source and the receiving point position.
[0190] Here, the volume gain of the reverberant sound/diffracted sound can be set to a fixed value (for example, by the developer), or the volume gain of the reverberant sound/diffracted sound can be determined based on the distance between the target sound source and the receiving point position. For example, the volume gain of the reverberant sound/diffracted sound can be inversely related to the distance between the sound source and the receiving point position; that is, the larger the distance between the sound source and the receiving point position, the smaller the volume gain.
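A possible inverse-distance rule is sketched below; the reference distance and ceiling gain are illustrative assumptions only:

```python
def reverb_diffraction_gain(distance: float,
                            ref_distance: float = 5.0,
                            max_gain: float = 0.8) -> float:
    # Within ref_distance the gain stays at max_gain; beyond it the gain
    # falls off inversely with distance, so a farther source sounds quieter.
    return max_gain * ref_distance / max(distance, ref_distance)
```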
[0191] In a possible implementation, setting the volume gain for the target sound source based on the space type of the virtual space where the target sound source is located includes:
[0192] in response to the space type of the virtual space where the target sound source is located being the first space type or the third space type, setting the volume gain of the blocked sound of the target sound source.
[0193] In the embodiments of the present application, when the sound source and the receiving point position are in the same closed virtual space, or when the sound source is in an open virtual space, blocking may occur while the sound emitted by the sound source propagates to the receiving point position. For this, the computer device can set the volume gain of the blocked sound.
[0194] In a possible implementation, setting the volume gain of the blocked sound of the target sound source includes:
[0195] obtaining the blocking value between the target sound source and the receiving point position, the blocking value being used to indicate the degree of blocking between the target sound source and the receiving point position;
[0196] setting the volume gain of the blocked sound of the target sound source based on the blocking value between the target sound source and the receiving point position.
[0197] In the embodiments of the present application, the blocking value may be determined by the distance between the sound source and the receiving point position and by the material of the obstacle. For example, the computer device can obtain the distance between the sound source and the receiving point position, and obstacle information between the sound source and the receiving point position (such as the material of the obstacle, the thickness of the obstacle, the identifier of the obstacle, etc.), and then determine the blocking value between the target sound source and the receiving point position based on the obtained distance and obstacle information.
[0198] Here, the algorithm for determining the blocking value from the distance and the obstacle information, and the conversion method between the blocking value and the volume gain, may be preset by the developer; the specific formula or flow of the algorithm/conversion method is not limited in the embodiments of the present application.
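Since the embodiments leave the formula open, the following is one hypothetical way to combine distance and obstacle information into a blocking value and then into a gain; the material table and coefficients are invented for this sketch:

```python
MATERIAL_FACTOR = {"glass": 0.2, "wood": 0.5, "concrete": 0.9}  # illustrative only

def blocking_value(distance: float, material: str, thickness: float) -> float:
    # Degree of blocking in [0, 1]; denser material, thicker obstacles, and
    # larger distances all increase it.
    base = MATERIAL_FACTOR.get(material, 0.5)
    return min(1.0, base * (1.0 + 0.1 * thickness) * (1.0 + 0.01 * distance))

def blocked_sound_gain(distance: float, material: str, thickness: float) -> float:
    # The stronger the blocking, the lower the volume gain of the blocked sound.
    return 1.0 - blocking_value(distance, material, thickness)
```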
[0199] In a virtual scene, when the sound source Source and the receiver Listener are in the same Room space, a played sound will have a spatial reverberation effect. Please refer to FIG. 6, which shows a schematic diagram of sound effects according to an embodiment of the present application. As shown on the left side of FIG. 6, the reverberation effect in a space can be simulated by computing early reflection sounds that depend on the space geometry and the positions, and then superimposing late reverberation related to the size of the space; if there is blocking between the two, a blocking effect will also occur. As shown on the right side of FIG. 6, when the Source and the Listener are not in the same Room, in the process of the sound played by the Source propagating to the Listener, in addition to the indoor effects, there are also acoustic effects of transmission, blocking, or diffraction.
[0200] The sound source Source and receiver Listener, as well as elements such as acoustic spaces, are typically placed and set during scene editing. The scheme shown in the present application performs real-time processing while the virtual scene is running, so it does not affect the acoustic model placement and setup flow of the conventional scene space. Please refer to FIG. 7, which shows an interaction diagram of system modules according to an embodiment of the present application. Taking a game as an example, as shown in FIG. 7: the audio designer edits the game scene 71 and, according to the desired effect, places the corresponding acoustic model 72, mainly including the Room spaces and the connections between Rooms (also known as connecting portals, Portal). When the game is running, the UVW system 73 automatically manages the loaded Rooms and Portals, performs the UVW determination by tracking the positions and Room switches of the sound source Source and receiver Listener, and finally controls the audio engine 74 to perform the respective spatial acoustic effect rendering in accordance with the current UVW state.
[0201] In the embodiments of the present application, the UVW system works while the game is running. Please refer to FIG. 8, which shows a system framework diagram of the UVW system according to an embodiment of the present application. As shown in FIG. 8, the system includes a UVW detection module 81, a UVW management module 82, and a UVW rendering module 83.
[0202] In FIG. 8, the game scene contains the Source, the Listener, the Rooms with their geometry (Geometry), and the Portals between Rooms, which constitute the sound playback elements of the entire game scene.
[0203] The UVW detection module 81 is used for real-time tracking of the Source and Listener positions and Room switching, and for performing the UVW conversion.
[0204] The UVW management module 82 is responsible for managing the queue elements corresponding to each Room.
[0205] The UVW rendering module 83 is used to perform spatial acoustic effect rendering according to the UVW state of each Source. The UVW system guarantees the spatial acoustic effects while controlling the performance cost, in particular guaranteeing the first-person spatial effects.
[0206] Based on the system framework of FIG. 8, the specific flow of the UVW system at runtime is as follows (a condensed sketch in code is given after the steps):
[0207] 1) Register the V Room and the W Room, with V Room ID = 1 and W Room ID = 2. The V Room acts as the reverberation space: a preset reverberation effect is added, blocking is set with respect to part of the other Rooms, and it has no position or size. The W Room has no reverberation and does not block other Rooms. Initialize U Room ID = -1. Here, the V Room ID, the W Room ID, and the U Room ID are used to identify the V Room, the W Room, and the U Room, respectively.
[0208] 2) When a Room in the scene is loaded, it is added to the Room table R. These Rooms do not generate actual spatial acoustic effects and are mainly used for the spatial determination of Sources. When a Room is unloaded, it is removed from the Room table. Here, the Room ID uniquely identifies the corresponding Room within the virtual scene, while the above V Room ID, W Room ID, and U Room ID indicate the Room type of a specific Room.
[0209] 3) When a Portal in the scene is loaded, it is associated with the corresponding Rooms and added to the Portal table P. When a Portal is unloaded, it is removed from the Portal table.
[0210] 4) When a Source in the scene is created, it is added to the Source queue S = {s1, ..., sn}; usually the Listener itself is also a Source, so it can be managed in the same way. Each Source independently tracks its position and detects the Room ID it corresponds to, forming the Source table S. When a Source is destroyed, it is removed from the Source table.
[0211] 5) Traverse the Source table S to obtain the corresponding Listener, and also obtain the Room ID where the Listener is located.
[0212] 6) Judge whether the Room ID where the Listener is located has switched. If the Listener has switched Rooms, register the U Room and V Rooms corresponding to the Listener: if a Room switch occurs, first deregister the old Room ID and the corresponding geometry and update the U Room ID; if the Listener has a new Room, register the new Room ID and set the geometry of that Room.
[0213] 7) Perform UVW classification management for the Sources:
[0214] (a) If U Room ID >= 0, the Sources located in the same Room as the Listener are in the U Room, and the corresponding Source queue is denoted US; the Sources located in other closed Rooms are in V Rooms, and the corresponding Source queue is denoted VS; the remaining Sources are in the W Room, and the corresponding Source queue is denoted WS;
[0215] (b) If U Room ID < 0, the US queue is empty; the Sources located in Rooms are in V Rooms, and the corresponding Source queue is VS; the Sources that, like the Listener, are not in any Room are in the W Room, and the corresponding Source queue is WS.
[0216] 8) Traverse the US queue, set each Source to be within the U Room, and restore the Source playback volume gain to gain = 1.0.
[0217] 9) Traverse the VS queue, set each Source to be within its V Room, and set the Source playback volume gain to gain = a, where 0 < a < 1.
[0218] 10) Traverse the WS queue and set each Source to be within the W Room; if US is empty, set the Source playback volume gain to gain = 1.0; if US is not empty, set the Source playback volume gain to gain = b, where 0 < b < 1.
[0219] 11) Traverse the VS queue and, for each Source, find the corresponding Portal according to its Room ID and the Portal table. If one is found, check whether the Listener is within the associated Portal area; if so, add the Source to the diffraction table D, where distance is the distance between the Listener and the Portal area.
[0220] 12) If the diffraction table D is not empty, set up an auxiliary transmission channel, which is an additional audio propagation path between the Source and the Listener. This channel is given a reverberation effect, and its volume gain is set to gain = c, where 0 < c < 1; a distance-controlled gain can also be associated to simulate the diffraction effect.
[0221] 13) If US is not empty, traverse the US queue and detect the blocking value between the Listener and each Source; otherwise, traverse the WS queue and detect the blocking value between the Listener and each Source; the blocking volume is set according to the blocking value.
[0222] 14) If the game continues to run, jump to step 5) to continue the iteration; otherwise, the flow ends.
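The following condensed sketch models steps 2) through 10) of the flow above: the R/P/S tables (whose exact tuple fields were elided in the publication text and are therefore assumed here) and one iteration of the Source classification and gain assignment. Portal/diffraction handling (steps 11-12) and blocking detection (step 13) are omitted for brevity:

```python
from dataclasses import dataclass

@dataclass
class SourceEntry:
    source_id: int
    position: tuple
    room_id: int = -1          # -1 means the Source is not inside any Room

class UVWSystem:
    def __init__(self):
        self.rooms = {}        # Room table R: room_id -> room data (fields assumed)
        self.portals = {}      # Portal table P: portal_id -> associated Rooms
        self.sources = {}      # Source table S: source_id -> SourceEntry

    def load_room(self, room_id, room):        self.rooms[room_id] = room
    def unload_room(self, room_id):            self.rooms.pop(room_id, None)
    def create_source(self, src: SourceEntry): self.sources[src.source_id] = src
    def destroy_source(self, source_id):       self.sources.pop(source_id, None)

    def tick(self, listener_id: int, a: float = 0.5, b: float = 0.7):
        """One iteration of steps 5)-10): classify Sources into the US/VS/WS
        queues relative to the Listener's Room and assign playback gains."""
        listener = self.sources[listener_id]
        u_room_id = listener.room_id           # -1 when the Listener is in no Room

        us, vs, ws = [], [], []
        for src in self.sources.values():
            if src.source_id == listener_id:
                continue
            if src.room_id < 0:
                ws.append(src)                 # W: not inside any Room
            elif src.room_id == u_room_id:
                us.append(src)                 # U: same Room as the Listener
            else:
                vs.append(src)                 # V: some other closed Room

        gains = {}
        for src in us:
            gains[src.source_id] = 1.0         # step 8
        for src in vs:
            gains[src.source_id] = a           # step 9, 0 < a < 1
        for src in ws:
            gains[src.source_id] = 1.0 if not us else b  # step 10, 0 < b < 1
        return us, vs, ws, gains
```

Calling `tick` once per frame mirrors the iteration of step 14); because the classification only compares Room IDs, its cost per Source is constant regardless of how many Rooms the scene contains.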
[0223] Based on the system framework shown in FIG. 8, during UVW system rendering, spatial acoustic rendering can be performed in the following different states:
[0224] 1) The Listener and the Source are both located in the U Room. At this time there are three spatial acoustic effects: early reflection, late reverberation, and blocking. The sound rendering process is shown in FIG. 9.
[0225] 2) The Listener is located in the U Room and the Source is located in a V Room; or the Listener is located in the W Room and the Source is located in a V Room. At this time there are three spatial acoustic effects: reverberation, transmission, and diffraction. The sound rendering process is shown in FIG. 10.
[0226] 3) The Listener is located in the U Room and the Source is located in the W Room; or the Listener is located in the W Room and the Source is located in the W Room. At this time there is a blocking acoustic effect. The sound rendering process is shown in FIG. 11.
[0227] In summary, the scheme shown in the present application combines the closed attribute of each virtual space in the virtual scene with the position of the receiving point in the virtual scene to determine the space type of each virtual space in real time, and then, based on the space type of each virtual space together with the positions of the receiving point and the sound source, performs sound effect processing on the sound emitted by the sound source. The sound emitted by the sound source does not need to be tracked during this process, which greatly simplifies the calculation process and reduces resource usage, thereby improving the efficiency of sound processing in the virtual scene.
[0228] The scheme shown in the present application proposes an algorithm and system for playback optimization of virtual spatial acoustic effects. Without affecting the acoustic model of the virtual space, UVW conversion and management are performed, and spatial effect rendering is carried out in accordance with the UVW spatial information of the playback source and the receiver. On the premise of guaranteeing the spatial audio effects, the overall performance consumption of the spatial acoustic effects is effectively controlled, so that spatial audio technology can be applied to mobile terminals and other devices, greatly enhancing the audio immersion experience on the mobile terminal.
[0229] FIG. 12 shows a block diagram of a sound processing apparatus in a virtual scene provided by an exemplary embodiment of the present application. The sound processing apparatus in the virtual scene can be applied to a computer device to perform all or some of the steps of the method shown in FIG. 3 or FIG. 4. As shown in FIG. 12, the sound processing apparatus in the virtual scene includes:
[0230] a first acquisition module 1201, configured to obtain the closed attribute of each virtual space in the virtual scene, and the receiving point position; the closed attribute is used to indicate whether the virtual space is a closed space; the receiving point position includes the location, in the virtual scene, of the virtual object controlled by the target terminal;
[0231] a space type acquisition module 1202, configured to obtain the space type of each virtual space based on the closed attribute of each virtual space and the receiving point position;
[0232] a sound processing module 1203, configured to add a sound effect to the sound emitted by the target sound source based on the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located, to obtain the target sound of the target sound source at the receiving point position.
[0233] In a possible implementation, the space type acquisition module 1202 is configured to:
[0234] in response to the closed attribute of the target virtual space indicating that the target virtual space is a closed space, and the receiving point position being within the target virtual space, obtain the space type of the target virtual space as the first space type;
[0235] in response to the closed attribute of the target virtual space indicating that the target virtual space is a closed space, and the receiving point position being outside the target virtual space, obtain the space type of the target virtual space as the second space type;
[0236] in response to the closed attribute of the target virtual space indicating that the target virtual space is a non-closed space, obtain the space type of the target virtual space as the third space type;
[0237] where the target virtual space is any one of the virtual spaces.
[0238] In a possible implementation, the sound processing module 1203 is configured to, in response to the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located satisfying the first condition, add the first sound effect to the sound emitted by the target sound source to obtain the target sound;
[0239] where the first sound effect includes at least one of a blocking effect, a reflection effect, and a reverberation effect;
[0240] the first condition includes: the space type of the virtual space where the receiving point position is located is the first space type, and the space type of the virtual space where the target sound source is located is the first space type.
[0241] In a possible implementation, the sound processing module 1203 is configured to:
[0242] add a blocking sound effect to the direct sound to obtain a blocked sound;
[0243] add a late reverberation sound effect on the basis of the early reflection sound to obtain a reverberant sound;
[0244] obtain the target sound based on the early reflection sound, the blocked sound, and the reverberant sound.
[0245] In a possible implementation, the sound processing module 1203 is configured to, in response to the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located satisfying the second condition, add the second sound effect to the sound emitted by the target sound source to obtain the target sound;
[0246] where the second sound effect includes at least one of a transmission effect, a diffraction effect, and a reverberation effect;
[0247] the second condition includes:
[0248] the space type of the virtual space where the receiving point position is located is the first space type, and the space type of the virtual space where the target sound source is located is the second space type;
[0249] or, the space type of the virtual space where the receiving point position is located is the third space type, and the space type of the virtual space where the target sound source is located is the second space type.
[0250] In a possible implementation, the sound processing module 1203 is configured to:
[0251] generate the direct sound and the reverberant sound corresponding to the target sound source;
[0252] add a transmission sound effect to the direct sound and the reverberant sound to obtain a transmitted sound;
[0253] add a diffraction sound effect to the reverberant sound to obtain a diffracted sound;
[0254] obtain the target sound based on the transmitted sound and the diffracted sound.
[0255] In a possible implementation, the sound processing module 1203 is configured to, in response to the space type of the virtual space where the receiving point position is located and the space type of the virtual space where the target sound source is located satisfying the third condition, add the third sound effect to the sound emitted by the target sound source to obtain the target sound;
[0256] where the third sound effect includes a blocking sound effect;
[0257] the third condition includes:
[0258] the space type of the virtual space where the receiving point position is located is the first space type, and the space type of the virtual space where the target sound source is located is the third space type;
[0259] or, the space type of the virtual space where the receiving point position is located is the third space type, and the space type of the virtual space where the target sound source is located is the third space type.
[0260] In a possible implementation, the sound processing module 1203 is configured to:
[0261] generate the direct sound corresponding to the target sound source;
[0262] add a blocking sound effect to the direct sound to obtain the target sound.
[0263] In a possible implementation, the apparatus further includes:
[0264] a gain setting module, configured to set the volume gain for the target sound source based on the space type of the virtual space where the target sound source is located;
[0265] the sound processing module 1203 is configured to add a sound effect to the sound emitted by the target sound source based on the space type of the virtual space where the receiving point position is located, the space type of the virtual space where the target sound source is located, and the volume gain of the target sound source, to obtain the target sound.
[0266] In a possible implementation, the volume gain includes at least one of the following gains (grouped in the sketch after this list):
[0267] a gain of the direct sound, a gain of the blocked sound, a gain of the reverberation sound, and a gain of the diffracted sound.
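These four per-component gains can be grouped in a small record. The structure below is an assumption made for illustration, not a data layout taken from the embodiments:

```python
from dataclasses import dataclass

@dataclass
class VolumeGain:
    # One multiplicative gain per sound component listed in [0267].
    direct: float = 1.0
    blocked: float = 1.0
    reverb: float = 1.0
    diffracted: float = 1.0
```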
[0268] In a possible implementation, the gain setting module is configured to (the three rules below are sketched in code after this list):
[0269] in response to the space type of the virtual space where the target sound source is located being the first space type, set the volume gain of the direct sound of the target sound source to 1;
[0270] in response to the space type of the virtual space where the target sound source is located being the second space type, set the volume gain of the direct sound of the target sound source to a, where 0 < a < 1;
[0271] in response to the space type of the virtual space where the target sound source is located being the third space type, set the volume gain of the direct sound of the target sound source to b, where 0 < b < 1.
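A direct transcription of the three rules, reusing the FIRST/SECOND/THIRD tags from the earlier condition sketch, with hypothetical constants standing in for a and b (the embodiments only require 0 < a < 1 and 0 < b < 1):

```python
A, B = 0.7, 0.5  # hypothetical values for a and b

def direct_gain(source_space: int) -> float:
    if source_space == FIRST:   # [0269]
        return 1.0
    if source_space == SECOND:  # [0270], 0 < a < 1
        return A
    return B                    # [0271], third space type, 0 < b < 1
```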
[0272] In a possible implementation, the gain setting module is configured to, in response to the space type of the virtual space where the target sound source is located being the second space type and a connection port existing between the virtual space where the target sound source is located and the virtual space where the receiving point position is located, set the volume gain of at least one of the reverberation sound and the diffracted sound of the target sound source.
[0273] In a possible implementation, the gain setting module is configured to (see the sketch after these two options):
[0274] set the volume gain of at least one of the reverberation sound and the diffracted sound of the target sound source to a fixed value;
[0275] or, set the volume gain of at least one of the reverberation sound and the diffracted sound of the target sound source based on the distance between the target sound source and the receiving point position.
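Both options can be captured in one hypothetical helper. The inverse-distance falloff below is an illustrative choice, since the embodiments do not fix a specific distance curve:

```python
import math

def port_gain(source_pos, receiver_pos, fixed=None):
    # [0274]: option 1, a fixed gain value.
    if fixed is not None:
        return fixed
    # [0275]: option 2, a gain derived from the source-to-receiver distance;
    # the inverse-distance mapping here is a hypothetical example.
    distance = math.dist(source_pos, receiver_pos)
    return 1.0 / (1.0 + distance)
```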
[0276] In a possible implementation, the gain setting module is configured to, in response to the space type of the virtual space where the target sound source is located being the second space type or the third space type, set the volume gain of the blocked sound of the target sound source.
[0277] In a possible implementation, the gain setting module is configured to (see the sketch after these steps):
[0278] obtain a blocking value between the target sound source and the receiving point position, the blocking value indicating the degree of blocking between the target sound source and the receiving point position;
[0279] set the volume gain of the blocked sound of the target sound source based on the blocking value between the target sound source and the receiving point position.
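One plausible mapping from blocking value to blocked-sound gain, assuming the blocking value is normalized to [0, 1]; both the normalization and the linear mapping are illustrative assumptions:

```python
def blocked_gain(blocking_value: float) -> float:
    # [0278]: blocking_value indicates the degree of blocking between the
    # target sound source and the receiving point (assumed here to lie in [0, 1]).
    blocking_value = min(max(blocking_value, 0.0), 1.0)
    # [0279]: the stronger the blocking, the lower the blocked-sound gain;
    # a linear mapping is one hypothetical choice.
    return 1.0 - blocking_value
```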
[0280] In summary, the scheme shown in the embodiments of the present application determines the space type of each virtual space in real time by combining the closure attribute of each virtual space in the virtual scene with the position of the receiving point in the virtual scene, and then performs sound effect processing on the sound emitted by the sound source based on the space type of each virtual space and the positions of the receiving point and the sound source. Since the sound emitted by the sound source does not need to be tracked during this process, the calculation process is greatly simplified and the resource occupation is reduced, thereby improving the efficiency of sound processing in the virtual scene.
[0281] Figure 13 shows a structural block diagram of a computer device 1300 provided by an exemplary embodiment of the present application. The computer device 1300 may be a portable mobile terminal, such as a smartphone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop, or a desktop computer. The computer device 1300 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
[0282] Typically, the computer device 1300 includes a processor 1301 and a memory 1302.
[0283] The processor 1301 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1301 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array).
[0284] The memory 1302 may include one or more computer-readable storage media, which may be non-transitory. The memory 1302 may also include a high-speed random access memory and a non-volatile memory, such as one or more disk storage devices or flash storage devices. In some embodiments, the non-transitory computer-readable storage medium in the memory 1302 is configured to store at least one computer instruction, which is executed by the processor 1301 to implement the sound processing method in a virtual scene provided by the method embodiments of the present application.
[0285] In some embodiments, the computer device 1300 may optionally further include a peripheral device interface 1303 and at least one peripheral device. The processor 1301, the memory 1302, and the peripheral device interface 1303 may be connected through a bus or a signal line. Each peripheral device may be connected to the peripheral device interface 1303 through a bus, a signal line, or a circuit board. Specifically, the peripheral devices include at least one of a radio frequency circuit 1304, a display screen 1305, a camera assembly 1306, an audio circuit 1307, and a power supply 1309.
[0286] In some embodiments, the computer device 1300 further includes one or more sensors 1310. The one or more sensors 1310 include, but are not limited to, an acceleration sensor 1311, a gyroscope sensor 1312, a pressure sensor 1313, an optical sensor 1315, and a proximity sensor 1316.
[0287] Those skilled in the art will understand that the structure shown in Figure 13 does not constitute a limitation on the computer device 1300, which may include more or fewer components than shown, combine certain components, or adopt a different component arrangement.
[0288] Figure 14 shows a structural block diagram of a computer device 1400 provided by an exemplary embodiment of the present application. The computer device may be implemented as the server in the above-described schemes. The computer device 1400 includes a central processing unit (CPU) 1401, a system memory 1404 including a random access memory (RAM) 1402 and a read-only memory (ROM) 1403, and a system bus 1405 connecting the system memory 1404 and the central processing unit 1401. The computer device 1400 also includes a basic input/output system (I/O system) 1406 for transmitting information between various components within the computer, and a mass storage device 1407 for storing an operating system 1413, an application 1414, and other program modules 1415.
[0289] The basic input/output system 1406 includes a display 1408 for displaying information and an input device 1409, such as a mouse or a keyboard, for a user to input information. The display 1408 and the input device 1409 are both connected to the central processing unit 1401 through an input/output controller 1410 connected to the system bus 1405. The basic input/output system 1406 may further include the input/output controller 1410 for receiving and processing input from a keyboard, a mouse, or an electronic stylus. Similarly, the input/output controller 1410 also provides output to a display screen, a printer, or other types of output devices.
[0290] The mass storage device 1407 is connected to the central processing unit 1401 through a mass storage controller (not shown) connected to the system bus 1405. The mass storage device 1407 and its associated computer-readable media provide non-volatile storage for the computer device 1400. That is, the mass storage device 1407 may include a computer-readable medium (not shown) such as a hard disk or a CD-ROM drive.
[0291] Without loss of generality, the computer-readable media may include computer storage media and communication media. Computer storage media include volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media include RAM, ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other solid-state storage technologies, CD-ROM, DVD (Digital Versatile Disc) or other optical storage, tape cassettes, magnetic tape, magnetic disk storage, or other magnetic storage devices. Of course, those skilled in the art will know that the computer storage media are not limited to the above. The above system memory 1404 and mass storage device 1407 may be collectively referred to as the memory.
[0292] According to various embodiments of the present application, the computer device 1400 may also be operated through a remote computer connected to a network, such as the Internet. That is, the computer device 1400 may be connected to a network 1412 through a network interface unit 1411 connected to the system bus 1405, or the network interface unit 1411 may be used to connect to other types of networks or remote computer systems (not shown).
[0293] The memory further includes at least one computer instruction, which is stored in the memory; the central processing unit 1401 implements all or part of the steps of the methods shown in the above embodiments by executing the at least one computer instruction.
[0294] In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as a memory including at least one computer instruction, and the at least one computer instruction may be executed by a processor to complete all or part of the steps of the method shown in any of the embodiments of Figure 3 or Figure 4 above. For example, the non-transitory computer-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
[0295] In an exemplary embodiment, a computer program product or a computer program is also provided, which includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs all or part of the steps of the method shown in any of the embodiments of Figure 3 or Figure 4 above.
[0296] Other embodiments of the present application will be readily apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. The present application is intended to cover any variations, uses, or adaptations that follow the general principles of the present application and include common knowledge or customary technical means in the art not disclosed in the present application. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present application being indicated by the following claims.
[0297] It should be understood that the present application is not limited to the precise structures described above and shown in the drawings, and various modifications and changes may be made without departing from its scope. The scope of the present application is limited only by the appended claims.