Voice over Internet Protocol power saving techniques for wireless systems
A wireless device and wireless communication technique, applied in the field of wireless communication, which can solve problems such as low efficiency and excessive power usage
Pending Publication Date: 2020-04-03
QUALCOMM INC
4 Cites 0 Cited by
AI-Extracted Technical Summary
Problems solved by technology
Waking up the packet layer from low power mode or preventing the packet layer from entering low power mode reduces the time spent in the low power mode, which leads to excessive power usage and low efficiency.
Abstract
Methods, systems, and devices for wireless communication are described. A user equipment (UE) may be enabled for voice over long term evolution (VoLTE). The UE may include an audio layer to encode and decode voice information and a packet layer to transmit voice packets. The packet layer may store parameters related to discontinuous reception (DRX) in a shared memory. The audio layer may obtain the DRX parameters and encode voice information based on the parameters. For example, the audio layer coding may be synchronized with the wake period of the DRX cycle. The audio layer may encode voice information during a wake up period of the packet layer DRX cycle, and the packet layer may transmit the voice packets while awake. The audio layer may perform back-to-back encodings at the beginning of the DRX cycle. The packet layer may extend the wake period to transmit the voice packets.
Application Domain
Power management, Speech analysis, +1
Technology Topic
Power saving, Telecommunications, +9
Examples
- Experimental program(1)
Example Embodiment
[0047] The UE may be enabled for VoLTE or other packet-based operations, so that the UE can transmit voice information in packet form (for example, over an LTE channel). The UE may include an audio layer and a packet layer. The audio layer can encode and decode voice information, and the UE can use the packet layer to send the encoded voice information as packets over LTE. The audio layer may encode voice information into packets during an audio layer compression/decompression (codec) period for transmission by the packet layer. When the voice information is ready, the audio layer can pass the voice packets to the packet layer for transmission. The packet layer can be configured with a DRX cycle, in which the UE periodically wakes up to check for pending data transmissions and, after processing any pending transmissions, goes back to sleep until the next DRX cycle. The audio layer and the packet layer of the UE can operate asynchronously. For example, during a voice call the UE can continuously handle incoming and outgoing voice information from the user and from the other devices participating in the call, even while the packet layer is dormant. In some examples, audio layer compression/decompression may occur while the packet layer is in the sleep period of the DRX cycle, and the packet layer can then wake up to send the voice packets. Waking the packet layer from the low power mode, or preventing it from entering the low power mode, reduces the time spent in the low power mode, which leads to excessive power usage and low efficiency.
[0048] The packet layer and the audio layer can coordinate the audio layer coding with the wake-up period of the DRX cycle. The packet layer can store the parameters related to the DRX cycle in the shared memory. The audio layer can obtain DRX cycle parameters, and synchronize the audio layer compression/decompression period with the DRX cycle. In some examples, the audio layer may store voice information or audio layer information in the shared memory for retrieval and use by the packet layer.
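As a rough illustration of the shared-memory coordination described in [0048], the following Python sketch models a packet layer that publishes its DRX parameters to a shared store and an audio layer that reads them back. The parameter names (cycle_ms, wake_ms, start_offset_ms), the dictionary-based shared memory, and the millisecond units are assumptions made only for illustration; the disclosure does not specify a particular interface.

    from dataclasses import dataclass

    @dataclass
    class DrxParams:
        # Hypothetical DRX parameter set; a real modem's configuration may
        # use different names, units, and granularity.
        cycle_ms: int = 40        # full DRX cycle duration
        wake_ms: int = 10         # nominal wake (on-duration) period
        start_offset_ms: int = 0  # offset of the first wake period

    class SharedMemory:
        """Stands in for memory accessible to both the packet and audio layers."""
        def __init__(self):
            self._store = {}

        def write(self, key, value):
            self._store[key] = value

        def read(self, key):
            return self._store.get(key)

    # Packet layer stores its DRX parameters in the shared memory.
    shared = SharedMemory()
    shared.write("drx_params", DrxParams(cycle_ms=40, wake_ms=10, start_offset_ms=0))

    # Audio layer later retrieves them to align its codec timeline.
    drx = shared.read("drx_params")
    print(f"Audio layer sees DRX cycle={drx.cycle_ms} ms, wake={drx.wake_ms} ms")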
[0049] The audio layer can perform speech encoding and decoding at the beginning of the DRX cycle based on the DRX cycle parameters. Therefore, the audio layer can complete the speech encoding and send the encoded voice information to the packet layer while the packet layer is awake. The packet layer can send the voice packets during the awake period and then enter the low power mode or sleep period without being awakened later in the DRX cycle by additional encoded voice information.
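To make the alignment in [0049] concrete, the sketch below computes when the next DRX wake period begins so that the audio layer can schedule its encoding to start there. It assumes the hypothetical DrxParams fields from the previous sketch and a simple millisecond timebase; a real implementation would work in subframes or hardware timer ticks.

    def next_wake_time_ms(now_ms, cycle_ms, start_offset_ms=0):
        """Return the start of the next DRX wake period at or after now_ms.

        Wake periods are assumed to begin at start_offset_ms + k * cycle_ms.
        """
        if now_ms <= start_offset_ms:
            return start_offset_ms
        elapsed = now_ms - start_offset_ms
        cycles_done = (elapsed + cycle_ms - 1) // cycle_ms  # ceiling division
        return start_offset_ms + cycles_done * cycle_ms

    # The audio layer would begin encoding at this time so the encoded voice
    # information is ready while the packet layer is awake.
    print(next_wake_time_ms(now_ms=73, cycle_ms=40))  # -> 80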
[0050] In some examples, the audio layer may perform speech encodings back to back based on the DRX parameters. The audio layer can encode a set of voice samples during the DRX wake period. The audio layer may encode a first subset of the voice information, and then encode a second subset of the voice information directly after encoding the first subset. The audio layer can thus complete encoding these sets of voice information while the packet layer is awake, and the packet layer can send the voice packets and enter a low power mode or sleep period for the remaining time of the DRX cycle.
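A minimal sketch of the back-to-back encoding in [0050], assuming a 20 ms codec frame and two frames per DRX wake period (values borrowed from the example in paragraph [0087] below); the dummy encoder and the sample buffering are placeholders, not the codec described in this disclosure.

    FRAME_MS = 20  # assumed codec frame duration

    def encode_frame(samples):
        """Placeholder for the audio-layer speech encoder."""
        return bytes([len(samples) % 256] * 4)  # dummy payload

    def encode_back_to_back(mic_buffer):
        """Encode two consecutive 20 ms subsets at the start of the wake period.

        mic_buffer is assumed to hold at least 40 ms of captured samples
        (here, one 'sample' per millisecond for brevity).
        """
        first = mic_buffer[:FRAME_MS]                # first subset
        second = mic_buffer[FRAME_MS:2 * FRAME_MS]   # second subset, directly after
        return [encode_frame(first), encode_frame(second)]

    packets = encode_back_to_back(list(range(40)))
    print(len(packets), "voice packets ready while the packet layer is awake")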
[0051] In some examples, the encoding and decoding of voice information may last longer than the normal wake-up period of the DRX cycle. The packet layer can extend the wake-up period to send voice packets. The packet layer can enter low power mode after an extended wake-up period. In some examples, even with an extended wake-up period, the UE can use less power by entering low power mode during the remaining time of the DRX cycle without being awakened to send other voice packets.
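The wake-period extension in [0051] can be sketched as follows; the 40 ms cycle, 10 ms nominal wake period, and per-packet transmission time are illustrative assumptions, not values taken from this disclosure.

    def run_wake_period(pending_packets, nominal_wake_ms=10,
                        cycle_ms=40, tx_ms_per_packet=2):
        """Simulate one DRX wake period that is extended only if needed.

        Returns (awake_ms, sleep_ms) for the cycle.
        """
        tx_ms = len(pending_packets) * tx_ms_per_packet
        awake_ms = max(nominal_wake_ms, tx_ms)  # extend when transmission runs long
        awake_ms = min(awake_ms, cycle_ms)      # never past the end of the cycle
        return awake_ms, cycle_ms - awake_ms

    print(run_wake_period([b"p1", b"p2"]))   # (10, 30): no extension needed
    print(run_wake_period([b"p"] * 8))       # (16, 24): wake period extended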
[0052] Aspects of the disclosure are initially described in the context of a wireless communication system. Aspects of the disclosure are further illustrated by, and described with reference to, apparatus diagrams, system diagrams, and flowcharts that relate to voice over Internet protocol power saving techniques for wireless systems.
[0053] FIG. 1 illustrates an example of a wireless communication system 100 in accordance with various aspects of the present disclosure. The wireless communication system 100 includes a base station 105, a UE 115, and a core network 130. In some examples, the wireless communication system 100 may be an LTE network, an LTE-Advanced (LTE-A) network, or an NR network. In some cases, the wireless communication system 100 may support enhanced broadband communications, ultra-reliable (i.e., mission-critical) communications, low-latency communications, and communications with low-cost and low-complexity devices. In some examples, a wireless device such as a UE 115 or a base station 105 may support VoLTE communications via an audio layer and a packet layer of the wireless device. To conserve power, the packet layer may operate according to a DRX cycle having a wake period and a sleep period. When performing VoLTE communications, the audio layer may align encoding or decoding operations with the awake period to avoid waking up the packet layer during the sleep period.
[0054] The base station 105 may wirelessly communicate with the UE 115 via one or more base station antennas. Each base station 105 may provide communication coverage for a respective geographic coverage area 110. The communication links 125 shown in the wireless communication system 100 may include uplink transmissions from the UE 115 to the base station 105 or downlink transmissions from the base station 105 to the UE 115. Control information and data may be multiplexed on an uplink channel or a downlink channel according to various techniques. For example, control information and data may be multiplexed on a downlink channel using time division multiplexing (TDM) techniques, frequency division multiplexing (FDM) techniques, or hybrid TDM-FDM techniques. In some examples, the control information sent during a transmission time interval (TTI) of a downlink channel may be distributed between different control regions in a cascaded manner (for example, between a common control region and one or more UE-specific control regions).
[0055] The UEs 115 may be dispersed throughout the wireless communication system 100, and each UE 115 may be stationary or mobile. A UE 115 may also be referred to as a mobile station, a subscriber station, a mobile unit, a subscriber unit, a wireless unit, a remote unit, a mobile device, a wireless device, a wireless communication device, a remote device, a mobile subscriber station, an access terminal, a mobile terminal, a wireless terminal, a remote terminal, a handset, a user agent, a mobile client, a client, or some other suitable terminology. A UE 115 may also be a cellular phone, a personal digital assistant (PDA), a wireless modem, a wireless communication device, a handheld device, a tablet computer, a laptop computer, a cordless phone, a personal electronic device, a handheld device, a personal computer, a wireless local loop (WLL) station, an Internet of Things (IoT) device, an Internet of Everything (IoE) device, a machine type communication (MTC) device, an appliance, an automobile, or the like.
[0056] In some cases, the UE 115 can also directly communicate with other UEs (for example, using a peer-to-peer (P2P) protocol or a device-to-device (D2D) protocol). One or more of a group of UEs 115 using D2D communication may be located within the geographic coverage area 110 of the cell. The other UEs 115 in the group may be located outside the coverage area 110 of the cell or may not be able to receive transmissions from the base station 105. In some cases, a group of UEs 115 communicating via D2D communication may utilize a one-to-many (1:M) system in which each UE 115 transmits a signal to every other UE 115 in the group. In some cases, the base station 105 facilitates the scheduling of resources for D2D communication. In other cases, D2D communication is performed independently of the base station 105.
[0057] Some UEs 115, such as MTC or IoT devices, may be low-cost or low-complexity devices and may provide automated communication between machines (i.e., machine-to-machine (M2M) communication). M2M or MTC may refer to data communication technologies that allow devices to communicate with one another or with a base station without human intervention. For example, M2M or MTC may refer to communications from devices that integrate sensors or meters to measure or capture information and relay that information to a central server or application program that can make use of the information or present the information to humans interacting with the program or application. Some UEs 115 may be designed to collect information or enable automated behavior of machines. Examples of applications for MTC devices include smart metering, inventory monitoring, water level monitoring, equipment monitoring, healthcare monitoring, wildlife monitoring, weather and geological event monitoring, fleet management and tracking, remote security sensing, physical access control, and transaction-based business charging.
[0058] In some cases, MTC devices can operate at reduced peak rates using half-duplex (one-way) communication. The MTC device can also be configured to enter a power saving "deep sleep" mode when not participating in active communications. In some cases, MTC or IoT devices can be designed to support mission-critical functions, and wireless communication systems can be configured to provide ultra-reliable communications to these functions.
[0059] The base station 105 can communicate with the core network 130 and communicate with each other. For example, the base station 105 may interact with the core network 130 through the backhaul link 132 (for example, S1, etc.). The base stations 105 may communicate with each other directly or indirectly (for example, through the core network 130) through a backhaul link 134 (for example, X2, etc.). The base station 105 may perform radio configuration and scheduling for communication with the UE 115, or may operate under the control of a base station controller (not shown). In some examples, the base station 105 may be a macro cell, a small cell, a hot spot, and so on. The base station 105 may also be referred to as an evolved node B (eNB) 105.
[0060] The base station 105 may be connected to the core network 130 through the S1 interface. The core network may be an evolved packet core (EPC), and the EPC may include at least one mobility management entity (MME), at least one serving gateway (S-GW), and at least one packet data network (PDN) gateway (P-GW). The MME may be a control node that handles signaling between the UE 115 and the EPC. All user Internet Protocol (IP) packets can be transmitted through the S-GW, where the S-GW itself can be connected to the P-GW. P-GW can provide IP address allocation and other functions. The P-GW can be connected to the IP service of the network operator. The operator's IP services may include the Internet, Intranet, IP Multimedia Subsystem (IMS), and Packet Switched (PS) streaming services.
[0061] The core network 130 may provide user authentication, access authorization, tracking, IP connectivity, and other access, routing, or mobility functions. At least some of the network devices may include subcomponents such as access network entities, which may be an example of an access node controller (ANC). Each access network entity may communicate with multiple UEs 115 through multiple other access network transmission entities (each of which may be an example of a smart radio head or a transmission/reception point (TRP)). In some configurations, the various functions of each access network entity or base station 105 may be distributed across various network devices (for example, radio heads and access network controllers) or consolidated into a single network device (for example, the base station 105).
[0062] The wireless communication system 100 may operate in the ultra high frequency (UHF) region using frequency bands from 700 MHz to 2600 MHz (2.6 GHz), although some networks (e.g., a wireless local area network (WLAN)) may use frequencies as high as 4 GHz. This region may also be known as the decimeter band, since the wavelengths range from approximately one decimeter to one meter in length. UHF waves propagate mainly by line of sight and may be blocked by buildings and environmental features. However, the waves may penetrate walls sufficiently to provide service to UEs 115 located indoors. Transmission of UHF waves is characterized by smaller antennas and shorter ranges (e.g., less than 100 km) compared to transmission at lower frequencies (and longer wavelengths) using the high frequency (HF) or very high frequency (VHF) portions of the spectrum. In some cases, the wireless communication system 100 may also utilize the extremely high frequency (EHF) portion of the spectrum (e.g., from 30 GHz to 300 GHz). This region may also be known as the millimeter wave band, since the wavelengths range from approximately one millimeter to one centimeter in length. Thus, EHF antennas may be even smaller and more closely spaced than UHF antennas. In some cases, this may facilitate the use of antenna arrays within a UE 115 (e.g., for directional beamforming). However, EHF transmissions may be subject to greater atmospheric attenuation and shorter range than UHF transmissions.
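The band names in [0062] follow from the relation wavelength = c / frequency; the short check below is illustrative only.

    C = 299_792_458.0  # speed of light in m/s

    for f_hz, label in [(300e6, "low edge of UHF"), (3e9, "high edge of UHF"),
                        (30e9, "low edge of EHF"), (300e9, "high edge of EHF")]:
        print(f"{label}: {f_hz / 1e9:g} GHz -> {100 * C / f_hz:.2f} cm")
    # UHF spans roughly 1 m down to 10 cm (decimeter waves);
    # EHF spans roughly 10 mm down to 1 mm (millimeter waves).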
[0063] The wireless communication system 100 may thus support millimeter wave (mmW) communications between the UE 115 and the base station 105. Devices operating in the mmW or EHF bands may have multiple antennas to allow beamforming. That is, the base station 105 may use multiple antennas or antenna arrays to conduct beamforming operations for directional communications with the UE 115. Beamforming (which may also be referred to as spatial filtering or directional transmission) is a signal processing technique that may be used at a transmitter (e.g., the base station 105) to shape or steer an overall antenna beam in the direction of a target receiver (e.g., the UE 115). This may be achieved by combining elements in an antenna array in such a way that transmitted signals at particular angles experience constructive interference while other signals experience destructive interference.
[0064] Multiple-input multiple-output (MIMO) wireless systems use a transmission scheme between a transmitter (e.g., base station 105) and a receiver (e.g., UE 115), where both the transmitter and receiver are equipped with multiple antennas. Some parts of the wireless communication system 100 may use beamforming. For example, the base station 105 may have an antenna array with multiple rows and multiple columns of antenna ports, and the base station 105 may use these antenna ports for beamforming in its communication with the UE 115. Signals may be transmitted multiple times in different directions (for example, each transmission may be beamformed differently). The mmW receiver (e.g., UE 115) may try multiple beams (e.g., antenna sub-arrays) when receiving synchronization signals.
[0065] In some cases, the antennas of the base station 105 or UE 115 may be located in one or more antenna arrays, where these antenna arrays may support beamforming or MIMO operation. One or more base station antennas or antenna arrays may be co-located at antenna components such as antenna towers. In some cases, the antennas or antenna arrays associated with the base station 105 may be located in different geographic locations. The base station 105 may use multiple antennas or antenna arrays to perform beamforming operations to achieve directional communication with the UE 115.
[0066] In some cases, the wireless communication system 100 may be a packet-based network operating according to a layered protocol stack. In the user plane, communications at the bearer or packet data convergence protocol (PDCP) layer may be IP-based. In some cases, a radio link control (RLC) layer may perform packet segmentation and reassembly to communicate over logical channels. A medium access control (MAC) layer may perform priority handling and multiplexing of logical channels into transport channels. The MAC layer may also use hybrid ARQ (HARQ) to provide retransmission at the MAC layer to improve link efficiency. In the control plane, the radio resource control (RRC) protocol layer may provide establishment, configuration, and maintenance of an RRC connection between the UE 115 and the network device 105-c, the network device 105-b, or the core network 130 supporting radio bearers for user plane data. At the physical (PHY) layer, transport channels may be mapped to physical channels.
[0067] Time intervals in LTE or NR may be expressed in multiples of a basic time unit (which may be, for example, a sampling period of Ts = 1/30,720,000 seconds). Time resources may be organized according to radio frames of length 10 ms (Tf = 307,200·Ts), which may be identified by a system frame number (SFN) ranging from 0 to 1023. Each frame may include ten 1 ms subframes numbered from 0 to 9. A subframe may be further divided into two 0.5 ms slots, each of which contains 6 or 7 modulation symbol periods (depending on the length of the cyclic prefix prepended to each symbol period). Excluding the cyclic prefix, each symbol contains 2048 sampling periods. In some cases, the subframe may be the smallest scheduling unit, also known as a TTI. In other cases, the TTI may be shorter than a subframe or may be dynamically selected (e.g., in short TTI bursts or in selected component carriers using short TTIs).
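The timing quantities in [0067] can be cross-checked numerically; the sketch below assumes the 30.72 MHz sampling rate implied by Ts.

    TS_S = 1 / 30_720_000   # basic time unit Ts in seconds
    FRAME_TS = 307_200      # Tf = 307,200 * Ts

    print(f"Radio frame: {FRAME_TS * TS_S * 1e3:.1f} ms")       # 10.0 ms
    print(f"Subframe:    {FRAME_TS * TS_S * 1e3 / 10:.1f} ms")  # 1.0 ms
    print(f"Slot:        {FRAME_TS * TS_S * 1e3 / 20:.2f} ms")  # 0.50 ms
    # Each OFDM symbol spans 2048 sampling periods, excluding the cyclic prefix.
    print(f"Symbol body: {2048 * TS_S * 1e6:.2f} us")           # ~66.67 us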
[0068] A resource element may consist of one symbol period and one subcarrier (e.g., a 15 kHz frequency range). A resource block may contain 12 consecutive subcarriers in the frequency domain and, for a normal cyclic prefix in each OFDM symbol, 7 consecutive OFDM symbols in the time domain (1 slot), or 84 resource elements. The number of bits carried by each resource element may depend on the modulation scheme (the configuration of modulation symbols that may be selected during each symbol period). Thus, the more resource blocks a UE receives and the higher the modulation scheme order, the higher the data rate may be.
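To make the resource-block arithmetic in [0068] concrete, the sketch below counts resource elements per slot and the bits they could carry for a few modulation orders; the resulting rate is a raw upper bound that ignores coding, control, and reference-signal overhead.

    SUBCARRIERS_PER_RB = 12
    SYMBOLS_PER_SLOT = 7   # normal cyclic prefix
    SLOT_MS = 0.5

    re_per_rb = SUBCARRIERS_PER_RB * SYMBOLS_PER_SLOT  # 84 resource elements
    for modulation, bits_per_re in [("QPSK", 2), ("16QAM", 4), ("64QAM", 6)]:
        bits_per_rb = re_per_rb * bits_per_re
        print(f"{modulation}: {bits_per_rb} bits per RB per slot "
              f"(~{bits_per_rb / SLOT_MS:.0f} kbit/s raw, one RB)")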
[0069] The wireless communication system 100 may support operation on multiple cells or carriers, a feature which may be referred to as carrier aggregation (CA) or multi-carrier operation. A carrier may also be referred to as a component carrier (CC), a layer, a channel, and so on. The terms "carrier", "component carrier", "cell", and "channel" may be used interchangeably herein. The UE 115 may be configured with multiple downlink CCs and one or more uplink CCs for carrier aggregation. Carrier aggregation may be used with both FDD and TDD component carriers.
[0070] In some cases, the wireless communication system 100 may utilize enhanced component carriers (eCCs). An eCC may be characterized by one or more features including: wider bandwidth, shorter symbol duration, shorter TTIs, and modified control channel configuration. In some cases, an eCC may be associated with a carrier aggregation configuration or a dual connectivity configuration (e.g., when multiple serving cells have a suboptimal or non-ideal backhaul link). An eCC may also be configured for use in unlicensed spectrum or shared spectrum (where more than one operator is allowed to use the spectrum). An eCC characterized by wide bandwidth may include one or more segments that may be utilized by UEs 115 that are not capable of monitoring the whole bandwidth or prefer to use a limited bandwidth (e.g., to conserve power).
[0071] In some cases, eCC may utilize a different symbol duration from other CCs, which may include using a reduced symbol duration compared to the symbol duration of other CCs. A shorter symbol duration is associated with an increased subcarrier spacing. A device using eCC (for example, UE 115 or base station 105) can transmit a wideband signal (for example, 20, 40, 60, 80 MHz, etc.) in a reduced symbol duration (for example, 16.67 microseconds). The TTI in eCC can be composed of one or more symbols. In some cases, the TTI duration (that is, the number of symbols in the TTI) may be variable.
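The inverse relation between symbol duration and subcarrier spacing noted in [0071] can be checked directly, since the subcarrier spacing of an OFDM system is approximately the reciprocal of the useful symbol duration (cyclic prefix ignored):

    for symbol_us in (66.67, 33.33, 16.67):
        spacing_khz = 1 / (symbol_us * 1e-6) / 1e3
        print(f"{symbol_us} us symbol -> ~{spacing_khz:.0f} kHz subcarrier spacing")
    # 66.67 us corresponds to ~15 kHz and 16.67 us to ~60 kHz, i.e. a shorter
    # symbol duration goes with a wider subcarrier spacing.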
[0072] The shared radio spectrum can be used in the NR shared spectrum system. For example, NR shared spectrum can utilize any combination of licensed, shared, and unlicensed spectrum, and so on. The flexibility of eCC symbol duration and subcarrier spacing may allow the use of eCC across multiple spectrums. In some examples, NR sharing of spectrum can increase frequency utilization and spectral efficiency, especially through vertical (e.g., cross-frequency) and horizontal (e.g., cross-time) sharing of resources.
[0073] In some cases, the wireless system 100 may utilize both licensed and unlicensed radio frequency bands. For example, the wireless system 100 may employ LTE License Assisted Access (LTE-LAA) or LTE Unlicensed (LTE-U) radio access technology, or NR technology, in an unlicensed frequency band such as the 5 GHz Industrial, Scientific, and Medical (ISM) band. When operating in an unlicensed radio frequency band, wireless devices such as the base station 105 and the UE 115 may employ a listen-before-talk (LBT) procedure to ensure that the channel is clear before transmitting data. In some cases, operation in an unlicensed band may be based on a CA configuration in conjunction with CCs operating in a licensed band. Operations in the unlicensed spectrum may include downlink transmissions, uplink transmissions, or both. Duplexing in the unlicensed spectrum may be based on frequency division duplexing (FDD), time division duplexing (TDD), or a combination of the two.
[0074] FIG. 2 illustrates an example of a wireless communication system 200 that supports voice over Internet protocol power saving techniques for wireless systems, in accordance with various aspects of the present disclosure. In some examples, the wireless communication system 200 may implement aspects of the wireless communication system 100.
[0075] The wireless communication system 200 may include a base station 105-a, which may be an example of the base station 105 as described herein. The wireless communication system 200 may include a UE 115-a and a UE 115-b, and the UE 115-a and the UE 115-b may be examples of the UE 115 as described herein. UE 115-a and UE 115-b can participate in VoLTE communication. In some examples, UE 115-a may send uplink information 205 to base station 105-a, and base station 105-a may send downlink information 210 to UE 115-b. The uplink information 205 and the downlink information 210 may include voice packets for VoLTE communication. In some examples, the UE 115-a and the UE 115-b may communicate directly using the communication link 215 (eg, based on D2D communication).
[0076] The UEs 115 may each include an audio layer and a packet layer. The audio layer can encode and decode voice information, and the packet layer can send the encoded voice information as packets on LTE. The packet layer and the audio layer can synchronize the audio layer encoding with the wake-up period of the DRX cycle. The packet layer can store parameters related to the DRX cycle in the shared memory. The audio layer can retrieve the DRX cycle parameters and synchronize the audio layer compression/decompression period with the DRX cycle. That is, the UE 115 can establish an encoding timeline for the audio layer based on the DRX cycle parameters. In some examples, the audio layer may store voice information or audio layer information in the shared memory for retrieval and use by the packet layer.
[0077] For example, the audio layer of the UE 115-a may perform voice encoding at the beginning of the DRX cycle based on the DRX cycle parameters. The voice encoding may occur while the packet layer is awake, so that the packet layer does not have to wake up during the low power period of the DRX cycle to transmit or receive voice packets. During the awake period, the packet layer of UE 115-a may send the voice packets (e.g., via uplink information 205 or communication link 215). The packet layer of the UE 115-a can then enter a low power mode or sleep period without being awakened later in the DRX cycle by additional encoded voice information.
[0078] The UE 115-b may also establish an encoding timeline based on the DRX parameters. For example, UE 115-b may receive voice packets in the downlink information 210 or over the communication link 215. The audio layer of the UE 115-b can then decode the voice packets during the awake period of the DRX cycle. After the voice packets are decoded, the packet layer of the UE 115-b can enter the sleep period of the DRX cycle without being awakened to send or receive another voice packet.
[0079] In some examples, the audio layer may perform multiple speech encodings back to back at the beginning of the DRX cycle. The audio layer can encode the set of voice samples at the beginning of the DRX cycle. The audio layer may encode a first subset of the voice information, and then encode a second subset of the voice information directly after encoding the first subset. By performing the encodings back to back, the audio layer can encode the same amount of voice information in less time than two audio layer compression/decompression periods would otherwise take. The audio layer can encode the sets of voice information while the packet layer is awake, and the packet layer can send the voice packets and enter the low power mode or sleep period for the remaining time of the DRX cycle.
[0080] In some examples, the encoding and decoding of voice information may last longer than the normal wake-up period of the DRX cycle. The packet layer can extend the wake-up period to send voice packets. After an extended wake-up period, the packet layer can enter a low power mode. In some examples, even with an extended wake-up period, the UE 115 may use less power by entering the low power mode for the remainder of the DRX cycle without being awakened for sending additional voice packets.
[0081] FIG. 3 illustrates an example of timeline synchronization 300 that supports voice over Internet protocol power saving techniques for wireless systems, in accordance with various aspects of the present disclosure. In some examples, the timeline synchronization 300 may implement aspects of the wireless communication system 100. The timeline synchronization 300 may show examples of a packet layer timeline 305, an asynchronous audio layer timeline 310, and a synchronized audio layer timeline 315.
[0082] The packet layer timeline 305 may include a DRX cycle 320. The DRX cycle 320 may include a wake period 325 and a sleep period 330. At the beginning of each DRX cycle 320, the packet layer can wake up for the wake period 325. During the wake period 325, the packet layer can determine whether there are any data packets to be processed and perform any data packet exchanges. After the wake period 325, the packet layer may enter the sleep period 330. The packet layer may enter a low power mode during the sleep period 330. In some configurations, such as for the asynchronous audio layer timeline 310, the packet layer may wake up during the sleep period 330 to send voice packets.
[0083] In the asynchronous audio layer timeline 310, the audio layer can encode and decode voice information during the audio layer compression/decompression period 335. In some examples, the audio layer compression/decompression period 335 may have a period of 20 ms. The audio layer compression/decompression period 335 may include speech encoding 340 and speech decoding 345. The audio layer may continuously receive voice information from the audio front end (AFE) 350. The audio layer may perform voice encoding 340 and send the encoded voice information to the packet layer for transmission by the packet layer. In some examples, the audio layer may send the encoded voice information to the packet layer during the sleep period 330. The packet layer can wake up from the sleep period 330 to send encoded voice information.
[0084] The synchronized audio layer timeline 315 may be established based on the DRX parameters. For example, the packet layer may store parameters related to the DRX cycle 320 in the shared memory, and the audio layer may obtain these parameters. The audio layer can then establish an audio layer timeline based on the DRX parameters.
[0085] In the synchronized audio layer timeline 315, the speech encoding 340 may start at the beginning of the DRX cycle 320, especially during the wake period 325. The audio layer may send the encoded voice information to the packet layer, and the packet layer may send the encoded voice information as a voice packet during the wake period 325. The audio layer can encode voice information at the beginning of the next DRX cycle 320. The audio layer may not wake up the packet layer after the wake period 325 (e.g., during the sleep period 330). Therefore, synchronizing the audio layer with the packet layer can provide a low power mode gap duration 355 during which the packet layer can enter the sleep period 330 without being awakened by the synchronized audio layer to send additional voice packets.
[0086] In some examples, the audio layer in the synchronized audio layer timeline 315 may continuously perform speech encoding 340. For example, the audio layer may continuously encode two sets of voice information, and send the two sets of encoded voice information to the packet layer during the wake period 325. Then, the packet layer can prepare the encoded voice information for transmission and send the voice packets.
[0087] In some examples, performing the speech encodings 340 back to back can increase the amount of time during the sleep period 330 in which the packet layer is not disturbed. However, in some cases, the amount of voice information transmitted may remain the same. For example, the audio layer compression/decompression period 335 may have a duration of 20 ms, and the DRX cycle 320 may have a duration of 40 ms. Although the asynchronous audio layer timeline 310 also performs two speech encodings 340 during one DRX cycle 320, those encodings are spread across the entire DRX cycle 320. As a result, the audio layer may send voice packets to the packet layer while the packet layer is asleep, which causes the packet layer to wake up and send the packets. In contrast, the synchronized audio layer timeline 315 shows the audio layer performing the same amount of speech encoding 340 while the packet layer is awake. The synchronized audio layer can therefore still perform two speech encodings 340, but the packet layer does not have to wake up during the sleep period 330.
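One rough way to see the difference between the asynchronous timeline 310 and the synchronized timeline 315 is to count how often an encoding finishes while the packet layer is asleep. The simulation below assumes a 40 ms DRX cycle, a 10 ms wake period, and a 2 ms encoding time per frame; the specific completion times are illustrative, not taken from the figures.

    DRX_MS, WAKE_MS = 40, 10   # assumed DRX cycle and wake period
    ENCODE_MS = 2              # assumed time to encode one 20 ms frame

    def extra_wakeups_per_cycle(completion_times_ms):
        """Count encodings that complete while the packet layer is asleep."""
        return sum(1 for t in completion_times_ms if (t % DRX_MS) >= WAKE_MS)

    # Asynchronous timeline 310: the 20 ms codec period is not aligned with
    # the DRX cycle, so encodings may finish at, e.g., 15 ms and 35 ms.
    print("asynchronous:", extra_wakeups_per_cycle([15, 35]), "extra wake-ups per cycle")

    # Synchronized timeline 315: two back-to-back encodings at the start of
    # the cycle finish inside the 10 ms wake period.
    print("synchronized:", extra_wakeups_per_cycle([ENCODE_MS, 2 * ENCODE_MS]),
          "extra wake-ups per cycle")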
[0088] In some examples, speech encoding 340 and speech decoding 345 may have a combined duration that is greater than the awake period 325. The packet layer can extend the wake-up period 325 until the audio layer has completed speech encoding 340 and speech decoding 345. Then, the packet layer may enter the sleep period 330.
[0089] FIG. 4 illustrates an example of a voice packetization process 400 that supports voice over Internet protocol power saving techniques for wireless systems, in accordance with various aspects of the present disclosure. In some examples, the voice packetization process 400 may implement aspects of the wireless communication system 100. The voice packetization process may be performed by UE 115-c, which may be an example of a UE 115 as described herein. The UE 115-c may include an audio layer 405, a packet layer 410, and a shared memory 415.
[0090] The packet layer 410 may use the link 420-a to store information or parameters related to the DRX cycle in the shared memory 415. The audio layer 405 can retrieve the DRX parameters from the shared memory 415 using the link 420-b. The audio layer 405 may determine the timing of the DRX wake-up period based on the DRX parameters. The audio layer 405 may establish an encoding timeline for encoding voice transmission based on DRX parameters. In another implementation, the audio layer 405 may synchronize the audio timeline with the DRX cycle of the packet layer 410 based on DRX parameters.
[0091] The UE 115-c may participate in voice communication, and receive voice information 425 through the AFE, for example. During the awake period of the DRX cycle, the audio layer 405 may encode the incoming voice information into encoded voice information 430. In some examples, the audio layer 405 may continuously encode the voice information set during the awake period of the DRX cycle. The audio layer 405 may send the encoded voice information 430 to the packet layer 410 through the link 420-c.
[0092] The packet layer 410 may prepare the encoded voice information 430 for transmission as a voice packet 435. The packet layer 410 may receive the encoded voice information 430 during the wake-up period of the DRX cycle, and transmit voice packets during the same wake-up period. In some examples, the speech coding at the audio layer 405 may have a longer duration than the normal wake-up period of the DRX cycle. The packet layer 410 may extend the awake period to receive the encoded voice information 430 and transmit the voice packet 435.
[0093] FIG. 5 illustrates an example of a voice encoding process 500 that supports voice over Internet protocol power saving techniques for wireless systems, in accordance with various aspects of the present disclosure. In some examples, the voice encoding process 500 may implement aspects of the wireless communication system 100. A UE 115 as described herein and configured for VoLTE communication may include an audio layer and a packet layer. The audio layer can encode voice information and send the encoded voice information to the packet layer. The packet layer can send the encoded voice information as voice packets according to the VoLTE configuration.
[0094] At 505, the packet layer may store the DRX parameter set related to the packet layer in the memory. In some examples, the memory may include a storage device shared between the audio layer and the packet layer.
[0095] At 510, the audio layer may obtain the DRX parameter set from a memory accessible by the audio layer and the packet layer. The audio layer can determine the DRX wake-up period of the packet layer based on the DRX parameter set. In some examples, an encoding timeline for encoding voice transmission can be established based on the DRX parameter set. For example, the encoding operation of the encoding timeline may be aligned with the start of the DRX wake-up period. In another embodiment, the audio timeline of the audio layer can be synchronized with the DRX cycle of the packet layer based on the DRX parameters.
[0096] At 515, the packet layer can start the DRX cycle and enter the wake period. At 520 (e.g., approximately the same time as 515), the audio layer can begin the audio encoding process. For example, the audio layer can encode the sample set of voice transmission based on DRX parameters. If there is an established encoding timeline, the audio layer can encode the sample set according to the established encoding timeline. Similarly, if there is a synchronized audio timeline, the audio layer can encode the sample set according to the synchronized audio timeline. At 525, the audio layer may send the encoded voice information to the packet layer.
[0097] At 530, the packet layer may prepare the encoded voice information for transmission. For example, the packet layer may send a packet corresponding to at least a part of the encoded sample set to the second wireless device.
[0098] In some examples, the audio layer can perform continuous audio coding. At 535, the audio layer may perform a second speech encoding. The audio layer may encode the first subset of samples, and encode the second subset of samples after encoding the first subset of samples. Then, at 545, the audio layer may send the encoded second subset of voice information to the packet layer, and the packet layer may send a packet corresponding to at least a portion of the encoded sample set to the second wireless device.
[0099] At 550, the packet layer may exit the awake period and enter the sleep period of the DRX cycle. The packet layer can enter a low power mode during the sleep period. In some examples, the packet layer may extend the DRX wake-up period based on the time interval used to encode the first subset of samples and the second subset of samples.
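Tying the numbered steps of FIG. 5 together, the sketch below runs one DRX cycle on the sending side. The layer interfaces, the dictionary used as shared memory, the timing values, and the byte-slicing "encoder" are assumptions made only to show the ordering of steps 505 through 550.

    def one_drx_cycle(shared_memory, mic_samples_40ms):
        """One DRX cycle on the sending side, loosely following steps 505-550."""
        # 505: the packet layer stores its DRX parameters.
        shared_memory["drx"] = {"cycle_ms": 40, "wake_ms": 10}

        # 510: the audio layer obtains them and derives the wake period.
        drx = shared_memory["drx"]

        # 515/520: the wake period begins; the audio layer encodes back to back.
        first_packet = bytes(mic_samples_40ms[:20])     # 525: first encoded subset
        second_packet = bytes(mic_samples_40ms[20:40])  # 535/545: second subset

        # 530/545: the packet layer transmits both packets while awake.
        transmitted = [first_packet, second_packet]

        # 550: the packet layer sleeps for the remainder of the cycle.
        sleep_ms = drx["cycle_ms"] - drx["wake_ms"]
        return transmitted, sleep_ms

    packets, sleep_ms = one_drx_cycle({}, list(range(40)))
    print(f"sent {len(packets)} packets, then sleeping for {sleep_ms} ms")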
[0100] FIG. 6 illustrates an example of a process flow 600 that supports voice over Internet protocol power saving techniques for wireless systems, in accordance with aspects of the present disclosure. In some aspects, the process flow 600 may implement aspects of the wireless communication system 100 or 200. The process flow 600 shows aspects of the techniques performed by a sending device 605 and a receiving device 610.
[0101] The sending device 605 may be an example of a UE 115 or a base station 105 as described herein. The receiving device 610 may be an example of a UE 115 or a base station 105 as described herein. The sending device 605 and the receiving device 610 may communicate directly or through another device such as a base station 105. At 615, the sending device 605 may identify, via the audio layer, a set of samples of a voice transmission. At 620, the sending device 605 may obtain the DRX parameters from a memory accessible by the audio layer and the packet layer of the sending device 605. The sending device 605 may determine, via the audio layer, the DRX wake-up period of the packet layer based on the DRX parameter set.
[0102] At 625, the sending device 605 may establish an encoding timeline for encoding the voice transmission based on the DRX parameters. The sending device 605 may synchronize the audio timeline of the audio layer with the DRX cycle of the packet layer based on the DRX parameter set.
[0103] At 630, the sending device 605 may encode the sample set of voice transmission through the audio layer based on the synchronized audio timeline or the DRX parameter set. In some examples, the encoding may be performed during at least a portion of the DRX wake-up period. In some examples, the audio layer may encode the first subset of samples, and encode the second subset of samples after encoding the first subset of samples. In some examples, the sending device 605 may encode the sample set according to the established encoding timeline. The encoding operation of the encoding timeline may be aligned with the beginning of the DRX wake-up period of the DRX parameter set. In another example, the sending device 605 may encode the sample set according to the synchronized audio timeline.
[0104] At 635, the sending device 605 may send a packet corresponding to at least a part of the encoded sample set to the receiving device 610. The receiving device 610 may receive the packet, which may correspond to a voice over Internet protocol transmission.
[0105] At 640, the receiving device 610 may obtain the DRX parameters from the memory accessible by the audio layer and the packet layer of the receiving device 610. The receiving device 610 may determine the DRX wake-up period of the packet layer based on the DRX parameter set through the audio layer.
[0106] At 645, the receiving device 610 may synchronize the audio timeline of the audio layer with the DRX cycle of the packet layer based on the DRX parameter set. In another embodiment, the receiving device 610 may establish an encoding timeline for decoding the packet based on the DRX parameter set.
[0107] At 650, the receiving device 610 may decode the voice packet. In some examples, the decoding of at least a part of the packet may be performed during at least a part of the DRX wake-up period. In some examples, the audio layer may decode a first part of the packet and decode a second part of the packet after the first part. In some examples, the receiving device 610 may decode the part of the packet according to the established encoding timeline. The decoding operation of the encoding timeline may be aligned with the beginning of the DRX wake-up period of the DRX parameter set. In another example, the receiving device 610 may decode the part of the packet according to the synchronized audio timeline.
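On the receiving side (steps 640 through 650), a similar alignment applies to decoding. The sketch below decodes two parts of a received packet consecutively during the wake period and extends the wake period only if the decodes together run past it; the placeholder decoder, the even packet split, and the timing values are assumptions for illustration.

    def decode_part(part):
        """Placeholder for the audio-layer speech decoder."""
        return list(part)

    def receive_during_wake(packet, nominal_wake_ms=10, decode_ms_per_part=3):
        """Decode both parts of a packet back to back during the DRX wake period.

        Returns (decoded_samples, awake_ms), extending the wake period when
        the two decodes together exceed its nominal length.
        """
        first, second = packet[: len(packet) // 2], packet[len(packet) // 2 :]
        decoded = decode_part(first) + decode_part(second)  # consecutive decodes
        awake_ms = max(nominal_wake_ms, 2 * decode_ms_per_part)
        return decoded, awake_ms

    samples, awake_ms = receive_during_wake(bytes(range(8)))
    print(f"decoded {len(samples)} samples; packet layer awake for {awake_ms} ms")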
[0108] FIG. 7 shows a diagram 700 of a wireless device 705 that supports voice over Internet protocol power saving techniques for wireless systems, in accordance with various aspects of the present disclosure. The wireless device 705 may be an example of aspects of a base station 105 or a UE 115 as described herein. The wireless device 705 may include a receiver 710, a communication manager 715, and a transmitter 720. The wireless device 705 may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses).
[0109] The receiver 710 may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to voice over Internet protocol power saving techniques for wireless systems). Information may be passed on to other components of the device. The receiver 710 may be an example of aspects of the transceiver 1035 described with reference to FIG. 10. The receiver 710 may utilize a single antenna or a set of antennas.
[0110] The communication manager 715 may be an example of aspects of the communication manager 1015 described with reference to FIG. 10. The communication manager 715 and/or at least some of its various subcomponents may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. When implemented in software executed by a processor, the functions of the communication manager 715 and/or at least some of its various subcomponents may be performed by a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described in the present disclosure.
[0111] The communication manager 715 and/or at least some of its various subcomponents may be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations by one or more physical devices. In some examples, the communication manager 715 and/or at least some of its various subcomponents may be separate and distinct components, in accordance with various aspects of the present disclosure. In other examples, the communication manager 715 and/or at least some of its various subcomponents may be combined with one or more other hardware components, including but not limited to an I/O component, a transceiver, a network server, another computing device, one or more other components described in the present disclosure, or a combination thereof, in accordance with various aspects of the present disclosure.
[0112] The communication manager 715 may identify, via the audio layer of the wireless device, a set of samples of a voice transmission; obtain, from a memory accessible by the audio layer and the packet layer of the wireless device, a set of DRX parameters corresponding to the packet layer; synchronize, based at least in part on the set of DRX parameters, an audio timeline of the audio layer with a DRX cycle of the packet layer; and encode, via the audio layer, the set of samples of the voice transmission based on the synchronized audio timeline, the set of DRX parameters, or both. The communication manager 715 may also receive, at the wireless device, a packet corresponding to a voice over Internet protocol transmission; obtain, from a memory accessible by the audio layer and the packet layer of the wireless device, a set of DRX parameters corresponding to the packet layer; synchronize, based at least in part on the set of DRX parameters, the audio timeline of the audio layer with the DRX cycle of the packet layer; and decode, via the audio layer, at least a portion of the packet based on the synchronized audio timeline, the set of DRX parameters, or both.
[0113] The transmitter 720 may transmit signals generated by other components of the device. In some examples, the transmitter 720 may be collocated with the receiver 710 in a transceiver module. For example, the transmitter 720 may be an example of aspects of the transceiver 1035 described with reference to FIG. 10. The transmitter 720 may utilize a single antenna or a set of antennas.
[0114] FIG. 8 shows a diagram 800 of a wireless device 805 that supports voice over Internet protocol power saving techniques for wireless systems, in accordance with aspects of the present disclosure. The wireless device 805 may be an example of aspects of the wireless device 705, a base station 105, or a UE 115 as described with reference to FIG. 7. The wireless device 805 may include a receiver 810, a communication manager 815, and a transmitter 820. The wireless device 805 may also include a processor. Each of these components may be in communication with one another (e.g., via one or more buses).
[0115] The receiver 810 may receive information such as packets, user data, or control information associated with various information channels (e.g., control channels, data channels, and information related to voice over Internet protocol power saving techniques for wireless systems). Information may be passed on to other components of the device. The receiver 810 may be an example of aspects of the transceiver 1035 described with reference to FIG. 10. The receiver 810 may utilize a single antenna or a set of antennas.
[0116] The communication manager 815 may be an example of aspects of the communication manager 1015 described with reference to FIG. 10. The communication manager 815 may also include a voice recognizer 825, a DRX component 830, an encoder 835, a packet receiver 840, an access component 845, and a decoder 850.
[0117] The voice recognizer 825 may identify, via the audio layer of the wireless device, a set of samples of a voice transmission.
[0118] The DRX component 830 may obtain, from a memory accessible by the audio layer and the packet layer of the wireless device, the set of DRX parameters corresponding to the packet layer, and may determine, via the audio layer, a DRX wake-up period of the packet layer based on the set of DRX parameters, where the encoding of the set of samples is performed during at least a portion of the DRX wake-up period. In some examples, the DRX component 830 may synchronize the audio timeline of the audio layer with the DRX cycle of the packet layer based at least in part on the set of DRX parameters.
[0119] The encoder 835 may encode the sample set of voice transmission based on the DRX parameter set or the synchronized audio timeline through the audio layer. The encoder 835 may send the encoded sample set to the packet layer of the wireless device. The encoder 835 may encode the sample set according to the established encoding timeline, and encode the sample set according to the synchronized audio timeline. The encoder 835 may extend the DRX wake-up period of the packet layer based on the time interval for encoding the first subset of samples and the second subset of samples. In some cases, encoding a set of samples includes encoding a first subset of samples, and after encoding the first subset of samples, encoding a second subset of samples. In some cases, during the DRX wake-up period of the packet layer, the first subset of samples and the second subset of samples are encoded. In some cases, the first subset of samples and the second subset of samples are encoded consecutively.
[0120] The packet receiver 840 may receive, at the wireless device, a packet corresponding to a voice over Internet protocol transmission.
[0121] The access component 845 may obtain, from a memory accessible by the audio layer and the packet layer of the wireless device, the set of DRX parameters corresponding to the packet layer, and may determine, via the audio layer, a DRX wake-up period of the packet layer based on the set of DRX parameters, where the decoding of at least the portion of the packet is performed during at least a portion of the DRX wake-up period.
[0122] The decoder 850 may decode at least a part of the packet based on the DRX parameter set through the audio layer. The decoder 850 may decode at least the part of the packet according to the established encoding timeline, and decode at least the part of the packet according to the synchronized audio timeline. The decoder 850 may extend the DRX wake-up period of the packet layer based on the time interval for decoding the first part and the second part of the packet. In some cases, the decoding operation of the encoding timeline is aligned with the beginning of the DRX wake-up period of the DRX parameter set. In some cases, decoding at least the part of the packet includes decoding the first part of the packet, and decoding the second part of the packet after decoding the first part of the packet. In some cases, during the DRX wake-up period of the packet layer, the first part and the second part of the packet are decoded. In some cases, the first part and the second part of the packet are decoded consecutively.
[0123] The transmitter 820 may transmit signals generated by other components of the device. In some examples, the transmitter 820 may be collocated with the receiver 810 in a transceiver module. For example, the transmitter 820 may be an example of aspects of the transceiver 1035 described with reference to FIG. 10. The transmitter 820 may utilize a single antenna or a set of antennas.
[0124] FIG. 9 shows a diagram 900 of a communication manager 915 that supports voice over Internet protocol power saving techniques for wireless systems, in accordance with aspects of the present disclosure. The communication manager 915 may be an example of aspects of the communication manager 715, the communication manager 815, or the communication manager 1015 described with reference to FIGs. 7, 8, and 10. The communication manager 915 may include a voice recognizer 920, a DRX component 925, an encoder 930, a packet receiver 935, an access component 940, a decoder 945, a transmission component 950, a timeline component 955, a synchronization component 960, a storage component 965, a decoding timeline component 970, and an audio timeline component 975. Each of these modules may communicate with one another directly or indirectly (for example, via one or more buses).
[0125] The voice recognizer 920 may identify, via the audio layer of the wireless device, a set of samples of a voice transmission.
[0126] The DRX component 925 may obtain, from a memory accessible by the audio layer and the packet layer of the wireless device, the set of DRX parameters corresponding to the packet layer, and may determine, via the audio layer, a DRX wake-up period of the packet layer based on the set of DRX parameters, where the encoding of the set of samples is performed during at least a portion of the DRX wake-up period.
[0127] The encoder 930 may encode the sample set of voice transmission based on the DRX parameter set or the synchronized audio timeline through the audio layer. The encoder 930 may send the encoded sample set to the packet layer of the wireless device. The encoder 930 may encode the sample set according to the established encoding timeline, and encode the sample set according to the synchronized audio timeline. The encoder 930 may extend the DRX wakeup period of the packet layer based on the time interval for encoding the first subset of samples and the second subset of samples. In some cases, encoding a set of samples includes encoding a first subset of samples, and after encoding the first subset of samples, encoding a second subset of samples. In some cases, during the DRX wake-up period of the packet layer, the first subset of samples and the second subset of samples are encoded. In some cases, the first subset of samples and the second subset of samples are encoded consecutively.
[0128] The packet receiver 935 may receive, at the wireless device, a packet corresponding to a voice over Internet protocol transmission.
[0129] The access component 940 may obtain, from a memory accessible by the audio layer and the packet layer of the wireless device, the set of DRX parameters corresponding to the packet layer, and may determine, via the audio layer, a DRX wake-up period of the packet layer based on the set of DRX parameters, where the decoding of at least the portion of the packet is performed during at least a portion of the DRX wake-up period.
[0130] The decoder 945 may decode at least a part of the packet based on the DRX parameter set or the synchronized audio timeline through the audio layer. The decoder 945 may decode at least the part of the packet according to the established encoding timeline, and decode at least the part of the packet according to the synchronized audio timeline. The decoder 945 may extend the DRX wakeup period of the packet layer based on the time interval for decoding the first part and the second part of the packet. In some cases, the decoding operation of the encoding timeline is aligned with the beginning of the DRX wake-up period of the DRX parameter set. In some cases, decoding at least the part of the packet includes decoding the first part of the packet, and decoding the second part of the packet after decoding the first part of the packet. In some cases, during the DRX wake-up period of the packet layer, the first part and the second part of the packet are decoded. In some cases, the first part and the second part of the packet are decoded consecutively.
[0131] The transmission component 950 may send a packet corresponding to at least a part of the encoded sample set to the second wireless device.
[0132] The timeline component 955 may establish an encoding timeline for encoding voice transmission based on the DRX parameter set. In some cases, the encoding operation of the encoding timeline is aligned with the beginning of the DRX wake-up period of the DRX parameter set.
[0133] The synchronization component 960 may synchronize the audio timeline of the audio layer with the DRX cycle of the packet layer based on the DRX parameter set.
[0134] The storage component 965 may store the DRX parameter set in the memory via the packet layer. In some cases, the memory includes a storage device shared between the audio layer and the packet layer.
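The following sketch illustrates one possible shared-memory exchange, in which the packet layer packs the DRX parameters into a byte region that the audio layer later unpacks; the layout and field order are assumptions made for illustration.

```python
import struct

# Assumed layout of the shared region: three unsigned 32-bit fields,
# (cycle_ms, on_duration_ms, start_offset_ms), little-endian.
DRX_LAYOUT = "<III"
shared_region = bytearray(struct.calcsize(DRX_LAYOUT))

def packet_layer_store(region, cycle_ms, on_duration_ms, start_offset_ms):
    """Packet-layer side: write the DRX parameters into the shared region."""
    struct.pack_into(DRX_LAYOUT, region, 0, cycle_ms, on_duration_ms, start_offset_ms)

def audio_layer_load(region):
    """Audio-layer side: read the DRX parameters back out of the shared region."""
    return struct.unpack_from(DRX_LAYOUT, region, 0)

packet_layer_store(shared_region, 40, 10, 5)
print(audio_layer_load(shared_region))  # -> (40, 10, 5)
```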
[0135] The decoding timeline component 970 may establish a decoding timeline for decoding the packet based on the DRX parameter set.
[0136] The audio timeline component 975 may synchronize the audio timeline of the audio layer with the DRX cycle of the packet layer based on the DRX parameter set.
[0137] Figure 10 shows a diagram of a system 1000 including a device 1005 that supports voice over Internet Protocol power saving techniques for wireless systems, in accordance with aspects of the present disclosure. The device 1005 may be an example of, or may include the components of, the wireless device 705, the wireless device 805, or the base station 105 or UE 115 described above, for example with reference to Figures 7 and 8. The device 1005 may include components for bi-directional voice and data communications, including components for transmitting and receiving communications: a communication manager 1015, a processor 1020, a memory 1025, software 1030, a transceiver 1035, an antenna 1040, and an I/O controller 1045. These components may be in electronic communication via one or more buses (e.g., bus 1010).
[0138] The processor 1020 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a central processing unit (CPU), a microcontroller, an ASIC, an FPGA, a programmable logic device, a discrete gate or transistor logic component, a discrete hardware component, or any combination thereof). In some cases, the processor 1020 may be configured to operate a memory array using a memory controller. In other cases, a memory controller may be integrated into the processor 1020. The processor 1020 may be configured to execute computer-readable instructions stored in a memory to perform various functions (e.g., functions or tasks supporting voice over Internet Protocol power saving techniques for wireless systems).
[0139] The memory 1025 may include random access memory (RAM) and read-only memory (ROM). The memory 1025 may store computer-readable, computer-executable software 1030 including instructions that, when executed, cause the processor to perform the various functions described herein. In some cases, the memory 1025 may contain, among other things, a basic input/output system (BIOS), which may control basic hardware or software operation (e.g., interaction with peripheral components or devices).
[0140] The software 1030 may include code for implementing aspects of the present disclosure, including code for supporting voice over Internet Protocol power saving techniques for wireless systems. The software 1030 may be stored in a non-transitory computer-readable medium, such as system memory or other storage. In some cases, the software 1030 may not be directly executable by the processor but may cause a computer (e.g., when compiled and executed) to perform the functions described herein.
[0141] The transceiver 1035 may communicate bi-directionally via one or more antennas, wired links, or wireless links, as described above. For example, the transceiver 1035 may represent a wireless transceiver and may communicate bi-directionally with another wireless transceiver. The transceiver 1035 may also include a modem to modulate packets and provide the modulated packets to the antennas for transmission, and to demodulate packets received from the antennas.
[0142] In some cases, the wireless device may include a single antenna 1040. However, in some cases the device may have more than one antenna 1040, which may be capable of concurrently transmitting or receiving multiple wireless transmissions.
[0143] The I/O controller 1045 may manage input and output signals for the device 1005. The I/O controller 1045 may also manage peripherals not integrated into the device 1005. In some cases, the I/O controller 1045 may represent a physical connection or port to an external peripheral. In some cases, the I/O controller 1045 may utilize a known operating system. In other cases, the I/O controller 1045 may represent or interact with a modem, a keyboard, a mouse, a touchscreen, or a similar device. In some cases, the I/O controller 1045 may be implemented as part of a processor. In some cases, a user may interact with the device 1005 via the I/O controller 1045 or via hardware components controlled by the I/O controller 1045.
[0144] Figure 11 shows a flowchart illustrating a method 1100 for voice over Internet Protocol power saving techniques for wireless systems, in accordance with aspects of the present disclosure. The operations of method 1100 may be implemented by a base station 105 or a UE 115 or its components as described herein. For example, the operations of method 1100 may be performed by a communication manager as described with reference to Figures 7 through 10. In some examples, the base station 105 or UE 115 may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the base station 105 or UE 115 may perform aspects of the functions described below using special-purpose hardware.
[0145] At block 1105, the base station 105 or UE 115 may identify, via the audio layer of the wireless device, a set of samples for a voice transmission. The operations of block 1105 may be performed according to the methods described herein. In some examples, aspects of the operations of block 1105 may be performed by a voice recognizer as described with reference to Figures 7 through 10.
[0146] At block 1110, the base station 105 or UE 115 may obtain a DRX parameter set corresponding to the packet layer of the wireless device from a memory accessible to both the audio layer and the packet layer of the wireless device. The operations of block 1110 may be performed according to the methods described herein. In some examples, aspects of the operations of block 1110 may be performed by a DRX component as described with reference to Figures 7 through 10.
[0147] At block 1115, the base station 105 or UE 115 may synchronize an audio timeline of the audio layer with the DRX cycle of the packet layer based on the DRX parameter set. The operations of block 1115 may be performed according to the methods described herein. In some examples, aspects of the operations of block 1115 may be performed by an encoder as described with reference to Figures 7 through 10.
[0148] At block 1120, the base station 105 or UE 115 may encode, via the audio layer, the set of samples of the voice transmission based at least in part on the synchronized audio timeline. The operations of block 1120 may be performed according to the methods described herein. In some examples, aspects of the operations of block 1120 may be performed by an encoder as described with reference to Figures 7 through 10.
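Tying blocks 1105 through 1120 together, a compact sketch of the transmit path is given below; the phase-shift calculation, timing values, and names are illustrative assumptions rather than a definitive implementation of method 1100.

```python
def synchronize_audio_timeline(codec_tick_ms, drx_cycle_ms, drx_start_offset_ms):
    """Return the shift (in ms) to apply to the codec tick so that encoding instants
    coincide with the start of the packet layer's DRX wake-up periods."""
    return (drx_start_offset_ms - codec_tick_ms) % drx_cycle_ms

def method_1100(samples, codec_tick_ms, shared_params):
    """Sketch of blocks 1105-1120: identify samples, obtain DRX parameters,
    synchronize the audio timeline, and encode on the aligned instant."""
    cycle_ms, on_duration_ms, start_offset_ms = shared_params  # obtained from shared memory
    shift_ms = synchronize_audio_timeline(codec_tick_ms, cycle_ms, start_offset_ms)
    encode_at_ms = codec_tick_ms + shift_ms                    # aligned with a wake-up start
    packet = bytes(len(samples) % 256 for _ in range(32))      # stand-in for the codec output
    return encode_at_ms, packet

encode_at_ms, packet = method_1100([0] * 320, codec_tick_ms=17, shared_params=(40, 10, 5))
print(encode_at_ms)  # -> 45, the first wake-up boundary at or after the original 17 ms tick
```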
[0149] Figure 12 shows a flowchart illustrating a method 1200 for voice over Internet Protocol power saving techniques for wireless systems, in accordance with aspects of the present disclosure. The operations of method 1200 may be implemented by a base station 105 or a UE 115 or its components as described herein. For example, the operations of method 1200 may be performed by a communication manager as described with reference to Figures 7 through 10. In some examples, the base station 105 or UE 115 may execute a set of codes to control the functional elements of the device to perform the functions described below. Additionally or alternatively, the base station 105 or UE 115 may perform aspects of the functions described below using special-purpose hardware.
[0150] At block 1205, the base station 105 or UE 115 may receive, at the wireless device, a packet corresponding to a voice over Internet Protocol transmission. The operations of block 1205 may be performed according to the methods described herein. In some examples, aspects of the operations of block 1205 may be performed by a packet receiver as described with reference to Figures 7 through 10.
[0151] At block 1210, the base station 105 or UE 115 may obtain a DRX parameter set corresponding to the packet layer of the wireless device from a memory accessible to both the audio layer and the packet layer of the wireless device. The operations of block 1210 may be performed according to the methods described herein. In some examples, aspects of the operations of block 1210 may be performed by an access component as described with reference to Figures 7 through 10.
[0152] At block 1215, the base station 105 or UE 115 may synchronize an audio timeline of the audio layer with the DRX cycle of the packet layer based on the DRX parameter set. The operations of block 1215 may be performed according to the methods described herein. In some examples, aspects of the operations of block 1215 may be performed by a decoder as described with reference to Figures 7 through 10.
[0153] At block 1220, the base station 105 or UE 115 may decode, via the audio layer, at least a portion of the packet based at least in part on the synchronized audio timeline. The operations of block 1220 may be performed according to the methods described herein. In some examples, aspects of the operations of block 1220 may be performed by a decoder as described with reference to Figures 7 through 10.
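Similarly, a compact sketch of the receive path of method 1200 is given below; it decodes received packets during the current wake-up period and estimates the remaining low-power time until the next DRX cycle, with all names, costs, and timings assumed for illustration.

```python
def receive_and_decode(packets, wake_start_ms, cycle_ms, decode_cost_ms=1):
    """Sketch of blocks 1205-1220: decode received packets during the current
    wake-up period and report the remaining low-power time before the next cycle."""
    decoded = [[0] * 160 for _ in packets]     # stand-in for the codec output
    done_ms = wake_start_ms + decode_cost_ms * len(packets)
    next_wake_ms = wake_start_ms + cycle_ms
    sleep_ms = max(0, next_wake_ms - done_ms)  # time available for low-power mode
    return decoded, sleep_ms

decoded, sleep_ms = receive_and_decode([b"a", b"b"], wake_start_ms=125, cycle_ms=40)
print(len(decoded), sleep_ms)  # -> 2 38
```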
[0154] It should be noted that the methods described above describe possible implementations, that the operations and steps may be rearranged or otherwise modified, and that other implementations are possible. In addition, aspects from two or more of the methods may be combined.
[0155] The techniques described herein may be used for various wireless communications systems such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal frequency division multiple access (OFDMA), single-carrier frequency division multiple access (SC-FDMA), and other systems. The terms "system" and "network" are often used interchangeably. A CDMA system may implement a radio technology such as CDMA2000, Universal Terrestrial Radio Access (UTRA), and so on. CDMA2000 covers the IS-2000, IS-95, and IS-856 standards. IS-2000 releases are commonly referred to as CDMA2000 1X, 1X, etc. IS-856 (TIA-856) is commonly referred to as CDMA2000 1xEV-DO, High Rate Packet Data (HRPD), etc. UTRA includes Wideband CDMA (WCDMA) and other variants of CDMA. A TDMA system may implement a radio technology such as the Global System for Mobile Communications (GSM).
[0156] An OFDMA system may implement a radio technology such as Ultra Mobile Broadband (UMB), Evolved UTRA (E-UTRA), Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20, Flash-OFDM, and so on. UTRA and E-UTRA are part of the Universal Mobile Telecommunications System (UMTS). 3GPP LTE and LTE-A are releases of UMTS that use E-UTRA. UTRA, E-UTRA, UMTS, LTE, LTE-A, NR, and GSM are described in documents from the organization named "3rd Generation Partnership Project" (3GPP). CDMA2000 and UMB are described in documents from the organization named "3rd Generation Partnership Project 2" (3GPP2). The techniques described herein may be used for the systems and radio technologies mentioned above as well as for other systems and radio technologies. Although aspects of an LTE or NR system are described for purposes of example, and LTE or NR terminology is used in much of the description, the techniques described herein are also applicable beyond LTE or NR applications.
[0157] In LTE/LTE-A networks, including such networks described herein, the term eNB may be generally used to describe a base station. The wireless communications system or systems described herein may include a heterogeneous LTE/LTE-A or NR network in which different types of eNBs provide coverage for various geographic regions. For example, each eNB, next-generation NodeB (gNB), or base station may provide communication coverage for a macro cell, a small cell, or another type of cell. The term "cell" may be used to describe a base station, a carrier or component carrier associated with a base station, or a coverage area (e.g., sector, etc.) of a carrier or base station, depending on context.
[0158] A base station may include, or may be referred to by those of ordinary skill in the art as, a base transceiver station, a radio base station, an access point, a radio transceiver, a NodeB, an eNodeB (eNB), a gNB, a Home NodeB, a Home eNodeB, or some other suitable terminology. The geographic coverage area for a base station may be divided into sectors making up a portion of the coverage area. The wireless communications system or systems described herein may include base stations of different types (e.g., macro or small cell base stations). The UEs described herein may be able to communicate with various types of base stations and network equipment, including macro eNBs, small cell eNBs, gNBs, relay base stations, and the like. There may be overlapping geographic coverage areas for different technologies.
[0159] A macro cell generally covers a relatively large geographic area (e.g., several kilometers in radius) and may allow unrestricted access by UEs with service subscriptions with the network provider. A small cell is a lower-powered base station, as compared with a macro cell, that may operate in the same or different (e.g., licensed, unlicensed, etc.) frequency bands as macro cells. Small cells may include pico cells, femto cells, and micro cells according to various examples. A pico cell, for example, may cover a relatively small geographic area and may allow unrestricted access by UEs with service subscriptions with the network provider. A femto cell may also cover a relatively small geographic area (e.g., a home) and may provide restricted access by UEs having an association with the femto cell (e.g., UEs in a closed subscriber group (CSG), UEs for users in the home, and the like). An eNB for a macro cell may be referred to as a macro eNB. An eNB for a small cell may be referred to as a small cell eNB, a pico eNB, a femto eNB, or a home eNB. An eNB may support one or multiple (e.g., two, three, four, and the like) cells (e.g., component carriers).
[0160] The wireless communications system or systems described herein may support synchronous or asynchronous operation. For synchronous operation, the base stations may have similar frame timing, and transmissions from different base stations may be approximately aligned in time. For asynchronous operation, the base stations may have different frame timings, and transmissions from different base stations may not be aligned in time. The techniques described herein may be used for either synchronous or asynchronous operations.
[0161] The downlink transmissions described herein may also be called forward link transmissions, while the uplink transmissions may also be called reverse link transmissions. Each communication link described herein, including, for example, wireless communication system 100 and wireless communication system 200 of Figures 1 and 2, may include one or more carriers, where each carrier may be a signal made up of multiple sub-carriers (e.g., waveform signals of different frequencies).
[0162] The description set forth herein, in connection with the appended drawings, describes example configurations and does not represent all the examples that may be implemented or that are within the scope of the claims. The term "exemplary" used herein means "serving as an example, instance, or illustration," and not "preferred" or "advantageous over other examples." The detailed description includes specific details for the purpose of providing an understanding of the described techniques. These techniques, however, may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the concepts of the described examples.
[0163] In the appended figures, similar components or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label by a dash and a second label that distinguishes among the similar components. If just the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
[0164] The information and signals described herein can be represented using any of a variety of different technologies and methods. For example, the data, instructions, commands, information, signals, bits, symbols, and chips mentioned throughout the above description can be represented by voltage, current, electromagnetic waves, magnetic fields or particles, light fields or particles, or any combination thereof.
[0165] The various illustrative blocks and modules described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
[0166] The functions described herein may be implemented in hardware, software executed by a processor, firmware, or any combination thereof. If implemented in software executed by a processor, the functions may be stored on or transmitted over a computer-readable medium as one or more instructions or code. Other examples and implementations are within the scope and spirit of the disclosure and the appended claims. For example, due to the nature of software, the functions described above may be implemented using software executed by a processor, hardware, firmware, hardwiring, or combinations of any of these. Features implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations. As used herein, including in the claims, when the term "and/or" is used in a list of two or more items, it means that any one of the listed items may be employed by itself, or any combination of two or more of the listed items may be employed. For example, if a composition is described as containing components A, B, and/or C, the composition may contain A alone; B alone; C alone; A and B in combination; A and C in combination; B and C in combination; or A, B, and C in combination. Also, as used herein, including in the claims, "or" as used in a list of items (for example, a list of items prefaced by a phrase such as "at least one of" or "one or more of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C).
[0167] Computer-readable media includes both non-transitory computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, non-transitory computer-readable media may include RAM, ROM, electrically erasable programmable read-only memory (EEPROM), compact disk (CD) ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include CD, laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.
[0168] The description herein is provided to enable a person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.