Methods and systems for streaming media data over content delivery network

A streaming data technology, applied to transmission systems, selective content distribution, image communication, etc., which addresses the problems of media data interruption and quality deterioration

Active Publication Date: 2020-12-18
DOLBY INT AB


Abstract

The present document describes a method (900) for establishing control information for a control policy of a client (102) for streaming data (103) from at least one server (101, 701). The method (900) comprises performing (901) a message passing process between a server agent of the server (101, 701) and a client agent of the client (102), in order to iteratively establish the control information. Furthermore, the method (900) comprises generating (902) a convergence event for the message passing process to indicate that the control information has been established.

Application Domain

Transmission; Selective content distribution

Technology Topic

Streaming data; Server agent; +5 more



Example Embodiment

[0058] As outlined above, this document addresses the technical problem of increasing the QoE of streaming data within a content delivery network. In particular, this document is aimed at increasing the average QoE of multiple clients within a content delivery network. In this context, Figure 1 shows a block diagram of a content delivery network 100 comprising a server 101 and a plurality of clients 102. The server 101 is configured to send an individual data stream or individual streaming data 103 (e.g., an audio and/or video data stream) to each of the clients 102.
[0059] This document is aimed at providing an optimal trade-off between the allocation of the (network) resources of the CDN 100 (in particular, the resources of the server 101 and/or the resources of the transport network between the server 101 and the one or more clients 102) and the resulting quality of experience (QoE) of the users of the clients 102. In this context, feedback channels 111, 112 may be provided to facilitate an iterative metadata exchange between the server 101 and the clients 102, wherein the feedback channels 111, 112 are provided in parallel with the streaming channel 113 used for the multimedia streaming process. The metadata exchange via the feedback channels 111, 112 may be used to facilitate a global and/or overall optimization of the streaming process, and may be used, for example, to reduce or eliminate client contention for the limited (network) resources of the CDN 100. The metadata exchange process can generate dynamic side or control information for each client 102 that can be used to adjust the local policy controlling the streaming process at each client 102.
[0060] Each client 102 may update and/or generate its client metadata 122 (to be sent via the client feedback channel 112) based on the server metadata 121 received from the server 101 (via the server feedback channel 111). Additionally, the client metadata 122 may be updated and/or generated based on a local client utility or cost function of the client 102. After generating and/or updating the client metadata 122, the client 102 may transmit the (updated) client metadata 122, via the corresponding one or more client feedback channels 112, to the one or more servers 101 from which the client 102 streams data.
[0061] The server 101 may collect at least a subset of the client metadata 122 received from the clients 102 served by the server 101. Furthermore, the server 101 may generate and/or update server metadata 121 based on the received client metadata 122 and/or based on a local server utility or cost function. The generated and/or updated server metadata 121 may then be sent to at least a subset of the clients 102 via the one or more server feedback channels 111. This process of exchanging and updating the metadata 121, 122 may be repeated iteratively and/or periodically. Updates of the metadata 121, 122 may be transmitted synchronously or asynchronously. Each client 102 may be configured to update its requests for new content from the server 101 based on the received server metadata 121 and/or based on its local utility or cost function. In particular, the requests for new content may be performed depending on side or control information that has been established in the context of the iterative message passing process (subject to the occurrence of a convergence event of the iterative message passing process).
[0062] Exchanging the metadata 121, 122 via the feedback channels 111, 112 provides an algorithmic solution for a decentralized control scheme for streaming scenarios (e.g., sequential streaming). The schemes described herein assist the decision of a client 102 to request content at a particular rate and/or at a particular quality level.
[0063] In the case of limited resources of the CDN 100 (e.g., due to a transmission bottleneck), the clients 102 are forced to compete for the limited resources. Client contention can lead to unfair resource allocation or unstable QoE. These issues could be addressed by a centralized control scheme that collects requests from the clients 102, analyzes the operations of the clients 102, and then provides an optimal resource allocation to the clients 102. However, if content distribution occurs at large scale, a centralized solution is generally not feasible (in view of the computational complexity and/or in view of dynamic changes within the CDN 100 and/or within the different clients 102). The (continuous and/or repeated) exchange of metadata 121, 122 between the server 101 and the clients 102 provides a robust and efficient resource allocation scheme for a CDN 100 comprising a high number of servers 101 and/or clients 102.
[0064] The client utility functions that may be used by the clients 102 to generate the client metadata 122 may be individual and/or different for each client 102. The client utility function of a client 102 may be configured to capture and/or quantify the QoE of the client 102. The client utility functions of different clients 102 may be designed such that the QoE can be compared among the different clients 102. In particular, the client utility functions can use a common criterion to measure the QoE. An example of such a client utility function (in the context of audio streaming) is the MUSHRA (Multiple Stimuli with Hidden Reference and Anchor) score as a function of the bit rate (where the total bit rate is a finite resource). Figure 4a shows an example client cost function 400 for different clients 102. The cost function 400 indicates the MUSHRA loss as a function of the bit rate. Figure 4b illustrates a corresponding client utility function 410 indicating the utility 412 of the client 102 as a function of the bit rate 411. The client cost function 400 can be viewed as the complement of the corresponding client utility function 410.
[0065] The cost or utility function 400, 410 of a client 102 may depend on the type of content being streamed and/or rendered, the codec used, the playout or rendering scheme, the listening conditions, and/or the preferences of the client 102. A cost or utility function 400, 410 may be provided, for example, for video and/or audio content. As such, the cost or utility function 400, 410 of the client 102 may indicate how the QoE level varies with the bit rate 411 and/or with another resource. Differences in the values of the utility functions 410 of different clients 102 may indicate that a certain change of the bit rate 411 and/or of the resources may have different effects on the QoE levels of the different clients 102. This is illustrated by the points 402, 403 in Figure 4a. It can be seen that the increase in MUSHRA loss caused by a certain decrease of the bit rate 411 may be different for different cost functions 400 (i.e., for different clients 102). The scheme outlined in this document may be aimed at maximizing the average QoE level of the different clients 102 of the CDN 100 (given a finite amount of overall network resources).
[0066] Therefore, a decentralized control scheme for massive streaming scenarios is described, in order to improve the trade-off between the allocation of the limited resources of the CDN 100 among the clients 102 of the CDN 100 and the resulting QoE of these clients 102. This technical problem can be viewed as a distributed optimization problem. To facilitate the distributed optimization, a metadata exchange scheme may be used, in which the metadata exchange between one or more servers 101 and one or more clients 102 occurs in parallel with the actual streaming process towards the one or more clients 102. The metadata exchange scheme allows the execution of message passing as a component of a distributed optimization scheme. In addition to the messages of the distributed optimization scheme, the metadata 121, 122 exchanged between the one or more servers 101 and the one or more clients 102 may include additional parameters governing the convergence rate and/or the adaptability of the optimization algorithm.
[0067] Given the fact that the client utility function 410 of one or more clients 102 is typically a non-linear function, linear programming cannot be used to solve the resource allocation problem. On the other hand, distributed optimization schemes such as the alternating direction method of multipliers (ADMM) can be used to determine the (optimal) solution of the resource allocation problem based on message passing (i.e., based on the exchange of metadata 121, 122). It should be noted that the aspects outlined in this document can also be applied to cross-CDN optimization (e.g., in the case where network coding is used to implement simultaneous streaming of the data 103 of a client 102 from multiple servers 101).
[0068] The total resources (e.g., bit rate) available within the CDN 100 may be denoted r_total. The total resource has to be shared among N clients 102 (n = 1, ..., N). The amount of resources allocated to each client n can be expressed as r_n. Therefore, the overall optimization problem to be solved is
[0069]
    minimize_{r_1,...,r_N}  sum_{n=1}^{N} d_n(r_n)   subject to   sum_{n=1}^{N} r_n <= r_total
[0070] where d_n(r_n) is the client cost function 400 of client n, such as a rate-distortion function, e.g., d_n(r_n) = α_n exp(-β_n r_n), where α_n and β_n are positive parameters that may be different for each client 102.
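As a concrete illustration, the exponential cost model can be evaluated numerically. The following minimal sketch (the α_n and β_n values are hypothetical, not taken from the patent) shows how the same bit rate affects two clients differently:

```python
import math

def client_cost(r, alpha, beta):
    """Exponential rate-distortion cost d_n(r_n) = alpha_n * exp(-beta_n * r_n)."""
    return alpha * math.exp(-beta * r)

# Two hypothetical clients with the same alpha but different rate sensitivity.
cost_a = client_cost(256.0, alpha=100.0, beta=0.01)    # ~7.7  (MUSHRA-loss points)
cost_b = client_cost(256.0, alpha=100.0, beta=0.005)   # ~27.8
# At the same bit rate, client B suffers a much larger loss, so a decentralized
# allocation that knows both cost functions would assign client B a higher rate.
```
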
[0071] It can be shown that the above optimization problem can be reformulated as
[0072]
    minimize  sum_{n=1}^{N} d_n(r_n) + g(z)   subject to   r_n = z_n for n = 1, ..., N
[0073] where g() is the indicator function given by
[0074]
    g(z) = 0 if sum_{n=1}^{N} z_n <= r_total, and g(z) = +infinity otherwise
[0075] The minimization of the client cost functions d_n(r_n) can be divided into subproblems that can be solved independently by each client 102 (using the corresponding client cost function). At iteration k of the message passing process, client n 102 may receive a message from the server 101 with server metadata 121, wherein the server metadata 121 includes information about the amount of resources that has been allocated to client n. Specifically, the server metadata 121 may include the values z_n^k and u_n^k, which in combination indicate the allocated amount of the resource. The client 102 may then solve the following optimization problem in order to determine an updated request, i.e., an updated requested amount r_n^{k+1} of the resource
[0076]
    r_n^{k+1} = argmin_{r_n} ( d_n(r_n) + (ρ/2) (r_n - z_n^k + u_n^k)^2 )
[0077] where ρ is a tunable parameter. Accordingly, the client 102 may determine an updated resource request r_n^{k+1} that takes into account the client cost function 400 of the particular client 102 as well as the resource allocation provided by the server 101. The updated resource request r_n^{k+1} can then be provided to the server 101 within the client metadata 122.
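Since d_n(r_n) = α_n exp(-β_n r_n) is convex, the client-side subproblem has a unique minimizer that can be found with a simple one-dimensional search. The following is a minimal sketch (all function names and numeric values are illustrative assumptions, not from the patent):

```python
import math

def client_update(z, u, alpha, beta, rho, r_max=10_000.0, iters=200):
    """One client step: minimize d_n(r) + (rho/2)*(r - z + u)^2 over r in [0, r_max].
    With d_n(r) = alpha*exp(-beta*r) the objective is convex, so a ternary
    search over the interval is sufficient."""
    def objective(r):
        return alpha * math.exp(-beta * r) + 0.5 * rho * (r - z + u) ** 2

    lo, hi = 0.0, r_max
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if objective(m1) < objective(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Hypothetical numbers: allocation z = 300 kbit/s, dual term u = 0.
r_req = client_update(z=300.0, u=0.0, alpha=100.0, beta=0.01, rho=0.001)
# r_req is ~335: slightly above the allocation, because the marginal QoE gain
# still outweighs the quadratic penalty at r = 300.
```

A larger ρ pulls the request closer to the server's allocation; a smaller ρ lets the client's own cost function dominate.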
[0078] The server 101 receives the resource requests r_n^{k+1} from the N clients 102 (where the resource requests of the different clients 102 depend on the client cost functions 400 of the different clients 102). Based on this information, the server 101 can update the resource allocations for the different clients 102 according to
[0079]
    z^{k+1} = argmin_z ( g(z) + (ρ/2) sum_{n=1}^{N} (z_n - r_n^{k+1} - u_n^k)^2 )
[0080]
    u_n^{k+1} = u_n^k + r_n^{k+1} - z_n^{k+1}
[0081] The values z_n^{k+1} and u_n^{k+1} can then be sent to the client 102 as server metadata 121 (in order to indicate the updated allocated amount of the resource). This process may be repeated iteratively in order to provide an optimized resource allocation for the N clients 102, such that the overall cost sum_n d_n(r_n) of the plurality of clients 102 is minimized, i.e., such that the average QoE of the clients 102 is maximized.
[0082] It can be shown that the z-update of the optimization problem for the server 101 can be simplified to
[0083]
    z_n^{k+1} = v_n - max(0, v_avg - r_total/N)
[0084] where v_n = r_n^{k+1} + u_n^k and v_avg = (1/N) sum_{n=1}^{N} v_n. This further reduces the computational complexity of the scheme.
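The simplified server step amounts to projecting v = r^{k+1} + u^k onto the set {z : sum(z_n) <= r_total}, followed by the dual update. A minimal sketch under hypothetical request values (names and numbers are illustrative, not from the patent):

```python
def server_update(r_req, u, r_total):
    """Simplified server step: project v = r_req + u onto {z : sum(z) <= r_total},
    then update the scaled dual variables u."""
    n = len(r_req)
    v = [r + ui for r, ui in zip(r_req, u)]
    excess = max(0.0, (sum(v) - r_total) / n)  # amount removed from every client
    z = [vi - excess for vi in v]
    u_new = [ui + ri - zi for ui, ri, zi in zip(u, r_req, z)]
    return z, u_new

# Hypothetical requests from N = 3 clients, exceeding r_total = 900 kbit/s by 150.
z, u = server_update(r_req=[400.0, 350.0, 300.0], u=[0.0, 0.0, 0.0], r_total=900.0)
# The excess of 150 is split evenly: z = [350, 300, 250], and each u_n becomes 50.
```

Note that the update touches each client once, so the per-iteration cost at the server is linear in N.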
[0085] Figure 2a shows a flowchart of a method 200 performed by the client 102. The client 102 receives 201 server metadata 121 from the server 101. Optionally, the client 102 may update 202 its local client cost function 400. The client cost function 400 can be minimized 203 based on the received server metadata 121, and updated client metadata 122 can be generated 204. This can be implemented by determining the updated resource request r_n^{k+1} using the formula mentioned above. The updated resource request can then be transmitted 205 to the server 101. Subject to convergence of the message passing scheme for establishing the resource allocation, the content request for the streaming data may be updated 206 based on the received server metadata 121 (in particular, based on the allocated resources indicated within the server metadata 121). Furthermore, the streaming data 103 may be requested from the server 101 according to the allocated resources (step 207).
[0086] Accordingly, the client 102 generates new client metadata 122 to be sent to the server 101, the new client metadata 122 being based on the local client cost function 400 and on the server metadata 121 received from the server 101. Furthermore, the client 102 includes decision logic which is based on the received server metadata 121 and on the newly generated client metadata 122, and which is used to trigger the actual request for content from the server 101 at a particular bit rate (e.g., after observing the convergence of the message passing process).
[0087] After convergence of the message passing process, each client 102 will have side information r* (which is also referred to herein as control information). By observing its buffer status and by using the fair rate allocation r* (i.e., by using the established control information), each client 102 will generate a request for a specific amount of resources (in particular, a specific bit rate) based on a local client utility function 410 which describes the QoE (e.g., as a function of the rate), based on a function that facilitates smooth playout, or based on a combination of these functions. An example of such a strategy can be derived by using a system model with two queues: the first queue represents the playout process, and the second queue represents the deviation between the bit rate selected by the client 102 and the recommended rate r* provided by the server 101. The policy can be derived by minimizing the Lyapunov drift of the queues with a penalty term representing the QoE-dependent client utility function 410. A further instance of such a policy can be implemented by limiting the maximum amount of resources a client can request, such that r* is not exceeded for the client. In particular, this constraint can be combined with strategies targeting local optimization of the QoE.
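The last-mentioned instance of the policy, capping requests at r*, can be sketched in a few lines; the function name and the rates used are illustrative assumptions only:

```python
def capped_request(locally_optimal_rate, r_star):
    """Instance of the capped policy: optimize QoE locally, but never request
    more than the fair allocation r* established by the message passing."""
    return min(locally_optimal_rate, r_star)

# Hypothetical: local QoE optimization asks for 400 kbit/s, but r* = 300 kbit/s.
request = capped_request(400.0, 300.0)   # -> 300.0
```
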
[0088] Figure 2b illustrates a process 210 performed within the client 102 as part of the control policy for streaming the data 103. The client 102 receives an indication 212 of the fair rate allocation within the server metadata 121 (e.g., the converged values z_n^k and u_n^k). Additionally, the client 102 accesses a QoE function 211 (e.g., the client cost function 400 or the client utility function 410). Additionally, the client 102 may access the state of its buffer 213 used for buffering the streaming data 103. Based on this information 212, 211, 213, an updated resource request 214 can be generated and provided to the server 101 within the client metadata 122.
[0089] Figures 3a and 3b illustrate methods 310, 320 performed by the server 101. The method 310 aims at generating server metadata 121 for the different clients 102. The method 320 aims at providing the content (media) data 103 to the different clients 102. Client metadata 122 is received 311 from at least a subset of the clients 102. The client metadata 122 is accumulated 312. Furthermore, the server utility function is calculated 313, for example using the equations for z_n^{k+1} and u_n^{k+1} given above, and new server metadata 121 is generated 314. The updated server metadata 121 can then be sent to the clients 102. Regarding the streaming process, an updated content request (e.g., indicating an updated bit rate 411) can be received 321 from the client 102. The content data 103 can then be sent 325 to the client 102 based on the updated content request.
[0090] Thus, the server 101 collects all client metadata 122 (or a subset of the metadata) from the clients 102 connected to the server 101, and solves the optimization problem based on its own utility function (e.g., a function reflecting the maximum allowed bit rate r_total, or taking into account the estimated bottleneck capacity of the network towards the clients 102). The solution of the optimization problem is used to generate new server metadata 121 for transmission to the clients 102 (or at least to some of the clients 102). Preferably, the method 310 is designed in such a way that the server optimization problem is computationally light. In addition, the server 101 provides access to the data 103 requested by the clients 102 (the streaming process operates in pull mode). In some embodiments, these services may operate on different machines or at different physical locations.
[0091] Figure 5a shows a streaming process comprising a single server 101 providing streaming data 103 to a plurality of clients 102 via a streaming channel 113. Each client 102 operates in a unicast setting. Each client 102 is characterized by its own client utility function 410. An example of such a client utility function 410 in a streaming context is a concave curve fitted to the results of a MUSHRA test, which evaluates the subjective performance of an audio codec as a function of the operating bit rate 411 (as illustrated in Figure 4b). If the optimization is formulated as a minimization problem (as outlined above), the utility function 410 can be replaced by a corresponding cost function or rate-distortion function d_n(r_n) 400, which is preferably convex. The rate-distortion function or cost function 400 may be obtained by inverting the sign of the utility function 410 and by applying an offset to the utility function 410. Furthermore, the rate-distortion functions or cost functions 400 of different clients 102 may be weighted and/or offset in order to facilitate the comparison between the different rate-distortion functions or cost functions 400.
[0092] As outlined above and as shown in Figures 5b and 5c, server metadata 121 indicating the allocated resources z_n^k and u_n^k may be sent to the clients 102. In turn, the updated resource requests r_n^{k+1} may be sent from the clients 102 to the (single) server 101 as client metadata 122. By iteratively performing this exchange of metadata 121, 122, a converged resource allocation for the different clients 102 can be determined. It should be noted that each client 102 can be configured to evaluate a residual term when determining the updated resource request, in order to check whether the scheme has converged. Alternatively or additionally, convergence can be assessed by observing all or some of the metadata variables exchanged during the message passing (e.g., by comparing z_n^k and r_n^k). Alternatively or additionally, each client may assume convergence after a fixed number of iterations has been reached. Alternatively or additionally, the fact of convergence may be signaled by the server 101 during the process of exchanging the metadata 121, 122.
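A stopping rule of the kind described can, for example, combine a primal residual (disagreement between requests and allocations) with a dual residual (movement of the allocations between iterations). The function name and thresholds below are hypothetical:

```python
import math

def has_converged(r_req, z, z_prev, eps_pri=1.0, eps_dual=1.0):
    """Hypothetical stopping rule: the primal residual ||r - z|| measures the
    disagreement between requests and allocations; the dual residual
    ||z - z_prev|| measures how much the allocations still move per iteration."""
    pri = math.sqrt(sum((ri - zi) ** 2 for ri, zi in zip(r_req, z)))
    dual = math.sqrt(sum((zi - zp) ** 2 for zi, zp in zip(z, z_prev)))
    return pri < eps_pri and dual < eps_dual
```
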
[0093] Figure 6a shows the CDN 100 with a bottleneck 602, i.e., a limited throughput between two transport nodes 601. The finite throughput can be distributed among the different clients 102, i.e., r_total has a limited value. Alternatively or additionally, the number N of clients 102 that can be served by the server 101 can be adjusted, in particular reduced. As shown in Figure 6b, the limited throughput of the bottleneck 602 can also be caused by competing traffic between a source point 611 and a sink point 612.
[0094] Figure 7a shows a streaming scenario with multiple CDNs 100, 700. Each CDN 100 comprises at least one server 101, 701. One or more clients 102 may be part of multiple CDNs 100, 700 and may stream data 103 from multiple servers 101, 701 (e.g., as illustrated in Figure 7b). In this case, the client 102 may receive server metadata 121 from the plurality of servers 101, 701. Additionally, the client 102 may send client metadata 122 to the multiple servers 101, 701 (e.g., as illustrated in Figure 7c).
[0095] In the following, cooperative resource allocation for the multi-server streaming process shown in Figure 7a is described. Assume that a client 102 can stream data 103 from M different servers 101, where M > 1. r_n is an M-dimensional vector indicating the M allocated resources for the streaming data 103 provided by the M servers 101, respectively. 1 denotes an M-dimensional vector of all ones. The cost function or rate-distortion function 400 of the client 102 is then written as
[0096]
    d_n(r_n) = α_n exp(-β_n 1^T r_n)
[0097] The overall optimization problem for all N clients 102 and all M servers 101 can be written as (in a similar fashion as outlined above)
[0098]
    minimize  sum_{n=1}^{N} d_n(r_n) + sum_{m=1}^{M} g_m(z)   subject to   r_n = z_n for n = 1, ..., N
[0099] where [z_n]_m is the m-th entry (for the m-th server 101) of the M-dimensional resource allocation vector z_n of the n-th client 102. The indicator function g_m() of the m-th server 101 can be defined as
[0100]
    g_m(z) = 0 if sum_{n=1}^{N} [z_n]_m <= r_total^(m), and g_m(z) = +infinity otherwise
[0101] where r_total^(m) is the total bit rate available at the m-th server.
[0102] It can be shown that the optimization problem mentioned above can be solved using message passing between the clients 102 and the servers 101. Each client 102 may receive server metadata 121 from the servers m = 1, ..., M, comprising the allocated resources [z_n]_m and, in addition, the auxiliary variables [u_m]_n (as part of the resource allocation). The client 102 may receive this data from all servers 101, 701, and based on this data, the client 102 may generate updated resource requests for the M servers 101, 701.
[0103] In general, N×M resource requests can be generated when there are N clients 102 and M servers 101, 701. These can be collected in a matrix R of dimension N×M. The resource request from the n-th client 102 to the m-th server 101, 701 can be written as a scalar [R]_{n,m}. The complete set of resource requests from the n-th client 102 to all M servers 101, 701 can be written as an M-dimensional vector r_n = [R]_n. Similarly, the allocation of resources from the M servers 101, 701 to the N clients 102 can be written as an N×M matrix Z, where [Z]_{n,m} is the resource allocation from the m-th server 101, 701 to the n-th client 102, and where [Z]_m is an N-dimensional vector comprising the resource allocations from the m-th server 101, 701 to the N clients 102. u_m may be an N-dimensional vector of scalar auxiliary variables [u_m]_n. In addition, for the m-th server 101, 701, the resource constraint function G_m is defined as
[0104]
    G_m([Z]_m) = 0 if sum_{n=1}^{N} [Z]_{n,m} <= r_total^(m), and G_m([Z]_m) = +infinity otherwise
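Under this constraint, the z-update of each server again reduces to a projection, now applied column-wise to the allocation matrix Z. A hedged sketch (function name and values are illustrative):

```python
def project_server_column(v_col, r_total_m):
    """z-update of the m-th server: project its column of allocations onto
    {x : sum(x) <= r_total^(m)} by removing any excess evenly (sketch)."""
    n = len(v_col)
    excess = max(0.0, (sum(v_col) - r_total_m) / n)
    return [vi - excess for vi in v_col]

# Hypothetical column for one server with capacity 900 kbit/s:
col = project_server_column([400.0, 350.0, 300.0], r_total_m=900.0)
# -> [350.0, 300.0, 250.0]
```

Since each server only projects its own column, the M servers can run their updates independently of each other.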
[0105] Using the concepts mentioned above, the optimization problem to be solved by the n-th client 102 in the k-th iteration can be formulated as
[0106]
    r_n^{k+1} = argmin_{r_n} ( d_n(r_n) + sum_{m=1}^{M} (ρ_m/2) ([r_n]_m - [z_n^k]_m + [u_m^k]_n)^2 )
[0107] where ρ_m is a design parameter. The m-th entry [r_n^{k+1}]_m of the updated resource request will be sent within the client metadata 122 from the n-th client 102 to the m-th server 101, 701.
[0108] The optimization problem solved by the m-th server 101, 701 in the k-th iteration can be expressed as
[0109]
    [Z]_m^{k+1} = argmin_{[Z]_m} ( G_m([Z]_m) + (ρ_m/2) sum_{n=1}^{N} ([Z]_{n,m} - [r_n^{k+1}]_m - [u_m^k]_n)^2 )
[0110] and the auxiliary variables are updated according to
    [u_m^{k+1}]_n = [u_m^k]_n + [r_n^{k+1}]_m - [z_n^{k+1}]_m
[0111] The updated resource allocation [z_n^{k+1}]_m and the updated auxiliary variable [u_m^{k+1}]_n are then sent from the m-th server 101, 701 to the n-th client 102 within the server metadata 121.
[0112] The above distributed optimization scheme can be iterated until a stopping criterion is met (i.e., until convergence occurs). After convergence, each client 102 may use the determined resource allocation to make content requests to the different servers 101, 701.
[0113] Thus, performance (e.g., QoE) optimization can be performed across different CDNs 100, 700, where a CDN 100, 700 can be viewed as a set of clients 102 streaming content 103 from a particular server 101, 701, and where each CDN 100, 700 may comprise a single server 101, 701. Each client 102 may be configured to participate in one or more CDNs 100, 700 (e.g., as illustrated in Figure 7a).
[0114] Each client 102 may be characterized by a known client function 400, 410 that describes the variation of the performance or utility 412 of the client 102 with an assigned rate 411. One example of such a function 400, 410 is a rate-distortion function (e.g., in the context of an audio codec, such a function may map rate to, e.g., MUSHRA performance). The performance functions (in particular, the client utility functions) 400, 410 generally vary from client to client. Furthermore, the performance functions 400, 410 may vary over time, for example according to the content type, the playout time, and/or the listening conditions. The different CDNs 100, 700 often operate in an uncoordinated manner. Hence, no (explicit) communication may take place between the servers 101, 701 of the different CDNs 100, 700. Furthermore, the clients 102 may not communicate with each other.
[0115] The goal of the overall optimization scheme is to provide the best possible average experience for all clients 102, subject to:
[0116] • one or more server and/or resource constraints (e.g., the average rate per client may be constrained in each CDN 100, 700); and/or
[0117] • channel capacity constraints that vary for each client 102.
[0118] This optimization scheme can be formulated as a sharing problem. From the perspective of a client 102, the problem is to find the optimal way to use the available network resources. On the other hand, the server 101 facilitates the cooperation of the clients 102 so that an equilibrium can be reached. The optimization problem can be solved by message passing in a distributed environment. The overall optimization problem can be subdivided into partial problems that can be solved independently by the clients 102. Specifically, an iterative solution comprising the exchange of messages between the servers 101, 701 and the clients 102 may be provided, as outlined above.
[0119] As illustrated in Figure 7c, each server 101, 701 sends a message with individual server metadata 121 to each client 102 connected to the corresponding server 101, 701. The server metadata 121 is specific to each client 102. The derivation of the server metadata 121 is the result of a specific optimization method, such as the distributed optimization method described above. This optimization method can be computationally light and scales efficiently with the number N of connected clients 102. The clients 102 provide updates to the servers 101, 701 that are derived based on their own goals (in particular, based on their client utility functions 410) and using the messages received from the servers 101, 701.
[0120] In practice, the granularity of the resource (e.g., the bit rate) and/or of the quality of the different available versions of the content may be limited. This issue can be addressed by quantizing the determined resources (in particular, the resource requests and/or the allocated resources) according to the available granularity. Thus, the optimization problem can be solved for the continuous case, and the determined solution can be quantized. Specifically, the client 102 may be adapted to determine the (continuous) resource request r_n as outlined in this document. The resource request r_n is then projected onto one of the available discrete resource amounts s_n. In particular, the discrete resource amount s_n closest to the determined (continuous) resource request r_n may be selected and provided to the server 101. On the other hand, the operations at the server 101, 701 may be performed in the continuous domain.
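The projection onto the available discrete resource amounts s_n can be sketched as a nearest-neighbor selection (the bit-rate ladder shown is a hypothetical example):

```python
def quantize_request(r_n, available_rates):
    """Project a continuous resource request r_n onto the closest available
    discrete rate s_n (e.g., the bit rates of the encoded quality versions)."""
    return min(available_rates, key=lambda s: abs(s - r_n))

# Hypothetical bit-rate ladder (kbit/s):
ladder = [96.0, 128.0, 192.0, 256.0, 384.0]
s_n = quantize_request(300.0, ladder)   # -> 256.0
```
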
[0121] As outlined above, the client 102 may request content from the server 101 based on the (converged) side information or control information exchanged between the client 102 and the server 101, in particular based on r*, where r* is the resource allocation after convergence (e.g., the converged values of z_n and/or u_n). This side information or control information can be used to improve the client control policy, i.e., the policy used by the client 102 to manage the streaming, buffering, and rendering of the media data 103.
[0122] In a typical HTTP-based adaptive streaming scenario, there are several versions (quality levels) of the media content, encoded at different bit rates and exhibiting different qualities. The available bandwidth of the data connection from the server 101 to the client 102 is generally time-variable, and the client's control strategy is responsible for selecting an appropriate quality version or quality level of the content. A client control policy may seek to maximize a client utility function 410 (e.g., maximize the quality) under the constraint of not allowing buffer underruns. Given the available bandwidth of the data connection and a continuous playout rate, there is usually a compromise between the playout quality and the average fullness of the buffer. This is due to the fact that a higher quality version of the content takes longer to download (given a constant data connection bandwidth). Additionally, one or more other types of constraints may be taken into account within a client control policy. For example, frequent switching between different content quality versions may not be desirable, as it would result in an inconsistent playout quality.
[0123] The client control policy may be configured to estimate the throughput of the available network by explicitly measuring the download speed and/or by observing the status of the buffer of the client 102. For example, if the buffer is observed to be filling up, this indicates that the requested data rate is below the available throughput and may be increased. On the other hand, if the buffer is observed to be draining, this indicates that the requested rate is above the available throughput and should be reduced.
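This buffer-based observation can be turned into a simple rate-adjustment heuristic; the function name and the step size below are illustrative assumptions, not taken from the patent:

```python
def adjust_rate(requested_rate, buffer_level, prev_buffer_level, step=0.1):
    """Hypothetical heuristic: a draining buffer means the requested rate is
    above the available throughput (back off); a filling buffer means it is
    below (probe upward); otherwise keep the current rate."""
    if buffer_level < prev_buffer_level:      # buffer draining -> reduce rate
        return requested_rate * (1.0 - step)
    if buffer_level > prev_buffer_level:      # buffer filling -> increase rate
        return requested_rate * (1.0 + step)
    return requested_rate
```
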
[0124] The streaming process may be performed segment-wise, wherein the streaming content is separated into relatively short segments. The control policy of the client 102 may be configured to adapt the quality version or quality level of the content streamed for each of the segments.
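A per-segment quality selection that uses the converged allocation r* as a cap might look as follows (a sketch; the quality ladder and function name are hypothetical):

```python
def select_quality(side_info_rate, ladder):
    """Hypothetical per-segment policy: pick the highest quality version whose
    bit rate does not exceed the converged allocation r*; if none fits,
    fall back to the lowest available version."""
    eligible = [q for q in ladder if q <= side_info_rate]
    return max(eligible) if eligible else min(ladder)

# Hypothetical ladder (kbit/s), with r* = 300:
level = select_quality(300.0, [96, 128, 192, 256, 384])   # -> 256
```
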
[0125] Figure 8a shows an example of the operation of a client-side control policy. More precisely, Figure 8a shows the fullness level 801 of the playout buffer as a function of time. Furthermore, Figure 8a illustrates the observed throughput 802 and the quality level 803 selected for the downloaded segments. It can be seen that, after a transition phase, the control strategy aims to match the available throughput 802 by selecting a quality level 803 corresponding to a bit rate consistent with the available throughput 802. The transition period is relatively long. Furthermore, the buffer fullness level 801 is relatively high. Also, the selection of the quality level 803 is rather erratic.
[0126] The side information or control information r* can be used to improve the performance of the control strategy of the client 102. In effect, the side information provides the client 102 with an indication of the available or allocated rate of the client 102. It can be assumed that all clients 102 implement the same control strategy and are provided with their respective side information r*. Figure 8b shows an example of the effect achieved by supplying r*. It can be seen that the playout buffer fullness level 801 is lower than in the situation of Figure 8a. Specifically, the buffer fullness 801 increases at a much slower rate than in the case of Figure 8a. However, there is no buffer underrun, so that smooth playout is ensured. Figure 8b also shows the quality level 803 of the downloaded segments and the observed throughput 802. As can be seen, the transition phase has been shortened. This is due to the fact that knowledge of the allocated rate r* enables the client 102 to apply a less conservative strategy (since, given that the value of r* should correspond to the observed throughput 802, it can be assumed that no rebuffering event will occur). Furthermore, the playout policy allows for a more stable selection of the quality level 803, since over-rate situations can be avoided and since the policy accepts an increased risk in the event of a decrease of the observed throughput 802.
[0127] If the side information r* is temporarily unavailable, the client policy can operate by performing a local optimization of QoE without said side information, penalized so as to prevent buffer underruns. However, once the side information r* is established, the client control policy can be adjusted based on the side information. The established side information r* may be considered stale after some predefined time interval, and the client may then revert to the local optimization strategy. The side information r* can be established periodically, in particular at a frequency which ensures that updates of the side information r* are provided with sufficient time resolution.
[0128] In other words, the message passing process described in this document can be repeated with a certain frequency f = 1/T, where f may be 0.01 Hz, 0.05 Hz, 0.1 Hz, 1 Hz or greater than 1 Hz. Thus, updated side information r* may be provided at the frequency f. The client 102 or client agent may assume that side information which has been established at a particular time instant is valid for a validity period (which may be equal to or greater than T). If no updated side information is received by the end of the validity period, the client 102 or client agent may modify its client control policy by switching to a local optimization of QoE (e.g., disregarding the resource allocation) when requesting content from the one or more servers 101. On the other hand, if updated side information is received within the validity period, the updated side information may be taken into account. In this way, stable operation of the client 102 can be achieved.
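The validity-period behavior described above can be sketched as follows. The class and field names are illustrative assumptions; the fallback simply substitutes a locally determined rate estimate for the stale side information:

```python
import time

class ClientSideInfo:
    """Tracks side information r* and falls back to a locally
    optimized rate once r* is older than its validity period."""

    def __init__(self, validity_s):
        self.validity_s = validity_s  # validity period, >= T
        self.r_star = None
        self.updated_at = None

    def update(self, r_star, now=None):
        """Record newly established side information."""
        self.r_star = r_star
        self.updated_at = time.monotonic() if now is None else now

    def effective_rate(self, local_estimate, now=None):
        """Return r* while it is still valid; otherwise revert to the
        locally optimized rate estimate."""
        now = time.monotonic() if now is None else now
        if self.updated_at is not None and now - self.updated_at <= self.validity_s:
            return self.r_star
        return local_estimate
```

For example, with T = 10 s and a validity period of 10 s, a client would use r* for up to 10 s after each convergence event and then revert to local QoE optimization until the next update arrives.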
[0129] It should be noted that the scheme described in this document can be integrated with MPEG DASH, in particular within the interface of Server and Network Assisted DASH (SAND).
[0130] Hence, this document describes a method for establishing side information or control information for a client streaming control policy. The side information is established using an iterative message passing process between server-side nodes and client nodes. The process generates a convergence event when the side information is established. The client control policy can be adapted using the side information.
[0131] In this document, a method based on the Alternating Direction Method of Multipliers (ADMM) is described for solving the overall resource allocation problem. It should be noted that other distributed optimization algorithms may be used to solve the overall resource allocation problem. In general, the overall resource allocation problem can be solved by
[0132] • decomposing the global optimization problem into partial optimization problems (e.g., optimization problems that can be solved separately on the clients 102 and on the server 101);
[0133] • executing an iterative message passing process; and
[0134] • generating a convergence event once the side information is established.
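The three steps above can be sketched with a simple ADMM-style consensus loop, in which each client maximizes a utility penalized by the deviation from its allocation, and the server projects the collected requests onto the capacity constraint. The logarithmic utility, the penalty parameter rho and the projection rule are illustrative choices, not the claimed implementation:

```python
import math

def client_step(v, rho):
    """Client update: argmax_{r>=0} log(1+r) - (rho/2)*(r - v)**2,
    solved in closed form from the first-order condition."""
    a, b, c = rho, rho * (1.0 - v), -(rho * v + 1.0)
    r = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return max(r, 0.0)

def server_step(w, capacity):
    """Server update: project the vector w onto
    {a : a_i >= 0, sum(a) <= capacity} (simplex-style projection)."""
    w = [max(x, 0.0) for x in w]
    if sum(w) <= capacity:
        return w
    s = sorted(w, reverse=True)
    csum, theta = 0.0, 0.0
    for k, x in enumerate(s, start=1):
        csum += x
        t = (csum - capacity) / k
        if k == len(s) or s[k] <= t:
            theta = t
            break
    return [max(x - theta, 0.0) for x in w]

def admm_allocate(n_clients, capacity, rho=1.0, max_iters=200, tol=1e-6):
    """Iterative message passing: clients send requests r, the server
    replies with allocations a, until the residual converges."""
    r = [0.0] * n_clients                    # client requests
    a = [capacity / n_clients] * n_clients   # server allocations
    u = [0.0] * n_clients                    # scaled dual variables
    for _ in range(max_iters):
        r = [client_step(a[i] - u[i], rho) for i in range(n_clients)]
        a = server_step([r[i] + u[i] for i in range(n_clients)], capacity)
        u = [u[i] + r[i] - a[i] for i in range(n_clients)]
        residual = max(abs(r[i] - a[i]) for i in range(n_clients))
        if residual < tol:                   # convergence event
            break
    return a
```

For three identical clients competing for a capacity of 3 rate units, the loop converges to an allocation of roughly one unit per client, at which point a convergence event would be generated and the allocations handed to the clients as side information r*.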
[0135] Once the side information is established, it can be used to adapt client control policies. For example, client control policies can govern the process of requesting content at a specific bit rate.
[0136] A convergence event can be generated, for example, as follows:
[0137] • a convergence event can be determined by each client 102 by observing convergence of the residual term in the client-side optimization problem;
[0138] • a convergence event can be declared after a fixed number of message passing iterations; and/or
[0139] • a convergence event can be indicated by the server-side node by tagging an outgoing server message, thereby triggering a side-information update event in the client.
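The listed criteria can be combined into a single predicate, sketched below. The `final` message flag is a hypothetical field name standing in for the tagged server message; the thresholds are illustrative:

```python
def convergence_event(residual, iteration, server_msg,
                      residual_tol=1e-3, max_iters=50):
    """Detect a convergence event via any of the criteria above:
    residual convergence, a fixed iteration budget, or a server
    message tagged as final."""
    return (residual <= residual_tol
            or iteration >= max_iters
            or bool(server_msg.get("final", False)))
```

A client agent would evaluate this predicate once per message-passing round and, upon a convergence event, treat the current allocation as established side information r*.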
[0140] The side information available at the client 102 may be considered stale if the client 102 observes:
[0141] • no incoming message from the server side within some predefined time window;
[0142] • that its residuals have not converged;
[0143] • the expiry of a predefined time interval measured at the client side; and/or
[0144] • receipt of a tagged server-side message.
[0145] It should be noted that the server-side metadata exchange proxy does not have to be co-located with the server 101 storing the content. A distributed optimization scheme may involve a proxy which operates on the server side and which participates in the message exchange process for all connected clients 102. In the iterative optimization scheme described in this document, the proxy may be configured to collect resource requests from the clients 102 and to determine the resource allocations. Even though the proxy is logically located on the server side, the actual network node does not have to be co-located with the server 101 that stores and provides the content. The node or proxy participating in the metadata exchange may, for example, be located at a deeper or higher level of the network 100, 700.
[0146] It should be noted that the schemes described herein generally assume that the server 101 operates in a pull mode, in which the client 102 requests content (as opposed to a push mode, in which the server 101 actively pushes data to the client 102).
[0147] In the interest of stable and/or fast convergence, the updates of the iterative message passing process are preferably performed synchronously. Specifically, the server 101 may be configured to issue updates (i.e., server metadata 121) to the connected clients 102 only after all messages (i.e., client metadata 122) have been received from the clients 102. Alternatively, a partial-barrier scheme may be applied to the processing at the server 101 in order to enable asynchronous operation.
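Such a synchronous update rule can be sketched as a simple barrier that withholds the server update until every connected client has reported. The class and method names are illustrative assumptions:

```python
class SynchronousBarrier:
    """Server-side barrier: a round of client metadata is released for
    processing only once every connected client has reported."""

    def __init__(self, client_ids):
        self.expected = set(client_ids)
        self.pending = {}

    def receive(self, client_id, metadata):
        """Store one client's metadata; return the complete round once
        all connected clients have reported, otherwise None."""
        self.pending[client_id] = metadata
        if self.expected.issubset(self.pending):
            round_data = dict(self.pending)
            self.pending.clear()
            return round_data
        return None
```

A partial-barrier variant could instead release the round once a subset of clients (or a timeout) is reached, trading some convergence stability for asynchronous operation.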
[0148] Figure 9a shows a flowchart of an example method 900 for establishing control information for a control policy of a client 102 for streaming data 103, in particular media data such as video and/or audio data, from at least one server 101, 701. The control information may be used by the client 102 to request data 103 from the at least one server 101, 701. The client 102 may be, for example, a smartphone, a computer, a television, or the like.
[0149] The method 900 includes performing 901 a message passing process between a server agent of the server 101, 701 and a client agent of the client 102, in order to iteratively establish the control information. The server agent may be co-located with the server 101, 701 or separate from the server 101, 701. Similarly, the client agent may be co-located with the client 102 or separate from the client 102. In the context of the message passing process, the server agent may generate server metadata 121 based on client metadata 122 received from the client agent. In addition, the client agent may generate client metadata 122 based on server metadata 121 received from the server agent. This iterative exchange of messages and/or metadata may be repeated in order to establish the control information for the control policy to be used by the client 102 for streaming the data 103 from the server 101, 701.
[0150] Additionally, the method 900 includes generating 902 a convergence event of the message passing process to indicate that the control information has been established. Thus, the client 102 is able to request data 103 from the server 101, 701 based on the iteratively established control information (also referred to herein as side information). As such, optimized content distribution may be performed within the content distribution network 100, 700.
[0151] Figure 9b shows a flowchart of an example method 910 for establishing control information for a control policy of at least one client 102 for streaming data 103 from a server 101, 701. The method 910 may be performed by a server agent of the server 101, 701, e.g., in the context of the method 900.
[0152] The method 910 includes receiving 911 client metadata 122, where the client metadata 122 may be received from a client agent of the client 102. The client metadata 122 may indicate a requested amount (e.g., a requested bit rate) of a resource 411 requested by the at least one client 102 for streaming data 103 from the server 101, 701. In particular, different sets of client metadata 122 may be received 911 from multiple client agents of multiple clients 102 that may compete for a limited total amount of the resource 411.
[0153] Additionally, the method 910 includes determining 912 server metadata 121 based on the received client metadata 122 (from one or more client agents). The server metadata 121 may indicate an allocated amount of the resource 411 allocated to the at least one client 102 for streaming data 103 from the server 101, 701. In particular, different sets of server metadata 121 (notably, one set of server metadata 121 for each client 102) may be determined 912 for the plurality of clients 102.
[0154] Additionally, the method 910 includes sending 913 the server metadata 121, typically to a client agent of the client 102. In particular, individual server metadata 121 may be sent to each of the multiple client agents of the multiple clients 102.
[0155] The method 910 includes repeating 914 the steps of receiving 911, determining 912 and sending 913 until a convergence event occurs. The repeating 914 may be performed in the context of a message passing process between the server agent and one or more client agents. The convergence event may indicate that the control information for the control policy of the at least one client 102 for streaming data 103 from the server 101, 701 has been established based on the repeated receiving 911 of client metadata 122 and sending 913 of server metadata 121. The client 102 may then request data 103 from the server 101, 701 using the established control information. By performing the method 910, optimized content distribution may be achieved within the content distribution network 100, 700.
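A single iteration of the server-side steps 911 to 913 could, for example, be realized with a proportional allocation rule, sketched below. Proportional scaling is an illustrative stand-in for the distributed optimization described above, and the dictionary-based message format is an assumption:

```python
def server_agent_round(requests, capacity):
    """One iteration of method 910: given the requested amounts from
    all client agents, determine per-client allocated amounts. If the
    total request exceeds the capacity, scale all requests down
    proportionally; otherwise grant the requests as-is."""
    total = sum(requests.values())
    if total <= capacity:
        return dict(requests)
    scale = capacity / total
    return {cid: req * scale for cid, req in requests.items()}
```

The returned allocations would then be sent 913 as server metadata 121, and the round repeated 914 with fresh client metadata until a convergence event occurs.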
[0156] Figure 9c shows a flowchart of an example method 920 for establishing control information for a control policy of a client 102 for streaming data 103 from at least one server 101, 701. The method 920 may be performed by a client agent of the client 102.
[0157] The method 920 includes receiving 921 server metadata 121. The server metadata 121 may be received from a server agent of the at least one server 101, 701. The client 102 may be configured to stream data 103 from a plurality of different servers 101, 701 for the overall media content to be rendered by the client 102. In this case, server metadata 121 may be received from each of the plurality of servers 101, 701. The server metadata 121 may indicate an allocated amount of the resource 411 allocated to the client 102 for streaming the data 103 from the respective server 101, 701.
[0158] Additionally, the method 920 includes determining 922 client metadata 122 based on the server metadata 121 (from one or more servers 101, 701 or server agents). The client metadata 122 may indicate a requested amount of the resource 411 requested by the client 102 for streaming data 103 from the at least one server 101, 701. A set of client metadata 122 may be generated for each of the plurality of servers 101, 701.
[0159] Additionally, the method 920 includes sending 923 the client metadata 122 (to one or more servers 101, 701 or server agents).
[0160] The method 920 includes repeating 924 the receiving 921, determining 922 and sending 923 steps until a convergence event occurs (e.g., in the context of a message passing process). The convergence event may indicate that the control information for the control policy of the client 102 for streaming data 103 from the at least one server 101, 701 has been established based on the repeated receiving 921 of server metadata 121 and sending 923 of client metadata 122. The client 102 may then request data 103 from the one or more servers 101, 701 using the established control information. By performing the method 920, optimized content distribution may be achieved within the content distribution network 100, 700.
[0161] Various aspects of the invention can be seen from the following enumerated exemplary embodiments (EE):
[0162] EE 1) A method (900) for establishing control information for a control policy of a client (102) for streaming data (103) from at least one server (101, 701); wherein said method (900) comprises,
[0163] - performing (901) a messaging process between a server agent of said server (101, 701) and a client agent of said client (102) to iteratively establish control information; and
[0164] - generating (902) a convergence event of said messaging process to indicate that said control information has been established.
[0165] EE 2) The method (900) according to EE 1, wherein performing (901) said messaging process comprises, within a given iteration,
[0166] - sending server metadata (121) from said server agent to said client agent; wherein said server metadata (121) at said given iteration depends on client metadata (122) sent by said client agent to said server agent; and
[0167] - sending client metadata (122) from said client agent to said server agent; wherein said client metadata (122) at said given iteration depends on server metadata (121) sent by said server agent to said client agent.
[0168] EE 3) The method (900) according to EE 2, wherein
[0169] - the method (900) includes determining client metadata (122) based on the client utility function (410); and
[0170] - the client utility function (410) indicates how the utility (412) to the client (102) of the data (103) received by the client (102) varies with the amount of resources (411) used for streaming the data (103) to said client (102).
[0171] EE 4) The method (900) according to EE 3, wherein the client utility function (410)
[0172] - indicating and/or depending on the perceived quality of the streamed media data (103) presented by said client (102); and/or
[0173] - indicating and/or depending on the signal-to-noise ratio of the streamed data (103) received and/or presented by said client (102); and/or
[0174] - depends on the rendering mode of said client (102); and/or
[0175] - depends on the rendering environment of said client (102); and/or
[0176] - depends on the type of said client (102); and/or
[0177] - is time-varying.
[0178] EE 5) The method (900) according to any one of EEs 2 to 4, wherein
[0179] - said method (900) comprises determining a requested amount of said resource (411 ) requested by said client (102) based on said client utility function (400, 410); and
[0180] - said client metadata (122) indicates the requested amount of said resource (411).
[0181] EE 6) The method (900) according to any one of EEs 2 to 5, wherein
[0182] - said server metadata (121 ) indicates an allocated amount of said resource (411 ) allocated to said client (102) for streaming data (103); and
[0183] - Said method (900) comprises determining a requested amount of said resource (411) requested by said client (102) depending on said allocated amount of said resource (411).
[0184] EE 7) The method (900) according to EE 6 when referring back to EE 5, wherein said requested amount of said resource (411) is determined based on (in particular, by reducing or minimizing) a cost function comprising
[0185] - a first term indicating a deviation of said requested amount of said resource (411) from said allocated amount of said resource (411); and
[0186] - a second term comprising the complement of said client utility function (410).
[0187] EE 8) The method (900) according to EE 7, wherein
[0188] - said cost function comprises a weighted sum of said first term and said second term; and/or
[0189] - said first term depends on the absolute or squared deviation of said requested amount of said resource (411 ) from said allocated amount of said resource (411 ).
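By way of illustration, the cost function of EE 7 and EE 8 can be instantiated and minimized numerically as follows, assuming a concrete logarithmic client utility function. The utility, the weights and the grid search are illustrative stand-ins for the actual utility function (410) and solver:

```python
import math

def client_cost(r, allocated, rho=1.0, weight=1.0):
    """Cost of a requested amount r: a weighted sum of the squared
    deviation from the allocated amount (first term) and the
    complement, i.e., the negative, of an assumed client utility
    function U(r) = log(1 + r) (second term)."""
    penalty = (rho / 2.0) * (r - allocated) ** 2
    utility = math.log(1.0 + r)
    return weight * penalty - utility

def requested_amount(allocated, rho=1.0, grid=None):
    """Determine the requested amount by minimizing the cost over a
    simple grid of candidate rates (a crude stand-in for a solver)."""
    grid = grid or [i * 0.01 for i in range(0, 1001)]
    return min(grid, key=lambda r: client_cost(r, allocated, rho))
```

For an allocated amount of 2.0 and rho = 1, the minimizer lies slightly above the allocation (near 2.30), reflecting that the utility term pulls the request beyond the allocation until the deviation penalty balances it.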
[0190] EE 9) The method (900) according to any one of EEs 2 to 8, wherein
[0191] - said client metadata (122) indicates a requested amount of said resource (411 ) requested by said client (102); and
[0192] - said allocated amount of said resource (411) is determined based on said requested amount of said resource (411).
[0193] EE 10) The method (900) according to any one of EEs 2 to 9, wherein the method (900) comprises determining the server metadata (121) depending on a total amount of resources (411) to be allocated to a plurality of clients (102) for streaming data (103).
[0194] EE 11) The method (900) according to EE 10, wherein
[0195] - said client (102) is a first client (102) of said plurality of clients (102);
[0196] - said method (900) comprises determining, based on said total amount of said resource (411), said allocated amount of said resource (411) allocated to said first client (102) for streaming data (103); and
[0197] - said server metadata (121) indicates said allocated amount of said resource (411) allocated to said first client (102) for streaming data (103).
[0198] EE 12) The method (900) according to EE 11, wherein
[0199] - said method (900) comprises receiving client metadata (122) from said plurality of clients (102) competing for said total amount of said resource (411); and
[0200] - said allocated amount of said resource (411) for said first client (102) is determined based on said requested amount of said resource (411) from each of said plurality of clients (102).
[0201] EE 13) The method (900) according to any one of the preceding EEs, wherein said control information indicates or comprises an amount of resources (411) allocated to said client (102) for streaming data (103).
[0202] EE 14) The method (900) according to EE 13, wherein the resource (411) comprises one or more of the following,
[0203] - bit rate for streaming data (103); and/or
[0204] - processing capacity of said server (101) for providing data (103); and/or
[0205] - The bandwidth of the transport network (601) between said server (101) and said client (102).
[0206] EE 15) The method (900) according to any one of the preceding EEs, wherein generating (902) said convergence event comprises,
[0207] - determining that a predetermined maximum number of iterations of said messaging process has been reached; and/or
[0208] - determining that a change in said control information between two successive iterations of said messaging procedure is equal to or less than a predetermined change threshold; and/or
[0209] - determining that said client agent and/or said server agent has sent an indication to terminate said messaging procedure.
[0210] EE 16) The method (900) according to any one of the preceding EEs, wherein said method (900) comprises,
[0211] - generating a request for data (103) based on said established control information; and/or
[0212] - managing a buffer of said client (102) for buffering data (103) based on said established control information; and/or
[0213] - selecting a quality level (803) from a plurality of different quality levels (803) of content to be streamed.
[0214] EE 17) The method (900) according to any one of the preceding EEs, wherein
[0215] - said method (900) comprises performing pairwise messaging processes between said server agent of said server (101, 701) and client agents of a plurality of clients (102), in order to iteratively establish control information for each of said plurality of clients (102); and
[0216] - said plurality of clients (102) compete for a total amount of resources (411) available for streaming data (103) from said server (101).
[0217] EE 18) The method (900) according to any one of the preceding EEs, wherein said method (900) comprises
[0218] - performing pairwise messaging processes between server agents of a plurality of servers (101, 701) and said client agent of said client (102), in order to iteratively establish control information for the control policy of said client (102) for streaming data (103) from each of said plurality of servers (101, 701); and
[0219] - Streaming different fractions of an overall media stream from different servers (101, 701) based on said established control information for said different servers (101, 701), respectively.
[0220] EE 19) A method (910) for establishing control information for a control policy of at least one client (102) for streaming data (103) from a server (101, 701); wherein the method (910) comprises,
[0221] - receiving (911) client metadata (122); wherein said client metadata (122) indicates a requested amount of a resource (411) requested by said at least one client (102) for streaming data (103) from said server (101, 701);
[0222] - determining (912) server metadata (121) based on said received client metadata (122); wherein said server metadata (121) indicates an allocated amount of said resource (411) allocated to said at least one client (102) for streaming data (103) from said server (101, 701);
[0223] - sending (913) said server metadata (121); and
[0224] - repeating (914) said receiving (911), determining (912) and sending (913) steps until the occurrence of a convergence event; wherein said convergence event indicates that control information for said control policy of said at least one client (102) for streaming data (103) from said server (101, 701) has been established based on the repeated receiving (911) of client metadata (122) and sending (913) of server metadata (121).
[0225] EE 20) A method (920) for establishing control information for a control policy of a client (102) for streaming data (103) from at least one server (101, 701); wherein said method (920) comprises,
[0226] - receiving (921) server metadata (121); wherein said server metadata (121) indicates an allocated amount of a resource (411) allocated to said client (102) for streaming data (103) from said at least one server (101, 701);
[0227] - determining (922) client metadata (122) based on said server metadata (121); wherein said client metadata (122) indicates a requested amount of said resource (411) requested by said client (102) for streaming data (103) from said at least one server (101, 701);
[0228] - sending (923) said client metadata (122); and
[0229] - repeating (924) said receiving (921), determining (922) and sending (923) steps until the occurrence of a convergence event; wherein said convergence event indicates that control information for said control policy of said client (102) for streaming data (103) from said at least one server (101, 701) has been established based on the repeated receiving (921) of server metadata (121) and sending (923) of client metadata (122).
[0230] EE 21) The method (920) according to EE 20, wherein the method (920) comprises repeatedly determining updated control information using the steps of receiving, determining, sending and repeating.
[0231] EE 22) The method (920) according to any one of EEs 20 to 21, wherein
[0232] - said control information is established at a first time instant;
[0233] - said control information exhibits a validity period starting from said first time instant;
[0234] - said method (920) comprises determining said control policy for streaming data (103) based on said control information during said validity period of said control information; and
[0235] - said method (920) comprises applying a control policy for streaming data (103) which is independent of said control information after said validity period of said control information.
[0236] EE 23) A system (100, 700) for distributing content; wherein said system (100, 700) comprises
[0237] - at least one server (101, 701) configured to provide data (103) for streaming content to one or more clients (102);
[0238] - at least one client (102) configured to request data (103) for streaming content from said at least one server (101, 701);
[0239] - a server agent of said at least one server (101, 701) and a client agent of said at least one client (102); wherein said server agent and said client agent are configured to
[0240] - performing a messaging process between said server agent and said client agent, in order to iteratively establish control information for the control policy of said at least one client (102) for streaming data (103) from said at least one server (101, 701); and
[0241] - generating a convergence event of said messaging process to indicate that said control information has been established.
[0242] EE 24) A server agent for a server (101, 701) of a content distribution network (100, 700); wherein said server agent is configured to
[0243] - receiving client metadata (122); wherein said client metadata (122) indicates a requested amount of a resource (411) requested by a client (102) for streaming data (103) from said server (101, 701);
[0244] - determining server metadata (121) based on said received client metadata (122); wherein said server metadata (121) indicates an allocated amount of said resource (411) allocated to said client (102) for streaming data (103) from said server (101, 701);
[0245] - sending said server metadata (121); and
[0246] - repeating the receiving, determining and sending steps until the occurrence of a convergence event; wherein said convergence event indicates that control information for the control policy of said client (102) for streaming data (103) from said server (101, 701) has been established.
[0247] EE 25) A client agent for a client (102) of a content delivery network (100, 700); wherein the client agent is configured to
[0248] - receiving server metadata (121); wherein said server metadata (121) indicates an allocated amount of a resource (411) allocated to said client (102) for streaming data (103) from a server (101, 701);
[0249] - determining client metadata (122) based on said server metadata (121); wherein said client metadata (122) indicates a requested amount of said resource (411) requested by said client (102) for streaming data (103) from said server (101, 701);
[0250] - sending said client metadata (122); and
[0251] - repeating the receiving, determining and sending steps until the occurrence of a convergence event; wherein said convergence event indicates that control information for the control policy of said client (102) for streaming data (103) from said server (101, 701) has been established.
[0252] The methods and systems described in this document can be implemented as software, firmware and/or hardware. Certain components may be implemented, for example, as software running on a digital signal processor or microprocessor. Other components may, for example, be implemented as hardware and/or as application specific integrated circuits. Signals encountered in the described methods and systems may be stored on media such as random access memory or optical storage media. The signal may be transmitted via a network, such as a radio network, a satellite network, a wireless network, or a wired network, such as the Internet. A typical device utilizing the methods and systems described in this document is a portable electronic device or other consumer equipment for storing and/or presenting audio signals.
