# Method for judging self-given delay repeatability of streaming data in real time

## A data stream and data element technology, applied in the fields of electrical digital data processing, special data processing applications, digital data information retrieval, etc.

Pending Publication Date: 2020-12-04

吕纪竹


## AI-Extracted Technical Summary

### Problems solved by technology

Using traditional methods to recalculate the autocorrelation of stream data after some data changes cannot achieve real-time processing, and it takes up and wastes a lot of computing resources.

### Method used

[0038] Calculating autocorrelation is an effective method for judging the repeatability of a time series or streaming big data itself at a given delay. The present invention extends to a method, a system, and a computing device program product for judging, in real time, the repeatability of stream data itself at a given delay by iteratively calculating the autocorrelation of a specified delay l (1 ≤ l < n) of the stream data. A computing system consists of one or more processor-based computing devices, each containing one or more processors. The computing system includes an input buffer that holds big data or streaming data elements. This buffer can be in memory or on other computer-readable media, such as a hard disk, or even in multiple distributed files allocated across multiple computing devices which are logically connected end to end to form a "circular buffer". Multiple data elements from the data stream that are involved in the autocorrelation calculation form a pre-adjustment calculation window. The calculation window size n (n > l) indicates the number of data elements...

## Abstract

The autocorrelation of a given delay may be used to judge the repeatability of a time series or streaming data itself at that delay. The invention discloses a method, a system, and a computing system program product that can judge, in real time, the repeatability of a time series or streaming data itself at a given delay by iteratively calculating the autocorrelation of a specified delay over a calculation window of a given size. Embodiments of the present invention include iteratively calculating two or more components of the autocorrelation of a specified delay of a post-adjustment calculation window based on the two or more components of the autocorrelation of the specified delay of the pre-adjustment calculation window, and then generating the autocorrelation of the specified delay of the post-adjustment calculation window based on the iteratively calculated components as needed. Iterative autocorrelation calculation avoids accessing all data elements in the adjusted calculation window and executing repeated computation, which improves computation efficiency, saves computing resources, and reduces the energy consumption of the computing system. Real-time judgment of the repeatability of streaming data at a given delay therefore becomes efficient and low in consumption, and some scenarios of such real-time judgment become possible.

Application Domain: Digital data information retrieval, Energy efficient computing (+1 more)

Technology Topic: Streaming data, Computing systems (+8 more)


## Examples


### Example Embodiment

[0038] Calculating autocorrelation is an effective method for judging the repeatability of a time series or streaming big data itself at a given delay. The invention extends to a method, a system, and a computing device program product that can judge the repeatability of stream data itself at a given delay in real time by iteratively calculating the autocorrelation of a specified delay l (1 ≤ l < n). A computing system includes one or more processor-based computing devices, and each computing device contains one or more processors. The computing system contains an input buffer that holds big data or streaming data elements. This buffer can be in memory or on other computer-readable media, such as a hard disk, or even a number of distributed files on multiple computing devices which are logically connected end to end to form a "circular buffer". A number of data elements from the data stream that are involved in the autocorrelation calculation form a pre-adjustment calculation window. The calculation window size n (n > l) indicates the number of data elements in a calculation window of the buffer. The delay l indicates the delay used for the autocorrelation calculation. Embodiments of the invention include iteratively calculating two or more (p (p > 1)) components of the autocorrelation of the specified delay of the adjusted calculation window based on the p components of the autocorrelation of the specified delay calculated for the pre-adjustment calculation window, and then generating, as needed, the autocorrelation of the specified delay of the adjusted calculation window based on the iteratively calculated components.
Iterative autocorrelation calculation avoids accessing all data elements in the adjusted calculation window and performing repeated calculations, thus improving calculation efficiency, saving computing resources, and reducing the energy consumption of the computing system. This makes it possible to judge the repeatability of streaming data at a given delay in real time with high efficiency and low consumption, and turns some scenarios of such real-time judgment from impossible to possible.

[0039] Autocorrelation, also known as delay correlation or serial correlation, is a measure of the degree of correlation between a time series and the same time series delayed by l time points. It can be obtained by dividing the covariance of observations separated by l time points in a time series by the variance of the series. If the autocorrelation at every different delay value of a time series is calculated, the autocorrelation function of the time series is obtained. For a time series whose values vary completely at random over time, the autocorrelation decays rapidly to 0. The range of autocorrelation values is between -1 and +1. A value of +1 indicates that the past and future values of the time series have a perfectly positive linear relationship, while a value of -1 indicates that they have a perfectly negative linear relationship.
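As an illustration, the lag-l autocorrelation of a sequence can be computed directly from this definition. The following is a minimal, definition-based Python sketch, not the iterative method of the invention:

```python
# Definition-based lag-l autocorrelation: the sum of products of deviations
# of observations l time points apart, divided by the sum of squared deviations.
def autocorrelation(x, l):
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i - l] - mean) for i in range(l, n))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den
```

For example, `autocorrelation([1, 2, 3, 4, 5], 1)` evaluates to 0.4. Note that this direct computation touches every element of the sequence, which is exactly the cost the iterative method described below avoids.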

[0040] In this specification, a calculation window is a moving window that contains the data involved in the autocorrelation calculation on stream data. The calculation window can move left or right. For example, when processing a real-time data stream, the calculation window moves to the right: new data is added to the right side of the calculation window, and the oldest data element on the left side is removed from the calculation window. When the autocorrelation of earlier data elements of the data stream is recalculated, the calculation window moves to the left: a data element is added to the left side of the calculation window, and a data element on the right side is removed from the calculation window. The goal is to iteratively calculate the autocorrelation of a given delay of the data elements in the calculation window whenever the calculation window moves one or more data elements to the left or right. Both cases can be handled in the same way; only the equations used in the iterative calculation differ. By way of example and not limitation, the following description explains the implementation of the present invention using the first case (the calculation window moves to the right).

[0041] In this specification, a component of an autocorrelation is a quantity or expression that appears in the definition formula of the autocorrelation or in any transformation of that formula. An autocorrelation is its own largest component. The following are some examples of components of an autocorrelation.

[0042] $S = \sum_{i=1}^{n} x_i$ (the sum of the data elements in the calculation window)

[0043] $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ (the mean of the data elements in the calculation window)

[0044] $SX = \sum_{i=1}^{n}(x_i - \bar{x})^2$ (the sum of squared deviations from the mean)

[0045] $covX(l) = \sum_{i=l+1}^{n}(x_i - \bar{x})(x_{i-l} - \bar{x})$ (the lagged co-deviation)

[0046] $\rho(l) = covX(l)/SX$ (l is the delay)

[0047] Autocorrelation can be calculated based on one or more components or their combination, so multiple algorithms support iterative autocorrelation calculation.

[0048] A component can be calculated by direct iteration or indirect iteration. The difference is that a directly iterated component is calculated from its own value in the previous round, while an indirectly iterated component is calculated from components other than itself.
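The distinction can be sketched in Python with two illustrative components (this particular pairing of sum and mean is an assumption for illustration, not the invention's required component set):

```python
def update_sum(prev_sum, removed, added):
    # Direct iteration: the new value is computed from the component's
    # own previous value plus the effect of the window adjustment.
    return prev_sum - removed + added

def mean_from_sum(total, n):
    # Indirect iteration: the mean is derived from another component
    # (the sum), so it only needs computing when the result is accessed.
    return total / n
```

Sliding a window of size 5 from [1, 2, 3, 4, 5] to [2, 3, 4, 5, 6] updates the sum from 15 to `update_sum(15, 1, 6)`, which is 20, and the mean on access is `mean_from_sum(20, 5)`, which is 4.0.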

[0049] For a given component, it may be directly iterated in one algorithm but indirectly iterated in another algorithm.

[0050] For any algorithm, at least two components are iteratively calculated; at least one of them is directly iteratively calculated, and the others are directly or indirectly iteratively calculated. For a given algorithm, assume the total number of different components used is p (p > 1). If the number of directly calculated components is v (1 ≤ v ≤ p), then the number of indirectly calculated components is w = p − v (0 ≤ w < p). It is possible that all components are directly iteratively calculated (in which case v = p > 1 and w = 0). However, regardless of whether the autocorrelation result is needed and accessed in a specific round, the directly iterated components must be calculated.

[0051] For a given algorithm, if a component is directly iteratively calculated, it must be calculated in every round (that is, whenever an existing data element is removed from the calculation window and a data element is added to it). However, if a component is indirectly iteratively calculated, it can be calculated as needed from one or more components other than itself, that is, only when the autocorrelation needs to be calculated and accessed. Thus, in a calculation round in which the autocorrelation is not accessed, only a small number of components need to be iteratively calculated. Note that an indirectly iterated component may be used in the direct iterative calculation of another component, in which case its calculation cannot be omitted.

[0052] The implementation of the present invention includes iteratively calculating two or more (p (p > 1)) components of the autocorrelation of the adjusted calculation window based on the p components calculated for the previous calculation window.

[0053] The computing system contains a buffer. The buffer holds big data or streaming data elements. Calculation window size n(n>l) indicates the number of data elements in a calculation window of the buffer.

[0054] The computing system initializes two or more (p (p > 1)) components of the autocorrelation of a given delay l (l ≥ 1) for a pre-adjustment calculation window of a given size n (n > l). Initializing the components includes calculating them from the data elements in the calculation window according to their definitions, or accessing or receiving already-calculated components from one or more computing-device-readable media.

[0055] The computing system receives a new stream data element after receiving one or more stream data elements.

[0056] The system saves new data elements to the buffer.

[0057] The system adjusts the pre-adjustment calculation window by removing the oldest data elements from the left side of the pre-adjustment calculation window and adding new data elements to the right side of the pre-adjustment calculation window.

[0058] The computing system iteratively calculates a sum, an average, or both for the adjusted calculation window.

[0059] The computing system directly iteratively calculates v (1 ≤ v ≤ p) components of the autocorrelation of the given delay l for the adjusted calculation window based on the v components calculated for the pre-adjustment calculation window.

[0060] The computing system indirectly iteratively calculates w = p − v components of the autocorrelation of the given delay l for the adjusted calculation window as needed. Indirectly iteratively calculating the w components includes indirectly calculating each of the w components one by one. Indirectly calculating a component of the given delay l includes accessing one or more components of the given delay l other than the component itself and calculating the component based on the accessed components. These accessed components may have been initialized, directly iteratively calculated, or indirectly iteratively calculated.

[0061] The computing system generates, as needed, the autocorrelation of the given delay l for the adjusted calculation window based on one or more components of the autocorrelation of the given delay l iteratively calculated for the adjusted calculation window.

[0062] The computing system can continue to receive new data elements, save them into the input buffer, adjust the calculation window, iteratively calculate a sum, an average, or both for the adjusted calculation window, directly iteratively calculate the v components of the specified delay, indirectly iteratively calculate the w = p − v components of the specified delay as needed, generate the autocorrelation of the given delay based on one or more iteratively calculated components as needed, and repeat this process as many times as needed.
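This receive-adjust-iterate loop can be sketched in Python. The sketch is illustrative and assumes a particular component choice (the sum S, the sum of squares SS, and the lagged cross-product SXY are directly iterated; the mean, SX, covX, and the autocorrelation itself are derived only when accessed); it is not the invention's prescribed component set:

```python
from collections import deque

class IterativeAutocorrelation:
    """Iteratively maintained lag-l autocorrelation over a sliding window."""
    def __init__(self, window, l):
        if not 1 <= l < len(window):
            raise ValueError("require 1 <= l < n")
        self.l = l
        self.w = deque(window)
        self.S = sum(window)                              # sum
        self.SS = sum(x * x for x in window)              # sum of squares
        self.SXY = sum(window[i] * window[i - l]          # lagged cross-product
                       for i in range(l, len(window)))
    def slide(self, x_new):
        """Adjust the window: remove the oldest element, add x_new, and
        directly iterate each component from its previous value."""
        w, l = self.w, self.l
        x_old = w[0]
        self.S += x_new - x_old
        self.SS += x_new * x_new - x_old * x_old
        # Only the l-th neighbors of the removed/added elements are accessed.
        self.SXY += x_new * w[-l] - w[l] * x_old
        w.popleft()
        w.append(x_new)
    def autocorrelation(self):
        """Derive the autocorrelation from the components on access."""
        w, l = self.w, self.l
        n = len(w)
        m = self.S / n
        sx = self.SS - n * m * m
        head = sum(list(w)[:l])       # sum of the first l elements
        tail = sum(list(w)[-l:])      # sum of the last l elements
        cov = self.SXY - m * (2 * self.S - head - tail) + (n - l) * m * m
        return cov / sx
```

Each `slide` call touches only the removed element, the added element, and their l-th neighbors, rather than all n data elements, which is the source of the efficiency gain described above.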

[0063] Embodiments of the present invention may include or utilize special-purpose or general-purpose computing devices including computing device hardware, such as one or more processors and memory devices described in more detail below. The scope of embodiments of the present invention also includes physical and other computing device readable media used to carry or store computing device executable instructions and/or data structures. The media readable by these computing devices can be any media accessible by general-purpose or special-purpose computing devices. A computing device readable medium that stores executable instructions of the computing device is a storage medium (device). A computing device readable medium carrying executable instructions of the computing device is a transmission medium. Therefore, by way of example and not limitation, embodiments of the present invention can include at least two different types of computing device readable media: storage media (devices) and transmission media.

[0064] Storage media (devices) include random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), read-only optical disk memory (CD-ROM), solid-state disk (SSD), Flash Memory, phase change memory (PCM), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other devices that can be used for storage.

[0065] A "network" is defined as one or more data links that enable computing devices and/or modules and/or other electronic devices to transmit electronic data. When information is transmitted or provided to a computing device by a network or another communication connection (wired, wireless, or a combination of wired and wireless), the computing device regards the connection as a transmission medium. The transmission medium may include a network and/or data link for carrying the required program code in the form of executable instructions or data structures of the computing device, and which can be accessed by general-purpose or special-purpose computing devices. The combination of the above should also be included in the range of readable media of computing devices.

[0066] In addition, program code in the form of computing device executable instructions or data structures can be transferred automatically from transmission media to storage media (devices) (or vice versa) upon reaching various computing device components. For example, computing device executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (for example, a NIC), and then eventually transferred to computing device RAM and/or to less volatile storage media (devices) at the computing device. Thus, it should be understood that storage media (devices) can be included in computing device components that also (or even primarily) utilize transmission media.

[0067] Computing device executable instructions include, for example, instructions and data which, when executed by a processor, cause a general-purpose computing device or a special-purpose computing device to perform a certain function or group of functions. The computing device executable instructions can be, for example, binaries, intermediate format instructions such as assembly code, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.

[0068] Embodiments of the present invention can be implemented in a network computing environment configured by various types of computing devices, including personal computers, desktop computers, notebook computers, information processors, handheld devices, multi-processing systems, microprocessor-based or programmable consumer electronics, network computers, minicomputers, host computers, supercomputers, mobile phones, palmtop computers, tablet computers, pagers, routers, switches and similar products. The embodiment of the invention can also be applied to a distributed system environment composed of local or remote computing devices that perform tasks through network interconnection (that is, through wired data link, wireless data link, or the combination of wired data link and wireless data link). In a distributed system environment, program modules can be stored on local or remote storage devices.

[0069] The embodiments of the invention can also be implemented in a cloud computing environment. In this description and the following claims, "cloud computing" is defined as a model that enables on-demand network access to a shared pool of configurable computing resources. For example, cloud computing can be employed in the marketplace to provide ubiquitous and convenient on-demand access to such a shared pool. The shared pool of configurable computing resources can be rapidly provisioned via virtualization, released with low management effort or service provider interaction, and then scaled accordingly.

[0070] A cloud computing model can be composed of various characteristics, such as on-demand self-service, broad network access, resource pooling, rapid elasticity, measured service, and so on. A cloud computing model can also take the form of various service models, such as software as a service ("SaaS"), platform as a service ("PaaS"), and infrastructure as a service ("IaaS"). A cloud computing model can also be deployed using different deployment models, such as private cloud, community cloud, public cloud, hybrid cloud, and so on.

[0071] Because the invention effectively reduces the requirement of computing power, its embodiment can also be applied to edge computing.

[0072] In this specification and claims, a "circular buffer" is a data structure that uses a single fixed-length "buffer" as if it were connected end to end, sometimes also called a ring buffer. This "buffer" can be either an ordinary circular buffer, which is usually a space allocated in local memory, or a "virtual circular buffer", which is not necessarily in memory but may be a file on a hard disk or even a plurality of distributed files on a plurality of distributed computing devices, as long as these distributed files are logically connected to each other to form a "circular buffer".
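A minimal Python sketch of such a circular buffer follows (an illustrative in-memory variant; the slot count and layout are hypothetical):

```python
class CircularBuffer:
    """Fixed-size storage whose slots are reused end to end: once full,
    each new element overwrites the oldest one."""
    def __init__(self, size):
        self.slots = [None] * size
        self.next = 0                      # index of the slot to write next
    def put(self, element):
        self.slots[self.next] = element    # overwrite the oldest element
        self.next = (self.next + 1) % len(self.slots)
```

After putting 1, 2, 3, 4 into a 3-slot buffer, the slots hold [4, 2, 3]: element 4 has overwritten the oldest element, 1, just as data element 110 overwrites data element 101 at position 121A in the example below.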

[0073] Usually, the input data is added to a buffer of size n. Before the buffer is full, there are at least two ways to proceed. One is to calculate the two or more components from the first n data elements according to the component definitions, without performing the autocorrelation calculation, until the buffer is full. Another is to calculate the autocorrelation incrementally from the beginning, when necessary, by the incremental calculation method described in another patent application of the present inventor, until the buffer is full. Once the buffer is full and the two or more components of the autocorrelation of the first n data elements have been calculated, the iterative algorithms provided here can be used to iteratively calculate the components, and the autocorrelation can then be calculated based on the iteratively calculated components.

[0074] Several examples are given in the following chapters.

[0075] Figure 1 illustrates a high-level overview of an example computing system 100 that iteratively computes autocorrelation for stream data. Referring to Figure 1, the computing system 100 includes a plurality of devices connected by different networks, such as the local area network 1021, the wireless network 1022, and the Internet 1023. The devices include, for example, a data analysis engine 1007, a storage system 1011, a real-time data stream 1006, and a plurality of distributed computing devices that can submit data analysis tasks and/or query data analysis results, such as personal computers 1016, handheld devices 1017, and desktop computers 1018.

[0076] The data analysis engine 1007 may include one or more processors, such as CPU 1009 and CPU 1010, one or more system memories, such as system memory 1008, a component calculation module 131, and an autocorrelation calculation module 192. The details of module 131 are illustrated in more detail in other figures (for example, Figure 1-1 and Figure 1-2). The storage system 1011 can include one or more storage media, such as storage media 1012 and storage media 1014, which can be used to store large data sets. For example, 1012 and/or 1014 may contain a data set 123. The data sets in the storage system 1011 can be accessed by the data analysis engine 1007.

[0077] Generally, the data stream 1006 can include stream data from different data sources, such as stock price, audio data, video data, geospatial data, Internet data, mobile communication data, online game data, bank transaction data, sensor data, and/or closed caption data. Here are a few examples. Real-time data 1000 can include data collected in real time from sensors 1001, stocks 1002, communications 1003, banks 1004 and so on. The data analysis engine 1007 can receive data elements from the data stream 1006. Data from different data sources can be stored in the storage system 1011 and accessed by big data analysis. For example, the data set 123 can come from different data sources and be accessed by big data analysis.

[0078] Please understand that Figure 1 presents some concepts in a very simplified form. For example, the distributed devices 1016 and 1017 may be connected to the data analysis engine 1007 through a firewall, and the data accessed or received by the data analysis engine 1007 from the data stream 1006 and/or the storage system 1011 may be filtered by a data filter, etc.

[0079] Figure 1-1 illustrates an example computing system architecture 100A for iteratively computing the autocorrelation of stream data, in which all components (v = p > 1) are directly iteratively computed. For the computing system architecture 100A, only the functions and relationships of its main components are introduced here; the process by which these components cooperate to complete the iterative autocorrelation calculation is introduced later together with the process described in Figure 2. Figure 1-1 shows elements 1006 and 1007 of Figure 1. Referring to Figure 1-1, the computing system architecture 100A includes a component calculation module 131 and an autocorrelation calculation module 192. The component calculation module 131 may be tightly coupled with one or more storage media through a high-speed data bus, or loosely coupled with one or more storage media managed by a storage system through a network, such as a local area network, a wide area network, or even the Internet. Accordingly, the component calculation module 131, as well as any other connected computing devices and their components, can send and receive message-related data over the network (for example, Internet Protocol ("IP") datagrams and other higher-level protocols that use IP datagrams, such as User Datagram Protocol ("UDP"), Real-time Streaming Protocol ("RTSP"), Real-time Transport Protocol ("RTP"), and Microsoft Media Server). The output of the component calculation module 131 serves as the input of the autocorrelation calculation module 192, which can generate the autocorrelation 193.

[0080] Generally, the data stream 190 can be a sequence of digitally encoded signals (i.e., packets of data) used to transmit or receive information during transmission. The data stream 190 can contain data of different kinds, such as stock prices, audio data, video data, geospatial data, Internet data, mobile communication data, online game data, bank transaction data, sensor data, closed caption data, and real-time text. The data stream 190 may be a real-time stream or streamed stored data.

[0081] As the stream data elements are received, they can be placed in the circular buffer 121. For example, data element 101 is placed at position 121A, data element 102 at position 121B, data element 103 at position 121C, data element 104 at position 121D, data element 105 at position 121E, data element 106 at position 121F, data element 107 at position 121G, and data element 108 at position 121H.

[0082] Then, the data element 110 may be received. The data element 110 can be placed in the position 121A (covering the data element 101).

[0083] As shown in the figure, the calculation window size is 8 (i.e., n = 8) and the circular buffer 121 has 9 positions, 121A-121I. The data elements in the calculation window roll over as new data elements are placed in the circular buffer 121. For example, when the data element 109 is placed at position 121I, the calculation window 122 becomes the calculation window 122A. When the data element 110 is then placed at position 121A, the calculation window 122A becomes the calculation window 122B.

[0084] Referring to the computing system architecture 100A, in general, the component calculation module 131 includes v component calculation modules, which directly iteratively calculate the v (v = p > 1) components for a set of n data elements of a calculation window. v is the number of components that are directly iteratively calculated in a given algorithm that iteratively calculates the autocorrelation at a given delay, and it varies with the iterative algorithm used. As shown in Figure 1-1, the component calculation module 131 includes a component Cd1 calculation module 161 and a component Cdv calculation module 162, with v − 2 other component calculation modules between them, namely the component Cd2 calculation module, the component Cd3 calculation module, ..., and the component Cdv-1 calculation module. Each component calculation module calculates one specific component at the given delay. Each component calculation module includes an initialization module that initializes the component for the first calculation window and an algorithm that directly iteratively calculates the component for adjusted calculation windows. For example, the component Cd1 calculation module 161 includes an initialization module 132 to initialize the component Cd1 at the given delay and an iterative algorithm 133 to iteratively calculate it; the component Cdv calculation module 162 includes an initialization module 138 to initialize the component Cdv at the given delay and an iterative algorithm 139 to iteratively calculate it.

[0085] The initialization module 132 can initialize the component Cd1 when it is first used or when the autocorrelation calculation is reset. Similarly, the initialization module 138 can initialize the component Cdv when it is first used or when the autocorrelation calculation is reset.

[0086] Referring to Figure 1-1, the computing system architecture 100A further includes an autocorrelation calculation module 192. The autocorrelation calculation module 192 can calculate the autocorrelation 193 of a given delay as needed based on one or more iteratively calculated components of the given delay.

[0087] Figure 1-2 illustrates an example computing system architecture 100B which iteratively computes the autocorrelation of a data stream, in which some (v, 1 ≤ v < p) components are directly iteratively computed and the remaining (w = p − v) components are indirectly iteratively computed. In 100A, v = p > 1; in 100B, 1 ≤ v < p. Referring to Figure 1-2, the computing system architecture 100B includes a component calculation module 135. The output of the component calculation module 131 can serve as the input of the component calculation module 135, and the outputs of the calculation modules 131 and 135 can serve as the input of the autocorrelation calculation module 192, which can generate the autocorrelation 193. The component calculation module 135 generally includes w = p − v component calculation modules that indirectly iteratively calculate the w components. For example, the component calculation module 135 includes a component calculation module 163 for indirectly iteratively calculating the component Ci1 and a component calculation module 164 for indirectly iteratively calculating the component Ciw, with w − 2 other component calculation modules between them. Indirectly iteratively calculating the w components includes indirectly calculating each of the w components one by one. Indirectly iteratively calculating a component includes accessing and using one or more components other than the component itself. These accessed components may have been initialized, directly iteratively calculated, or indirectly iteratively calculated.

[0088] Figure 2 A flowchart of an example method 200 of iteratively calculating autocorrelation for a streaming large data set or data stream is illustrated. The method 200 will be described in connection with the components and data of the computing system architectures 100A and 100B, respectively.

[0089] The method 200 includes initializing v (1 ≤ v ≤ p, p > 1) components of the autocorrelation at a specified delay l (0 < l < n) for a calculation window of a specified size n (n > l) of a streaming big data set or data stream (201). For example, for the computing system architectures 100A and 100B, the v components of a calculation window stored in the circular buffer 121 can be accessed and initialized according to the component definitions; the streaming data is received and stored in the circular buffer in time order, so that, for example, data element 101 is received and saved earlier than 102. The component calculation module 131 can access the data elements 101, 102, 103, 104, 105, 106, 107, and 108 in the calculation window 122 on the buffer. The initialization module 132 can initialize the component Cd1 141 at the given delay with the data elements 101 to 108. As shown in the figure, the component Cd1 141 contains contribution 151, contribution 152, and other contributions 153. Contribution 151 is the contribution of data element 101 to the component Cd1 141 at the given delay. Contribution 152 is the contribution of data element 102 to the component Cd1 141. Other contributions 153 are the contributions of data elements 103 to 108 to the component Cd1 141. Similarly, the initialization module 138 can initialize the component Cdv 145 at the given delay with the data elements 101 to 108. As shown in the figure, the component Cdv 145 contains contribution 181, contribution 182, and other contributions 183. Contribution 181 is the contribution of data element 101 to the component Cdv 145 at the given delay. Contribution 182 is the contribution of data element 102 to the component Cdv 145. Other contributions 183 are the contributions of data elements 103 to 108 to the component Cdv 145.

[0090] The method 200 includes, when v < p, that is, as shown in Figure 1-2, when some components are directly iteratively calculated and some components are indirectly iteratively calculated: the calculation module 163 can indirectly iteratively calculate the component Ci1 based on one or more components other than Ci1, and the calculation module 164 can indirectly iteratively calculate the component Ciw based on one or more components other than Ciw. The one or more components may be initialized, directly iteratively calculated, or indirectly iteratively calculated.

[0091] The method 200 includes calculating, as needed, the autocorrelation with delay l using one or more components with delay l that have been initialized or iteratively calculated (210). When the autocorrelation is accessed, it is calculated based on one or more iteratively calculated components; otherwise only the v components are iteratively calculated.

[0092] The method 200 includes receiving a data element and saving the received data element to a buffer (202). For example, referring to 100A and 100B, the data element 109 may be received after the data elements 101-108 are received. The element 109 can be saved in the circular buffer 121 at the position 121I.

[0093] The method 200 includes adjusting the calculation window, including removing the earliest received data element from the calculation window and adding the newly received data element into the calculation window (203). For example, the data element 101 is removed from the calculation window 122, the data element 109 is added to the calculation window 122, and then the calculation window 122 is transformed into the adjusted calculation window 122A.
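Steps 202 and 203 can be sketched with a fixed-size circular buffer, mirroring how the circular buffer 121 overwrites the position of the earliest element (as data element 110 later overwrites position 121A). The class below is a hypothetical illustration, not the disclosed implementation.

```python
# Sketch of steps 202-203: a fixed-size circular buffer holding the
# calculation window. Receiving a new element overwrites the position
# of the oldest one, which adjusts the window in place.

class CircularWindow:
    def __init__(self, init_elements):
        self.buf = list(init_elements)   # e.g. data elements 101..108
        self.oldest = 0                  # index of earliest-received element

    def adjust(self, new_element):
        """Remove the earliest element, add the newest (step 203)."""
        removed = self.buf[self.oldest]
        self.buf[self.oldest] = new_element   # step 202: save into buffer
        self.oldest = (self.oldest + 1) % len(self.buf)
        return removed

    def window(self):
        """Elements in time order, earliest first."""
        return self.buf[self.oldest:] + self.buf[:self.oldest]
```

Applied to the example stream used later (8, 3, 6, 1, 9, 2 with n = 4), two adjustments turn the window 8, 3, 6, 1 into 3, 6, 1, 9 and then 6, 1, 9, 2.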

[0094] The method 200 includes directly iteratively calculating the v components of the autocorrelation with delay l for the adjusted calculation window (204), including accessing the l data elements adjacent to the removed data element and the l data elements adjacent to the added data element in the calculation window (205); accessing the v components of the autocorrelation with delay l (206); mathematically removing any contribution of the removed data element from each of the v components (207); and mathematically adding any contribution of the added data element to each of the v components (208). The details are as follows.

[0095] Directly iteratively calculating the v components of the autocorrelation of the specified delay l for the adjusted calculation window includes accessing the l data elements adjacent to the removed data element and the l data elements adjacent to the added data element in the calculation window (205). For example, if the specified delay l = 1, the iterative algorithm 133 can access the data element 102, which is adjacent to the removed data element 101, and the data element 108, which is adjacent to the added data element 109. If the specified delay l = 2, the iterative algorithm 133 can access the data elements 102 and 103, which are adjacent to the removed data element 101, and the data elements 107 and 108, which are adjacent to the added data element 109. Similarly, if the specified delay l = 1, the iterative algorithm 139 can access the data element 102 adjacent to the removed data element 101 and the data element 108 adjacent to the added data element 109. If the specified delay l = 2, the iterative algorithm 139 can access the data elements 102 and 103 adjacent to the removed data element 101 and the data elements 107 and 108 adjacent to the added data element 109.

[0096] Directly iteratively calculating the v components of the autocorrelation with delay l for the adjusted calculation window includes accessing the v (1≤v≤p) components of the autocorrelation with delay l of the pre-adjustment calculation window (206). For example, if the specified delay l = 1, the iterative algorithm 133 can access the component Cd1 141 with delay 1; if the specified delay l = 2, the iterative algorithm 133 can access the component Cd1 141 with delay 2; and so on. Similarly, if the specified delay l = 1, the iterative algorithm 139 can access the component Cdv 145 with delay 1; if the specified delay l = 2, the iterative algorithm 139 can access the component Cdv 145 with delay 2; and so on.

[0097] Directly iteratively calculating the v components of the autocorrelation of the specified delay l for the adjusted calculation window includes mathematically removing any contribution of the removed data element from each of the v components (207). For example, if the specified delay l = 2, directly iteratively calculating the component Cd1 143 with delay 2 may include the contribution removal module 133A mathematically removing the contribution 151 from the component Cd1 141. Similarly, directly iteratively calculating the component Cdv 147 with delay 2 may include the contribution removal module 139A mathematically removing the contribution 181 from the component Cdv 145. Contributions 151 and 181 come from data element 101.

[0098] Directly iteratively calculating the v components of the autocorrelation with delay l for the adjusted calculation window includes mathematically adding any contribution of the added data element to each of the v components (208). For example, if the specified delay l = 2, directly iteratively calculating the component Cd1 143 with delay 2 may include the contribution adding module 133B mathematically adding the contribution 154 to the component Cd1 141. Similarly, directly iteratively calculating the component Cdv 147 with delay 2 may include the contribution adding module 139B mathematically adding the contribution 184 to the component Cdv 145. Contributions 154 and 184 come from data element 109.
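For a concrete component such as a sum of squares (the component SS in the example algorithms later), steps 207 and 208 reduce to one subtraction and one addition per window adjustment, regardless of the window size. The helper below is a hypothetical sketch:

```python
# Sketch of steps 207-208 for one concrete component: a sum of
# squares SS. The contribution of the removed element is subtracted
# and the contribution of the added element is added, without
# touching the other n - 2 elements of the window.

def update_ss(ss, removed, added):
    ss -= removed ** 2   # step 207: remove contribution of old element
    ss += added ** 2     # step 208: add contribution of new element
    return ss
```

Applied to the worked example later in this document, `update_ss(110, 8, 9)` yields 127 and `update_ss(127, 3, 2)` yields 122.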

[0099] As shown in Figures 1-1 and 1-2, the component Cd1 143 includes contribution 152 (the contribution from data element 102), other contributions 153 (the contributions from data elements 103-108), and contribution 154 (the contribution from data element 109). Similarly, the component Cdv 147 includes contribution 182 (the contribution from data element 102), other contributions 183 (the contributions from data elements 103-108), and contribution 184 (the contribution from data element 109).

[0100] When the autocorrelation is accessed and v < p, that is, as shown in Figure 1-2, when some components are directly iteratively calculated and some components are indirectly iteratively calculated: the calculation module 163 can indirectly iteratively calculate the component Ci1 based on one or more components other than Ci1, and the calculation module 164 can indirectly iteratively calculate the component Ciw based on one or more components other than Ciw. The one or more components may be initialized, directly iteratively calculated, or indirectly iteratively calculated.

[0101] The method 200 includes calculating the autocorrelation as needed. When the autocorrelation is accessed, it is calculated based on one or more iteratively calculated components; otherwise only the v components are directly iteratively calculated. When the autocorrelation is accessed, the method 200 includes indirectly iteratively calculating, as needed, the w components with delay l (209). For example, in the architecture 100A, the autocorrelation calculation module 192 may calculate the autocorrelation 193 of the given delay. In the architecture 100B, the calculation module 163 can indirectly iteratively calculate Ci1 based on one or more components other than Ci1, and the calculation module 164 can indirectly iteratively calculate Ciw based on one or more components other than Ciw; the autocorrelation calculation module 192 can then calculate the autocorrelation 193 of the given delay (210). Once the autocorrelation of the given delay is calculated, the method 200 includes receiving the next stream data element to start the next round of iterative calculation. When a new round of iterative calculation starts, the adjusted calculation window of the previous round becomes the pre-adjustment calculation window of the new round.
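When the autocorrelation decomposes as a ratio of two maintained components, as in the example algorithms described later (ρ = covX / SX), step 210 is a single division; a minimal sketch, with hypothetical names:

```python
def autocorrelation_from_components(covx, sx):
    """Step 210: compute the lag-l autocorrelation on demand from two
    iteratively maintained components. The decomposition rho = covX / SX
    is assumed from the example algorithms; nothing else in the window
    needs to be touched when the autocorrelation is accessed."""
    return covx / sx
```

This is why only the v components need updating every round: the final division is deferred until the autocorrelation is actually accessed.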

[0102] As more data elements are accessed, 202-208 can be repeated, and 209-210 can be repeated as needed. For example, after the data element 109 is received and the components Cd1 143 to Cdv 147 are calculated, the data element 110 can be received (202). Once a new data element is received, the method 200 includes adjusting the calculation window by removing the earliest received data element from the calculation window and adding the latest received data element to the calculation window (203). For example, the data element 110 can be placed at the position 121A, overwriting the data element 101. The calculation window 122A becomes the calculation window 122B after the data element 102 is removed and the data element 110 is added.

[0103] The method 200 includes directly iteratively calculating the v components of the autocorrelation with delay l for the adjusted calculation window based on the v components of the pre-adjustment calculation window (204), which includes accessing the removed data element and the l data elements adjacent to it and the added data element and the l data elements adjacent to it in the calculation window (205), accessing the v components (206), mathematically removing any contribution of the removed data element from each of the v components (207), and mathematically adding any contribution of the added data element to each of the v components (208). For example, referring to 100A and 100B, at a specified delay such as l = 1, the iterative algorithm 133 can directly iteratively calculate the component Cd1 144 with delay 1 for the calculation window 122B based on the component Cd1 143 with delay 1 calculated for the calculation window 122A (204). The iterative algorithm 133 can access the data element 103 adjacent to the removed data element 102 and the data element 109 adjacent to the added data element 110 (205). The iterative algorithm 133 can access the component Cd1 143 with delay 1 (206). Directly iteratively calculating the component Cd1 144 with delay 1 includes the contribution removal module 133A mathematically removing the contribution 152, that is, the contribution of the data element 102, from the component Cd1 143 (207). Directly iteratively calculating the component Cd1 144 with delay 1 includes the contribution adding module 133B mathematically adding the contribution 155, that is, the contribution of the data element 110, to the component Cd1 143 (208). Similarly, at a specified delay such as l = 1, the iterative algorithm 139 can directly iteratively calculate the component Cdv 148 with delay 1 for the calculation window 122B based on the component Cdv 147 with delay 1 calculated for the calculation window 122A. The iterative algorithm 139 can access the data element 103 adjacent to the removed data element 102 and the data element 109 adjacent to the added data element 110. The iterative algorithm 139 can access the component Cdv 147 with delay 1. Directly iteratively calculating the component Cdv 148 with delay 1 includes the contribution removal module 139A mathematically removing the contribution 182, that is, the contribution of the data element 102, from the component Cdv 147. Directly iteratively calculating the component Cdv 148 with delay 1 includes the contribution adding module 139B mathematically adding the contribution 185, that is, the contribution of the data element 110, to the component Cdv 147.

[0104] As shown in the figure, the component Cd1 144 with delay l includes other contributions 153 (the contributions from data elements 103-108), contribution 154 (the contribution from data element 109), and contribution 155 (the contribution from data element 110), and the component Cdv 148 with delay l includes other contributions 183 (the contributions from data elements 103-108), contribution 184 (the contribution from data element 109), and contribution 185 (the contribution from data element 110).

[0105] The method 200 includes indirectly iteratively calculating w components and autocorrelation of a given delay as needed.

[0106] The method 200 includes indirectly iteratively calculating the w components and the autocorrelation of the given delay as needed, that is, only when the autocorrelation is accessed. If the autocorrelation is not accessed, the method 200 includes continuing to receive the next data element to be added for the next calculation window (202). If the autocorrelation is accessed, the method 200 includes indirectly iteratively calculating the w components of the given delay (209) and calculating the autocorrelation of the given delay based on one or more iteratively calculated components of the given delay (210).

[0107] When the next stream data element is accessed, the component Cd1 144 can be used to directly iteratively calculate the next component Cd1, and the component Cdv 148 can be used to directly iteratively calculate the next component Cdv.

[0108] Figure 3-1 illustrates the data elements removed from the calculation window 300A and the data elements added to the calculation window 300A when the autocorrelation is iteratively calculated on stream data. The calculation window 300A moves to the right. Referring to Figure 3-1, an existing data element is always removed from the left side of the calculation window 300A, and a data element is always added to the right side of the calculation window 300A.

[0109] Figure 3-2 illustrates the data accessed in the calculation window 300A when the autocorrelation is iteratively calculated on stream data. For the calculation window 300A, the first n data elements are accessed to initialize two or more components of the given delay for the first calculation window, and then w = p-v components and the autocorrelation are indirectly iteratively calculated as needed. As time goes by, an oldest data element, for example the (m+1)-th data element, is removed from the calculation window 300A, and a data element, for example the (m+n+1)-th data element, is added to the calculation window 300A. One or more components of the given delay of the adjusted calculation window are then directly iteratively calculated based on the two or more components calculated for the first calculation window. If the specified delay is 1, a total of 4 data elements are accessed: the removed data element, the data element adjacent to it, the added data element, and the data element adjacent to it. If the specified delay is 2, a total of 6 data elements are accessed: the removed data element, its 2 adjacent data elements, the added data element, and its 2 adjacent data elements. If the specified delay is l, a total of 2*(l+1) data elements are accessed: the removed data element, the l data elements adjacent to it, the added data element, and the l data elements adjacent to it. Then w = p-v components of the given delay and the autocorrelation are indirectly iteratively calculated as needed. Then the calculation window 300A is adjusted again by removing an old data element and adding a new data element. For a given iterative algorithm, v is a constant, and the number of operations for indirectly iteratively calculating the w = p-v components is also a constant, so for a given delay the amount of data access and computation is reduced to a constant. The larger the calculation window size n, the more significant the reduction of data access and computation.
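The access pattern described above can be made concrete with a small sketch; `accessed_indices` is a hypothetical helper that lists the window positions touched in one adjustment (position n standing for the newly added element):

```python
# Sketch of the data-access pattern of Figure 3-2: for a window of
# size n and delay l, one adjustment touches only the removed element,
# its l neighbours, the added element, and its l neighbours,
# 2 * (l + 1) elements in total, independent of n.

def accessed_indices(n, l):
    removed_side = list(range(0, l + 1))     # removed element + l neighbours
    added_side = list(range(n - l, n + 1))   # l neighbours + added element
    return removed_side + added_side

assert len(accessed_indices(100, 1)) == 4   # 2 * (1 + 1)
assert len(accessed_indices(100, 2)) == 6   # 2 * (2 + 1)
```

Note that the count depends only on l, which is why the per-round cost stays constant as n grows.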

[0110] Figure 3-3 illustrates the data elements removed from the calculation window 300B and the data elements added to the calculation window 300B when the autocorrelation is iteratively calculated on stream data. The calculation window 300B moves to the left. Referring to Figure 3-3, a newer data element is always removed from the right side of the calculation window 300B, and an older data element is always added to the left side of the calculation window 300B.

[0111] Figure 3-4 illustrates the data accessed from the calculation window 300B when the autocorrelation is iteratively calculated on stream data. For the calculation window 300B, the first n data elements are accessed to initialize two or more components of the given delay for the first calculation window, and then w = p-v components and the autocorrelation are indirectly iteratively calculated as needed. As time goes by, a data element, for example the (m+n)-th data element, is removed from the calculation window 300B, and a data element, for example the m-th data element, is added to the calculation window 300B. The number of data elements to be accessed is the same as described for Figure 3-2; only the direction in which the calculation window moves differs.

[0112] Figure 4-1 illustrates the definition of the autocorrelation. Suppose X = (x_(m+1), x_(m+2), ..., x_(m+n)) is a calculation window of size n of a data stream containing the data involved in the autocorrelation calculation. The calculation window can move to the right or to the left. For example, when the autocorrelation of the latest data is to be calculated, the calculation window moves to the right: a data element is removed from the left side of the calculation window and a data element is added to its right side. When the autocorrelation of older data is to be reviewed, the calculation window moves to the left: a data element is removed from the right side of the calculation window and a data element is added to its left side. The equations used to iteratively calculate the components differ between the two cases. To distinguish them, the adjusted calculation window is denoted X^I in the former case and X^II in the latter case. Equations 401 and 402 are the traditional equations for the sum s_k and the average x̄_k, respectively, of all data elements in the calculation window X of size n for the k-th round. Equation 403 is the traditional equation for the autocorrelation ρ(k,l) of the given delay l of the calculation window X for the k-th round. Equations 404 and 405 are the traditional equations for the sum s^I_(k+1) and the average x̄^I_(k+1), respectively, of all data elements in the adjusted calculation window X^I of size n for the (k+1)-th round. Equation 406 is the traditional equation for calculating the autocorrelation ρ^I(k+1,l) of the given delay l for the adjusted calculation window X^I for the (k+1)-th round. As mentioned above, when the calculation window moves to the left, the adjusted calculation window is denoted X^II. Equations 407 and 408 are the traditional equations for the sum s^II_(k+1) and the average x̄^II_(k+1), respectively, of all data elements in the adjusted calculation window X^II of size n for the (k+1)-th round. Equation 409 is the traditional equation for calculating the autocorrelation ρ^II(k+1,l) of the given delay l for the adjusted calculation window X^II for the (k+1)-th round.
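The traditional equations 401 to 403 appear only in the figures; their standard forms can be reconstructed as follows (a reconstruction for the reader's convenience; the patent figures remain authoritative):

```latex
% Reconstructed standard forms of equations 401--403.
% Equations 401 and 402: sum and average of the k-th calculation window
s_k = \sum_{i=m+1}^{m+n} x_i , \qquad
\bar{x}_k = \frac{s_k}{n}
% Equation 403: autocorrelation of delay l for the k-th round
\rho_{(k,l)} =
  \frac{\displaystyle\sum_{i=m+l+1}^{m+n} (x_i-\bar{x}_k)(x_{i-l}-\bar{x}_k)}
       {\displaystyle\sum_{i=m+1}^{m+n} (x_i-\bar{x}_k)^2}
```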

[0113] To show how to iteratively calculate autocorrelation using components, three different iterative autocorrelation algorithms are provided as examples. Every time there is a data change in the calculation window, a new round of calculation begins (for example, 122→122A→122B). A sum or average is the basic component for calculating autocorrelation. The equation for iteratively calculating a sum or average value is an iterative component equation used by all examples of iterative autocorrelation calculation algorithms.

[0114] Figure 4-2 illustrates the first example iterative autocorrelation calculation algorithm (iterative algorithm 1). Equations 401 and 402 can be used to initialize the components s_k and/or x̄_k, respectively. Equations 410, 411, and 412 can be used to initialize the components SS_k, SX_k, and covX(k,l), respectively. Equation 413 can be used to calculate the autocorrelation ρ(k,l). When the calculation window moves to the right, iterative algorithm 1 includes the iterative calculation of the components s^I_(k+1) or x̄^I_(k+1), SS^I_(k+1), SX^I_(k+1), and covX^I(k+1,l); once the components SX^I_(k+1) and covX^I(k+1,l) are calculated, the autocorrelation ρ^I(k+1,l) can be calculated based on them. Once s_k and/or x̄_k are available, equations 414 and 415 can be used to iteratively calculate the components s^I_(k+1) and x̄^I_(k+1) of the adjusted calculation window X^I, respectively. Once the component SS_k is available, equation 416 can be used to directly iteratively calculate the component SS^I_(k+1) of the adjusted calculation window X^I. Once the components s^I_(k+1) or x̄^I_(k+1) and SS^I_(k+1) are available, equation 417 can be used to indirectly iteratively calculate the component SX^I_(k+1) of the adjusted calculation window X^I. Once the components covX(k,l), SS^I_(k+1), s_k or x̄_k, and s^I_(k+1) or x̄^I_(k+1) are available, equation 418 can be used to directly iteratively calculate the component covX^I(k+1,l) of the adjusted calculation window X^I. Equations 414, 415, 417, and 418 each contain multiple equations, but only one of them is needed, depending on whether the sum or the average or both are available. Once the components covX^I(k+1,l) and SX^I_(k+1) are calculated, equation 419 can be used to indirectly iteratively calculate the autocorrelation ρ^I(k+1,l) of the given delay l of the adjusted calculation window X^I. When the calculation window moves to the left, iterative algorithm 1 includes the iterative calculation of the components s^II_(k+1) or x̄^II_(k+1), SS^II_(k+1), SX^II_(k+1), and covX^II(k+1,l); once the components SX^II_(k+1) and covX^II(k+1,l) are calculated, the autocorrelation ρ^II(k+1,l) can be calculated based on them. Equations 420 and 421 can be used to iteratively calculate the components s^II_(k+1) and x̄^II_(k+1) of the adjusted calculation window X^II, respectively, once s_k and/or x̄_k are available. Equation 422 can be used to directly iteratively calculate the component SS^II_(k+1) of the adjusted calculation window X^II once the component SS_k is available. Equation 423 can be used to indirectly iteratively calculate the component SX^II_(k+1) of the adjusted calculation window X^II once the components s^II_(k+1) or x̄^II_(k+1) and SS^II_(k+1) are available. Equation 424 can be used to directly iteratively calculate the component covX^II(k+1,l) of the adjusted calculation window X^II once the components covX(k,l), SS^II_(k+1), s_k or x̄_k, and s^II_(k+1) or x̄^II_(k+1) are available. Equations 420, 421, 423, and 424 each contain multiple equations, but only one of them is needed, depending on whether the sum or the average or both are available. Equation 425 can be used to indirectly iteratively calculate the autocorrelation ρ^II(k+1,l) of the given delay l of the adjusted calculation window X^II once the components covX^II(k+1,l) and SX^II_(k+1) are calculated.

[0115] Figure 4-3 illustrates the second example iterative autocorrelation calculation algorithm (iterative algorithm 2). Equations 401 and 402 can be used to initialize the components s_k and/or x̄_k, respectively. Equations 426 and 427 can be used to initialize the components SX_k and covX(k,l), respectively. Equation 428 can be used to calculate the autocorrelation ρ(k,l). When the calculation window moves to the right, iterative algorithm 2 includes the iterative calculation of the components s^I_(k+1) or x̄^I_(k+1), SX^I_(k+1), and covX^I(k+1,l); once the components SX^I_(k+1) and covX^I(k+1,l) are calculated, the autocorrelation ρ^I(k+1,l) can be calculated based on them. Once s_k and/or x̄_k are available, equations 429 and 430 can be used to iteratively calculate the components s^I_(k+1) and x̄^I_(k+1) of the adjusted calculation window X^I, respectively. Once the components SX_k and s^I_(k+1) and/or x̄^I_(k+1) are available, equation 431 can be used to directly iteratively calculate the component SX^I_(k+1) of the adjusted calculation window X^I. Equation 432 can be used to directly iteratively calculate the component covX^I(k+1,l) of the adjusted calculation window X^I once the components covX(k,l), s_k or x̄_k, and s^I_(k+1) or x̄^I_(k+1) are available. Equations 429, 430, 431, and 432 each contain multiple equations, but only one of them is needed, depending on whether the sum or the average or both are available. Once the components covX^I(k+1,l) and SX^I_(k+1) are calculated, equation 433 can be used to indirectly iteratively calculate the autocorrelation ρ^I(k+1,l) of the given delay l of the adjusted calculation window X^I. When the calculation window moves to the left, iterative algorithm 2 includes the iterative calculation of the components s^II_(k+1) or x̄^II_(k+1), SX^II_(k+1), and covX^II(k+1,l); once the components SX^II_(k+1) and covX^II(k+1,l) are calculated, the autocorrelation ρ^II(k+1,l) can be calculated based on them. Equations 434 and 435 can be used to iteratively calculate the components s^II_(k+1) and x̄^II_(k+1) of the adjusted calculation window X^II, respectively, once s_k and/or x̄_k are available. Equation 436 can be used to directly iteratively calculate the component SX^II_(k+1) of the adjusted calculation window X^II once the components SX_k and s^II_(k+1) and/or x̄^II_(k+1) are available. Equation 437 can be used to directly iteratively calculate the component covX^II(k+1,l) of the adjusted calculation window X^II once the components covX(k,l), s_k or x̄_k, and s^II_(k+1) or x̄^II_(k+1) are available. Equations 434, 435, 436, and 437 each contain multiple equations, but only one of them is needed, depending on whether the sum or the average or both are available. Equation 438 can be used to indirectly iteratively calculate the autocorrelation ρ^II(k+1,l) of the given delay l of the adjusted calculation window X^II once the components covX^II(k+1,l) and SX^II_(k+1) are calculated.

[0116] Figure 4-4 illustrates the third example iterative autocorrelation calculation algorithm (iterative algorithm 3). Equations 401 and 402 can be used to initialize the components s_k and/or x̄_k, respectively. Equations 439 and 440 can be used to initialize the components SX_k and covX(k,l), respectively. Equation 441 can be used to calculate the autocorrelation ρ(k,l). When the calculation window moves to the right, iterative algorithm 3 includes the iterative calculation of the components s^I_(k+1) or x̄^I_(k+1), SX^I_(k+1), and covX^I(k+1,l); once the components SX^I_(k+1) and covX^I(k+1,l) are calculated, the autocorrelation ρ^I(k+1,l) can be calculated based on them. Equations 442 and 443 can be used to iteratively calculate the components s^I_(k+1) and x̄^I_(k+1) of the adjusted calculation window X^I, respectively, once s_k and/or x̄_k are available. Equation 444 can be used to directly iteratively calculate the component SX^I_(k+1) of the adjusted calculation window X^I once the components SX_k, s_k and/or x̄_k, and s^I_(k+1) and/or x̄^I_(k+1) are available. Equation 445 can be used to directly iteratively calculate the component covX^I(k+1,l) of the adjusted calculation window X^I once the components covX(k,l), s_k or x̄_k, and s^I_(k+1) or x̄^I_(k+1) are available. Equations 442, 443, 444, and 445 each contain multiple equations, but only one of them is needed, depending on whether the sum or the average or both are available. Equation 446 can be used to indirectly iteratively calculate the autocorrelation ρ^I(k+1,l) of the given delay l of the adjusted calculation window X^I once the components covX^I(k+1,l) and SX^I_(k+1) are calculated. When the calculation window moves to the left, iterative algorithm 3 includes the iterative calculation of the components s^II_(k+1) or x̄^II_(k+1), SX^II_(k+1), and covX^II(k+1,l); once the components SX^II_(k+1) and covX^II(k+1,l) are calculated, the autocorrelation ρ^II(k+1,l) can be calculated based on them. Equations 447 and 448 can be used to iteratively calculate the components s^II_(k+1) and x̄^II_(k+1) of the adjusted calculation window X^II, respectively, once s_k and/or x̄_k are available. Equation 449 can be used to directly iteratively calculate the component SX^II_(k+1) of the adjusted calculation window X^II once the components SX_k, s_k and/or x̄_k, and s^II_(k+1) and/or x̄^II_(k+1) are available. Equation 450 can be used to directly iteratively calculate the component covX^II(k+1,l) of the adjusted calculation window X^II once the components covX(k,l), s_k or x̄_k, and s^II_(k+1) or x̄^II_(k+1) are available. Equations 447, 448, 449, and 450 each contain multiple equations, but only one of them is needed, depending on whether the sum or the average or both are available. Once the components covX^II(k+1,l) and SX^II_(k+1) are calculated, equation 451 can be used to indirectly iteratively calculate the autocorrelation ρ^II(k+1,l) of the given delay l of the adjusted calculation window X^II.

[0117] To show the iterative autocorrelation algorithms and compare them with the traditional algorithm, three examples are given below, using data from three moving calculation windows. For the traditional algorithm, the calculation process is exactly the same for all three calculation windows. For the iterative algorithms, the first calculation window initializes two or more components, and the second and third calculation windows perform iterative calculation.

[0118] Figure 5-1, Figure 5-2, and Figure 5-3 show the first, second, and third calculation windows of a calculation example, respectively. The calculation window 503 includes the first four data elements of the data stream 501: 8, 3, 6, 1. The calculation window 504 includes four data elements of the data stream 501: 3, 6, 1, 9. The calculation window 505 includes four data elements of the data stream 501: 6, 1, 9, 2. The calculation example assumes that the calculation window moves from left to right. The data stream 501 may be streaming big data or stream data. The calculation window size 502 (n) is 4.
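The example can be cross-checked with a direct (traditional) computation. The function below recomputes everything from scratch for each window, which is exactly the repeated work the iterative algorithms avoid; it is a brute-force reference sketch, not the disclosed method.

```python
# Traditional (non-iterative) lag-l autocorrelation of one window,
# recomputed from scratch each round: the baseline that the iterative
# algorithms are compared against.

def autocorrelation(window, l):
    n = len(window)
    mean = sum(window) / n
    sx = sum((x - mean) ** 2 for x in window)
    covx = sum((window[i] - mean) * (window[i - l] - mean)
               for i in range(l, n))
    return covx / sx

# Windows 503, 504, 505 of data stream 501 (n = 4, delay 1):
results = [autocorrelation(w, 1)
           for w in ([8, 3, 6, 1], [3, 6, 1, 9], [6, 1, 9, 2])]
```

For these three windows the means are 4.5, 4.75, and 4.5, and all three lag-1 autocorrelations are negative.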

[0119] Firstly, the autocorrelation of the calculation windows 503, 504, and 505 with a delay of 1 is calculated by the traditional algorithm.

[0120] Calculate the autocorrelation with delay of 1 for the calculation window 503:

[0121]

[0122]

[0123]

[0124]

[0125] Without any optimization, calculating the autocorrelation with delay 1 for a calculation window of size 4 takes 2 divisions, 7 multiplications, 8 additions, and 10 subtractions.

[0126] The same equations and process can be used to calculate the autocorrelation with delay 1 for the calculation window 504 shown in Figure 5-2 and for the calculation window 505 shown in Figure 5-3. Each of those two calculations also includes 2 divisions, 7 multiplications, 8 additions, and 10 subtractions without optimization. In general, without optimization, the traditional algorithm needs 2 divisions, 2n-l multiplications, 3n-(l+3) additions, and 3n-2l subtractions to calculate the autocorrelation with a given delay l for a calculation window of size n.
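The general operation counts stated above can be checked against the n = 4, l = 1 example; `traditional_op_counts` is a hypothetical helper encoding those formulas:

```python
# Operation counts of the unoptimized traditional algorithm as stated
# above: 2 divisions, 2n - l multiplications, 3n - (l + 3) additions,
# and 3n - 2l subtractions for window size n and delay l.

def traditional_op_counts(n, l):
    return {"div": 2,
            "mul": 2 * n - l,
            "add": 3 * n - (l + 3),
            "sub": 3 * n - 2 * l}
```

For n = 4 and l = 1 this reproduces the counts of the worked example: 2 divisions, 7 multiplications, 8 additions, and 10 subtractions.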

[0127] The following iterative algorithm 1 is used to calculate the autocorrelation of the calculation windows 503, 504, and 505 with a delay of 1, respectively.

[0128] Calculate the autocorrelation with delay of 1 for the calculation window 503:

[0129] 1. Initialize the components x̄_1, SS_1, SX_1, and covX(1,1) of the first round with equations 402, 410, 411, and 412, respectively:

[0130]

[0131]

[0132]

[0133]

[0134] 2. Calculate the autocorrelation ρ(1,1) of the first round with equation 413:

[0135]

[0136] There are 2 divisions, 9 multiplications, 8 additions and 7 subtractions when calculating the autocorrelation with delay of 1 for the calculation window 503.

[0137] Calculate the autocorrelation with delay of 1 for the calculation window 504:

[0138] 1. Iteratively calculate the components x̄_2, SS_2, SX_2, and covX(2,1) of the second round with equations 415, 416, 417, and 418, respectively:

[0139]

[0140] SS_2 = SS_1 + x_(m+1+4)² - x_(m+1)² = 110 + 9² - 8² = 110 + 81 - 64 = 127

[0141]

[0142]

[0143] 2. Calculate the autocorrelation ρ(2,1) of the second round with equation 419:

[0144]

[0145] There are 2 divisions, 10 multiplications, 8 additions, and 7 subtractions when iteratively calculating the autocorrelation with delay 1 for the calculation window 504.

[0146] Calculate the autocorrelation with delay of 1 for the calculation window 505:

[0147] 1. Iteratively calculate the components x̄_3, SS_3, SX_3, and covX(3,1) of the third round with equations 415, 416, 417, and 418, respectively:

[0148]

[0149] SS₃ = SS₂ + x_{m+2+4}² - x_{m+2}² = 127 + 2² - 3² = 127 + 4 - 9 = 122

[0150]

[0151]

[0152] 2. Calculate the third-round autocorrelation ρ(3,1) with equation 419:

[0153]

[0154] There are 2 divisions, 10 multiplications, 8 additions and 7 subtractions when calculating the autocorrelation with a delay of 1 for calculation window 505.
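The iterative scheme above can be sketched in code. The patent's exact component equations (402, 410-419) appear only as images and are not reproduced in this text, so the sketch below uses an algebraically equivalent set of O(1)-updatable sums of my own choosing (S, SS, P are illustrative names, not the patent's); only the SS update mirrors the one shown in paragraph [0140].

```python
from collections import deque

class DelayedAutocorrelation:
    """Maintain the delay-l autocorrelation of a sliding window of size n.

    Components kept up to date in O(1) per slide:
        S  - sum of the window elements
        SS - sum of squared elements (updated as in paragraph [0140])
        P  - sum of lagged products x_i * x_{i-l}
    """

    def __init__(self, window, l):
        n = len(window)
        assert 0 < l < n
        self.l, self.n = l, n
        self.w = deque(window)
        self.S = sum(window)
        self.SS = sum(x * x for x in window)
        self.P = sum(window[i] * window[i - l] for i in range(l, n))

    def slide(self, x_new):
        """Slide the window right by one element."""
        w, l, n = self.w, self.l, self.n
        x_old = w[0]
        # one lagged product leaves with x_old, one enters with x_new
        self.P += x_new * w[n - l] - w[l] * x_old
        self.S += x_new - x_old
        self.SS += x_new * x_new - x_old * x_old   # cf. the update in [0140]
        w.popleft()
        w.append(x_new)

    def autocorrelation(self):
        n, l, w = self.n, self.l, self.w
        mean = self.S / n
        sx = self.SS - n * mean * mean             # sum of squared deviations
        head = sum(w[i] for i in range(l))          # first l elements
        tail = sum(w[n - 1 - i] for i in range(l))  # last l elements
        cov = self.P - mean * (2 * self.S - head - tail) + (n - l) * mean * mean
        return cov / sx
```

For the window [1, 2, 3, 4] with delay 1 the result is 0.25, and it remains 0.25 after sliding in the element 5, matching a from-scratch recomputation over [2, 3, 4, 5].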

[0155] The following iterative algorithm 2 is used to calculate the autocorrelation of the calculation windows 503, 504, and 505 with a delay of 1, respectively.

[0156] Calculate the autocorrelation with a delay of 1 for calculation window 503:

[0157] 1. Initialize the first-round components x̄₁, SX₁ and covX(1,1) with equations 402, 426 and 427, respectively:

[0158]

[0159]

[0160]

[0161] 2. Calculate the first-round autocorrelation ρ(1,1) with equation 428:

[0162]

[0163] There are 2 divisions, 7 multiplications, 8 additions and 10 subtractions when calculating the autocorrelation with a delay of 1 for calculation window 503.

[0164] Calculate the autocorrelation with a delay of 1 for calculation window 504:

[0165] 1. Iteratively calculate the second-round components x̄₂, SX₂ and covX(2,1) with equations 430, 431 and 432, respectively:

[0166]

[0167]

[0168]

[0169] 2. Calculate the second-round autocorrelation ρ(2,1) with equation 433:

[0170]

[0171] There are 2 divisions, 7 multiplications, 10 additions and 7 subtractions when calculation window 504 iteratively calculates the autocorrelation with a delay of 1.

[0172] Calculate the autocorrelation with a delay of 1 for calculation window 505:

[0173] 1. Iteratively calculate the third-round components x̄₃, SX₃ and covX(3,1) with equations 430, 431 and 432, respectively:

[0174]

[0175]

[0176]

[0177] 2. Calculate the third-round autocorrelation ρ(3,1) with equation 433:

[0178]

[0179] There are 2 divisions, 7 multiplications, 10 additions and 7 subtractions when calculation window 505 iteratively calculates the autocorrelation with a delay of 1.

[0180] The following iterative algorithm 3 is used to calculate the autocorrelation of the calculation windows 503, 504, and 505 with a delay of 1, respectively.

[0181] Calculate the autocorrelation with a delay of 1 for calculation window 503:

[0182] 1. Initialize the first-round components x̄₁, SX₁ and covX(1,1) with equations 402, 439 and 440, respectively:

[0183]

[0184]

[0185]

[0186]

[0187] 2. Calculate the first-round autocorrelation ρ(1,1) with equation 441:

[0188]

[0189] There are 2 divisions, 7 multiplications, 8 additions and 10 subtractions when calculating the autocorrelation with a delay of 1 for calculation window 503.

[0190] Calculate the autocorrelation with a delay of 1 for calculation window 504:

[0191] 1. Iteratively calculate the second-round components x̄₂, SX₂ and covX(2,1) with equations 443, 444 and 445, respectively:

[0192]

[0193]

[0194]

[0195] 2. Calculate the second-round autocorrelation ρ(2,1) with equation 446:

[0196]

[0197] There are 2 divisions, 7 multiplications, 9 additions and 8 subtractions when calculation window 504 iteratively calculates the autocorrelation with a delay of 1.

[0198] Calculate the autocorrelation with a delay of 1 for calculation window 505:

[0199] 1. Iteratively calculate the third-round components x̄₃, SX₃ and covX(3,1) with equations 443, 444 and 445, respectively:

[0200]

[0201]

[0202]

[0203] 2. Calculate the third-round autocorrelation ρ(3,1) with equation 446:

[0204]

[0205] There are 2 divisions, 7 multiplications, 9 additions and 8 subtractions when calculation window 505 iteratively calculates the autocorrelation with a delay of 1.

[0206] In the above three examples, the mean is used in the iterative autocorrelation calculation. The sum can also be used for the iterative calculation, although the operands differ. In addition, the calculation window in the above three examples moves from left to right. When the calculation window moves from right to left, the calculation process is similar; it simply applies a different set of equations.
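The mean-based and sum-based component forms are algebraically equivalent, which is why either can drive the iteration. A minimal check of this equivalence for one component (SX, the sum of squared deviations), using example data and a decomposition of my own choosing rather than the patent's equations:

```python
window = [8, 3, 6, 1]            # example data, not taken from the patent
n = len(window)
S = sum(window)                  # running sum
mean = S / n                     # running mean
SS = sum(x * x for x in window)  # sum of squares

# SX computed via the mean and via the sum give the same value:
sx_mean = SS - n * mean * mean
sx_sum = SS - S * S / n
assert abs(sx_mean - sx_sum) < 1e-9
```

Only the operands differ: the sum-based form needs S, the mean-based form needs S/n.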

[0207] Figure 6-1 illustrates the comparison of the amount of calculation between the traditional autocorrelation algorithm and the iterative autocorrelation algorithms when n = 4 and the delay is 1. As shown in the figure, the numbers of divisions, multiplications, additions and subtractions of any of the iterative algorithms are similar to those of the traditional algorithm.

[0208] Figure 6-2 illustrates the comparison of the amount of calculation between the traditional autocorrelation algorithm and the iterative autocorrelation algorithms when n = 1,000,000 and the delay is 1. As shown in the figure, any of the iterative algorithms performs far fewer multiplications, additions and subtractions than the traditional algorithm. With the iterative algorithms, data that would otherwise need to be processed on thousands of computers can be handled on a single computer. This greatly improves calculation efficiency, saves computing resources, and reduces the energy consumption of computing devices, so that judging the given-delay repeatability of streaming data in real time becomes efficient and low-consumption, and some scenarios of judging the given-delay repeatability of streaming data in real time turn from impossible into possible.
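The figures themselves are not reproduced in this text, but the gap they depict follows from the operation counts stated in paragraphs [0126] and [0145]. A quick tally for n = 1,000,000 and l = 1 (the traditional counts are per window; the iterative counts are per window slide):

```python
n, l = 1_000_000, 1

# traditional algorithm, per window (counts from paragraph [0126])
trad_div = 2
trad_mul = 2 * n - l            # multiplications
trad_add = 3 * n - (l + 3)      # additions
trad_sub = 3 * n - 2 * l        # subtractions
trad_total = trad_div + trad_mul + trad_add + trad_sub

# iterative algorithm 1, per window slide (counts from paragraph [0145]):
# 2 divisions, 10 multiplications, 8 additions, 7 subtractions
iter_total = 2 + 10 + 8 + 7

print(trad_total)   # roughly 8 million operations per window
print(iter_total)   # 27 operations per window slide
```

That is a difference of about five orders of magnitude per window, independent of how the components are decomposed.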

[0209] The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The embodiments described in this application are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are embraced within their scope.
