Audio quality evaluating method and device, electronic device and storage medium
A technology relating to audio quality and audio files, applied in speech analysis, speech recognition, instruments, etc. It addresses problems such as time-consuming and labor-intensive manual checking, low audio quality evaluation efficiency, and low coverage of detection samples.
Active Publication Date: 2019-05-31
北京海天瑞声科技股份有限公司 (Beijing Haitian Ruisheng Science Technology Co., Ltd.)
Abstract
The invention provides an audio quality evaluation method and device, an electronic device, and a storage medium. The method includes: obtaining the speech rate value of the speaker corresponding to each voice segment according to the effective voice duration of each voice segment corresponding to an audio file and the corpus text corresponding to each voice segment; performing statistical analysis according to the speech rate value of the speaker corresponding to each voice segment and a preset rule to obtain a statistical result; and obtaining the quality evaluation result of the audio file according to the statistical result and preset conditions. By performing engineered, automated analysis according to the speaker's speech rate and the preset rule, the method can effectively improve audio quality evaluation efficiency and the coverage of detection samples.
Examples
Example Embodiment
[0040] In order to make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work shall fall within the protection scope of the present invention.
[0041] Definition of professional terms:
[0042] Corpus: a basic resource of language knowledge carried by electronic computers. A corpus stores language materials that have actually appeared in real use of a language.
[0043] Voice Activity Detection (VAD): also known as voice endpoint detection or voice boundary detection, it refers to detecting the presence or absence of voice in a noisy environment. It is usually used in voice processing systems such as voice coding and voice enhancement, where it helps reduce the voice coding rate, save communication bandwidth, reduce the energy consumption of mobile devices, and improve the recognition rate.
[0044] Speech rate: the number of words or other language symbols expressing meaning per unit time. Across different languages and cultures, the same speech rate can carry different amounts of information.
[0045] In the process of producing a long free-dialogue corpus, audio cutting and other causes often leave a large number of invalid silent segments in the speech segments, or even leave a speech segment with no valid speech content at all. Alternatively, the annotated corpus text may not match the actual audio content of the speech segment. Such phenomena result in defects in audio file quality.
[0046] In the prior art, manual spot checks are often used to evaluate the audio quality of audio files. However, manual spot checks are time-consuming and laborious, resulting in low efficiency, and because the spot checks sample randomly from the many speech segments, the number of samples is limited, resulting in low coverage of the detection samples. If many samples are drawn, the workload of the auditors is large and detection efficiency is low; if few samples are drawn, it is difficult to ensure the accuracy of the detection results.
[0047] In addition, in the process of manual spot checking, factors specific to the individual, such as whether the listener is rigorous, careful, and proficient in or familiar with the language of the voice segment, and how easily the listener is disturbed by outside factors during the check, all affect the accuracy of the detection results, resulting in low accuracy.
[0048] Based on the foregoing problems, embodiments of the present invention provide an audio quality evaluation method to improve audio quality evaluation efficiency and effectively improve detection coverage.
[0049] FIG. 1 is a schematic flowchart of Embodiment 1 of the audio quality evaluation method provided by the present invention. The execution subject of the audio quality evaluation method provided by the embodiment of the present invention may be the audio quality evaluation device provided by the present invention, which may be implemented by any software and/or hardware. The execution subject may also be an electronic device provided by the present invention; illustratively, the electronic device may be a computer, a palmtop computer, or the like. In this embodiment, a computer is taken as the execution subject for description.
[0050] As shown in FIG. 1, the method of this embodiment includes:
[0051] S101: Acquire the speech rate value of the speaker corresponding to each voice segment according to the effective voice duration of each voice segment corresponding to the audio file and the corpus text corresponding to each voice segment.
[0052] The audio file is cut into multiple voice segments. Usually, each voice segment contains actual voice content; sometimes, due to audio cutting, a voice segment may contain invalid silent portions. In this step, the effective voice duration of each voice segment is the length of time during which voice content actually occurs in the segment. For example, if the total duration of a voice segment is 1 minute, actual voice content occurs during a continuous 30 seconds, and there is no actual voice content during the remaining continuous 30 seconds, then the effective voice duration of the voice segment is 30 seconds.
[0053] During corpus production, each speech segment is annotated with its corresponding corpus text, which contains the text corresponding to the speech content in the segment. The corpus text can be stored as a document, and its name can be kept consistent with the sequence number of the speech segment.
[0054] In this step, the speech rate value of the speaker corresponding to each speech segment can then be obtained from the effective speech duration of each segment and the total number of characters in its corpus text. Specifically, the ratio of the total number of characters in the corpus text to the effective speech duration is taken as the speech rate value of the speaker corresponding to the speech segment. Since the audio file corresponds to multiple speech segments, the speech rate value of the speaker corresponding to each of them can be obtained in this manner. It should be understood that the speech rate value in this embodiment is the average speech rate over the effective speech duration.
[0055] For example, suppose the total duration of a certain voice segment is 1 minute (the language of the voice segment is Chinese), the effective voice duration is 30 seconds, and the corpus text contains 120 Chinese characters. Then the speech rate value of the speaker corresponding to the voice segment is 4 characters/second.
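The speech rate computation of S101 can be sketched as follows (a minimal illustration; the function name and the error check are assumptions, not part of the patent):

```python
def speech_rate(char_count, effective_seconds):
    """Speech rate value: total characters in the corpus text divided
    by the effective speech duration, in characters per second."""
    if effective_seconds <= 0:
        raise ValueError("effective speech duration must be positive")
    return char_count / effective_seconds

# The worked example above: 120 Chinese characters over 30 s of
# effective speech gives 4 characters/second.
rate = speech_rate(120, 30)  # 4.0
```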
[0056] S102: Perform statistical analysis according to the speech rate value of the speaker corresponding to the speech segment and preset rules, and obtain statistical results.
[0057] The purpose of this step is to classify speech segments into normal-type and abnormal-type segments. A normal-type speech segment contains valid speech content; an abnormal-type speech segment has an abnormality in its speech content, for example a mismatch among the voice content, the effective voice duration, and the corpus text. This step performs statistical analysis, classifying and aggregating the speech segments according to the speech rate value of each segment's speaker and preset rules, and then obtains statistical results from the classification and aggregation, where the statistical results include one or more of: the number of normal-type speech segments, the number of abnormal-type speech segments, the ratio of the number of normal-type speech segments to the total number of speech segments, and the ratio of the number of abnormal-type speech segments to the total number of speech segments. The speech rate value is a basic factor reflecting whether the speech content is normal; statistical analysis of the speech segments according to the speech rate value and preset rules is not only simple and convenient but also ensures the accuracy of the detection result.
[0058] In one possible implementation, statistical analysis of the speech segments is started according to a control instruction input by the user. Specifically, after the computer receives the control instruction, it classifies the speech segments according to the speech rate value of the speaker corresponding to each segment and the preset rules, determining segments with excessively high or low speech rate values as abnormal types and segments whose speech rate values fall within an appropriate range as normal types, and then obtains the statistical results. For example, if the speech rate falls within the range of 0-1 words/second, the speech segment is determined to be an abnormal type. The control instruction may be input manually or by voice.
[0059] In another possible implementation manner, after obtaining the speech rate value of the speaker corresponding to each speech segment, the computer automatically performs statistical analysis on the speech segment to obtain the statistical result.
[0060] S103: Acquire a quality evaluation result of the audio file according to the statistical result and the preset condition.
[0061] One possible implementation is to determine the quality evaluation result of the audio file according to the number of normal-type speech segments and the preset condition. Specifically, the preset condition can be expressed as a specific value, and the number of normal-type speech segments is compared with it: if the number of normal-type speech segments is greater than the preset condition (i.e., the preset value), the audio quality of the audio file is determined to meet the standard; otherwise, it does not meet the standard.
[0062] Another possible implementation is to determine the quality evaluation result of the audio file according to the number of abnormal-type speech segments and the preset condition. Specifically, the preset condition can be expressed as a specific value, and the number of abnormal-type speech segments is compared with it: if the number of abnormal-type speech segments is less than the preset condition (i.e., the preset value), the audio quality of the audio file is determined to meet the standard; otherwise, it does not meet the standard.
[0063] Further, the audio quality evaluation result of the audio file can also be determined according to the proportion of normal-type speech segments, or the proportion of abnormal-type speech segments, and the corresponding preset conditions. The specific implementation is similar to the above two implementations.
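The two count-based decisions above can be sketched as follows (the function names are hypothetical, and the preset value is a placeholder to be chosen per corpus):

```python
def meets_standard_by_normal_count(normal_count, preset_value):
    """Standard is met when the number of normal-type speech
    segments exceeds the preset value."""
    return normal_count > preset_value

def meets_standard_by_abnormal_count(abnormal_count, preset_value):
    """Standard is met when the number of abnormal-type speech
    segments is below the preset value."""
    return abnormal_count < preset_value
```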
[0064] In this embodiment, the speech rate value of the speaker corresponding to each voice segment is obtained according to the effective speech duration of each voice segment corresponding to the audio file and the corpus text corresponding to each voice segment; according to the speaker corresponding to the voice segment Perform statistical analysis on the speech rate value and preset rules to obtain statistical results; obtain the quality assessment results of audio files according to the statistical results and preset conditions. The method provided by the present invention performs engineering automated analysis according to the speaker's speech speed and preset rules, which can effectively improve the audio quality evaluation efficiency and effectively improve the detection coverage rate.
[0065] In addition, the method in this embodiment can effectively ensure the accuracy of the audio quality detection result by reducing the influence of human factors on the audio quality evaluation.
[0066] The audio quality evaluation method provided by the present invention is described in detail below in conjunction with FIG. 2, which is a schematic flowchart of Embodiment 2 of the audio quality evaluation method provided by the present invention. As shown in FIG. 2, the method of this embodiment includes:
[0067] S201: Invoke the voice activity detection VAD tool to detect all voice segments corresponding to the long audio file, and obtain the effective voice duration of each voice segment.
[0068] Voice Activity Detection (VAD) tool is an automated tool based on VAD technology that detects and analyzes voice segments through processing processes such as noise reduction, feature extraction, and block classification. In this embodiment, the VAD tool is used to automatically analyze all voice segments corresponding to the audio file, and identify effective voice segments from the voice segments, so as to determine the effective voice duration in the voice segments. The use of VAD tools to detect voice segments has a higher detection efficiency and more accurate results.
[0069] A possible implementation is that a VAD tool is installed on the computer, and the computer detects and analyzes the voice segment by calling the VAD tool installed on it.
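The patent does not specify the internals of the VAD tool. As a rough stand-in, a crude energy-based detector can estimate the effective voice duration; everything here (frame size, threshold, function name) is an assumption for illustration, and a production VAD also performs noise reduction and classification as described above:

```python
def effective_speech_duration(samples, sample_rate, frame_ms=20,
                              energy_threshold=1e-4):
    """Estimate the effective speech duration (in seconds) by counting
    fixed-size frames whose mean squared amplitude exceeds a threshold.
    A crude sketch only; real VAD tools are far more sophisticated."""
    frame_len = int(sample_rate * frame_ms / 1000)
    speech_frames = 0
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        if energy > energy_threshold:
            speech_frames += 1
    return speech_frames * frame_ms / 1000.0
```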
[0070] S202: According to the effective speech duration of each speech segment corresponding to the audio file and the corpus text corresponding to each speech segment, obtain the speech rate value of the speaker corresponding to each speech segment.
[0071] This step is similar to step S101 in the embodiment shown in FIG. 1; refer to the detailed description of FIG. 1, which will not be repeated here.
[0072] Optionally, step S102 in the embodiment shown in FIG. 1, i.e., performing statistical analysis according to the speech rate value of the speaker corresponding to each speech segment and preset rules to obtain the statistical result, can be implemented through steps S203 to S205 of this embodiment. Specifically:
[0073] S203: Obtain a grouping result according to the preset speech rate level range and the speech rate value of the speaker corresponding to the speech segment.
[0074] Specifically, the preset speech rate level range can be set in advance according to the language type, the pronunciation habits of the speaker, and the like. Further, the speech segments are grouped according to the preset speech rate level range and the speech rate value corresponding to the speaker of the speech segment.
[0075] For example, the language of the audio file numbered 2042-S0-A is Chinese. After audio cutting, the audio file corresponds to 558 speech segments. The preset speech rate level ranges include the following 5 ranges: 0-1 words/second, 2-3 words/second, 4-5 words/second, 6-7 words/second, and greater than or equal to 8 words/second.
[0076] The 558 speech segments are grouped according to the preset speech rate level ranges. For example, if the speaker of a speech segment has a speech rate of 2 words/second, the segment falls within the 2-3 words/second range, and so on, until all speech segments are grouped and the grouping result is obtained: 239 speech segments within the 0-1 words/second range, 77 within the 2-3 words/second range, 47 within the 4-5 words/second range, 46 within the 6-7 words/second range, and 149 within the range greater than or equal to 8 words/second.
[0077] The grouping result can be expressed in the form of a table, as shown in Table 1:
[0078] Table 1
[0079]
Audio number   0-1 words/s   2-3 words/s   4-5 words/s   6-7 words/s   >=8 words/s
2042-S0-A      239           77            47            46            149
[0080] It should be understood that for audio files in other languages, the corresponding preset speech rate level ranges can be set according to the language type, the voice characteristics of the speakers, and so on, and the segments are then grouped according to their speech rate values to obtain the grouping result; the implementation process is similar.
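The grouping of S203 can be sketched as follows. The level boundaries are treated as half-open intervals, which is an assumption (the text lists integer ranges such as 0-1 and 2-3 words/second), and the function names are hypothetical:

```python
from collections import Counter

RATE_LEVELS = ("0-1", "2-3", "4-5", "6-7", ">=8")

def rate_level(rate):
    """Map a speech rate value (words/second) to a preset level range.
    Half-open intervals are an illustrative assumption."""
    if rate < 2:
        return "0-1"
    if rate < 4:
        return "2-3"
    if rate < 6:
        return "4-5"
    if rate < 8:
        return "6-7"
    return ">=8"

def group_segments(rates):
    """Grouping result: number of speech segments per rate level."""
    return Counter(rate_level(r) for r in rates)
```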
[0081] S204. Obtain an aggregation result according to the preset aggregation strategy and the grouping result.
[0082] In this step, according to the preset aggregation strategy and the grouping result obtained in step S203, the voice segments corresponding to the audio file are further classified to obtain the aggregation result, which includes a first cluster set and a second cluster set. The speech segments in the first cluster set are all of the normal type, and those in the second cluster set are all of the abnormal type. Since an excessively low or high speech rate value indicates that a speech segment is abnormal, the speech rate value can be used to decide whether a segment is of the normal type or the abnormal type. Here, the abnormal type means that the speech segment has no valid speech content, or that its actual voice content does not match the corpus text; the normal type means that the speech segment has valid speech content and that content matches the corpus text.
[0083] Taking the audio file numbered 2042-S0-A in step S203 as an example, based on the grouping result shown in Table 1 above, the speech segments in the 0-1 words/second range and those in the range greater than or equal to 8 words/second are determined to be abnormal types, and the speech segments at the remaining speech rate levels are determined to be normal types. Thus the first cluster set contains the 170 speech segments in the three ranges of 2-3 words/second, 4-5 words/second, and 6-7 words/second, and the second cluster set contains the 388 speech segments in the 0-1 words/second range and the range greater than or equal to 8 words/second.
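The aggregation of S204 can be sketched on top of such a grouping result (the names are hypothetical; the 2-3 words/second count of 77 is inferred from the stated totals of 558 segments overall and 170 normal-type segments):

```python
# Per the example: the middle three rate levels are normal-type.
NORMAL_LEVELS = {"2-3", "4-5", "6-7"}

def aggregate(grouping):
    """Split grouped segment counts into the first (normal-type)
    and second (abnormal-type) cluster sets."""
    first = sum(n for level, n in grouping.items() if level in NORMAL_LEVELS)
    second = sum(n for level, n in grouping.items() if level not in NORMAL_LEVELS)
    return first, second
```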
[0084] S205: Obtain a statistical result according to the number of voice segments in the first cluster set, the number of voice segments in the second cluster set, and the total number of voice segments.
[0085] Optionally, when the statistical result includes the ratio of the number of normal-type speech segments to the total number of speech segments (i.e., the ratio of the number of speech segments in the first cluster set to the total number) and the ratio of the number of abnormal-type speech segments to the total number of speech segments (i.e., the ratio of the number of speech segments in the second cluster set to the total number), the statistical result corresponding to the audio file numbered 2042-S0-A is shown in Table 2:
[0086] Table 2
[0087]
Audio number   First cluster set proportion   Second cluster set proportion
2042-S0-A      30.5%                          69.5%
[0088] Here, the proportion of the first cluster set is the ratio, expressed as a percentage, of the number of voice segments in the first cluster set to the total number of voice segments; the proportion of the second cluster set is the corresponding ratio for the second cluster set.
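The statistics of S205 reduce to simple proportions (the function name is hypothetical):

```python
def cluster_proportions(first_count, second_count):
    """Return the proportions of the first and second cluster sets
    relative to the total number of speech segments."""
    total = first_count + second_count
    return first_count / total, second_count / total

# The 2042-S0-A example: 170 and 388 segments, roughly 30.5% and 69.5%.
p1, p2 = cluster_proportions(170, 388)
```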
[0089] S206: Acquire a quality evaluation result of the audio file according to the statistical result and the preset condition.
[0090] Optionally, if the statistical result is the ratio of the number of normal-type speech segments to the total number of speech segments, the audio quality evaluation result can be obtained in the following manner:
[0091] If the ratio of the number of normal-type voice segments (i.e., the number of voice segments in the first cluster set) to the total number of voice segments is greater than or equal to a first preset threshold, the audio quality of the audio file is determined to meet the standard; if the ratio is less than the first preset threshold, the audio quality is determined not to meet the standard.
[0092] In practical applications, preferably, the first preset threshold is 70%.
[0093] Optionally, if the statistical result is the ratio of the number of abnormal-type voice segments to the total number of voice segments, the audio quality evaluation result can be obtained in the following manner:
[0094] If the ratio of the number of abnormal-type speech segments (i.e., the number of speech segments in the second cluster set) to the total number of speech segments is less than a second preset threshold, the audio quality of the audio file is determined to meet the standard; if the ratio is greater than or equal to the second preset threshold, the audio quality is determined not to meet the standard.
[0095] In practical applications, preferably, the second preset threshold is 30%.
[0096] It can be understood that the higher the first preset threshold, or the lower the second preset threshold, the higher the audio quality required of the corpus. In practical applications, the first and second preset thresholds can be set according to actual needs.
[0097] Taking the audio file numbered 2042-S0-A as an example, with the first preset threshold set to 70%: from the statistical result, normal-type voice segments account for 30.5%, which is less than the first preset threshold of 70%, so the audio quality of the audio file is determined not to meet the standard, i.e., an abnormal situation exists.
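The ratio-based decision of S206 can be sketched as follows (the function names are hypothetical; the default thresholds are the preferred values of 70% and 30% stated above):

```python
def quality_by_normal_ratio(normal_ratio, threshold=0.70):
    """Standard is met when the first cluster set's proportion is at
    least the first preset threshold."""
    return normal_ratio >= threshold

def quality_by_abnormal_ratio(abnormal_ratio, threshold=0.30):
    """Standard is met when the second cluster set's proportion is
    below the second preset threshold."""
    return abnormal_ratio < threshold

# The 2042-S0-A example: 30.5% normal-type, below 70%, so not up to standard.
```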
[0098] In this embodiment, the voice segments corresponding to the long audio file are detected by calling the VAD tool to obtain the effective voice duration of each voice segment. Then, according to the effective voice duration of each voice segment corresponding to the audio file and the corpus text corresponding to each segment, the speech rate value of the speaker corresponding to each segment is obtained; the grouping result is obtained according to the preset speech rate level ranges and the speech rate values of the speakers; the aggregation result is obtained according to the preset aggregation strategy and the grouping result; the statistical result is then obtained according to the number of voice segments in the first cluster set, the number in the second cluster set, and the total number of voice segments; and the quality evaluation result of the audio file is obtained according to the statistical result and the preset condition. By using the VAD tool to automatically detect the voice segments and obtain their effective voice durations, and then performing engineered, automated analysis based on the speaker's speech rate and preset rules, this embodiment can effectively improve the efficiency of audio quality evaluation and the coverage of detection samples.
[0099] In addition, the method in this embodiment can effectively ensure the accuracy of the audio quality detection result by reducing the influence of human factors on the audio quality evaluation.
[0100] The audio quality evaluation method provided by the embodiment of the present invention was applied to a certain es-ES (Spain-Spanish) corpus containing 236 corpus texts (corresponding to 236 audio files). 90 audio files were classified as the abnormal type; after testing and verification, 81 of them indeed had audio quality problems, an accuracy rate of 90.0%. 146 audio files were classified as the normal type; after testing, only 1 of them had an audio quality problem, accounting for only 0.7%. It can be seen that the method provided by the embodiment of the present invention can automatically perform engineered detection and analysis, improving efficiency while effectively ensuring accuracy.
[0101] FIG. 3 is a schematic structural diagram of Embodiment 1 of the audio quality evaluation device provided by the present invention. As shown in FIG. 3, the audio quality evaluation device 30 provided in this embodiment includes: a first acquisition module 31, a statistical analysis module 32, and an evaluation module 33.
[0102] The first obtaining module 31 is configured to obtain the speech rate value of the speaker corresponding to each voice segment according to the effective voice duration of each voice segment corresponding to the audio file and the corpus text corresponding to each voice segment.
[0103] The statistical analysis module 32 is configured to perform statistical analysis according to the speech speed value of the speaker corresponding to the speech segment and preset rules to obtain statistical results.
[0104] Optionally, the statistical results include one or more of: the number of normal-type speech segments, the number of abnormal-type speech segments, the ratio of the number of normal-type speech segments to the total number of speech segments, and the ratio of the number of abnormal-type speech segments to the total number of speech segments.
[0105] The evaluation module 33 is configured to obtain the quality evaluation result of the audio file according to the statistical result and preset conditions.
[0106] The device of this embodiment can be used to execute the technical solutions of the method embodiment shown in FIG. 1; the implementation principles and technical effects are similar and will not be repeated here.
[0107] FIG. 4 is a schematic structural diagram of Embodiment 2 of the audio quality evaluation device provided by the present invention. As shown in FIG. 4, the device 40 of this embodiment is based on the embodiment shown in FIG. 3 and further includes: a second acquisition module 34.
[0108] The second acquisition module 34 is configured to call the VAD tool to detect all voice segments corresponding to the audio file and obtain the effective voice duration of each voice segment, before the first acquisition module 31 acquires the speech rate value of the speaker corresponding to each voice segment according to the effective voice duration of each voice segment corresponding to the audio file and the corpus text corresponding to each voice segment.
[0109] Optionally, in some embodiments, the statistical analysis module 32 includes: a first grouping sub-module 321, an aggregation sub-module 322, and a calculation sub-module 323.
[0110] Among them, the first grouping sub-module 321 is configured to obtain the grouping result according to the preset speech rate level range and the speech rate value of the speaker corresponding to the speech segment.
[0111] The aggregation sub-module 322 is configured to obtain the aggregation result according to the preset aggregation strategy and the grouping result. The aggregation result includes a first cluster set and a second cluster set, where the voice segments in the first cluster set are all of the normal type and those in the second cluster set are all of the abnormal type.
[0112] The calculation sub-module 323 is configured to obtain the statistical result according to the number of voice segments in the first cluster set, the number of voice segments in the second cluster set, and the total number of voice segments.
[0113] Optionally, in some embodiments, if the statistical result is the ratio of the number of normal-type speech segments to the total number of speech segments, the evaluation module 33 is mainly used to obtain the quality evaluation result of the audio file in the following manner:
[0114] If the ratio of the number of normal-type speech segments to the total number of speech segments is greater than or equal to the first preset threshold, the audio quality of the audio file is determined to meet the standard; if the ratio is less than the first preset threshold, the audio quality is determined not to meet the standard.
[0115] If the statistical result is the ratio of the number of abnormal-type speech segments to the total number of speech segments, the evaluation module 33 is mainly configured to obtain the quality evaluation result of the audio file in the following manner:
[0116] If the ratio of the number of abnormal-type speech segments to the total number of speech segments is less than the second preset threshold, it is determined that the audio quality of the audio file meets the standard; if the ratio is greater than or equal to the second preset threshold, it is determined that the audio quality of the audio file does not meet the standard.
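The two decision rules above reduce to simple threshold comparisons. The threshold values 0.9 and 0.1 below are illustrative defaults, not values given by the patent.

```python
# Sketch of evaluation module 33's two decision rules.

def meets_standard_by_normal(normal_ratio, first_threshold=0.9):
    """Standard is met when the normal-type ratio reaches the first threshold."""
    return normal_ratio >= first_threshold

def meets_standard_by_abnormal(abnormal_ratio, second_threshold=0.1):
    """Standard is met when the abnormal-type ratio stays below the second threshold."""
    return abnormal_ratio < second_threshold
```

Note the asymmetry in the patent's rules: the normal-ratio test passes at equality with its threshold, while the abnormal-ratio test fails at equality with its threshold.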
[0117] Optionally, in some embodiments, the device further includes a setting module 35 (not shown in Figure 4), specifically configured to set the preset speech rate level range according to the language type and the pronunciation habits of the speaker.
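A minimal sketch of what such per-language presets might look like, assuming a simple lookup table; the language keys, units, and ranges below are entirely hypothetical (for instance, Mandarin rates are often counted in characters per second and English in words per second, so the ranges would naturally differ):

```python
# Hypothetical per-language presets that setting module 35 might maintain.
LANGUAGE_PRESETS = {
    "zh-CN": {"unit": "chars/s", "normal_range": (3.0, 7.0)},
    "en-US": {"unit": "words/s", "normal_range": (1.5, 3.5)},
}

def normal_range(language):
    """Look up the assumed normal speech-rate range for a language type."""
    return LANGUAGE_PRESETS[language]["normal_range"]
```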
[0118] The device of this embodiment can be used to execute the technical solution of the method embodiment shown in Figure 2; its implementation principle and technical effects are similar and will not be repeated here.
[0119] Figure 5 is a schematic structural diagram of Embodiment 1 of the electronic device provided by the present invention. As shown in Figure 5, the electronic device 50 of this embodiment includes: a memory 51 and a processor 52;
[0120] The memory 51 may be an independent physical unit connected to the processor 52 through a bus 53; alternatively, the memory 51 and the processor 52 may be integrated together and implemented in hardware.
[0121] The memory 51 is used to store a program implementing the above method embodiments, and the processor 52 calls the program to perform the operations of the above method embodiments.
[0122] Optionally, when part or all of the methods in the foregoing embodiments are implemented by software, the electronic device 50 may also include only the processor 52. In that case, the memory 51 storing the program is located outside the electronic device 50, and the processor 52 is connected to the memory through circuits/wires to read and execute the program stored in the memory.
[0123] The processor 52 may be a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), or a combination of a CPU and NP.
[0124] The processor 52 may further include a hardware chip. The aforementioned hardware chip may be an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (Programmable Logic Device, PLD) or a combination thereof. The aforementioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a general array logic (Generic Array Logic, GAL) or any combination thereof.
[0125] The memory 51 may include a volatile memory (Volatile Memory), such as a random-access memory (Random-Access Memory, RAM); the memory may also include a non-volatile memory (Non-volatile Memory), such as a flash memory (Flash Memory), a hard disk drive (HDD), or a solid-state drive (SSD); the memory may also include a combination of the above types of memory.
[0126] The present invention also provides a program product, for example, a computer-readable storage medium. The readable storage medium includes a program which, when executed by a processor, performs the above method.
[0127] A person of ordinary skill in the art can understand that all or part of the steps in the foregoing method embodiments can be implemented by a program instructing relevant hardware. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the foregoing method embodiments. The aforementioned storage medium includes media that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disk.
[0128] Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that it is still possible to modify the technical solutions described in the foregoing embodiments, or to equivalently replace some or all of the technical features; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.