A sample extraction method and device for a video classification problem

A technology for sample extraction in video classification, applied to video data retrieval, video data query, instruments, and the like. It addresses problems such as samples whose clips do not fully cover the video content, uneven sample quality, and the resulting harm to model training, with the effect of enhancing sample quality and improving training quality.

Pending Publication Date: 2019-05-07
BOE TECH GRP CO LTD

AI-Extracted Technical Summary

Problems solved by technology

However, since the above sample is equivalent to a video clip, it is likely that the clip cannot fully cover the content information of the entire video.

Method used

In summary, the sample extraction method and device for a video classification problem provided by the embodiments of the present invention parse previously acquired video data into corresponding consecutive single-frame images, and then extract feature images from those single-frame images to form a sample, so that the composed sample contains no redundant image data. Compared with the prior art, this solves the problem that uneven sample quality hinders model training. The embodiments extract the feature video frames contained in the video data to form a sample, ensuring that the sample covers the entire content information of the video data while avoiding redundant image data in the sample, which enhances sample quality and helps improve the training quality of video classification models. In addition, after a sample composed of feature images is obtained, the optical flow image corresponding to the sample can be generated, so that the sample, the corresponding optical flow image, and image data combining the two can be added to the training samples as a data augmentation strategy, increasing the proportion of training samples that cover the content information of the entire video data and thereby improving the accuracy of the trained video classification model.
It should be noted that the core of the feature-image extraction step is to select feature images by comparing structural similarity among the single-frame images corresponding to the video data. Because the number of single-frame images can be very large (for example, more than 24 frames per second), the single-frame images can be divided into multiple image groups; structural similarity is compared within each group to determine that group's feature images, and the feature images of all groups are collected to obtain the feature images corresponding to the video data. The originally huge workload is thus decomposed into multiple smaller tasks, simplifying the extraction process. In addition, the grouping method proposed in the embodiments divides consecutive single-frame images into groups according to their temporal order, with the same number of images in each group, rather than combining them at random; this ensures that feature images from different groups also differ significantly from one another, ultimately avoiding redundant image data among the feature images and improving the accuracy of feature-image extraction.

Abstract

The invention discloses a sample extraction method and device for a video classification problem, and relates to the technical field of video classification. The method of extracting samples from video data is optimized to ensure that the samples can cover the whole content information of the video data while avoiding redundant image data in the samples, which enhances sample quality and can improve the training quality of a video classification model. The technical scheme includes: acquiring video data; parsing the video data to obtain a plurality of consecutive single-frame images corresponding to the video data; and extracting feature images from the plurality of consecutive single-frame images to form a sample, where the feature images are used to summarize the content information of the video data and the sample does not contain redundant image data. The method is applied to extracting samples for training a video classification model from video data.

Application Domain

Video data querying; character and pattern recognition

Technology Topic

Training quality; single frame


Examples

  • Experimental program (1)

Example Embodiment

[0072] Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited by the embodiments set forth herein. Rather, these embodiments are provided so that the present invention will be more thoroughly understood, and will fully convey the scope of the present invention to those skilled in the art.
[0073] The embodiment of the present invention provides a sample extraction method for a video classification problem. As shown in Figure 1, the method extracts feature images that can summarize the content information of the video data from a plurality of consecutive single-frame images corresponding to the video data, so as to form a sample, thereby ensuring the quality of the sample. The embodiment of the present invention provides the following specific steps:
[0074] 101. Acquire video data.
[0075] In the embodiment of the present invention, video data is equivalent to a sample data source from which samples are selected and added to the training data, so as to train a deep learning-based video classification model. Videos that are more helpful for training the model should therefore be obtained: in this embodiment, video data is not obtained randomly from a massive data source, but is extracted from a preprocessed video library. For example, when focusing on action classification within video classification, sample data sources for training the video classification model can be obtained from the UCF101 action library.
[0076] 102. Analyze the video data to obtain multiple consecutive single-frame images corresponding to the video data.
[0077] Here, a single-frame image refers to a still picture. A frame is the smallest unit of a video image, equivalent to one picture of the video, and consecutive frames form the video image.
[0078] In the embodiment of the present invention, the video data is parsed and thereby decomposed into a series of consecutive single-frame still pictures, so that the entire video data can be analyzed by analyzing each single-frame still picture. For example, a piece of video data can be parsed into 24 frames of images per second; this embodiment of the present invention does not limit the method used to parse the video into multiple single-frame images.
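As a minimal sketch of this parsing step (OpenCV is assumed as the decoding library; as noted above, the embodiment does not prescribe one), the following decodes video data into its consecutive single-frame images:

```python
import cv2  # OpenCV, an assumed choice; the embodiment does not mandate a library

def parse_video_to_frames(video_path):
    """Decompose video data into its consecutive single-frame still pictures."""
    capture = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = capture.read()  # ok becomes False at the end of the stream
        if not ok:
            break
        frames.append(frame)  # one still picture per frame (H x W x 3 array)
    capture.release()
    return frames

# e.g. a clip from the UCF101 action library mentioned in step 101
frames = parse_video_to_frames("v_Basketball_g01_c01.avi")
```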
[0079] 103. Extract characteristic images from multiple consecutive single-frame images to form samples.
[0080] Here, the feature image is used to summarize the content information of the video data. Take, for example, the video data shown in Figure 2a and its corresponding optical flow information: it can be seen from Figure 2a that the content information of this video segment can be summarized by its 1st, 3rd, and 7th single-frame images, so these single-frame images are the feature images corresponding to the video data. For this dynamic video data, the differences among the 2nd, 4th, 5th, 6th, and 8th single-frame images are very small; these five single-frame images are almost identical, and such single-frame images are the redundant image data corresponding to this video segment.
[0081] For the embodiment of the present invention, within the range of the consecutive single-frame images corresponding to the video data, each feature image differs significantly from the other single-frame images. Extracting these feature images to form a sample therefore yields a sample that summarizes the content information of the video data and contains no redundant image data.
[0082] To further elaborate on the above example, the feature images extracted from Figure 2a compose the sample illustrated in Figure 2b. Each single-frame image in this sample differs clearly from the others, and accordingly, an obvious and more effective optical flow map can be obtained from the feature images contained in the sample.
[0083] An embodiment of the present invention provides a sample extraction method for a video classification problem. The embodiment parses previously acquired video data to obtain the corresponding consecutive single-frame images, and then extracts feature images from those single-frame images to form a sample, so that the composed sample contains no redundant image data. Compared with the prior art, this solves the problem that uneven sample quality is not conducive to model training. The embodiment extracts the feature video frames contained in the video data to form a sample, ensuring that the sample covers the entire content information of the video data while avoiding redundant image data in the sample, which enhances sample quality and helps improve the training quality of video classification models.
[0084] To describe the above embodiment in more detail, the embodiments of the present invention also provide another sample extraction method for a video classification problem, as shown in Figure 3. After obtaining the sample composed of feature images, this method can also generate the optical flow image corresponding to the sample, so that the sample, the corresponding optical flow image, and image data combining the two can be added to the training samples as a data augmentation strategy, increasing the proportion of training samples that cover the content information of the entire video data and thereby helping to improve the accuracy of the trained video classification model. The embodiment of the present invention provides the following specific steps:
[0085] 201. Acquire video data.
[0086] In this embodiment of the present invention, for this step, please refer to step 101, and details are not repeated here.
[0087] 202. Analyze the video data to obtain multiple consecutive single-frame images corresponding to the video data.
[0088] In this embodiment of the present invention, for this step, please refer to step 102, and details are not repeated here.
[0089] 203. Extract characteristic images from multiple consecutive single-frame images to form samples.
[0090] The feature image is used to summarize the content information of the video data, and the sample does not contain redundant image data.
[0091] In the embodiment of the present invention, the specific steps of extracting feature image composition samples from multiple consecutive single-frame images may be as follows:
[0092] First, the consecutive single-frame images corresponding to the video data are evenly divided into multiple image groups, each image group comprising multiple consecutive single-frame images arranged in chronological order. Then, the feature images in each image group are determined by comparing the structural similarity between the single-frame images within the group, and these feature images are extracted to form the sample.
[0093] It should be noted that the core of the above feature-image extraction step is to select feature images by comparing structural similarity among the single-frame images corresponding to the video data. Since the number of corresponding single-frame images can be very large (for example, more than 24 frames per second), the large number of single-frame images can be divided into multiple image groups, structural similarity is compared among the single-frame images within each group to determine that group's feature images, and the feature images of all groups are then collected to obtain the feature images corresponding to the video data. The originally huge workload is thus decomposed into multiple smaller tasks, simplifying the feature-image extraction process. In addition, the image grouping method proposed in the embodiment of the present invention divides consecutive single-frame images into groups according to the temporal order of the video data, with the same number of images in each group, rather than randomly combining the single-frame images into groups. This ensures that the feature images of different groups also differ significantly from one another, ultimately avoiding redundant image data among the feature images corresponding to the video data and improving the accuracy of feature-image extraction.
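A brief sketch of this grouping rule, with hypothetical helper names: the consecutive frames are divided, in temporal order, into K equal-sized image groups:

```python
def split_into_groups(frames, num_groups):
    """Evenly divide consecutive single-frame images, in chronological order,
    into num_groups image groups of equal size (any leftover frames at the
    end are dropped so every group holds the same number of images)."""
    group_size = len(frames) // num_groups
    return [frames[k * group_size:(k + 1) * group_size]
            for k in range(num_groups)]

groups = split_into_groups(frames, num_groups=8)  # K = 8 is an arbitrary example
```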
[0094] Specifically, in the embodiment of the present invention, the specific steps for determining the feature image corresponding to the image group by comparing the structural similarity between the single-frame images in the image group may be as follows:
[0095] In the first step, any single-frame image in the image group is selected as a benchmark image, and the benchmark image is compared with the other single-frame images in the group for structural similarity. In the embodiment of the present invention, each single-frame image in the image group can serve as the benchmark image in turn, thereby completing the comparisons among the multiple single-frame images in the group.
[0096] In the second step, according to the arrangement order of the single-frame images in the image group, the H single-frame images before the benchmark image and the H single-frame images after it are acquired to form an image set, where H is a positive integer and 2H+1 is less than the total number of single-frame images contained in one image group. The structural similarity between the benchmark image and each single-frame image in the image set is calculated, yielding multiple similarity values corresponding to the benchmark image; these similarity values are then averaged to obtain the similarity average corresponding to the benchmark image.
[0097] It should be noted that although each single-frame image in the image group can serve as the benchmark image in turn, redundant image data usually consists of consecutive single-frame images (for example, the human eye can hardly distinguish two consecutive single-frame images). Therefore, when comparing structural similarity within the image group, it is unnecessary to compare each single-frame image with every other image in the group, which would involve redundant comparison operations. The embodiment of the present invention accordingly proposes an optimization: within the image group, a benchmark image is compared only with the H single-frame images before it and the H single-frame images after it. To achieve the technical effect while saving computing costs, H is preferably a positive integer between 1 and 10.
[0098] In the embodiment of the present invention, following the first and second steps above, the following formula (1) can be used to obtain the similarity average corresponding to a benchmark image in the image group:
[0099] $\mathrm{SSIM}(i) = \frac{1}{2H} \sum_{j=i-H,\, j \neq i}^{i+H} \mathrm{Ssim}\bigl(f(i), f(j)\bigr) \quad (1)$
[0100] Here, i denotes the benchmark image in the image group, that is, the i-th frame image; H is the maximum adjacent-frame threshold; Ssim(f(i), f(i+H)) is the structural similarity between the i-th frame image and the (i+H)-th frame image; and SSIM(i) is the average similarity between the i-th frame image and the single-frame images within the H-adjacent-frame threshold range.
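As a sketch of formula (1), assuming the SSIM implementation from scikit-image (the patent does not name a library): for benchmark frame i, the structural similarity against each of its H preceding and H following frames is computed and averaged:

```python
import cv2
from skimage.metrics import structural_similarity  # assumed SSIM implementation

def ssim_average(group, i, H):
    """Formula (1): average structural similarity between benchmark frame i
    and the 2H frames around it (H before, H after) in one image group.
    Assumes H <= i < len(group) - H so the whole window fits in the group."""
    gray_i = cv2.cvtColor(group[i], cv2.COLOR_BGR2GRAY)  # SSIM on grayscale
    total = 0.0
    for j in range(i - H, i + H + 1):
        if j == i:
            continue  # a frame is not compared with itself
        gray_j = cv2.cvtColor(group[j], cv2.COLOR_BGR2GRAY)
        total += structural_similarity(gray_i, gray_j)
    return total / (2 * H)
```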
[0101] Further, it should be noted that the above is only one specific implementation exemplified in the embodiment of the present invention. The core of the embodiment is to compare the structural similarity between a single-frame image and the single-frame images in a range before and after it within the image group, and on that basis to calculate the average structural similarity between that image and the other images in the range. Therefore, when comparing structural similarity, a single-frame image may be compared with the same number of frames before and after it, or with different numbers of frames before and after it. For example, within an image group, the 15th frame image may be selected as the benchmark image and compared for structural similarity with the 5 frames before it and the 9 frames after it.
[0102] In the third step, the similarity averages corresponding to the multiple benchmark images are sorted from small to large, and the first M similarity averages are selected in that ascending order, where M is a positive integer less than the total number of single-frame images contained in one image group. The benchmark images matching the selected similarity averages are determined to be the feature images.
[0103] In the embodiment of the present invention, after the similarity average corresponding to one benchmark image is obtained, the same method is applied to each single-frame image in the image group in turn, yielding the similarity averages of the multiple benchmark images in the group. A smaller similarity average indicates that a benchmark image differs more in structure from the H single-frame images before and after it. Therefore, the M smallest similarity averages are selected from the similarity averages of the group's benchmark images, and the corresponding M benchmark images, those most dissimilar to the H frames before and after them, are taken as the feature images of the image group.
[0104] In the embodiment of the present invention, following the third step above, the following formula (2) can be used to select the M minimum values from the similarity averages corresponding to the multiple benchmark images:
[0105] $F = \bigcup_{k=1}^{K} \mathrm{SMALL}\bigl(\{\mathrm{SSIM}(n)\}_{n=1}^{H},\, M\bigr) \quad (2)$
[0106] Here, K is the number of image groups; H is the number of benchmark images in an image group; SSIM(n) is the similarity average corresponding to the n-th benchmark image; SMALL(·) selects, from the H benchmark images, the M benchmark images with the smallest similarity averages as feature images; and F is the sample composed of the feature images corresponding to the multiple image groups.
[0107] For the embodiment of the present invention, the feature images corresponding to the image groups are determined through the first to third steps above, and the sample corresponding to the video data is then formed from the feature images corresponding to each image group.
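Combining the three steps, a sketch of formula (2) under the same assumptions (it reuses split_into_groups and ssim_average from the sketches above): every frame with H neighbors on each side serves as a benchmark, and the M benchmarks with the smallest similarity averages become the group's feature images:

```python
def extract_feature_images(groups, H, M):
    """Formula (2): per image group, keep the M benchmark frames with the
    smallest similarity averages (the most dissimilar to their neighbors),
    then collect them across all groups to form the sample F."""
    sample = []
    for group in groups:
        # only frames with H neighbors on each side can act as benchmarks
        averages = {i: ssim_average(group, i, H)
                    for i in range(H, len(group) - H)}
        # ascending sort: the smallest averages mark the most distinctive frames
        chosen = sorted(averages, key=averages.get)[:M]
        sample.extend(group[i] for i in sorted(chosen))  # keep temporal order
    return sample

sample = extract_feature_images(groups, H=3, M=2)  # H and M values are examples
```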
[0108] Further, for a given piece of video data, setting different positive integer values for H and M above may cause different feature images to be extracted from an image group, so the resulting samples for the video data may also differ. Therefore, after the feature images of each image group are extracted to form a sample, those feature images can be marked, so that the next time feature images are extracted from each image group to form a sample, they are extracted only from the unmarked single-frame images in the group. Provided H and M are set to different positive integer values, this prevents the same single-frame image from being selected as a feature image again, so that the samples composed each time contain feature images that are as different as possible, thereby increasing sample diversity.
[0109] Further, the embodiment of the present invention also proposes an optimization method for preventing the same single-frame image from being selected as a feature image again. Specifically, the method is as follows:
[0110] After the feature images of each image group are extracted to form a sample, the extracted feature images are marked. The next time feature images are extracted from each image group to form a sample, the similarity average corresponding to each marked feature image is multiplied by a growth coefficient, which doubles that similarity average. Consequently, when the M smallest similarity averages are selected from the similarity averages of the multiple benchmark images, the benchmark images previously selected as feature images have had their similarity averages doubled and are therefore avoided, preventing them from being selected again.
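A sketch of this optimization; the coefficient value 2.0 reflects the statement that the marked similarity average is doubled. Inflating a previously selected frame's average pushes it to the back of the ascending sort, so it is not chosen again:

```python
def select_with_marking(averages, M, marked, growth_coefficient=2.0):
    """Pick the M benchmark indices with the smallest similarity averages,
    after multiplying the averages of previously marked feature images by
    the growth coefficient; newly chosen indices are marked for next time."""
    adjusted = {i: avg * growth_coefficient if i in marked else avg
                for i, avg in averages.items()}
    chosen = sorted(adjusted, key=adjusted.get)[:M]
    marked.update(chosen)
    return chosen

marked = set()
averages = {0: 0.40, 1: 0.30, 2: 0.50, 3: 0.35}  # illustrative values
first = select_with_marking(averages, M=2, marked=marked)   # -> [1, 3]
second = select_with_marking(averages, M=2, marked=marked)  # -> [0, 2]; 1 and 3 were doubled
```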
[0111] 204. Arrange the feature images in chronological order according to the optical flow information corresponding to the feature images on the time axis.
[0112] 205. Generate an optical flow image corresponding to the sample according to the arrangement order of the feature images.
[0113] In this embodiment of the present invention, the above steps 204-205 describe generating the optical flow image corresponding to a sample composed of feature images.
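As a sketch of steps 204-205 (the patent does not specify an optical flow algorithm; OpenCV's dense Farneback flow is assumed here), an optical flow image is computed between each pair of consecutive feature images in their chronological order:

```python
import cv2

def generate_optical_flow_images(sample):
    """Steps 204-205: with the feature images already in chronological order,
    compute a dense optical flow image between each consecutive pair."""
    flows = []
    prev = cv2.cvtColor(sample[0], cv2.COLOR_BGR2GRAY)
    for frame in sample[1:]:
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # returns an H x W x 2 array of per-pixel (dx, dy) motion vectors
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flows.append(flow)
        prev = curr
    return flows

flows = generate_optical_flow_images(sample)
```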
[0114] 206. Add the optical flow image to the training sample, and/or add the feature image to the training sample.
[0115] Here, the training samples are used to train a deep learning-based video classification model.
[0116] In the embodiment of the present invention, the sample corresponding to the obtained video data, the optical flow image corresponding to the sample, and image data combining the two can be added to the training samples as a data augmentation strategy, increasing the proportion of training samples that cover the content information of the entire video data and thereby improving the accuracy of the trained video classification model.
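A minimal sketch of this augmentation strategy, with an illustrative training-set structure assumed (a list of (data, label) pairs): each extracted sample contributes three entries, the feature images, their optical flow images, and the combination of the two:

```python
training_samples = []  # illustrative container of (data, label) pairs

def add_augmented(sample, flows, label):
    """Add the feature-image sample, its optical flow images, and the
    combination of both to the training samples as three separate entries."""
    training_samples.append((sample, label))           # feature images only
    training_samples.append((flows, label))            # optical flow images only
    training_samples.append(((sample, flows), label))  # combined image data

add_augmented(sample, flows, label="Basketball")  # label value is an example
```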
[0117] In addition, the samples corresponding to the video data obtained in the embodiments of the present invention can also be used directly as training samples to train the video classification model, or as enhanced samples to test a trained video classification model.
[0118] In order to achieve the above object, according to another aspect of the present invention, an embodiment of the present invention further provides a storage medium. The storage medium includes a stored program, and when the program runs, the device where the storage medium is located is controlled to execute the sample extraction method for the video classification problem described above.
[0119] In order to achieve the above object, according to another aspect of the present invention, an embodiment of the present invention further provides a processor. The processor is configured to run a program, and when the program runs, the sample extraction method for the video classification problem described above is executed.
[0120] Further, corresponding to the methods shown in Figure 1 and Figure 3 above, an embodiment of the present invention provides a sample extraction device for a video classification problem. This apparatus embodiment corresponds to the foregoing method embodiments; for ease of reading, the details of the method embodiments are not repeated one by one, but it should be clear that the apparatus in this embodiment can correspondingly implement all the content of the foregoing method embodiments. The device is applied to extracting samples from video data for training a video classification model, and as shown in Figure 4, includes:
[0121] an acquisition unit 31 for acquiring video data;
[0122] The parsing unit 32 is configured to parse the video data acquired by the acquiring unit 31 to obtain a plurality of continuous single-frame images corresponding to the video data;
[0123] The extraction unit 33 is configured to extract feature images from the consecutive single-frame images parsed by the parsing unit 32 to form a sample, the feature images being used to summarize the content information of the video data, and the sample containing no redundant image data.
[0124] Further, as shown in Figure 5, the device also includes:
[0125] a sorting unit 34, configured to arrange the feature images in chronological order according to the optical flow information corresponding to the feature images on the time axis;
[0126] A generating unit 35, configured to generate an optical flow image corresponding to the sample according to the arrangement order of the feature images arranged by the sorting unit 34;
[0127] An adding unit 36, configured to add the optical flow image to the training samples; and/or,
[0128] The adding unit 36 is further configured to add the feature images to the training samples.
[0129] Further, as shown in Figure 5, the extraction unit 33 includes:
[0130] A decomposition module 331, configured to evenly divide a plurality of continuous single-frame images corresponding to the video data into a plurality of image groups, and the image groups include a plurality of continuous single-frame images arranged in chronological order;
[0131] A determination module 332, configured to determine the feature image corresponding to the image group by comparing the structural similarity between the single-frame images in the image group;
[0132] The extraction unit 33 extracts the feature images corresponding to each of the image groups to form the sample.
[0133] Further, as shown in Figure 5, the determination module 332 includes:
[0134] The selection sub-module 3321 is configured to select any single-frame image in the image group as the benchmark image;
[0135] The acquisition sub-module 3322 is configured to acquire, according to the arrangement order of the single-frame images in the image group, the H single-frame images before the benchmark image and the H single-frame images after it to form an image set, where H is a positive integer and 2H+1 is less than the total number of single-frame images contained in one image group;
[0136] The calculation sub-module 3323 is configured to calculate the structural similarity between the benchmark image and each single-frame image in the image set, obtaining multiple similarity values corresponding to the benchmark image;
[0137] The calculation sub-module 3323 is further configured to average the multiple similarity values corresponding to a benchmark image to obtain the similarity average corresponding to that benchmark image;
[0138] The sorting sub-module 3324 is configured to sort, from small to large, the similarity averages calculated by the calculation sub-module 3323 for each of the benchmark images;
[0139] The selection sub-module 3321 is further configured to select, in the ascending order produced by the sorting sub-module 3324, the first M similarity averages from the similarity averages corresponding to the multiple benchmark images, where M is a positive integer less than the total number of single-frame images contained in one image group;
[0140] The determination sub-module 3325 is configured to determine the benchmark images matching the selected similarity averages as the feature images.
[0141] Further, as shown in Figure 5, the extraction unit 33 further includes:
[0142] A marking module 333, configured to mark the feature images extracted this time, after the feature images corresponding to each of the image groups have been extracted this time to form a sample;
[0143] An extraction module 334, configured to extract feature images, the next time the feature images corresponding to each image group are extracted to form a sample, only from the single-frame images in the image group not marked by the marking module.
[0144] Further, as shown in Figure 5, the determination module 332 further includes:
[0145] A marking sub-module 3326, configured to mark the feature images extracted this time, after the feature images corresponding to each of the image groups have been extracted this time to form a sample;
[0146] A processing sub-module 3327, configured to multiply the similarity average corresponding to each marked feature image by a growth coefficient the next time the feature images corresponding to each of the image groups are extracted to form a sample, the growth coefficient being used to double the similarity average corresponding to the feature image.
[0147] To sum up, the embodiments of the present invention provide a sample extraction method and device for a video classification problem. The embodiments parse previously acquired video data to obtain the corresponding consecutive single-frame images, and then extract feature images from those single-frame images to form a sample, so that the composed sample contains no redundant image data. Compared with the prior art, this solves the problem that uneven sample quality is not conducive to model training. The embodiments extract the feature video frames contained in the video data to form a sample, ensuring that the sample covers the entire content information of the video data while avoiding redundant image data in the sample, which enhances sample quality and helps improve the training quality of video classification models. In addition, after the sample composed of feature images is obtained, the optical flow image corresponding to the sample can be generated, so that the sample, the corresponding optical flow image, and image data combining the two can be added to the training samples as a data augmentation strategy, further increasing the proportion of training samples that cover the content information of the entire video data and thereby improving the accuracy of the trained video classification model.
[0148] The sample extraction device for the video classification problem includes a processor and a memory. The above-mentioned acquisition unit, parsing unit, and extraction unit are all stored in the memory as program units, and the processor executes these program units stored in the memory to realize the corresponding functions.
[0149] The processor contains a kernel, which calls the corresponding program unit from the memory. One or more kernels can be provided, and the method of extracting samples from video data can be optimized by adjusting kernel parameters, ensuring that the samples cover the entire content information of the video data while avoiding redundant image data in the samples, which enhances sample quality and helps improve the training quality of video classification models.
[0150] The memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in computer-readable media, such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
[0151] An embodiment of the present invention provides an electronic device comprising a memory, a processor, and a program stored in the memory and runnable on the processor; when the processor executes the program, the sample extraction method for the video classification problem is implemented.
[0152] An embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program runs, the sample extraction method for the video classification problem is implemented.
[0153] The devices herein may be servers, PCs, tablet computers (PADs), mobile phones, and so on.
[0154] The present application also provides a computer program product which, when executed on a data processing device, is adapted to execute program code initialized with the following method steps: acquiring video data; parsing the video data to obtain a plurality of consecutive single-frame images corresponding to the video data; and extracting feature images from the plurality of consecutive single-frame images to form a sample, where the feature images are used to summarize the content information of the video data and the sample does not contain redundant image data.
[0155] As will be appreciated by those skilled in the art, the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
[0156] The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present application. It will be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
[0157] These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
[0158] These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
[0159] In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
[0160] Memory may include non-persistent memory in computer readable media, random access memory (RAM) and/or non-volatile memory in the form of, for example, read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
[0161] Computer-readable media include persistent and non-persistent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cartridges, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
[0162] It should also be noted that the terms "comprise", "include", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "comprising a ..." does not preclude the presence of additional identical elements in the process, method, article, or device that includes the element.
[0163] It will be appreciated by those skilled in the art that the embodiments of the present application may be provided as a method, a system or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
[0164] The above are merely examples of the present application, and are not intended to limit the present application. Various modifications and variations of this application are possible for those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application shall be included within the scope of the claims of the present application.

