Video testing method and device, electronic equipment and storage medium
A video testing method and technology, applied in the fields of image processing and multimedia testing, which can solve problems such as video failure, freezing or other playback abnormality, and achieve the effect of improving precision and accuracy
Pending Publication Date: 2020-12-08
BAIDU ONLINE NETWORK TECH (BEIJING) CO LTD +1
6 Cites 0 Cited by
AI-Extracted Technical Summary
Problems solved by technology
In the process of playing video, the terminal device is often affected by various factors, resulting in video...
Method used
In the above technical scheme, performing the black screen test according to the number of black screen image frames included in the last image frames of the playing area to be tested can effectively avoid black screen misjudgment caused by special display effects of the video resource. For example, even if the recorded video to be tested is the first 20 seconds of the currently playing video, testing only the last few image frames of the playing area to be tested keeps image frames with special display effects in the opening frames out of the black screen test. That is, the black screen test is performed only on a subset of the image frames of the playing area to be tested, namely the last ones.
In the above technical scheme, by recognizing abnormal marks in each image frame of the playing area to be tested corresponding to the recorded video to be tested, the abnormal state can be tested without missing instantaneous abnormalities, thereby improving the precision of the abnormality test.
In the above technical scheme, by determining that the currently playing video is in the playing state when a plurality of image frames of the playing area to be tested corresponding to the recorded video to be tested are different image frames, it is possible to effectively solve...
Abstract
The invention discloses a video testing method and device, electronic equipment and a storage medium, relates to the technical field of image processing, and further relates to the technical field of multimedia testing. The method comprises the steps of: recording the screen of a currently played video to obtain a recorded to-be-tested video; performing framing processing on the recorded to-be-tested video to obtain a plurality of to-be-tested image frames; cutting the video playing area of the to-be-tested image frames to obtain a plurality of to-be-tested playing area image frames; and testing the to-be-tested playing area image frames and outputting video test data. According to the embodiment of the invention, the precision and accuracy of video testing can be improved.
Application Domain
Television systems
Technology Topic
Process engineering, Computer graphics (images) +4
Image
Examples
- Experimental program(1)
Example Embodiment
[0026] The following description of exemplary embodiments of the present application is made with reference to the accompanying drawings, including various details of the embodiments of the present application to facilitate understanding, which should be regarded as merely exemplary. Therefore, those skilled in the art should realize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of this application. Similarly, for clarity and conciseness, the description of well-known functions and structures is omitted in the following description.
[0027] In one example, Figure 1 is a flow chart of a video testing method provided by an embodiment of the present application. This embodiment can be applied to the case of testing a video by using image frames to be tested that only include the video playing area. The method can be executed by a video testing device, which can be realized by software and/or hardware and can generally be integrated in an electronic device, such as a computer device. Accordingly, as shown in Figure 1, the method includes the following operations:
[0028] S110, screen recording the currently played video to obtain the recorded video to be tested.
[0029] The currently played video can be the video currently being played by the terminal device. The terminal device can be, for example, an electronic device such as a smart phone, a tablet computer, a television device or a personal computer, and the embodiment of the present application does not limit the device type of the terminal device for playing video. Meanwhile, if the terminal device has the image processing function, it can also be used as an electronic device for video testing, which is not limited by the embodiment of this application. The recorded video to be tested is a short video obtained by screen recording the currently playing video, which can be used as sample data for video testing.
[0030] In the embodiment of the application, before testing the video, the video can be played first, and the currently played video is recorded on the screen to obtain the recorded video to be tested. The time for recording the video to be tested can be set according to actual requirements, such as 10 seconds, 20 seconds, 30 seconds, etc., which is not limited by the embodiment of the present application.
[0031] S120, performing framing processing on the recorded video to be tested to obtain a plurality of image frames to be tested.
[0032] Here, an image frame to be tested is an image frame obtained by framing the recorded video to be tested. The so-called framing processing splits the video into single-frame images.
[0033] Accordingly, after the recorded video to be tested is obtained, the recorded video to be tested can be processed by framing, and the recorded video to be tested can be split into a plurality of image frames to be tested. It can be understood that the split image frames to be tested can be combined into a complete recorded video to be tested.
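The framing step above can be sketched as a loop that pulls frames from the recorded video until it is exhausted. This is a minimal illustration, not the patent's implementation; the `read_frame` callable is a hypothetical interface that, in practice, might wrap a decoder such as OpenCV's `cv2.VideoCapture.read`.

```python
def split_into_frames(read_frame):
    """Split a recorded video to be tested into single-frame images.

    read_frame: zero-argument callable returning the next frame, or
    None when the recorded video is exhausted (hypothetical interface).
    """
    frames = []
    while True:
        frame = read_frame()
        if frame is None:
            break
        frames.append(frame)
    return frames


# Toy usage: a fake 3-frame "video" represented by an iterator.
_fake_video = iter(["frame0", "frame1", "frame2"])
frames = split_into_frames(lambda: next(_fake_video, None))
```

The resulting list of frames can later be recombined into the complete recorded video, matching the observation above.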
[0034] S130, cutting the video playing area of the image frame to be tested to obtain a plurality of image frames of the playing area to be tested.
[0035] Among them, the image frames of the playing area to be tested are the image frames composed of the video playing areas of each image frame to be tested.
[0036] Understandably, when a video plays on a device, the video playing screen often contains non-video playing areas, such as interactive areas (comments, likes, favorites, sharing, etc.) or related resource recommendation areas (such as recommended related videos). Since video testing is based on recognition or similarity calculation over each image frame to be tested, if the complete image frame to be tested, including the non-video playing area, is taken as the test object when testing the currently playing video, the influence of the non-video playing area will be introduced, reducing the precision and accuracy of video testing.
[0037] For example, suppose the currently playing video has a black screen fault. Because the non-video playing area in the image frame to be tested is usually bright, especially when the video playing area occupies a small proportion of the screen, the overall brightness of the whole image frame to be tested remains high. The test would then conclude that the image frame to be tested is not in a black screen state; that is, the black screen fault is missed, which reduces the precision and accuracy of video testing.
[0038] According to the embodiment of the application, after the image frames to be tested are obtained, each image frame to be tested is cut out to obtain the image frames of the video playing area to be tested which only include the video playing area of the image frames to be tested, so that video testing can only be carried out for video playing resources, thereby improving the precision and accuracy of video testing.
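The cutting step can be sketched as plain slicing, modeling each image frame to be tested as a nested list of pixel rows. The playing-area rectangle (x, y, width, height) is assumed to be known in advance, e.g. from the player layout; the source does not specify how it is obtained.

```python
def crop_playing_area(frame, x, y, w, h):
    """Keep only the video playing area of one image frame to be tested.

    frame: image as a list of rows, each row a list of pixel values.
    (x, y): top-left corner of the playing area; (w, h): its size.
    """
    return [row[x:x + w] for row in frame[y:y + h]]


# Toy 4x4 frame; the playing area is the 2x2 block starting at (1, 1).
frame = [[0, 1, 2, 3],
         [4, 5, 6, 7],
         [8, 9, 10, 11],
         [12, 13, 14, 15]]
playing_area = crop_playing_area(frame, 1, 1, 2, 2)
```

Everything outside the slice (comments, recommendations, and other non-video areas) is discarded before any test runs.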
[0039] S140, testing the image frame of the playing area to be tested, and outputting video test data.
[0040] Among them, the video test data is the test data obtained by testing the image frame of the playing area to be tested.
[0041] Accordingly, after obtaining the image frame of the play area to be tested, which only includes the video play area, the test can be performed according to the image frame of the play area to be tested, and the video test data can be output.
[0042] It should be noted that, at present, video testing is mainly done by capturing screenshots of the currently playing video and testing various states of the video from one or two captured screen images. For example, when testing whether a video is playing or paused, two screen images captured at an interval are compared: when the similarity between the two screens is high, the video is considered paused; when the similarity is low, the video is considered playing. This kind of video test has an obvious drawback: large error. When the video is playing but the picture changes very little, it is easy to obtain the wrong test result of a paused state.
[0043] However, the embodiment of the present application performs the video test based on the recorded video clips, and can perform the video test based on a plurality of consecutive image frames of the playing area to be tested, which can effectively avoid the problem that the test result is wrong due to too few images.
[0044] According to the embodiment of the application, after the recorded video to be tested is recorded on the screen, the recorded video to be tested is subjected to frame processing to obtain a plurality of image frames to be tested, and then the video playing area of the image frames to be tested is cut to obtain a plurality of image frames in the playing area to be tested, and video test data is output, so that the problem of low test precision and accuracy in the existing video test methods can be solved, and the precision and accuracy of video testing can be improved.
[0045] In one example, Figure 2 is a flowchart of a video testing method provided by an embodiment of the present application. Based on the technical schemes of the above embodiments, this embodiment is optimized and improved, and provides a variety of specific, optional implementations for testing the image frames of the playing area to be tested.
[0046] As shown in Figure 2, the video testing method includes:
[0047] S210, screen recording the currently played video to obtain the recorded video to be tested.
[0048] S220, performing framing processing on the recorded video to be tested to obtain a plurality of image frames to be tested.
[0049] S230: Cut the video playing area of the image frame to be tested to obtain a plurality of image frames of the playing area to be tested.
[0050] S240: test the image frame of the playing area to be tested, and output video test data.
[0051] Accordingly, S240 may specifically include the following operations:
[0052] S241: perform high similarity matching on the image frames of each playing area to be tested, and determine the stuck state, playing state and pause state of the currently played video according to the high similarity matching result.
[0053] Specifically, when testing the stuck state, playing state and pause state of the image frames in the playing area to be tested, high similarity matching can be performed on each image frame in the playing area to be tested, and the stuck state, playing state and pause state of the currently playing video can be determined according to the high similarity matching result. Among them, high similarity matching is to calculate the image similarity between the image frames of each to-be-tested playing area. When the image similarity between two to-be-tested playing area image frames is greater than or equal to the set similarity threshold, it is determined that the two to-be-tested playing area image frames are high similarity matching. Optionally, the set similarity threshold can be set according to actual requirements, such as 95% or 99%, etc. The embodiment of this application does not limit the specific value of the set similarity threshold. Accordingly, the calculation of image similarity can adopt structural similarity measurement method, cosine similarity method or calculation according to the fingerprint information of the image, etc. The embodiment of this application does not limit the specific calculation method of image similarity.
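As a rough sketch of high similarity matching, the fraction of matching pixels between two equal-sized playing-area frames can stand in for the structural similarity, cosine similarity or fingerprint methods named above (one of those would be used in practice); the 95% threshold mirrors the example value in the text.

```python
def image_similarity(a, b):
    """Fraction of identical pixels between two equal-sized flat frames
    (a crude stand-in for SSIM / cosine / fingerprint similarity)."""
    matches = sum(1 for pa, pb in zip(a, b) if pa == pb)
    return matches / len(a)


def is_high_similarity_match(a, b, threshold=0.95):
    """Two playing-area frames match when similarity >= the set threshold."""
    return image_similarity(a, b) >= threshold


# Toy usage: 100-pixel frames differing in 2 pixels -> similarity 0.98.
f1 = [0] * 100
f2 = [0] * 98 + [1, 1]
match = is_high_similarity_match(f1, f2)
```

Frames that pass `is_high_similarity_match` are the "same image frames" counted by the stuck and pause tests below.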
[0054] In an optional embodiment of the present application, determining the stuck state of the currently playing video according to the high similarity matching result may include: counting the number of identical image frames among the image frames of the playing area to be tested; if the number of identical image frames exceeds the predicted-stuck setting threshold, determining that the currently played video has a predicted stuck state; and if the related stuck data of the predicted stuck state meets the stuck determination condition, determining that the currently played video has a stuck state.
[0055] Among them, identical image frames are image frames whose image similarity is greater than or equal to the set similarity threshold. The predicted-stuck setting threshold is used to judge the predicted stuck state, and its value can be set according to actual demand, such as 3, 5 or 8; the embodiment of this application does not limit its specific value. The predicted stuck state is a short-term stuck state. The related stuck data can be stuck-related data such as the number of stuck occurrences or the stuck duration, and the embodiment of the present application does not limit its data type. The stuck determination condition may be a condition for determining that the currently playing video is stuck, such as the number of occurrences of the predicted stuck state exceeding a certain count, or the maximum duration of the predicted stuck state exceeding a certain time threshold, which is not limited by the embodiment of the present application.
[0056] Accordingly, when judging whether the currently playing video is stuck, the number of identical image frames among the image frames of the playing area to be tested can be counted. If the number of identical image frames exceeds the predicted-stuck setting threshold, it is determined that the currently played video has a predicted stuck state. That is, when several consecutive image frames of the playing area to be tested are the same, this is counted as one stuck event. After such an event occurs, whether the currently playing video is actually stuck can be further determined according to the related stuck data and the stuck determination condition.
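The counting rule above, where several consecutive identical playing-area frames count as one predicted stuck event, can be sketched as a run-length scan. The run threshold of 5 is one of the example values from the text.

```python
def count_predicted_stucks(frames, run_threshold=5):
    """Count predicted stuck events: each run of at least run_threshold
    consecutive identical playing-area frames counts as one event.

    Returns (number of events, list of run lengths in frames)."""
    events, run_lengths = 0, []
    run = 1
    for prev, cur in zip(frames, frames[1:]):
        if cur == prev:
            run += 1
        else:
            if run >= run_threshold:
                events += 1
                run_lengths.append(run)
            run = 1
    if run >= run_threshold:  # a run may end at the last frame
        events += 1
        run_lengths.append(run)
    return events, run_lengths


# Two stalls: one of 6 identical frames, one of 5.
frames = ["a"] * 6 + ["b"] + ["c"] * 5 + ["d"]
events, run_lengths = count_predicted_stucks(frames, run_threshold=5)
```

The returned counts and run lengths are exactly the related stuck data that the stuck determination condition examines next.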
[0057] It should be noted that, in the prior art, the method of judging the stuck state is to determine a stuck state when the similarity of two adjacent screenshots is higher than a set threshold. However, when the two adjacent screenshots come from normal playback in which the images change very little, the similarity will also exceed the threshold, resulting in stuck misjudgment.
[0058] In this embodiment of the application, in order to solve the problem of stuck misjudgment, a stuck event is only counted when a plurality of consecutive image frames of the playing area to be tested are identical, which can effectively avoid stuck misjudgment caused by small image changes.
[0059] In an optional embodiment of the present application, determining that the currently playing video has a stuck state may include: counting the related stuck data of the predicted stuck state, wherein the related stuck data includes the number of predicted stuck occurrences and the duration of each single predicted stuck; if the number of predicted stuck occurrences exceeds the preset stuck count threshold, determining that the currently played video has a stuck state; and/or, if the target single predicted stuck duration among the single predicted stuck durations exceeds the preset stuck duration threshold, determining that the currently played video is stuck.
[0060] Among them, the number of predicted stuck occurrences is the number of times the predicted stuck state appears, and a single predicted stuck duration is the length of time one predicted stuck state lasts. The preset stuck count threshold can be set according to actual requirements, such as 3, 6 or 8, and the preset stuck duration threshold can likewise be set according to actual requirements, such as 2 seconds, 3 seconds or 5 seconds; the embodiment of the present application does not limit their specific values. The target single predicted stuck duration can be the longest single predicted stuck duration.
[0061] It can be understood that, in some cases, although a predicted stuck state may appear in the currently playing video, it often has little impact on the playing quality and may not even be noticed by viewers. Therefore, in order to ensure the accuracy of the stuck test, after the predicted stuck state is determined, whether the currently playing video actually has a stuck state can be decided according to the related stuck data. Specifically, when the number of predicted stuck occurrences exceeds the preset stuck count threshold, the video has stalled many times, and it can be determined that the currently played video is stuck; when the target single predicted stuck duration exceeds the preset stuck duration threshold, a certain stall has lasted too long, and it can likewise be determined that the currently played video is stuck.
[0062] In the above scheme, after it is determined that the currently played video has a predicted stuck state, whether it has a stuck state is further judged according to the number of predicted stuck occurrences and the single predicted stuck durations, which avoids misjudgment caused by momentary stalls.
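The determination rule can be sketched directly: the video is declared stuck when the predicted stuck count exceeds a preset count threshold and/or the longest single predicted stuck duration exceeds a preset duration threshold. The default thresholds (3 occurrences, 2 seconds) are example values from the text.

```python
def has_stuck_state(predicted_count, single_durations,
                    count_threshold=3, duration_threshold=2.0):
    """Decide the stuck state from the related stuck data.

    predicted_count: number of predicted stuck occurrences.
    single_durations: duration in seconds of each single predicted stuck.
    """
    too_many = predicted_count > count_threshold
    too_long = bool(single_durations) and max(single_durations) > duration_threshold
    return too_many or too_long


# One short stall: not stuck. One 3-second stall: stuck. Four stalls: stuck.
ok = has_stuck_state(1, [0.5])
long_stall = has_stuck_state(1, [0.5, 3.0])
many_stalls = has_stuck_state(4, [0.5, 0.5, 0.5, 0.5])
```

A single short predicted stuck thus never triggers the stuck state, which is exactly the misjudgment the paragraph above aims to avoid.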
[0063] In an optional embodiment of the present application, the determination of the playing state of the currently playing video according to the high similarity matching result may include: counting the number of different image frames in the image frames of the playing area to be tested; If the number of different image frames in the image frames of the area to be tested exceeds the playing setting threshold within a unit preset time, it is determined that the currently played video is in the playing state.
[0064] Among them, different image frames can be image frames with image similarity less than the set similarity threshold. The unit preset time can be set according to actual requirements, such as 1 second, etc. The embodiment of the present application does not limit the specific value of the unit preset time.
[0065] Accordingly, determining the playing state of the currently playing video can be: counting the number of different image frames in the image frames of the playing area to be tested. If the number of different image frames in the image frames of the play area to be tested exceeds the play setting threshold within a unit preset time, such as more than 30 frames within one second (one second can include 50 images in total), it is determined that the currently played video is in the play state.
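The playing-state rule can be sketched as counting, within a one-second window, how many frames differ from their predecessor. The 50-frames-per-second window and the 30-changed-frames threshold mirror the example in the text.

```python
def is_playing(frames, fps=50, changed_threshold=30):
    """Playing state: within one second (fps frames), more than
    changed_threshold frames differ from the preceding frame."""
    window = frames[:fps]
    changed = sum(1 for prev, cur in zip(window, window[1:]) if cur != prev)
    return changed > changed_threshold


# 50 distinct frames -> clearly playing; 50 identical frames -> not.
playing = is_playing(list(range(50)))
frozen = is_playing(["same"] * 50)
```

Because the decision rests on many frames rather than two screenshots, slow-changing but playing video still crosses the threshold.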
[0066] It should be noted that in the prior art, when testing videos, the method of judging the playing state is that when the similarity of two screen images captured at different time intervals is lower than a certain threshold, the playing state is determined. However, when the time interval between the above two screenshots is short, and they are in the normal playing state, but the changes in the images are small, the similarity will be higher than the threshold, and the playing state will be determined as the pause state at this time, that is, the misjudgment of playing will occur.
[0067] According to the technical scheme, the currently playing video is determined to be in the playing state based on the fact that a plurality of image frames of the playing area to be tested, corresponding to the recorded video to be tested, are different image frames. This effectively solves the problem of playing misjudgment that arises when the playing state is tested using only two screen pictures captured at different times.
[0068] In an optional embodiment of the present application, determining the pause state of the currently playing video according to the high similarity matching result may include: counting the number of the same image frames in the image frames of the playing area to be tested; If the ratio of the same image frame to the image frame of the playing area to be tested exceeds a pause setting threshold, it is determined that the currently played video is in a pause state.
[0069] Among them, the pause setting threshold can be set according to actual requirements, such as 98% or 99%, that is, the closer the pause setting threshold is to 1, the better the effect. The embodiment of the present application does not limit the specific value of the pause setting threshold.
[0070] Accordingly, determining the pause state of the currently playing video can be: counting the number of the same image frames in the image frames of the playing area to be tested. If the ratio of the same image frame to the image frame of the playing area to be tested exceeds the pause setting threshold, such as 99%, it is determined that the currently played video is in a pause state. Considering that some video images may have pause prompts, such as advertisement pop-ups, during the pause process, it is not appropriate to set the pause threshold to 100% to avoid the situation that the pause state cannot be recognized.
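The pause rule can be sketched as checking whether the most common playing-area frame accounts for more than the pause setting threshold (99% in the example above) of all frames, which, as noted, deliberately tolerates a brief advertisement pop-up.

```python
from collections import Counter


def is_paused(frames, pause_threshold=0.99):
    """Pause state: one frame value dominates more than
    pause_threshold of all playing-area frames."""
    if not frames:
        return False
    most_common = max(Counter(frames).values())
    return most_common / len(frames) > pause_threshold


# 199 identical paused frames plus one ad pop-up frame -> still paused.
paused = is_paused(["paused"] * 199 + ["ad_popup"])
playing = is_paused(list(range(100)))
```

Setting `pause_threshold` to exactly 1.0 would make a single pop-up frame hide the pause, which is why the text advises keeping it just below 100%.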
[0071] It should be noted that in the prior art, when testing videos, the method of judging the pause state is that when the similarity of two screen images captured at different time intervals is higher than a certain threshold, the pause state is determined. However, when the time interval between the above two screenshots is short, when the first screenshot is just paused, and the second screenshot is paused with advertisement pop-up, the similarity will be lower than the threshold, and then the pause state will be determined as the playing state, that is, the pause misjudgment will occur.
[0072] According to the technical scheme, by determining that the currently played video is in the pause state according to the ratio of identical image frames among the image frames of the playing area to be tested, the problem of pause misjudgment caused by a brief screen switch can be effectively solved.
[0073] It can be seen that the above methods for determining the stuck state, playing state and pause state of the currently playing video all use the full set of image frames of the playing area to be tested as the basis, not just one or two of them, which can effectively reduce the test error and avoid the interference of incidental abnormal factors.
[0074] S242, image recognition is performed on the image frames of each playing area to be tested, and the black screen state and abnormal state of the currently playing video are determined according to the image recognition result.
[0075] Specifically, when testing the black screen state and abnormal state of the image frames in the play area to be tested, image recognition can be performed on each play area image frame to be tested, and the black screen state and abnormal state of the currently played video can be determined according to the image recognition result. Among them, the image recognition can be to recognize the brightness of the image, or to recognize the specific logo in the image, which is not limited by the embodiment of the present application.
[0076] In an optional embodiment of the present application, the image recognition of each image frame of the to-be-tested playing area may include: performing brightness recognition on a set number of the last image frames of the to-be-tested playing area to determine their brightness. Determining the black screen state of the currently playing video according to the image recognition result may include: determining the number of black screen image frames among the last image frames according to their brightness; and if the number of consecutive black screen image frames among the last image frames exceeds the black screen setting threshold, determining that the currently played video has a black screen state.
[0077] Among them, the set number can be set according to actual requirements, such as 10, 20 or 30, etc., and the embodiment of this application does not limit the specific value of the set number. The last image frame of the playing area to be tested is the image frame at the back of the playing area to be tested, such as the last 20 images in the playing area to be tested. The last black screen image frame is also the black screen image frame in the last image frame of the playing area to be tested. The black screen setting threshold can be set according to actual requirements, such as 8, 18 or 27, etc., and the embodiment of this application does not limit the specific value of the black screen setting threshold. However, it can be understood that the value of the threshold set by the black screen is smaller than the value of the set number.
[0078] Accordingly, determining the black screen state of the currently playing video can be as follows: recognizing the brightness of a set number of the last image frames in the playing area to be tested, and determining the brightness of the last image frames in the playing area to be tested, so as to determine the number of the last black screen image frames included in the last image frames according to the brightness of the last image frames in the playing area to be tested. If it is determined that the number of the last consecutive black screen image frames exceeds the black screen setting threshold, it is determined that the currently played video is in a black screen state.
[0079] It should be noted that in the prior art, when testing videos, the method of judging the black screen state is to directly intercept a screen picture, and judge whether it is black screen or not according to the brightness value of the intercepted screen picture. However, the above-mentioned black screen testing methods are easily affected by video resources. For example, if the currently playing video itself has the effect of black screen transition at the beginning stage, when the intercepted screen is one of the black screen transition images, the normal black transition images will be misjudged as black screen images, that is, the black screen misjudgment will occur.
[0080] According to the technical scheme, the black screen test is carried out according to the number of black screen image frames included in the last image frames of the playing area to be tested, so that black screen misjudgment caused by special display effects of the video resource can be effectively avoided. For example, even if the recorded video to be tested is the first 20 seconds of the currently played video, testing only the last few image frames of the playing area to be tested effectively keeps image frames with special display effects in the opening frames out of the black screen test. That is, the black screen test is based only on a subset of the image frames of the playing area to be tested, namely the last ones.
[0081] In an optional embodiment of the present application, the image recognition of each image frame of the to-be-tested playing area may include: recognizing abnormal marks in each image frame of the to-be-tested playing area, wherein an abnormal mark comprises an abnormal image and/or abnormal key text. Determining the abnormal state of the currently playing video according to the image recognition result may include: if it is determined that an abnormal mark exists in an image frame of the to-be-tested playing area, determining that the currently playing video is in an abnormal state.
[0082] Among them, the abnormal logo is the logo that prompts the video to appear abnormal. Optionally, the anomaly identification may include an anomaly image and/or an anomaly key text. Among them, the abnormal image can be an abnormal pop-up image, and the abnormal key text can be "abnormal playing", "unable to obtain video resources" or "unable to play the current video", etc. The embodiment of the present application does not limit the specific content of the abnormal image and the abnormal key text.
[0083] Accordingly, determining the abnormal state of the currently playing video can be as follows: carrying out image recognition on abnormal marks such as abnormal images and/or abnormal key texts of the image frames of each to-be-tested playing area. If it is determined that there is an abnormal mark in the image frame of the playing area to be tested, it is determined that the currently played video is in an abnormal state.
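Recognizing abnormal marks can be sketched as checking each frame's recognized content against the abnormal key texts listed above. Producing the recognized text (via OCR) and any matched pop-up templates is assumed to happen in an earlier step not shown here; the interface below is hypothetical.

```python
ABNORMAL_KEY_TEXTS = (
    "abnormal playing",
    "unable to obtain video resources",
    "unable to play the current video",
)


def frame_has_abnormal_mark(recognized_text, matched_templates=()):
    """An abnormal mark is abnormal key text found in the frame's OCR
    output and/or a matched abnormal image template (e.g. error pop-up)."""
    text_hit = any(key in recognized_text for key in ABNORMAL_KEY_TEXTS)
    return text_hit or bool(matched_templates)


def video_is_abnormal(frame_texts):
    """The video is abnormal if any playing-area frame carries a mark,
    so even an instantaneous abnormality is caught."""
    return any(frame_has_abnormal_mark(text) for text in frame_texts)


# One momentary error message among otherwise clean frames is enough.
abnormal = video_is_abnormal(["", "unable to play the current video", ""])
clean = video_is_abnormal(["", "", ""])
```

Scanning every recorded frame is what lets this test catch an error pop-up that appears for only an instant.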
[0084] It should be noted that, in the prior art, the method of judging the abnormal state is to directly capture one screen picture and perform image recognition on it to judge whether an abnormal mark is present. However, this abnormality testing method easily misses instantaneous abnormal situations, leading to missed abnormality detection.
[0085] According to the technical scheme, the abnormal state is tested by recognizing abnormal marks in each image frame of the playing area to be tested corresponding to the recorded video to be tested, so that the omission of instantaneous abnormalities can be effectively avoided and the precision of the abnormality test improved.
[0086] S243: according to the failure priority, the video test data of the image frames of each to-be-tested playing area are summarized to obtain initial failure analysis data.
[0087] S244: simultaneously outputting the initial fault analysis data and the video test data.
[0088] Among them, the fault priority is the priority defined for each fault state. Illustratively, it is assumed that the fault state includes a stuck state, a black screen state and an abnormal state. Among them, the black screen state has the highest fault priority, which is the first priority, the stuck state has the second highest fault priority, and the abnormal state has the lowest fault priority, which is the third priority. The initial fault analysis data is the fault conclusion obtained by summarizing and analyzing the video test data according to the fault priority.
[0089] In the embodiment of the application, after the video test data is obtained by testing the image frame of the playing area to be tested, the video test data can be summarized according to the failure priority to obtain the initial failure analysis data. For example, suppose the video test data are: black screen appears once, stuck once, and stuck time is 2 seconds. Because the failure priority of black screen is higher than that of stuck, and the stuck time is short, the initial failure analysis data can be: video failure: black screen. Accordingly, after the initial fault analysis data is obtained, the initial fault analysis data and video test data can be output at the same time for reference. Among them, the initial fault analysis data can be used as the summary data of the complete video test data to provide fast video test results, while the video test data is the completed test data.
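The summarization step can be sketched as picking the highest-priority detected fault as the initial fault analysis conclusion. The priority order (black screen first, then stuck, then abnormal) and the output wording follow the example in the text; the function names are illustrative.

```python
# Lower number = higher fault priority (first, second, third priority).
FAULT_PRIORITY = {"black screen": 1, "stuck": 2, "abnormal": 3}


def summarize_faults(detected_faults):
    """Initial fault analysis data: the highest-priority detected fault."""
    if not detected_faults:
        return "no fault"
    top = min(detected_faults, key=FAULT_PRIORITY.__getitem__)
    return "video failure: " + top


# Black screen outranks a short stuck, as in the example.
summary = summarize_faults(["stuck", "black screen"])
```

The summary gives a fast headline conclusion, while the full video test data is still output alongside it for reference.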
[0090] According to the technical scheme, the image frames of the playing area to be tested are tested to determine the stuck state, playing state, pause state, black screen state and abnormal state of the currently played video. After the video test data is obtained, the video test data of all the image frames of the playing area to be tested is summarized according to the fault priority to obtain the initial fault analysis data, so that the initial fault analysis data and the video test data can be output for reference at the same time, and the precision and accuracy of the video test can be improved.
[0091] In one example, Figure 3 is a structural diagram of a video testing device provided by an embodiment of the present application. The embodiment of the present application can be applied to the case of testing a video by using image frames to be tested that only include a video playing area. The device is realized by software and/or hardware, and is specifically configured in an electronic device. The electronic device may be a computer device or the like.
[0092] As shown in Figure 3, a video testing device 300 includes a recorded to-be-tested video acquisition module 310, a to-be-tested image frame acquisition module 320, a to-be-tested playing area image frame acquisition module 330 and a video testing module 340. Among them:
[0093] The recorded to-be-tested video acquisition module 310 is used for screen-recording the currently played video to obtain the recorded video to be tested;
[0094] The to-be-tested image frame acquisition module 320 is used for framing the recorded video to be tested to obtain a plurality of image frames to be tested;
[0095] The to-be-tested playing area image frame acquisition module 330 is used for cropping the video playing area of the image frames to be tested to obtain a plurality of image frames of the playing area to be tested;
[0096] The video testing module 340 is used for testing the image frames of the playing area to be tested and outputting video test data.
[0097] According to the embodiment of the application, after the recorded video to be tested is obtained by screen recording, the recorded video to be tested is subjected to framing processing to obtain a plurality of image frames to be tested; the video playing area of the image frames to be tested is then cropped to obtain a plurality of image frames of the playing area to be tested, which are tested so that video test data is output. This solves the problem of low test precision and accuracy in existing video test methods, and improves the precision and accuracy of video testing.
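The framing-and-cropping pipeline above can be sketched as follows. This is an illustrative sketch, not the patented implementation: frames are modeled as 2-D lists of pixel values, and the playing-area coordinates are assumptions chosen for the example.

```python
# Hedged sketch of the pipeline: the recorded video has already been split
# into full-screen frames; each frame is cropped to the video playing area.

def crop_to_play_area(frame, top, left, height, width):
    """Cut the video playing area out of one full-screen image frame."""
    return [row[left:left + width] for row in frame[top:top + height]]

def frames_to_play_area(frames, region):
    """Apply the crop to every frame extracted from the recorded video."""
    top, left, height, width = region
    return [crop_to_play_area(f, top, left, height, width) for f in frames]

# Example: three 4x4 frames whose playing area is the central 2x2 region.
frames = [[[r * 10 + c for c in range(4)] for r in range(4)] for _ in range(3)]
play_frames = frames_to_play_area(frames, (1, 1, 2, 2))
print(play_frames[0])  # -> [[11, 12], [21, 22]]
```

In practice the frames would come from a screen-recording tool and the playing-area region from the player layout; only the cropped play-area frames are passed to the tests that follow.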
[0098] Optionally, the video testing module 340 is specifically used for performing high-similarity matching on each image frame of the playing area to be tested, and determining the stuck state, playing state and pause state of the currently played video according to the high-similarity matching result.
[0099] Optionally, the video testing module 340 is specifically used for counting the number of identical image frames among the image frames of the playing area to be tested; if the number of identical image frames exceeds the pre-judged stuck setting threshold, it is determined that the currently played video has a pre-judged stuck state; and if the related stuck data of the pre-judged stuck state meets the stuck determination condition, it is determined that the currently played video has a stuck state.
[0100] Optionally, the video testing module 340 is specifically used for counting the related stuck data of the pre-judged stuck state, wherein the related stuck data includes the number of pre-judged stuck occurrences and the duration of each single pre-judged stuck occurrence; if the number of pre-judged stuck occurrences exceeds the preset stuck count threshold, it is determined that the currently played video has a stuck state; and/or, if a target single pre-judged stuck duration among the single pre-judged stuck durations exceeds the preset stuck duration threshold, it is determined that the currently played video has a stuck state.
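The stuck test described in the two paragraphs above can be sketched as follows. This is an illustrative sketch, not the claimed method: frames are simplified to comparable values, and all threshold names and values are assumptions.

```python
# Hedged sketch of the stuck test: runs of identical consecutive play-area
# frames are pre-judged stuck occurrences; a stuck state is confirmed when
# the number of occurrences, or any single occurrence's duration (in
# frames), exceeds its threshold.

def find_stalls(frames, min_run=3):
    """Return the length of each run of identical consecutive frames."""
    runs, run = [], 1
    for prev, cur in zip(frames, frames[1:]):
        if cur == prev:
            run += 1
        else:
            if run >= min_run:
                runs.append(run)
            run = 1
    if run >= min_run:
        runs.append(run)
    return runs

def is_stuck(frames, min_run=3, max_stalls=2, max_single=5):
    """Stuck if too many stalls, or any single stall lasts too long."""
    stalls = find_stalls(frames, min_run)
    return len(stalls) > max_stalls or any(r > max_single for r in stalls)

# Example: a single 6-frame freeze exceeds the single-stall duration limit.
seq = [1, 2, 3, 3, 3, 3, 3, 3, 4, 5]
print(is_stuck(seq))  # -> True
```

With a known frame rate, a run length converts directly to a stall duration in seconds, matching the "2 seconds" example in paragraph [0089].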
[0101] Optionally, the video testing module 340 is specifically used for counting the number of different image frames among the image frames of the playing area to be tested; if the number of different image frames among the image frames of the playing area to be tested exceeds the playing setting threshold within a unit preset time, it is determined that the currently played video is in the playing state.
[0102] Optionally, the video testing module 340 is specifically used for counting the number of identical image frames among the image frames of the playing area to be tested; if the ratio of the identical image frames to the image frames of the playing area to be tested exceeds a pause setting threshold, it is determined that the currently played video is in a pause state.
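The playing and pause tests above can be sketched as follows. This is an illustrative sketch under simplifying assumptions: frames are reduced to comparable values, and the per-second distinct-frame threshold and pause ratio are invented for the example.

```python
# Hedged sketch of the playing/pause tests on play-area frames.

def is_playing(frames, per_second, min_distinct_per_sec=2):
    """Playing: distinct frames per second exceed the playing threshold."""
    seconds = max(1, len(frames) // per_second)
    distinct = len(set(frames))
    return distinct / seconds >= min_distinct_per_sec

def is_paused(frames, pause_ratio=0.9):
    """Paused: identical frames dominate the play-area frames."""
    most_common = max(frames.count(f) for f in set(frames))
    return most_common / len(frames) >= pause_ratio

frames = [7] * 19 + [8]                   # 19 of 20 frames are identical
print(is_paused(frames))                  # -> True (ratio 0.95 >= 0.9)
print(is_playing(frames, per_second=10))  # -> False (2 distinct over 2 s)
```

In a real pipeline "identical" would be replaced by the high-similarity matching of paragraph [0098] rather than exact equality.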
[0103] Optionally, the video testing module 340 is specifically used to perform image recognition on each image frame of the playing area to be tested, and determine the black screen state and abnormal state of the currently playing video according to the image recognition result.
[0104] Optionally, the video testing module 340 is specifically used for: performing image recognition on the brightness of a set number of the last image frames of the playing area to be tested, and determining the brightness of these last image frames; determining the number of last black screen image frames according to the brightness of the last image frames of the playing area to be tested; and if it is determined that the number of the last continuous black screen image frames exceeds the black screen setting threshold, determining that the currently played video has a black screen state.
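The black screen test above, applied only to the last frames, can be sketched as follows. This is an illustrative sketch: frames are modeled as 2-D lists of grayscale values, and the tail length, darkness threshold and run threshold are assumed values.

```python
# Hedged sketch of the black screen test: reduce each of the last `tail`
# play-area frames to a mean brightness, then look for a long enough run of
# continuously dark frames. Testing only the tail avoids misjudging videos
# whose opening frames are intentionally dark.

def mean_brightness(frame):
    """Average grayscale value of one play-area frame."""
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def has_black_screen(frames, tail=5, dark_below=10, min_dark_run=3):
    """Check only the last `tail` frames for a continuous dark run."""
    dark = [mean_brightness(f) < dark_below for f in frames[-tail:]]
    run = 0
    for d in dark:
        run = run + 1 if d else 0
        if run >= min_dark_run:
            return True
    return False

bright = [[200, 200], [200, 200]]
black = [[0, 0], [0, 0]]
print(has_black_screen([bright] * 5 + [black] * 4))  # -> True
```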
[0105] Optionally, the video testing module 340 is specifically used for: performing image recognition on the abnormal marks of the image frames of the playing area to be tested, wherein the abnormal marks comprise abnormal images and/or abnormal key texts; and if it is determined that an abnormal mark exists in an image frame of the playing area to be tested, determining that the currently played video is in an abnormal state.
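The abnormal key text branch of the test above can be sketched as follows. This is an illustrative sketch, not the claimed method: a real system would run image matching or OCR on each frame, so here each frame is represented directly by its recognized text, and the marker strings are invented for the example.

```python
# Hedged sketch of the abnormal-state test via abnormal key texts.
ABNORMAL_KEY_TEXTS = {"playback error", "network error", "loading failed"}

def frame_is_abnormal(recognized_text):
    """True if any abnormal key text appears in one frame's OCR output."""
    text = recognized_text.lower()
    return any(key in text for key in ABNORMAL_KEY_TEXTS)

def video_is_abnormal(frame_texts):
    """A single abnormal frame marks the whole video abnormal, so even a
    momentary error screen between normal frames is not missed."""
    return any(frame_is_abnormal(t) for t in frame_texts)

texts = ["", "", "Playback error, tap to retry", "", ""]
print(video_is_abnormal(texts))  # -> True
```

Checking every frame rather than sampling is what lets the scheme catch instantaneous abnormalities, as paragraph [0085] notes.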
[0106] Optionally, the video testing module 340 is specifically used for: summarizing the video test data of each image frame of the playing area to be tested according to the fault priority to obtain initial fault analysis data;
[0107] and simultaneously outputting the initial fault analysis data and the video test data.
[0108] The above video testing device can execute the video testing method provided by any embodiment of this application, and has corresponding functional modules and beneficial effects. For technical details not described in detail in this embodiment, please refer to the video testing method provided by any embodiment of this application.
[0109] As the above video testing device is a device that can execute the video testing method of the embodiment of the present application, based on the video testing method introduced in the embodiment of the present application, those skilled in the art can understand the specific implementation of the video testing device of this embodiment and its various variations, so how the video testing device realizes the video testing method of the embodiment of the present application will not be described in detail here. Any device used by those skilled in the art to implement the video testing method of the embodiment of this application falls within the scope of this application.
[0110] In an example, the application also provides an electronic device and a readable storage medium.
[0111] Figure 4 is a schematic structural diagram of an electronic device used to implement the video testing method of the embodiment of the present application, that is, a block diagram of an electronic device of a video testing method according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices can also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices and other similar computing devices. The components shown herein, their connections and relationships, and their functions are only examples, and are not intended to limit the implementation of the present application described and/or claimed herein.
[0112] As shown in Figure 4, the electronic device includes one or more processors 401, a memory 402, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The components are connected to each other by different buses, and can be installed on a common motherboard or in other ways as required. The processor can process instructions executed in the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, if necessary, multiple processors and/or multiple buses can be used with multiple memories. Similarly, multiple electronic devices can be connected, with each device providing some of the necessary operations (for example, as a server array, a group of blade servers, or a multiprocessor system). In Figure 4, one processor 401 is taken as an example.
[0113] The memory 402 is the non-transitory computer readable storage medium provided in this application. The memory stores instructions executable by at least one processor to enable the at least one processor to execute the video testing method provided by the present application. The non-transitory computer readable storage medium of the present application stores computer instructions for making a computer execute the video testing method provided by the present application.
[0114] As a non-transitory computer readable storage medium, the memory 402 can be used to store non-transitory software programs, non-transitory computer executable programs and modules, such as the program instructions/modules corresponding to the video testing method in the embodiment of the present application (for example, the recorded to-be-tested video acquisition module 310, the to-be-tested image frame acquisition module 320, the to-be-tested playing area image frame acquisition module 330 and the video testing module 340 shown in Figure 3). The processor 401 executes various functional applications and data processing of the server by running the non-transitory software programs, instructions and modules stored in the memory 402, that is, realizes the video testing method in the above method embodiment.
[0115] The memory 402 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and application programs required by at least one function, and the storage data area may store data created by the use of the electronic device that implements the video testing method, etc. In addition, the memory 402 may include a high-speed random access memory and a non-transitory memory, such as at least one magnetic disk memory device, flash memory device, or other non-transitory solid-state memory device. In some embodiments, the memory 402 may optionally include memories remotely located relative to the processor 401, and these remote memories may be connected through a network to the electronic device that implements the video testing method. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.
[0116] The electronic device implementing the video testing method may further include an input device 403 and an output device 404. The processor 401, the memory 402, the input device 403 and the output device 404 may be connected by a bus or other means; in Figure 4, a bus connection is taken as an example.
[0117] The input device 403 can receive input digit or character information, and generate key signal inputs related to user settings and function control of the electronic device implementing the video testing method; examples include a touch screen, keypad, mouse, track pad, touch pad, pointing stick, one or more mouse buttons, trackball, joystick and other input devices. The output device 404 may include a display device, an auxiliary lighting device (for example, an LED), a tactile feedback device (for example, a vibration motor), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light emitting diode (LED) display, and a plasma display. In some embodiments, the display device may be a touch screen.
[0118] Various embodiments of the systems and technologies described herein can be implemented in digital electronic circuit systems, integrated circuit systems, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that can be executed and/or interpreted on a programmable system including at least one programmable processor, which can be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
[0119] These computer programs (also referred to as programs, software, software applications, or code) include machine instructions of a programmable processor, and can be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disk, optical disk, memory, programmable logic device (PLD)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
[0120] To provide interaction with users, the systems and technologies described herein can be implemented on a computer, which has a display device (for example, CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to users; And a keyboard and a pointing device (for example, a mouse or a trackball) through which a user can provide input to a computer. Other kinds of devices can also be used to provide interaction with users; For example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); And the input from the user can be received in any form (including acoustic input, voice input or tactile input).
[0121] The systems and technologies described herein can be implemented in a computing system including a back-end component (e.g., as a data server), a computing system including a middleware component (e.g., an application server), a computing system including a front-end component (e.g., a user computer with a graphical user interface or a web browser through which users can interact with embodiments of the systems and technologies described herein), or a computing system including any combination of such back-end, middleware, or front-end components. The components of the system can be connected to each other by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include: local area networks (LAN), wide area networks (WAN) and the Internet.
[0122] A computer system can include a client and a server. The client can be a smart phone, laptop, desktop computer, tablet, smart speaker, etc., but is not limited to these. The server can be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud computing, cloud services, cloud databases and cloud storage. The client and server are generally remote from each other and usually interact through a communication network. The relationship between the client and the server is generated by computer programs running on the corresponding computers and having a client-server relationship with each other.
[0123] According to the embodiment of the application, after the recorded video to be tested is obtained by screen recording, the recorded video to be tested is subjected to framing processing to obtain a plurality of image frames to be tested; the video playing area of the image frames to be tested is then cropped to obtain a plurality of image frames of the playing area to be tested, which are tested so that video test data is output. This solves the problem of low test precision and accuracy in existing video test methods, and improves the precision and accuracy of video testing.
[0124] It should be understood that steps can be reordered, added, or deleted using the various forms of processes shown above. For example, the steps described in the present application can be executed in parallel, sequentially, or in different orders; as long as the desired results of the technical solutions disclosed in the present application can be achieved, no restriction is imposed here.
[0125] The above specific embodiments do not limit the scope of protection of this application. Those skilled in the art should understand that various modifications, combinations, subcombinations and substitutions can be made according to design requirements and other factors. Any modification, equivalent substitution and improvement made within the spirit and principle of this application shall be included in the scope of protection of this application.