Image processing method and device, storage medium and electronic equipment
An image processing technology, applied in the field of image processing, which solves the problems of low correction accuracy in shading correction algorithms and of color shading, and achieves the effect of improved accuracy and precision.
Pending Publication Date: 2022-05-13
GUANGDONG OPPO MOBILE TELECOMM CORP LTD
AI-Extracted Technical Summary
Problems solved by technology
In addition, because different colors have different wavelengths, their angles of refraction through the lens also differ,...
Method used
As can be seen from the above, the embodiment of the present application provides an electronic device. For the latest frame image in the image frame sequence output by the camera, the electronic device calculates the first initial correction parameter table of the image according to the shading correction algorithm, processes the first initial correction parameter table to improve the accuracy of the correction parameters, and corrects the current frame image using the processed first target correction parameter table, thereby improving the accuracy of image lens shading correction.
As can be seen from the above, the image processing device proposed in the embodiment of the present application, for the latest frame image in the image frame sequence output by the camera, calculates the first initial correction parameter table of the image according to the shading correction algorithm, processes the first initial correction parameter table to improve the accuracy of the correction parameters, and corrects the current frame image using the processed first target correction parameter table, thereby improving the accuracy of image lens shading correction.
As can be seen from the above, the image processing method proposed by the embodiment of the present invention, for the latest frame image in the image frame sequence output by the camera, calculates a plurality of first initial correction parameter tables of the image according to the shading correction algorithm, corrects each of these tables based on the change trend between adjacent correction parameter tables to eliminate calculation errors, and then interpolates the corrected tables to expand the amount of information contained in the correction parameters. The current frame image is corrected using the first target correction parameter table obtained by the correction and interpolation processing, which improves the accuracy of image lens shading correction.
As can be seen from the above, the image processing method proposed by the embodiment of the present invention, for the latest frame image in the image frame sequence output by the camera, calculates the first initial correction parameter table of the image according to the shading correction algorithm, interpolates it using the initial correction parameter tables of the historical frame images before the current frame to expand the amount of information contained in the correction parameters, then corrects the interpolated table based on the change trend between adjacent correction parameter tables to eliminate calculation errors, and corrects the current frame image using the first target correction parameter table obtained by the interpolation and correction processing, thereby improving the accuracy of image lens shading correction.
Abstract
The embodiment of the invention discloses an image processing method and device, a storage medium and electronic equipment. The method comprises the steps of: obtaining the latest frame of image in an image frame sequence output by a camera and taking it as the current frame image; calculating a first initial correction parameter table of the current frame image according to a shading correction algorithm; processing the first initial correction parameter table to obtain a first target correction parameter table; and correcting the current frame image according to the first target correction parameter table. On this basis, the first initial correction parameter table is processed to improve the accuracy of the correction parameters, and the processed first target correction parameter table is used to correct the current frame image, so that the accuracy of image lens shading correction is improved.
Examples
- Experimental program (1)
Example Embodiment
[0026] The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, but not all of the embodiments. Based on the embodiments in this application, all other embodiments obtained by those skilled in the art without creative efforts shall fall within the protection scope of this application.
[0027] Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor to a separate or alternative embodiment that is mutually exclusive of other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein may be combined with other embodiments.
[0028] The embodiment of the present application provides an image processing method. The execution body of the image processing method may be the image processing apparatus provided by the embodiment of the present application, or an electronic device integrated with the image processing apparatus, wherein the image processing apparatus may be implemented in hardware or software. The electronic device may be a smart phone, a tablet computer, a palmtop computer, a notebook computer, a desktop computer, or another such device.
[0029] See Figure 1a, which is a first schematic flowchart of the image processing method provided by the embodiment of the present application. The specific process of the image processing method provided by the embodiment of the present application may be as follows:
[0030] In 101, the latest frame image in the image frame sequence output by the camera is acquired as the current frame image.
[0031] The image processing solution of the present application can be applied to scenes such as photographing or video recording. For example, in a photographing scene, after the camera is started, it collects images according to preset exposure parameters and exposure time intervals, and the collected images are processed and displayed in the viewfinder frame, the continuous image frame sequence constituting the preview picture. After the user triggers a photographing instruction, the latest frame image in the image frame sequence can be processed and then output. For another example, in a video recording scene, after the camera is started and a recording instruction is received, the camera collects images according to preset exposure parameters and frame rates and outputs a video composed of a continuous sequence of image frames. Whether it is the photographing preview or the video output, it is essentially the output of an image frame sequence, and each frame in the image frame sequence can be processed according to the solution of the embodiments of the present application to eliminate luminance shading and color shading.
[0032] After the camera is started, it collects images according to preset exposure parameters and exposure time intervals, and each time a frame of image is output, the latest frame of image can be used as the current frame of image.
[0033] In 102, a first initial correction parameter table of the current frame image is calculated according to a shading correction algorithm.
[0034] After the current frame image is acquired, the first initial correction parameter table of the frame image is calculated according to the shading correction algorithm, wherein the correction parameters may also be called LSC (Lens Shading Correction) information, that is, information used for LSC processing. For example, using the grid correction method, assuming that the resolution of the current frame image is M*N, the current frame image is divided according to an m*n grid, so as to divide the current frame image into multiple grid areas. The pixel data in each grid area is used to calculate the correction parameters corresponding to that grid area, and the correction parameters of all grid areas form a correction parameter table with m rows and n columns. The correction parameter table calculated from the current frame image is recorded as the first initial correction parameter table, and the correction parameter table calculated from a historical frame image before the current frame image in the image frame sequence is recorded as a second initial correction parameter table. In addition, hereinafter, the processed first initial correction parameter table is denoted as the first target correction parameter table, and the processed second initial correction parameter table is denoted as the second target correction parameter table. When neither "first" nor "second" is indicated, the term refers to correction parameter tables in general.
[0035] It should be noted that, when calculating the correction parameter table of an image, a correction parameter table is obtained by calculating the data of each channel separately. For example, if the format of the current frame image is RAW, the image includes the following four channels: the R (red) channel, the GR (green-red) channel, the GB (green-blue) channel, and the B (blue) channel. According to the pixel data of each channel, the correction parameter table corresponding to that channel is calculated according to the shading correction algorithm. Assuming m=17 and n=13, a first initial correction parameter table of size 17*13*4 is obtained after calculation.
[0036] Here, a RAW image is the original data obtained when the image sensor converts the captured light signal into a digital signal; it is an unprocessed and uncompressed format. In other embodiments, the images in the image frame sequence may also be images in other formats, such as the YUV format.
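To make the grid calculation in [0034]-[0035] concrete, the following is a minimal Python sketch of computing per-channel grid correction parameters from a RAW frame. The RGGB Bayer layout, the gain formula (center-cell mean divided by each cell's mean), and all function names are illustrative assumptions; the patent itself does not specify them.

```python
import numpy as np

def split_bayer_channels(raw):
    """Split an RGGB Bayer mosaic into R, GR, GB, B sub-images (assumed layout)."""
    return {
        "R":  raw[0::2, 0::2],
        "GR": raw[0::2, 1::2],
        "GB": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }

def grid_correction_table(channel, m=17, n=13):
    """Compute an m*n table of gain factors for one channel.

    Each grid cell's gain is the mean of the image-center cell divided by the
    cell's own mean, so that multiplying by the gain flattens the shading.
    This gain model is an illustrative assumption, not the patent's formula.
    """
    h, w = channel.shape
    ys = np.linspace(0, h, m + 1, dtype=int)
    xs = np.linspace(0, w, n + 1, dtype=int)
    means = np.empty((m, n), dtype=np.float64)
    for i in range(m):
        for j in range(n):
            means[i, j] = channel[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    center = means[m // 2, n // 2]
    return center / np.maximum(means, 1e-6)

def initial_correction_table(raw, m=17, n=13):
    """Stack the per-channel tables into an m*n*4 first initial table."""
    chans = split_bayer_channels(raw)
    return np.stack([grid_correction_table(chans[c], m, n)
                     for c in ("R", "GR", "GB", "B")], axis=-1)
```

With m=17 and n=13 this yields the 17*13*4 first initial correction parameter table mentioned above.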
[0037] In 103, the first initial correction parameter table is processed to obtain a first target correction parameter table.
[0038] After the first initial correction parameter table is acquired, it is processed. For example, the step of processing the first initial correction parameter table may include: performing correction processing on the first initial correction parameter table to eliminate errors in its correction parameters; or performing interpolation processing on the first initial correction parameter table to expand the amount of information contained in the correction parameters; or first performing correction processing on the first initial correction parameter table and then performing interpolation processing on the corrected table; or first performing interpolation processing on the first initial correction parameter table and then performing correction processing on the interpolated table.
[0039] In some embodiments, interpolation processing is performed on the first initial correction parameter table. For example, interpolation is performed across a plurality of first initial correction parameter tables of the current frame image to expand the correction parameters, such as expanding a 17*13*4 first initial correction parameter table into a 34*26*4 first target correction parameter table; or, interpolation processing is performed on the first initial correction parameter table according to the correction parameter tables of a plurality of historical frame images before the current frame image. The interpolation processing expands the LSC information, thereby improving the accuracy of shading correction.
[0040] For example, in some embodiments, the step of processing the first initial correction parameter table to obtain the first target correction parameter table includes:
[0041] a1. Obtain the second initial correction parameter tables corresponding to a plurality of historical frame images before the current frame image in the image frame sequence, wherein the plurality of second initial correction parameter tables and the first initial correction parameter table correspond to different cutting positions;
[0042] a2. According to the corresponding cutting positions, use the plurality of second initial correction parameter tables to perform interpolation processing on the first initial correction parameter table to obtain a first target correction parameter table.
[0043] In this embodiment, for consecutive images in the sequence of image frames, when calculating the initial correction parameter table, a plurality of preset cutting positions are alternately used to perform grid division on the images. For example, the preset four cutting positions are respectively the upper left part of the image, the upper right part of the image, the lower left part of the image, and the lower right part of the image.
[0044] Please refer to Figure 1b, which is a schematic diagram of the cutting method in the shading correction algorithm. Assume that the resolution of the acquired images is 1920×980 and that the images in the image frame sequence are P1, P2, P3, and so on. For P1, the x columns of pixels on the right edge of the image and the y rows of pixels on the lower edge are removed (the shaded part of P1 in Figure 1b); the remaining part is the upper left part of the image, with a resolution of (1920-x)×(980-y), and this remaining upper left part is divided according to the m*n grid to obtain m*n grid areas, on which the first initial correction parameter table is calculated. For P2, the x columns of pixels on the left edge of the image and the y rows of pixels on the lower edge are removed (the shaded part of P2 in Figure 1b); the remaining part is the upper right part of the image, with a resolution of (1920-x)×(980-y), and this remaining upper right part is divided according to the m*n grid to obtain m*n grid areas, on which the first initial correction parameter table is calculated, and so on. When calculating the initial correction parameter table of each frame image, the calculation cycles through the above four cutting positions, so that the initial correction parameter tables of every four adjacent images correspond to different cutting positions.
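As an illustration of the cyclic cutting described above, the sketch below crops a frame according to its cutting position, assuming x and y denote the number of edge columns and rows removed. The mapping of positions to cropped sides and the function names are assumptions for illustration only.

```python
CUT_POSITIONS = ("top_left", "top_right", "bottom_left", "bottom_right")

def crop_for_position(image, position, x, y):
    """Keep the part of the image named by `position`, discarding x edge
    columns and y edge rows on the opposite sides (illustrative sketch)."""
    h, w = image.shape[:2]
    if position == "top_left":       # drop right columns and bottom rows
        return image[:h - y, :w - x]
    if position == "top_right":      # drop left columns and bottom rows
        return image[:h - y, x:]
    if position == "bottom_left":    # drop right columns and top rows
        return image[y:, :w - x]
    if position == "bottom_right":   # drop left columns and top rows
        return image[y:, x:]
    raise ValueError(position)

def position_for_frame(frame_index):
    """Cycle through the four preset cutting positions frame by frame."""
    return CUT_POSITIONS[frame_index % 4]
```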
[0045] Based on the above different cutting positions, for the first initial correction parameter table of the current frame image, interpolation processing is performed using the second initial correction parameter tables of the three historical frame images before the current frame to obtain the first target correction parameter table, wherein the interpolation is performed according to the cutting positions corresponding to the four initial correction parameter tables. Assuming that the sizes of the first initial correction parameter table and the second initial correction parameter tables are all 17*13*4, a first target correction parameter table of size 34*26*4 is obtained after interpolation.
[0046] Next, in order to facilitate the description of the specific interpolation method, the image is divided with a coarser grid when calculating the correction parameter table, assuming m=n=4. Refer to Figure 1c, which is another schematic diagram of the cutting method in the shading correction algorithm. As shown in the figure, the four cutting positions corresponding to the four frames Pn-3, Pn-2, Pn-1, and Pn are the upper left part, the upper right part, the lower left part, and the lower right part of the image, respectively, and the initial correction parameter tables corresponding to the images Pn-3, Pn-2, Pn-1, and Pn are Ln-3, Ln-2, Ln-1, and Ln, respectively. Ln' is obtained by using Ln-3, Ln-2, and Ln-1 to interpolate Ln. As shown in the figure, the size of Ln' is 8×8, and any four adjacent correction parameters in Ln' come from Ln-3, Ln-2, Ln-1, and Ln, respectively.
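A possible implementation of the interleaving illustrated by Figure 1c is sketched below: four m*n tables, one per cutting position, are woven into a 2m*2n table so that every 2x2 block contains one entry from each table. Which table supplies which sub-position within a block is an assumed convention, not specified by the patent text.

```python
import numpy as np

def interleave_tables(l_tl, l_tr, l_bl, l_br):
    """Interleave four m*n(*c) correction tables, one per cutting position,
    into a 2m*2n(*c) table (illustrative sub-position convention)."""
    m, n = l_tl.shape[:2]
    out = np.empty((2 * m, 2 * n) + l_tl.shape[2:], dtype=l_tl.dtype)
    out[0::2, 0::2] = l_tl   # top-left cut
    out[0::2, 1::2] = l_tr   # top-right cut
    out[1::2, 0::2] = l_bl   # bottom-left cut
    out[1::2, 1::2] = l_br   # bottom-right cut
    return out
```

Called with four 17*13*4 tables, this returns the 34*26*4 table mentioned above; with four 4*4 tables it reproduces the 8×8 Ln' of Figure 1c.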
[0047] For another example, in another embodiment, the step of calculating the first initial correction parameter table of the current frame image according to the shading correction algorithm includes: calculating, according to the shading correction algorithm, a plurality of first initial correction parameter tables corresponding to the current frame image based on a plurality of different cutting positions. The step of processing the first initial correction parameter table to obtain the first target correction parameter table includes: performing interpolation processing on the plurality of first initial correction parameter tables according to the corresponding cutting positions to obtain the first target correction parameter table.
[0048] In this embodiment, instead of using the second initial correction parameter tables of historical frame images to interpolate the first initial correction parameter table of the current frame image, the current frame image alone is segmented at multiple different positions in the same way as in the previous embodiment, multiple different first initial correction parameter tables are obtained by calculation, and the four first initial correction parameter tables are interpolated to obtain the first target correction parameter table. The specific cutting positions and interpolation method are the same as those in the previous embodiment and are not repeated here.
[0049] The foregoing embodiments illustrate the implementation of the interpolation processing by way of example, and the following describes the solution of the correction processing.
[0050] In some embodiments, correction processing is performed on the first initial correction parameter table. The correction processing does not change the size of the first initial correction parameter table but corrects the parameters in the table to eliminate calculation errors. For example, the second target correction parameter table of the frame immediately before the current frame image is used to modify the first initial correction parameter table, such as by calculating the average of the two, to obtain the first target correction parameter table; or, the first initial correction parameter table is corrected according to the second target correction parameter tables of multiple historical frame images to obtain the first target correction parameter table. Since the camera continuously collects images and outputs multiple frames to form an image frame sequence, and the content of adjacent images in the sequence is similar and shows a certain trend of change, using the correction parameter tables of historical frame images to correct the correction parameter table of the current frame image can not only reduce the error of the first initial correction parameter table but also enrich the information it contains, improving the accuracy of the correction parameter table that is used and, in turn, the accuracy of lens shading correction on the image.
[0051] For example, in one embodiment, the step of processing the first initial correction parameter table to obtain the first target correction parameter table includes:
[0052] b1. Obtain the second target correction parameter table corresponding to the historical frame image located before the current frame image in the image frame sequence;
[0053] b2. Perform correction processing on the first initial correction parameter table according to the second target correction parameter table to obtain a first target correction parameter table.
[0054] In one shooting session, the initial correction parameter table and the corrected target correction parameter table of each frame image in the image frame sequence are stored in the cache. For the current frame image, after the first initial correction parameter table is calculated, the second target correction parameter table of the frame immediately before the current frame image in the image frame sequence is taken from the cache and used to modify the first initial correction parameter table. For example, the average of the correction parameters at each corresponding position in the second target correction parameter table and the first initial correction parameter table is calculated to obtain the first target correction parameter table. Alternatively, the difference between the correction parameters at each corresponding position in the two tables is calculated, and for the correction parameters whose difference is greater than a preset difference, their average value is calculated and substituted for the original correction parameter value at the corresponding position in the first initial correction parameter table, thereby obtaining the first target correction parameter table.
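The two correction strategies described in [0054] might be implemented as in the sketch below; the threshold value and the function names are illustrative assumptions.

```python
import numpy as np

def correct_by_average(first_initial, second_target):
    """Element-wise mean of the current initial table and the previous
    frame's target table."""
    return (first_initial + second_target) / 2.0

def correct_by_threshold(first_initial, second_target, max_diff=0.05):
    """Replace only the entries that deviate from the previous target table
    by more than `max_diff` with the element-wise mean.

    `max_diff` stands in for the preset difference; its value is illustrative.
    """
    corrected = first_initial.copy()
    mask = np.abs(first_initial - second_target) > max_diff
    corrected[mask] = (first_initial[mask] + second_target[mask]) / 2.0
    return corrected
```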
[0055] For another example, in another embodiment, the step of processing the first initial correction parameter table to obtain the first target correction parameter table includes: acquiring a first preset number of second target correction parameter tables corresponding to a plurality of historical frame images located before the current frame image in the image frame sequence.
[0056] Correction processing is performed on the first initial correction parameter table according to a plurality of second target correction parameter tables to obtain a first target correction parameter table, including:
[0057] c1. Calculate and obtain a prediction and correction parameter table according to a first preset number of second target correction parameter tables and a preset first time-series neural network model.
[0058] c2. Perform correction processing on the first initial correction parameter table according to the prediction correction parameter table to obtain a first target correction parameter table.
[0059] In this embodiment, the first time-series neural network model is used to learn the change trend between the consecutive correction parameter tables of consecutive multi-frame images captured of the same scene. While the first initial correction parameter table is calculated from the current frame image, a first preset number of second target correction parameter tables corresponding to the historical frame images before the current frame image in the image frame sequence are input into the pre-trained first time-series neural network model to obtain a prediction correction parameter table. For example, assuming the first preset number is M, the second target correction parameter tables corresponding to the M consecutive frames before the current frame image in the image frame sequence are obtained, giving M second target correction parameter tables, and these M tables are input into the pre-trained first time-series neural network model. In some embodiments, M may be 5 to 10, for example M=8. Correction processing is then performed on the first initial correction parameter table using the prediction correction parameter table to obtain the first target correction parameter table; for example, the average of the prediction correction parameter table and the first initial correction parameter table is calculated to obtain the first target correction parameter table.
[0060] Wherein, the first time series neural network model may be a neural network model capable of learning the changing trend between sequence data, such as a recurrent neural network model, a long short-term memory neural network model, or the like.
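As a rough illustration of such a time-series model, the following PyTorch sketch predicts the next correction parameter table from M previous tables with an LSTM. The framework, layer sizes, and the flattening of each table into a vector are assumptions; the patent only names the model family.

```python
import torch
import torch.nn as nn

class CorrectionTablePredictor(nn.Module):
    """Predict the next correction parameter table from M previous tables.

    Each table is flattened into a vector (e.g. 17*13*4 = 884 values); an LSTM
    consumes the sequence of M vectors and a linear head outputs the predicted
    next table. All sizes are illustrative assumptions.
    """

    def __init__(self, table_size=17 * 13 * 4, hidden_size=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=table_size, hidden_size=hidden_size,
                            batch_first=True)
        self.head = nn.Linear(hidden_size, table_size)

    def forward(self, tables):          # tables: (batch, M, table_size)
        _, (h_n, _) = self.lstm(tables)
        return self.head(h_n[-1])       # (batch, table_size)

# Example: predict from M = 8 previous target tables, then average with the
# current frame's first initial table to obtain the first target table.
model = CorrectionTablePredictor()
history = torch.randn(1, 8, 17 * 13 * 4)      # stand-in for cached tables
predicted = model(history).view(17, 13, 4)
first_initial = torch.randn(17, 13, 4)        # stand-in for the current table
first_target = (predicted + first_initial) / 2.0
```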
[0061] In addition, for the first few frames after camera startup, before there is enough data to train the first time-series neural network model, the first initial correction parameter table is corrected by averaging it with the second target correction parameter table of the previous frame. When there is enough historical frame data, training samples can be constructed using the second target correction parameter tables corresponding to the historical frame images, and the pre-built first time-series neural network model can be trained to determine the model parameters. During continued shooting after the first training is completed, the model parameters can be updated continuously or at intervals.
[0062] For example, as an implementation manner, before the step of acquiring the latest frame image in the image frame sequence output by the camera as the current frame image, the method further includes:
[0063] Acquiring a second preset number of second target correction parameter tables corresponding to a plurality of historical frame images in the image frame sequence; constructing several training samples from the second preset number of second target correction parameter tables; and using the training samples to train the first time-series neural network model to update the model parameters.
[0064] The second preset number is generally greater than or equal to the first preset number. For example, the second target correction parameter tables corresponding to N consecutive frames located before the current frame image in the image frame sequence are acquired, yielding N second target correction parameter tables, and training samples are constructed from these N tables, where, in some embodiments, N≥20; for example, in one embodiment, N=30. Taking one application scenario as an example, the camera module records video at a frame rate of 60 frames per second, so after 0.5 seconds of shooting, 30 frames of images and their corresponding 30 second target correction parameter tables are available. Every 11 consecutive second target correction parameter tables can be used as one training sample, in which the first 10 consecutive tables are the input data and the last table is the output data. The 1st to 11th tables constitute the first training sample, the 2nd to 12th tables constitute the second training sample, the 3rd to 13th tables constitute the third training sample, and so on, until the 20th to 30th tables constitute the 20th training sample; these 20 training samples are used to train the pre-built first time-series neural network model to determine the model parameters.
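The sliding-window construction in [0064] could be sketched as follows; with 30 cached tables and a window of 10 inputs plus 1 label it yields the 20 samples described above. The function name is illustrative.

```python
def build_training_samples(target_tables, window=10):
    """Build (input_sequence, label) pairs from cached target correction tables.

    Each sample uses `window` consecutive tables as input and the next table
    as the label; 30 tables with window=10 yield 20 samples.
    """
    samples = []
    for start in range(len(target_tables) - window):
        inputs = target_tables[start:start + window]
        label = target_tables[start + window]
        samples.append((inputs, label))
    return samples
```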
[0065] It can be understood that training the model with the corrected correction parameter tables is more accurate than training it with the initial correction parameter tables; in addition, the more training samples used, the more accurate the resulting model parameters. Therefore, as the shooting time grows longer and more historical frame images accumulate, each time one or more frames of images are acquired, the training samples can be rebuilt and the model retrained to update the model parameters.
[0066] For example, after shooting a 3-second video, there are already 180 frames in the image frame sequence, corresponding to 180 second target correction parameter tables; the 121st to 180th second target correction parameter tables can be used to construct 50 training samples, and these 50 training samples are used to retrain the pre-built first time-series neural network model to update the model parameters.
[0067] For another example, after the prediction correction parameter table is obtained, it is determined whether the error between the prediction correction parameter table and the first initial correction parameter table is greater than a preset threshold. If not, the model parameters do not need to be updated; if the error is greater than the preset threshold, the training samples are rebuilt and the model is retrained to update the model parameters.
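A minimal sketch of this retraining check is shown below; the use of mean absolute error and the threshold value are assumptions, since the patent does not specify the error measure.

```python
import numpy as np

def needs_retraining(predicted_table, first_initial_table, threshold=0.02):
    """Return True when the prediction error exceeds the preset threshold,
    signalling that training samples should be rebuilt and the model retrained.
    Mean absolute error and the threshold value are illustrative assumptions."""
    error = np.mean(np.abs(predicted_table - first_initial_table))
    return error > threshold
```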
[0068] In addition to performing only correction processing or only interpolation processing on the first initial correction parameter table as described above, in some embodiments the first initial correction parameter table may be corrected and then interpolated, or interpolated and then corrected, to obtain the first target correction parameter table. The specific correction and interpolation methods may refer to the solutions provided in the above embodiments.
[0069] In 104, correction processing is performed on the current frame image according to the first target correction parameter table.
[0070] After the interpolation processing and/or correction processing of the first initial correction parameter table is completed and the first target correction parameter table is obtained, the first target correction parameter table is used to correct the current frame image, so as to eliminate the luminance shading and color shading produced in the image by the optical characteristics of the lens.
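As an illustration of applying the first target correction parameter table to the current frame, the sketch below upsamples each channel's grid gains to per-pixel gains and multiplies them into the corresponding Bayer channel. The bilinear upsampling and the RGGB channel offsets are assumptions for illustration; the patent does not specify the per-pixel interpolation method.

```python
import numpy as np

def upsample_bilinear(table, shape):
    """Resize a small m*n gain table to the full channel resolution h*w."""
    m, n = table.shape
    h, w = shape
    col_coords = np.linspace(0, n - 1, w)
    row_coords = np.linspace(0, m - 1, h)
    # First interpolate each original row to the target width ...
    widened = np.stack([np.interp(col_coords, np.arange(n), table[i])
                        for i in range(m)], axis=0)          # (m, w)
    # ... then interpolate each column of the widened table to the target height.
    return np.stack([np.interp(row_coords, np.arange(m), widened[:, j])
                     for j in range(w)], axis=1)             # (h, w)

def apply_lens_shading_correction(raw, target_table):
    """Multiply each Bayer channel by per-pixel gains obtained by upsampling
    the corresponding plane of the target correction parameter table."""
    corrected = raw.astype(np.float64)
    offsets = {"R": (0, 0), "GR": (0, 1), "GB": (1, 0), "B": (1, 1)}
    for idx, name in enumerate(("R", "GR", "GB", "B")):
        dy, dx = offsets[name]
        channel = corrected[dy::2, dx::2]
        gains = upsample_bilinear(target_table[:, :, idx], channel.shape)
        corrected[dy::2, dx::2] = channel * gains
    return np.clip(corrected, 0, None)
```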
[0071] During specific implementation, the present application is not limited by the execution order of the described steps, and certain steps may also be performed in other sequences or simultaneously under the condition of no conflict.
[0072] It can be seen from the above that, in the image processing method provided by the embodiment of the present application, for the latest frame image in the image frame sequence output by the camera, after the first initial correction parameter table of the image is calculated according to the shading correction algorithm, correction processing is performed on the first initial correction parameter table to eliminate calculation errors, and/or interpolation processing is performed on it to expand the amount of information contained in the correction parameters, and the first target correction parameter table obtained by the interpolation and/or correction processing is used to correct the current frame image, which improves the accuracy of image lens shading correction.
[0073] See Figure 2, which is a second schematic flowchart of the image processing method provided by the embodiment of the present invention. The method includes:
[0074] In 201, the latest frame image in the image frame sequence output by the camera is acquired as the current frame image.
[0075] In 202, based on a plurality of different cutting positions, a plurality of first initial correction parameter tables corresponding to the current frame image are respectively calculated according to a shadow correction algorithm.
[0076] After the camera is started, it collects images according to preset exposure parameters and exposure time intervals, and each time a frame of image is output, the latest frame of image can be used as the current frame of image. After the current frame image is acquired, the first initial correction parameter table of the frame image is calculated according to the shading correction algorithm. Wherein, for the current frame image, four corresponding first initial correction parameter tables are calculated and obtained based on a plurality of cutting positions, for example, four preset cutting positions.
[0077] For example, the four preset cutting positions are the upper left part, the upper right part, the lower left part, and the lower right part of the image, respectively. For the current frame image Pn, assuming that the resolution of the acquired image is 1920×980, the first of the first initial correction parameter tables is calculated with the cutting position set to the upper left part of the image. Specifically, the x columns of pixels on the right edge of the image and the y rows of pixels on the bottom edge are removed, the remaining part is the upper left part of the image with a resolution of (1920-x)×(980-y), this part is divided according to the m*n grid to obtain m*n grid areas, and the first initial correction parameter table is calculated from them. The second of the first initial correction parameter tables is then calculated with the cutting position set to the upper right part of the image: the x columns of pixels on the left edge of the image and the y rows of pixels on the bottom edge are removed, the remaining upper right part is divided according to the m*n grid to obtain m*n grid areas, and the first initial correction parameter table is calculated. Based on a similar calculation method, the first initial correction parameter tables corresponding to the lower left part and the lower right part of the image are obtained.
[0078] In 203, multiple second initial correction parameter tables corresponding to multiple historical frame images located before the current frame image in the image frame sequence are acquired.
[0079] In 204, a prediction correction parameter table is calculated according to the plurality of second initial correction parameter tables and a preset second time-series neural network model, wherein the second time-series neural network model is trained using the second initial correction parameter tables corresponding to the historical frame images in the image frame sequence.
[0080] In 205, correction processing is performed on the plurality of first initial correction parameter tables respectively according to the prediction correction parameter table.
[0081] Next, the above-mentioned four first initial correction parameter tables are respectively corrected through the pre-trained second time series neural network model.
[0082] The second time-series neural network model may be a neural network model capable of learning the change trend between sequence data, such as a recurrent neural network model or a long short-term memory neural network model. Its implementation principle is similar to that of the first time-series neural network model; the difference is that the training data and input data of the second time-series neural network model are initial correction parameter tables, while the training data and input data of the first time-series neural network model are the target correction parameter tables obtained by processing. The application principle and parameter update principle of the two models are the same and will not be repeated here.
[0083] It should be noted that, in one shooting session, the multiple initial correction parameter tables of each frame in the image frame sequence are marked with their corresponding cutting positions and then stored in the cache.
[0084] It can be understood that, when the second time-series neural network model is used to correct the initial correction parameter tables, the first initial correction parameter table of each cutting position is corrected separately. Taking the cutting position of the upper left part of the image as an example, the second initial correction parameter tables (or the corrected second initial correction parameter tables) of the same cutting position from the consecutive historical frame images before the current frame image are obtained and input into the trained second time-series neural network model to obtain the prediction correction parameter table corresponding to that cutting position, and the prediction correction parameter table is used to correct the first initial correction parameter table corresponding to that cutting position. After four such corrections, four corrected first initial correction parameter tables are obtained.
[0085] Wherein, when training the second time series neural network model, multiple correction parameter tables in one training sample correspond to the same cutting position. Since the correction processing does not affect the size of the correction parameter table, in order to improve the accuracy of the model, the initial correction parameter table after the correction processing can be used to train the model.
[0086] In 206 , according to the corresponding cutting positions, interpolation processing is performed on the plurality of first initial correction parameter tables after the correction processing, to obtain a first target correction parameter table.
[0087] After the correction processing of the four first initial correction parameter tables is completed, interpolation processing is performed on the four corrected first initial correction parameter tables to obtain the first target correction parameter table. For the specific interpolation method, please refer to the scheme shown in Figure 1c, which will not be repeated here.
[0088] In 207, the current frame image is corrected according to the first target correction parameter table.
[0089] After the first target correction parameter table is obtained, the current frame image is corrected by using the first target correction parameter table, so as to eliminate brightness shadows and color shadows in the image due to the optical characteristics of the lens.
[0090] It can be seen from the above that, in the image processing method proposed in the embodiment of the present invention, for the latest frame image in the image frame sequence output by the camera, after a plurality of first initial correction parameter tables of the image are calculated according to the shading correction algorithm, correction processing is performed on each of them based on the change trend between adjacent correction parameter tables to eliminate calculation errors, and then interpolation processing is performed on the corrected tables to expand the amount of information contained in the correction parameters. The first target correction parameter table obtained by the correction and interpolation processing is used to correct the current frame image, which improves the accuracy of image lens shading correction.
[0091] See Figure 3, which is a third schematic flowchart of the image processing method provided by the embodiment of the present invention. The method includes:
[0092] In 301, the latest frame image in the image frame sequence output by the camera is acquired as the current frame image.
[0093] In 302, a first initial correction parameter table of the current frame image is calculated according to a shading correction algorithm.
[0094] After the camera is started, it collects images according to preset exposure parameters and exposure time intervals, and each time a frame of image is output, the latest frame of image can be used as the current frame of image. After the current frame image is acquired, the first initial correction parameter table of the frame image is calculated according to the shading correction algorithm.
[0095] In 303, multiple second initial correction parameter tables corresponding to multiple historical frame images located before the current frame image in the image frame sequence are acquired, wherein the multiple second initial correction parameter tables and the first initial correction parameter table correspond to different cutting positions.
[0096] In 304, according to the corresponding cutting positions, use a plurality of second initial correction parameter tables to perform interpolation processing on the first initial correction parameter table.
[0097] In this embodiment, for consecutive images in the image frame sequence, a plurality of preset cutting positions are used alternately to divide the images into grids when calculating the initial correction parameter tables. For example, the four preset cutting positions are the upper left part, the upper right part, the lower left part, and the lower right part of the image, respectively. When calculating the initial correction parameter table of each frame image, the calculation cycles through the above four cutting positions, so that the initial correction parameter tables of every four adjacent images correspond to different cutting positions. For the specific implementation, please refer to Figure 1b, which will not be repeated here.
[0098] Based on the above different cutting positions, for the first initial correction parameter table of the current frame image, interpolation processing is performed using the second initial correction parameter tables of the three historical frame images before the current frame, wherein the interpolation is performed according to the cutting positions corresponding to the four initial correction parameter tables. Assuming that the sizes of the first initial correction parameter table and the second initial correction parameter tables are all 17*13*4, an interpolated first initial correction parameter table of size 34*26*4 is obtained after interpolation. For the specific interpolation method, please refer to the scheme corresponding to Figure 1c above, which is not repeated here.
[0099] In 305, a first preset number of second target correction parameter tables corresponding to a plurality of historical frame images located before the current frame image in the image frame sequence are acquired.
[0100] In 306, a prediction correction parameter table is obtained by calculation according to the first preset number of second target correction parameter tables and a preset first time-series neural network model.
[0101] In 307, the interpolated first initial correction parameter table is modified according to the prediction correction parameter table to obtain a first target correction parameter table.
[0102] After the interpolation processing of the first initial correction parameter table is completed, the correction processing is performed next.
[0103] In this embodiment, the first time-series neural network model is used to learn the change trend between the consecutive correction parameter tables of consecutive multi-frame images captured of the same scene. While the first initial correction parameter table is calculated from the current frame image and interpolated, the first preset number of second target correction parameter tables corresponding to the historical frame images located before the current frame image in the image frame sequence are input into the pre-trained first time-series neural network model to obtain a prediction correction parameter table. Correction processing is then performed on the interpolated first initial correction parameter table using the prediction correction parameter table to obtain the first target correction parameter table.
[0104] It should be noted that the manner of constructing training samples from the second target correction parameter tables of historical frame images and training the first time-series neural network model is similar to that described above and will not be repeated here.
[0105] It can be understood that each frame of image in the image frame sequence is subjected to interpolation processing and correction processing according to the scheme of this embodiment, and the finally obtained second target correction parameter table is stored in the cache. The size of the second target correction parameter table is the same as that of the first initial correction parameter table after interpolation processing.
[0106] In 308, correction processing is performed on the current frame image according to the first target correction parameter table.
[0107] After the first target correction parameter table is obtained, the current frame image is corrected by using the first target correction parameter table, so as to eliminate brightness shadows and color shadows in the image due to the optical characteristics of the lens.
[0108] It can be seen from the above that, in the image processing method proposed by the embodiment of the present invention, for the latest frame image in the image frame sequence output by the camera, after the first initial correction parameter table of the image is calculated according to the shading correction algorithm, interpolation processing is performed on it using the initial correction parameter tables of the historical frame images before the current frame to expand the amount of information contained in the correction parameters; then, based on the change trend between adjacent correction parameter tables, the interpolated first initial correction parameter table is corrected to eliminate calculation errors, and the first target correction parameter table obtained by the interpolation and correction is used to correct the current frame image, thereby improving the accuracy of image lens shading correction.
[0109] In an embodiment, an image processing apparatus is also provided. See Figure 4, which is a schematic structural diagram of an image processing apparatus 400 provided in an embodiment of the present application. The image processing apparatus 400 is applied to electronic equipment and includes an image acquisition module 401, a parameter calculation module 402, a parameter adjustment module 403, and an image correction module 404, as follows:
[0110] The image acquisition module 401 is used to acquire the latest frame image in the image frame sequence output by the camera, as the current frame image;
[0111] A parameter calculation module 402, configured to calculate a first initial correction parameter table of the current frame image according to a shadow correction algorithm;
[0112] A parameter adjustment module 403, configured to process the first initial correction parameter table to obtain a first target correction parameter table;
[0113] The image correction module 404 is configured to perform correction processing on the current frame image according to the first target correction parameter table.
[0114] It should be noted that the image processing apparatus provided in the embodiments of the present application and the image processing methods in the above embodiments belong to the same concept, and any of the methods provided in the image processing method embodiments can be implemented by the image processing apparatus. For details of the process, please refer to the embodiment of the image processing method, which will not be repeated here.
[0115] It can be seen from the above that, for the image processing apparatus proposed in the embodiments of the present application, for the latest frame image in the image frame sequence output by the camera, after the first initial correction parameter table of the image is calculated according to the shading correction algorithm, the first initial correction parameter table is processed to improve the accuracy of the correction parameters, and the processed first target correction parameter table is used to correct the current frame image, thereby improving the accuracy of image lens shading correction.
[0116] The embodiments of the present application also provide an electronic device. The electronic device may be a smart phone, a tablet computer, or the like. See Figure 5a, which is a first schematic structural diagram of the electronic device provided in the embodiment of the present application. The electronic device 500 includes a processor 501, a memory 502, and a camera 510. The processor 501 is electrically connected to the memory 502 and the camera 510.
[0117] The processor 501 is the control center of the electronic device 500. It uses various interfaces and lines to connect the various parts of the entire electronic device, and executes the various functions of the electronic device and processes data by running or calling the computer programs stored in the memory 502 and calling the data stored in the memory 502, so as to monitor the electronic device as a whole.
[0118] Memory 502 may be used to store computer programs and data. The computer program stored in the memory 502 contains instructions executable in the processor. A computer program can be composed of various functional modules. The processor 501 executes various functional applications and data processing by calling the computer program stored in the memory 502 .
[0119] In this embodiment, the processor 501 in the electronic device 500 loads the instructions corresponding to the processes of one or more computer programs into the memory 502, and runs the computer programs stored in the memory 502 to implement the following functions:
[0120] Obtain the latest frame image in the image frame sequence output by the camera as the current frame image;
[0121] Calculate the first initial correction parameter table of the current frame image according to the shadow correction algorithm;
[0122] processing the first initial correction parameter table to obtain a first target correction parameter table;
[0123] Correction processing is performed on the current frame image according to the first target correction parameter table.
[0124] In some embodiments, see Figure 5b, which is a second schematic structural diagram of the electronic device provided in the embodiment of the present application. The electronic device 500 further includes a radio frequency circuit 503, a display screen 504, a control circuit 505, an input unit 506, an audio circuit 507, a sensor 508, and a power supply 509. The processor 501 is electrically connected to the radio frequency circuit 503, the display screen 504, the control circuit 505, the input unit 506, the audio circuit 507, the sensor 508, and the power supply 509, respectively.
[0125] The radio frequency circuit 503 is used for transmitting and receiving radio frequency signals to communicate with network equipment or other electronic equipment through wireless communication.
[0126] The display screen 504 may be used to display information entered by or provided to the user and various graphical user interfaces of the electronic device, which may consist of images, text, icons, video, and any combination thereof.
[0127] The control circuit 505 is electrically connected to the display screen 504 for controlling the display screen 504 to display information.
[0128] Input unit 506 may be used to receive input numbers, character information, or user characteristic information (eg, fingerprints), and generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control. The input unit 506 may include a fingerprint identification module.
[0129] The audio circuit 507 can provide an audio interface between the user and the electronic device through speakers and microphones. Among them, the audio circuit 507 includes a microphone. The microphone is electrically connected to the processor 501 . The microphone is used for receiving voice information input by the user.
[0130] The sensor 508 is used to collect external environment information. The sensor 508 may include one or more of an ambient brightness sensor, an acceleration sensor, a gyroscope, and the like.
[0131] Power supply 509 is used to power various components of electronic device 500 . In some embodiments, the power supply 509 may be logically connected to the processor 501 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system.
[0132] Although not shown in the figure, the electronic device 500 may also include a camera, a Bluetooth module, and the like, which will not be repeated here.
[0133] In this embodiment, the processor 501 in the electronic device 500 loads the instructions corresponding to the processes of one or more computer programs into the memory 502, and runs the computer programs stored in the memory 502 to implement the following functions:
[0134] Obtain the latest frame image in the image frame sequence output by the camera as the current frame image;
[0135] Calculate the first initial correction parameter table of the current frame image according to the shadow correction algorithm;
[0136] processing the first initial correction parameter table to obtain a first target correction parameter table;
[0137] Correction processing is performed on the current frame image according to the first target correction parameter table.
[0138] It can be seen from the above that the embodiment of the present application provides an electronic device which, for the latest frame image in the image frame sequence output by the camera, calculates the first initial correction parameter table of the image according to the shading correction algorithm, processes the first initial correction parameter table to improve the accuracy of the correction parameters, and uses the processed first target correction parameter table to correct the current frame image, thereby improving the accuracy of image lens shading correction.
[0139] An embodiment of the present application further provides a storage medium, where a computer program is stored in the storage medium, and when the computer program runs on a computer, the computer executes the image processing method described in any one of the foregoing embodiments.
[0140] It should be noted that those of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by instructing relevant hardware through a computer program, and the computer program can be stored in a computer-readable storage medium , the storage medium may include, but is not limited to: read only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), magnetic disk or optical disk, etc.
[0141] In addition, the terms "first", "second", "third" and the like in this application are used to distinguish different objects rather than to describe a specific order. Furthermore, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or modules is not limited to the listed steps or modules; some embodiments also include steps or modules that are not listed, or other steps or modules inherent to these processes, methods, products, or devices.
[0142] The image processing method, device, storage medium, and electronic device provided by the embodiments of the present application have been described in detail above. The principles and implementations of the present application are described herein using specific examples, and the descriptions of the above embodiments are only intended to help in understanding the methods and core ideas of the present application. Meanwhile, those skilled in the art may, according to the ideas of the present application, make changes to the specific embodiments and application scope. In summary, the contents of this specification should not be construed as limiting the application.