Airborne vision enhancement method and device

A visual enhancement and processing technology, applied in the field of vision enhancement, that addresses the lack of comprehensive consideration in prior methods and achieves improved perception, good visual effect, and convenient design.

Inactive Publication Date: 2011-05-18
CHINESE AERONAUTICAL RADIO ELECTRONICS RES INST
Cites: 4 · Cited by: 14

AI-Extracted Technical Summary

Problems solved by technology

[0007] In view of the limitations of the prior art, if the algorithm is not selected and optimized according to the characteristics of the airborne video image, and the data characteristics of different image sensors are not comprehensively considered for adaptive video superposition and fusion processing, the purpo...


Abstract

The invention discloses an airborne vision enhancement method and an airborne vision enhancement device. The method comprises the following steps: performing histogram statistical analysis on an input digital image to generate an adaptive threshold, and dividing the histogram into different sectors according to its characteristics to generate different fusion weight coefficients; performing morphological filtering and edge detection according to the characteristics of different images; and performing vision enhancement on the calculation results according to a video superposition and fusion algorithm. The device comprises a threshold generation module, a morphological calculation module, an edge detection module, and a superposition and fusion module. Compared with conventional vision enhancement methods, the airborne vision enhancement method has the advantages that superposition and fusion can be performed adaptively for the different characteristics of airborne vision, the method is suitable for hardware implementation, and the airborne scene can be effectively enhanced in real time with a better visual effect.

Application Domain

Image enhancement

Technology Topic

Self-adaptive · Histogram · +9


Examples

  • Experimental program(1)

Example Embodiment

[0052] To make the technical means, creative features, objectives, and effects of the present invention easy to understand, the present invention is further explained below in conjunction with the accompanying drawings.
[0053] See Figure 1. The airborne visual enhancement method in this embodiment includes the following steps:
[0054] S101: Perform gray scale conversion on the input digital image.
[0055] The formula used to convert a color image to a grayscale image in this step is:
[0056] Grey = 0.3R + 0.59G + 0.11B,
where Grey is the gray value of the converted image, and R, G, and B are the red, green, and blue components of the original color image.
[0058] Since the human eye is most sensitive to green, less sensitive to red, and least sensitive to blue, this formula yields the most reasonable grayscale image.
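The grayscale conversion above can be sketched in a few lines of NumPy (a minimal illustration; the patent targets hardware, so the function name and rounding choice here are the author's assumptions):

```python
import numpy as np

def rgb_to_grey(image):
    """Convert an H x W x 3 RGB image to greyscale with the luminance
    weights from the description: Grey = 0.3R + 0.59G + 0.11B."""
    r = image[..., 0].astype(np.float64)
    g = image[..., 1].astype(np.float64)
    b = image[..., 2].astype(np.float64)
    grey = np.rint(0.3 * r + 0.59 * g + 0.11 * b)  # round to nearest level
    return np.clip(grey, 0, 255).astype(np.uint8)
```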
[0059] S102: Perform histogram statistics and sector division.
[0060] Divide the grayscale histogram of the input image into 16 sectors. Each sector spans 16 gray levels, increasing in turn from 0 to 255: gray values 0-15 form the first sector, 16-31 the second sector, ..., and 240-255 the sixteenth sector.
[0061] S103: Count the maximum value, the second largest value, the third largest value and the corresponding sectors.
[0062] S104: Judge whether the sectors containing the maximum value and the second-largest value are adjacent.
[0063] S105: If they are adjacent, take the median of the maximum value and the third-largest value as the threshold.
[0064] S106: If they are not adjacent, take the median of the maximum value and the second-largest value as the threshold.
[0065] Since the image histogram reflects the area of pixels with different gray values (continuous image) or the proportion of pixel counts (discrete image) in a frame, it expresses certain information about the image: it records how often each gray value appears. A typical application is selecting the threshold for image binarization. When the histogram presents an approximately symmetrical double-peak shape, the valley point between the two peaks is the best choice of binarization threshold.
[0066] For example, when computing the two peaks, if the gray levels of the maximum and second-largest values are 242 and 238 respectively and fall in adjacent sectors, 240 would mistakenly be taken as the threshold. The method of the present invention therefore divides the grayscale histogram of the input image equally into 16 sectors and performs histogram statistics to obtain the maximum, second-largest, and third-largest values and their corresponding sectors. It then judges whether the sectors corresponding to the maximum and second-largest values are adjacent: if not, the threshold is the median of the maximum value and the average gray value of the sector containing the second-largest value; if they are adjacent, the threshold is the median of the maximum value and the average gray value of the sector containing the third-largest value.
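Steps S102-S106 can be modelled as follows. This is a sketch: the patent does not fix the exact "average value of the sector" statistic, so the sector centre is used here as a stand-in, and the function name is the author's choice:

```python
import numpy as np

def adaptive_threshold(grey):
    """Sector-based adaptive threshold (steps S102-S106, sketched).
    Divides the 256-level histogram into 16 sectors of 16 levels each,
    then picks a threshold from the two non-adjacent dominant peaks."""
    hist, _ = np.histogram(grey, bins=256, range=(0, 256))
    sectors = hist.reshape(16, 16).sum(axis=1)   # counts per 16-level sector
    order = np.argsort(sectors)[::-1]            # sectors by descending count
    s_max, s_2nd, s_3rd = order[0], order[1], order[2]

    def centre(s):
        return s * 16 + 7.5                      # representative grey of sector s

    if abs(s_max - s_2nd) == 1:                  # dominant peaks are adjacent:
        other = s_3rd                            # fall back to the third peak
    else:
        other = s_2nd
    return int(round((centre(s_max) + centre(other)) / 2))
```

For a frame with peaks near grey 5 and grey 200 the threshold lands roughly midway between the two peak sectors, which matches the valley-selection intent of the text.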
[0067] S107: Perform morphological filtering processing on the grayscale image.
[0068] Morphological filtering of the input image is based on the geometric structure of the signal (image): predefined structuring elements are used to match or locally modify the signal, so as to extract the signal and suppress noise. Let A denote an image, B a structuring element, ⊕ the dilation operation, and ⊖ the erosion operation.
[0069] The dilation operation is defined as follows:
[0070] A ⊕ B = { z | (B̂)_z ∩ A ≠ ∅ }
[0071] The erosion operation is defined as follows:
[0072] A ⊖ B = { z | (B)_z ⊆ A }
[0073] Erosion followed by dilation is called the opening operation:
[0074] A ∘ B = (A ⊖ B) ⊕ B
[0075] Dilation followed by erosion is called the closing operation:
[0076] A • B = (A ⊕ B) ⊖ B
[0077] Since the top-hat transform reflects the gray-level peaks in the original image, and the bottom-hat transform reflects its gray-level valleys, the enhanced image is obtained by adding the top-hat result to the original image and subtracting the bottom-hat result:
[0078] Image_Enhanced = Image_original + TopHat − BotHat
[0079] where Image_Enhanced is the enhanced image, Image_original is the original image, TopHat is the result of the top-hat transform, and BotHat is the result of the bottom-hat transform, defined as:
[0080] TopHat = A − (A ∘ B),  BotHat = (A • B) − A
[0081] where A is an image, B is a structuring element, ∘ denotes the morphological opening operation, and • denotes the morphological closing operation.
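The top-hat/bottom-hat enhancement of step S107 can be sketched with grey-scale morphology on a flat structuring element (a software model only; the patent implements this as pipelined FPGA template operations, and the 3×3 flat element here is an assumption):

```python
import numpy as np

def dilate(img, k=3):
    """Grey-scale dilation with a flat k x k structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def erode(img, k=3):
    """Grey-scale erosion with a flat k x k structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def morph_enhance(img):
    """Enhanced = original + top-hat - bottom-hat (paragraph [0078])."""
    img = img.astype(np.int32)
    opening = dilate(erode(img))    # A o B
    closing = erode(dilate(img))    # A . B
    tophat = img - opening          # grey-level peaks
    bothat = closing - img          # grey-level valleys
    return np.clip(img + tophat - bothat, 0, 255).astype(np.uint8)
```

A constant image passes through unchanged, while an isolated bright pixel (a grey peak) is boosted, illustrating the contrast-stretching effect described above.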
[0082] S108: Perform image binarization according to the threshold.
[0083] The formula used for this step is as follows:
[0084] g(x, y) = 255 if f(x, y) > T, and g(x, y) = 0 otherwise,
[0085] Where f(x, y) is the original image, g(x, y) is the image after binarization, and T is the threshold.
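Step S108 is a straightforward threshold test (the 255/0 output levels are the usual display convention and are assumed here, since the patent's formula is not reproduced):

```python
import numpy as np

def binarise(grey, threshold):
    """Binarize a grey image against threshold T: above T -> 255, else 0."""
    return np.where(grey > threshold, 255, 0).astype(np.uint8)
```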
[0086] S109: At the same time, process the grayscale image with the Sobel operator.
[0087] In two-dimensional image processing, common operator templates include the Roberts, Sobel, Prewitt, and Laplacian operator templates. The computation treats the template as a window that moves in scan order so that the template centre visits every pixel of the image in turn. Template algorithms are simple to process but computationally heavy: a 3×3 template needs 9 multiplications and 8 additions per pixel, so an N×N image requires 9(N−2)² multiplications and 8(N−2)² additions, giving a complexity of O(N²). For high-definition images, software computation can no longer meet real-time requirements.
[0088] In the embodiment of the present invention, considering the real-time performance of the calculation, the Sobel operator is used to extract the edges in the horizontal direction and the vertical direction, and the rich multiplication and addition resources inside the FPGA are used to pipeline the calculation of the convolution result.
[0089] In the embodiment of the present invention, the Sobel operators are:
[0090] Gx = [−1 0 1; −2 0 2; −1 0 1],  Gy = [−1 −2 −1; 0 0 0; 1 2 1]
[0091] Based on the above principle, the edge detection template operation in this embodiment is shown in Figure 2.
[0092] The template calculation result P is computed with the following formula:
[0093] P = Σ Wi · Pi, i = 1, ..., 9
[0094] where Wi is a template element value and Pi is an input-image pixel value. The Sobel operator is easy to implement spatially. The Sobel edge detector not only produces good edge-detection results; because it introduces local averaging, it is also less affected by noise. Using a larger neighbourhood improves noise immunity further, but increases the computation and thickens the resulting edges. The Sobel operator performs edge detection with a gray-scale weighting of the points adjacent to the pixel, exploiting the fact that the gradient reaches an extremum at edge points; it therefore smooths noise and provides fairly accurate edge-direction information. However, because of the local averaging it also detects many false edges, and its edge-localisation accuracy is limited, so a non-maximum-suppression algorithm is applied at the back end to make the edges clearer.
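Step S109 can be sketched as a direct template convolution. The standard Sobel kernels are used, and the |Gx| + |Gy| magnitude approximation common in hardware implementations is an assumption, since the patent does not state how the two directions are combined:

```python
import numpy as np

# Standard 3x3 Sobel templates for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

def sobel_edges(grey):
    """Slide a 3x3 window over the image and apply both Sobel templates,
    combining the responses as |Gx| + |Gy| (a common hardware shortcut)."""
    grey = grey.astype(np.int32)
    H, W = grey.shape
    out = np.zeros((H, W), dtype=np.int32)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            window = grey[i - 1:i + 2, j - 1:j + 2]
            gx = int((window * SOBEL_X).sum())   # P = sum(Wi * Pi)
            gy = int((window * SOBEL_Y).sum())
            out[i, j] = abs(gx) + abs(gy)
    return np.clip(out, 0, 255).astype(np.uint8)
```

A vertical step edge produces a strong response exactly along the transition and zero response in the flat regions, as expected of a gradient operator.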
[0095] S110: Perform non-maximum value suppression processing.
[0096] S111: Output different weights according to the corresponding sectors.
[0097] The weight values in this embodiment are shown in the following table:
[0098] Sector number
[0099] S112: Perform video superimposition and fusion output according to different weights, and complete the scene enhancement processing.
[0100] This step fuses the results of the previous stages according to the generated weights. The formula used is:
[0101] OUT = α · IN0 + (1 − α) · IN1
[0102] where IN0 is the edge-detection result, IN1 is the binarization result, and α is the weight of the corresponding sector.
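The fusion of step S112 can be modelled as a per-pixel weighted blend. Since the patent does not reproduce its fusion formula, the linear form OUT = α·IN0 + (1−α)·IN1 is assumed here:

```python
import numpy as np

def fuse(edge, binary, weight):
    """Blend the edge map (IN0) with the binarised image (IN1) using the
    per-sector weight alpha in [0, 1]: OUT = alpha*IN0 + (1-alpha)*IN1."""
    edge = edge.astype(np.float64)
    binary = binary.astype(np.float64)
    out = weight * edge + (1.0 - weight) * binary
    return np.clip(out, 0, 255).astype(np.uint8)
```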
[0103] See Figure 3. The airborne visual enhancement processing device provided by the present invention mainly includes a video input interface 301, a visual enhancement module 302 that realizes the visual enhancement, a video output interface 309, and various display terminals 310.
[0104] The visual enhancement module 302 includes an image acquisition and timing control module 303, a frame memory management module 304, a threshold generation module 305, a morphology calculation module 306, an edge detection module 307, and a superposition fusion module 308.
[0105] The operating process of the visual enhancement processing device based on the above scheme is as follows:
[0106] Standard video sources (DVI, VGA, PAL, NTSC) and non-standard video sources provide input signals to the visual enhancement module 302 through the video input interface 301, which converts the signals of the different standard interfaces into digital RGB signals and sends them to the visual enhancement module 302 for enhancement processing.
[0107] Inside the visual enhancement module 302, the image acquisition and timing control module 303 collects the RGB colour components, line and field synchronization, data-valid and other signals, and stores them in an internal buffer. The frame memory management module 304 handles data exchange and scheduling between the external frame memory and the visual enhancement module 302. The threshold generation module 305 accumulates histogram statistics with counters and registers, compares the maxima and their sectors, and generates the corresponding threshold. The morphology calculation module 306 and the edge detection module 307 perform pipelined multiply-accumulate operations with convolution templates, completing the computation over a whole frame through the continuous movement of a sliding window. The superposition and fusion module 308 samples the preceding results in real time and, together with the timing control unit, generates the final pixel data and synchronization signals.
[0108] After the visual enhancement module 302 completes the calculation, the image RGB data is output to the video output interface 309, and finally output to the various display terminals 310.
[0109] In the embodiment of the present invention, the hardware of the device used may preferably be a programmable logic device, such as an FPGA.
[0110] See Figure 4, a diagram of the internal FPGA circuit architecture of the airborne visual enhancement processing device in this embodiment. Based on this diagram, the specific FPGA implementation proceeds as follows.
[0111] For example, with an input resolution of 1024×768, the timing control module 401 reads two rows of image data (1024×8 bits per row) from the output Grey of the gray conversion module 402 and saves them in the FIFO memory 403 to form a 3×3 template.
[0112] At the same time, the histogram statistics 404 is performed on the gray image, the statistics result 405 is latched and accumulated, and the adaptive threshold value is calculated 406.
[0113] New pixel data are continuously read in at the pixel dot clock to perform the template operation 407, yielding the template-operation results for the 1024 pixels of the second row.
[0114] Finally, the binary image 408 is output according to the threshold.
[0115] Each time a new pixel value is read in, the data in the shift-register group also moves one pixel position to the right, which is equivalent to sliding the window pixel by pixel along the row direction of the image under clock control. When the 1024 pixels of the second row have been processed, the first row's data is discarded, the second row's data flows through the FIFO to become the first row, the third row's data becomes the second row, and the fourth row is read in to form the third row of the 3×3 window; after computation, the template-operation results for the third row are obtained.
[0116] For the template convolution, pipeline technology is fully exploited to perform the matrix multiply-add operations in parallel, so the template result is available after only a short delay.
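The line-buffer scheme of paragraphs [0111] to [0116] can be modelled in software as two row FIFOs plus the incoming row (a behavioural sketch of the FPGA structure; the generator interface and names are the author's choice):

```python
from collections import deque
import numpy as np

def sliding_windows(rows, width):
    """Software model of the FPGA line buffers: two row FIFOs plus the
    incoming row form a 3x3 window that slides along each row.
    'rows' is an iterable of length-'width' pixel rows; yields
    (row_index, col_index, 3x3 window) for every interior pixel."""
    fifo = deque(maxlen=2)              # holds the two previous rows
    for r, row in enumerate(rows):
        row = np.asarray(row)
        if len(fifo) == 2:
            block = np.vstack([fifo[0], fifo[1], row])
            for c in range(1, width - 1):
                yield r - 1, c, block[:, c - 1:c + 2]
        fifo.append(row)                # oldest row is discarded automatically
```

Each window produced here would feed the pipelined template operation described above; in hardware the same effect is achieved with shift registers clocked at the pixel rate.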
[0117] The threshold generation module receives the input pixel data in parallel, uses the histogram to analyse the average gray value of the whole frame and the most frequent gray value, and applies the adaptive threshold generation algorithm to obtain the frame's threshold as the basis for binarization. The adaptive threshold calculation adjusts its parameters with every frame, which makes it especially suitable for the rapid changes of airborne video sequences.
[0118] The specific implementation effects of the present invention are shown in Figures 5 and 6: Figure 5 shows the edge-detection result of the airborne visual enhancement method, and Figure 6 shows the morphological calculation result. The figures show that the present invention clearly extracts the edge information of the flight runway and effectively enhances the visual effect.
[0119] The above shows and describes the basic principles, main features, and advantages of the present invention. Those skilled in the art should understand that the present invention is not limited by the foregoing embodiments, which, together with the descriptions, merely illustrate its principles. Various changes and improvements may be made without departing from the spirit and scope of the present invention, and such changes and improvements fall within the scope of the claimed invention, which is defined by the appended claims and their equivalents.
