Convolutional neural network-based synthetic aperture focused imaging depth assessment method

A convolutional neural network-based synthetic aperture focusing technology, applicable to biological neural network models, neural architectures, instruments, and similar fields. It addresses the problem that existing methods consume a large amount of time, and achieves the effects of reducing complexity, shortening computing time, and enhancing scalability.

Active Publication Date: 2018-08-21
SHAANXI NORMAL UNIV

AI Technical Summary

Problems solved by technology

In addition, existing methods require additional scene information as input, such as image information from multiple viewpoints or im...



Examples


Embodiment 1

[0058] Taking 704 images collected from 44 campus scenes and used to generate 8766 synthetic aperture images as an example, the convolutional neural network-based synthetic aperture focusing imaging depth evaluation method proceeds as shown in Figure 1. The specific steps are as follows:

[0059] (1) Build a multi-layer convolutional neural network

[0060] Input images to the network are uniformly sized to 227×227×3, where 227×227 is the resolution of the input image and 3 is the number of color channels.
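As a small illustration of this preprocessing, the snippet below resizes an arbitrary image to the stated 227×227×3 input; the file name is a placeholder and OpenCV is an assumed tooling choice, not named by the patent.

```python
# Bring an image to the 227x227x3 network input size stated above.
# "scene.png" is a placeholder path; OpenCV is an assumed choice of library.
import cv2

img = cv2.imread("scene.png")       # H x W x 3 image
img = cv2.resize(img, (227, 227))   # -> shape (227, 227, 3)
```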

[0061] The convolutional neural network consists of 5 convolutional layers, 3 pooling layers and 3 fully connected layers. The specific parameters are as follows:

[0062] conv1: (size: 11, stride: 4, pad: 0, channel: 96)

[0063] pool1: (size: 3, stride: 2, pad: 0, channel: 96)

[0064] conv2: (size: 5, stride: 1, pad: 2, channel: 256)

[0065] pool2: (size: 3, stride: 2, pad: 0, channel: 256)

[0066] conv3: (size: 3, ...
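For concreteness, here is a minimal PyTorch sketch of the layers listed above, conv1 through pool2. The source text truncates at conv3, so the remaining layers are omitted; the framework choice, ReLU placement, and variable names are assumptions, not the patent's own implementation.

```python
# Sketch of the stated layers only (conv1..pool2); conv3 onward is truncated
# in the source. ReLU placement follows common practice and is an assumption.
import torch
import torch.nn as nn

stem = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=0),   # conv1: 11/4/0/96
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),                   # pool1: 3/2/0
    nn.Conv2d(96, 256, kernel_size=5, stride=1, padding=2),  # conv2: 5/1/2/256
    nn.ReLU(inplace=True),
    nn.MaxPool2d(kernel_size=3, stride=2),                   # pool2: 3/2/0
)

x = torch.randn(1, 3, 227, 227)   # one 227x227 RGB input, as specified above
print(stem(x).shape)              # -> torch.Size([1, 256, 13, 13])
```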

Embodiment 2

[0110] Taking 704 images collected from 44 campus scenes and used to generate 8766 synthetic aperture images as an example, the steps of the convolutional neural network-based synthetic aperture focusing imaging depth evaluation method are as follows:

[0111] (1) Build a multi-layer convolutional neural network

[0112] The steps of constructing a multi-layer convolutional neural network are the same as those in Embodiment 1.

[0113] (2) Collect and generate synthetic aperture images

[0114] Use a camera array composed of 8 horizontally arranged cameras to photograph the target object, collect the image captured by each camera at its viewpoint, and use formula (5) to obtain the projection onto the reference plane π_r:

[0115] W_ir = H_i · F_i    (5)

[0116] where F_i is the image captured by camera i, W_ir is the image of F_i after affine transformation onto the plane π_r, and H_i is the affine matrix that projects F_i onto the reference plane π_r, with i = 1, 2, ..., N, ...
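As a sketch of formula (5), each camera image F_i can be warped onto the reference plane with its matrix H_i. Using cv2.warpPerspective (treating H_i as a 3×3 planar homography) is an assumed implementation choice, since the patent specifies only the transformation itself; the matrices H_i are assumed to come from prior calibration.

```python
# Apply W_ir = H_i · F_i: warp camera image F_i onto the reference plane pi_r
# using its 3x3 matrix H_i. Output size matching the input is an assumption.
import cv2
import numpy as np

def project_to_reference(F_i: np.ndarray, H_i: np.ndarray) -> np.ndarray:
    """Return W_ir, the projection of F_i onto the reference plane."""
    h, w = F_i.shape[:2]
    return cv2.warpPerspective(F_i, H_i, (w, h))
```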

Embodiment 3

[0129] Taking 704 images collected from 44 campus scenes and used to generate 8766 synthetic aperture images as an example, the steps of the convolutional neural network-based synthetic aperture focusing imaging depth evaluation method are as follows:

[0130] (1) Build a multi-layer convolutional neural network

[0131] The steps of constructing a multi-layer convolutional neural network are the same as those in Embodiment 1.

[0132] (2) Collect and generate synthetic aperture images

[0133] Use a camera array composed of 16 horizontally arranged cameras to photograph the target object, collect the image captured by each camera at its viewpoint, and use formula (5) to obtain the projection onto the reference plane π_r:

[0134] W_ir = H_i · F_i    (5)

[0135] where F_i is the image captured by camera i, W_ir is the image of F_i after affine transformation onto the plane π_r, and H_i is the affine matrix that projects F_i onto the reference plane π_r, with i = 1, 2, ..., ...
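The excerpt truncates before the combination step, but in conventional synthetic aperture imaging the synthetic aperture image is formed by averaging the N aligned projections W_ir, which focuses the composite on the reference plane. The sketch below shows that standard step as an assumption, not the patent's exact rule.

```python
# Average the N projected images W_ir into one synthetic aperture image.
# Simple uniform averaging is assumed; the patent's exact rule is truncated.
import numpy as np

def synthesize_aperture_image(warped_views):
    """warped_views: list of N aligned projections W_ir (same shape)."""
    stack = np.stack([v.astype(np.float64) for v in warped_views])
    return stack.mean(axis=0).astype(np.uint8)
```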



Abstract

The invention discloses a convolutional neural network-based synthetic aperture focused imaging depth assessment method. The method comprises the following steps: constructing a multi-layer convolutional neural network; acquiring and generating synthetic aperture images; classifying the synthetic aperture images; training the constructed convolutional neural network; and judging the focusing degrees of the synthetic aperture images. The method takes single synthetic aperture images as input and uses a convolutional neural network, a deep-learning tool, to extract focusing features from them, so it achieves relatively high judgment accuracy even for synthetic aperture images with relatively small in-focus regions. Compared with existing methods, it effectively reduces computational complexity, shortens computing time, improves judgment accuracy, and enhances scalability; it can be used for automatic focusing of synthetic aperture images.

Description

Technical field

[0001] The invention belongs to the technical field of image processing and pattern recognition, and in particular relates to a synthetic aperture focusing imaging depth evaluation method based on a convolutional neural network.

Background technique

[0002] An existing camera can adjust its focal length: objects on the focal plane image clearly, while objects off the focal plane image blurred. Whether an object lies on the focal plane is therefore the key to judging whether the image is in focus. Synthetic aperture imaging using camera arrays composed of multiple cameras is becoming increasingly practical, and in this field, finding a way to measure the degree of focus has attracted the attention of many researchers.

[0003] Existing focus-measurement methods use gradient algorithms on pixel values and local statistics of pixel values to realize the judgment of the image ...
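To make paragraph [0003] concrete, the sketch below shows one common gradient-based focus measure of the kind the background describes; the variance of the Laplacian stands in for, rather than reproduces, those prior methods.

```python
# One classical gradient-based focus measure: variance of the Laplacian.
# Higher values indicate a sharper (better focused) image. This illustrates
# the prior approach the background criticizes, not the patented method.
import cv2
import numpy as np

def focus_measure(gray: np.ndarray) -> float:
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())
```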


Application Information

IPC(8): G06K9/62, G06N3/04
CPC: G06N3/045, G06F18/214, G06F18/24
Inventors: 裴炤 (Pei Zhao), 张艳宁 (Zhang Yanning), 沈乐棋 (Shen Leqi), 马苗 (Ma Miao), 郭敏 (Guo Min)
Owner: SHAANXI NORMAL UNIV