Automatic recognition and intelligent positioning method for seismic damages to reinforced concrete structure based on computer vision
A computer vision technology for reinforced concrete structures, applied in computing, computer components, and character and pattern recognition. It meets the needs of online monitoring, early warning, and real-time data processing, improves the degree of automation, and improves efficiency.
Examples
Specific Embodiment 1
[0017] Specific Embodiment 1: As shown in Figure 1, the computer-vision-based method for automatic identification and intelligent positioning of earthquake damage to reinforced concrete structures in this embodiment includes:
[0018] Step 1: Downsample the input image; manually mark the damaged regions of the downsampled image with rectangular boxes according to the preset damage types, obtaining data that represent the position and size of each box; and label each damaged region according to its damage type.
[0019] For example, in one embodiment, a color image may be downsampled to 640x640x3 to reduce computational cost. The MATLAB imageLabeler app can be used to generate the rectangular box labels.
[0020] There are many choices for the data representing the position and size of a rectangular box; for example, one can use the horizontal and vertical pixel coordinates of the rectangle's upper-left corner together with its length and width in pixels.
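For illustration, here is a minimal sketch of this data-preparation step in Python with Pillow. The patent itself uses MATLAB's imageLabeler for labeling, so the language, the stand-in image, the record layout, and the numbers below are assumptions:

```python
from PIL import Image

# A sketch of Step 1's data preparation. Pillow, the English class names,
# and the annotation record layout are assumptions for illustration.
DAMAGE_TYPES = ["concrete cracking", "concrete spalling",
                "rebar exposure", "rebar buckling"]

# In practice the image would come from Image.open(<input file>); a blank
# stand-in keeps the sketch self-contained.
image = Image.new("RGB", (4000, 3000))

# Downsample to 640x640; the 3 color channels are preserved (640x640x3).
image = image.resize((640, 640))

# One record per damaged region: the upper-left pixel coordinates of the
# rectangular box, its width and height in pixels, and a damage-type label.
annotation = {"x_ul": 120, "y_ul": 84, "width": 96, "height": 40,
              "label": "concrete cracking"}
```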
Specific Embodiment 2
[0028] Specific Embodiment 2: This embodiment differs from Specific Embodiment 1 in that Step 1 specifically includes:
[0029] Step 1.1: Downsample the input image and define 4 damage types, namely concrete cracking, concrete spalling, rebar exposure, and rebar buckling. Manually mark the damaged regions in the downsampled image with rectangular boxes according to these 4 damage types, obtaining the upper-left corner coordinates of each box and its length and width in pixels, and label each damaged region according to its damage type.
[0030] Step 1.2: Rotate the input image counterclockwise by 90, 180, and 270 degrees, and flip it horizontally and vertically, obtaining the rotated and flipped images; then process each resulting image according to Step 1.1.
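In practice, rather than re-marking every augmented image by hand as in Step 1.1, the box coordinates can be transformed along with the image. Below is a minimal Python/Pillow sketch of the five transforms and the corresponding (x_ul, y_ul, width, height) box updates; this automatic transfer of labels is an alternative to the patent's re-marking, and the function name and tuple layout are hypothetical:

```python
from PIL import Image

def augment(image, boxes):
    """Yield the five augmented (image, boxes) pairs of Step 1.2.

    Each box is (x, y, w, h, label) with (x, y) the upper-left pixel of the
    rectangle; W and H are the width and height of the original image.
    """
    W, H = image.size
    transforms = [
        # 90 degrees counterclockwise: (x, y, w, h) -> (y, W - x - w, h, w)
        (Image.Transpose.ROTATE_90, lambda x, y, w, h: (y, W - x - w, h, w)),
        # 180 degrees: (x, y, w, h) -> (W - x - w, H - y - h, w, h)
        (Image.Transpose.ROTATE_180, lambda x, y, w, h: (W - x - w, H - y - h, w, h)),
        # 270 degrees counterclockwise: (x, y, w, h) -> (H - y - h, x, h, w)
        (Image.Transpose.ROTATE_270, lambda x, y, w, h: (H - y - h, x, h, w)),
        # horizontal flip: (x, y, w, h) -> (W - x - w, y, w, h)
        (Image.Transpose.FLIP_LEFT_RIGHT, lambda x, y, w, h: (W - x - w, y, w, h)),
        # vertical flip: (x, y, w, h) -> (x, H - y - h, w, h)
        (Image.Transpose.FLIP_TOP_BOTTOM, lambda x, y, w, h: (x, H - y - h, w, h)),
    ]
    for op, move in transforms:
        new_boxes = [(*move(x, y, w, h), label) for x, y, w, h, label in boxes]
        yield image.transpose(op), new_boxes
```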
[0031] The beneficial effect of this embodiment is that the training samples are expanded, which can improve the generalization ability of the trained network.
Specific Embodiment 3
[0033] Specific Embodiment 3: This embodiment differs from Specific Embodiments 1 and 2 in that, in Step 2, the structure of each layer of the deep neural network is:
[0034] L0 layer: the input has a width of 32 and a depth of 3. Perform a convolutional layer operation with a filter width of 7, a depth of 3, 16 filters, a stride of 1, and zero padding of 0.
[0035] L1 layer: the input has a width of 26 and a depth of 16. Perform an activation layer operation.
[0036] L2 layer: the input has a width of 26 and a depth of 16. Perform a regularization layer operation.
[0037] L3 layer: the input has a width of 26 and a depth of 16. Perform a convolutional layer operation with a filter width of 5, a depth of 16, 32 filters, a stride of 1, and zero padding of 0.
[0038] L4 layer: the input has a width of 22 and a depth of 32. Perform an activation layer operation.
[0039] L5 layer: the input has a width of 22 and a depth of 32. Perform a regularization layer operation.
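Read with "width" as the spatial size of a square feature map and "depth" as the channel count, layers L0 through L5 form a small convolutional stack whose widths are consistent (32 - 7 + 1 = 26 and 26 - 5 + 1 = 22). A minimal PyTorch sketch follows; the patent names no framework, and reading the activation layer as ReLU and the regularization layer as batch normalization are assumptions, as is the completed L5 layer:

```python
import torch
import torch.nn as nn

class DamageNetStem(nn.Module):
    """Layers L0-L5 as described, with 'width' read as the spatial size of a
    square feature map and 'depth' as the channel count."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # L0: 32x32x3 input; 16 filters of width 7, stride 1, zero padding 0 -> 26x26x16
            nn.Conv2d(3, 16, kernel_size=7, stride=1, padding=0),
            # L1: activation on the 26x26x16 map (ReLU is an assumption)
            nn.ReLU(inplace=True),
            # L2: regularization on the 26x26x16 map (batch norm is an assumption)
            nn.BatchNorm2d(16),
            # L3: 32 filters of width 5, stride 1, zero padding 0 -> 22x22x32
            nn.Conv2d(16, 32, kernel_size=5, stride=1, padding=0),
            # L4: activation on the 22x22x32 map
            nn.ReLU(inplace=True),
            # L5: regularization on the 22x22x32 map (assumed from the L0-L2 pattern)
            nn.BatchNorm2d(32),
        )

    def forward(self, x):
        return self.features(x)

# Shape check: a 32x32 RGB patch yields a 22x22 map with 32 channels.
stem = DamageNetStem()
print(stem(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 32, 22, 22])
```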