
Time-frequency domain combined panoramic segmentation convolutional neural network and application

A convolutional neural network combining time-domain and frequency-domain technology, applied to driverless and autonomous-robot scenarios. It addresses the problems that prior networks have limited recognition accuracy, do not take the frequency characteristics of panoramic images into account, and lose high-frequency information of instance objects; its effects are to help avoid traffic accidents, facilitate accurate analysis, and improve performance.

Active Publication Date: 2021-10-22
DALIAN NATIONALITIES UNIVERSITY


Problems solved by technology

The patent "A Predictive Optimization Method for Image Panoramic Segmentation Based on Convolution" (Publication No. CN109801297A) discloses a convolutional neural network based on time-domain panoramic segmentation. Starting from spatial relationships, that network sorts instance targets by occlusion priority to resolve occlusion between instances. However, it does not take the frequency characteristics of the panoramic image into account and loses high-frequency information of instance objects, which may limit the network's recognition accuracy for instance objects.



Examples


Example

[0059] Instance Links: A link network that transforms the input into instance features.

[0060] Semantic Links: A link network that transforms the input into semantic features.

[0061] 2. Network Architecture

[0062] In general, the time-frequency domain joint panoramic segmentation convolutional neural network includes four parts: frequency domain transformation network, time domain transformation network, time-frequency domain joint network and segmentation fusion network.
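As a rough sketch of how the four parts might connect, the pipeline below uses stub functions; all function names and stub behaviors are illustrative assumptions, since this excerpt names only the four sub-networks, not their internals:

```python
import numpy as np

# Stub branches; the real networks would be learned. Names are inferred
# from the four parts listed in paragraph [0062] and are assumptions.
def preprocess(x):                 return [x, x, x, x]            # shared residual features R1..R4
def frequency_network(r):          return r[0] * 0.5, r[0] * 0.5  # high- and low-frequency features
def time_network(r):               return r[-1], r[-1]            # instance and semantic features
def joint_network(hf, lf, i, s):   return hf + lf + i + s         # coefficient-weighted combination
def fusion_network(j):             return (j > j.mean()).astype(int)  # fused foreground/background map

def panoptic_segment(image):
    r = preprocess(image)
    hf, lf = frequency_network(r)
    inst, sem = time_network(r)
    return fusion_network(joint_network(hf, lf, inst, sem))

out = panoptic_segment(np.arange(16.0).reshape(4, 4))
assert out.shape == (4, 4)
```

The point of the skeleton is only the data flow: a shared preprocessing stage feeds two parallel branches (frequency and time domain), whose outputs are jointly weighted and then fused into one segmentation map.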

[0063] (1) Preprocessing structure

[0064] The preprocessing structure is a network shared by the frequency domain transformation network and the time domain transformation network, used to perform preliminary preprocessing on the input image. It consists of a four-layer residual network, each layer outputting one residual feature. After the input image is preprocessed, a four-layer residual feature R (R1, R2, R3, R4) is obtained, and then...
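A minimal sketch of the shared four-layer residual preprocessing, assuming simple channel-mixing residual layers; the patent specifies only that four residual layers each output one feature R1..R4, so the layer internals here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_layer(x, w):
    """One residual layer: a channel mix (x @ w) plus the identity
    shortcut, followed by ReLU. Shapes and the 1x1-style mix are
    illustrative stand-ins for real convolutions."""
    return np.maximum(x @ w + x, 0.0)

def preprocess(image_feat, n_layers=4):
    """Shared preprocessing: returns the four residual features R1..R4,
    one per layer, as stated in paragraph [0064]."""
    feats = []
    x = image_feat
    for _ in range(n_layers):
        w = rng.standard_normal((x.shape[-1], x.shape[-1])) * 0.1
        x = residual_layer(x, w)
        feats.append(x)
    return feats  # [R1, R2, R3, R4]

R = preprocess(rng.standard_normal((8, 16)))
assert len(R) == 4
```

Because the stage is shared, both the frequency-domain and time-domain branches would consume the same list (R1, R2, R3, R4) rather than re-extracting features from the raw image.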

Embodiment 1

[0138] Panoramic segmentation map under different coefficient combinations

[0139] In this embodiment, the image is input into the time-frequency domain joint panoramic segmentation convolutional neural network under the coefficient distribution combinations C1, C2, C3, C4, C5, and C6; the resulting panoramic segmentation results are shown in Figure 4.
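The coefficient-weighted fusion can be sketched as follows. The exact definition of the combinations C1..C6 is not given in this excerpt, so each is treated here as a hypothetical 4-tuple of product coefficients, one per feature type:

```python
import numpy as np

def fuse(hf, lf, inst, sem, coeffs):
    """Weighted fusion of the four feature maps (high-frequency,
    low-frequency, instance, semantic) with one product coefficient
    each. The 4-tuple interpretation of a 'coefficient distribution
    combination' is an assumption for illustration."""
    c_hf, c_lf, c_inst, c_sem = coeffs
    return c_hf * hf + c_lf * lf + c_inst * inst + c_sem * sem

# Hypothetical coefficient distribution combinations.
combos = {"C1": (1, 1, 1, 1), "C2": (2, 1, 1, 1), "C3": (1, 2, 1, 1)}
hf = lf = inst = sem = np.ones((4, 4))
results = {name: fuse(hf, lf, inst, sem, c) for name, c in combos.items()}
assert results["C1"][0, 0] == 4.0
assert results["C2"][0, 0] == 5.0
```

Varying the combination shifts how strongly the frequency-domain features weigh against the time-domain ones, which is presumably what the different segmentation maps in Figure 4 compare.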

Embodiment 2

[0141] Panoramic Segmentation in Simple Scenes

[0142] In this embodiment, a scene with a simple foreground and background environment is input into the time-frequency domain joint panoramic segmentation convolutional neural network to obtain a panoramic segmentation result. The panoramic segmentation results for simple scenes are shown in Figure 5.



Abstract

The invention discloses a time-frequency domain combined panoramic segmentation convolutional neural network and its application, belonging to the field of deep learning image processing. The network comprises: a frequency domain transformation network that transforms the input into frequency domain information and extracts the high- and low-frequency features of an image; a time domain transformation network that transforms the input into time domain information and extracts the instance features and semantic features of the image; a time-frequency domain joint network that assigns product coefficients, in turn, to the high-frequency, low-frequency, instance, and semantic features output by the frequency domain and time domain transformation networks; and a segmentation fusion network that fuses the foreground and background segmentation results to generate a panoramic segmentation result. The invention can be applied to fields such as autonomous vehicles, driver assistance, robots, and public-safety sky-eye monitoring systems.

Description

Technical field

[0001] The invention belongs to the field of deep learning image processing, and specifically relates to a convolutional neural network that analyzes panoramic segmentation from two perspectives, the time domain and the frequency domain, and is suitable for unmanned driving and autonomous robot scenarios.

Background technique

[0002] In recent years, major breakthroughs have been made in unmanned driving and robotics due to the rapid development of deep learning. Owing to its powerful scene understanding capability, panoramic segmentation has gradually become an important means of environmental perception in computer vision. However, the traditional time-domain convolutional neural network can only perform indiscriminate feature extraction on images from the spatial perspective, ignoring the significant difference between the foreground and background caused by their different frequencies, resulting in the acc...
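The frequency-domain motivation above (foreground and background differ in frequency content) can be illustrated with a simple FFT-based low/high split; the ideal filter and cutoff below are assumptions for illustration, not the patent's actual transform:

```python
import numpy as np

def split_frequencies(image, cutoff=0.25):
    """Split a grayscale image into low- and high-frequency components
    with an FFT-based ideal low-pass filter. `cutoff` is the fraction
    of the spectrum radius kept as 'low frequency' (an assumed
    parameter; the patent does not specify the transform or threshold)."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    mask = dist <= cutoff * min(h, w) / 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = image - low  # residual carries edges and fine detail
    return low, high

# Smooth gradient image: most energy should land in the low band.
img = np.add.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
low, high = split_frequencies(img)
assert np.allclose(low + high, img)  # exact by construction
```

The low band captures the smooth background-like structure while the high band retains edges and fine instance detail, which is exactly the information the background section says a purely spatial network tends to treat indiscriminately.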

Claims


Application Information

IPC(8): G06K9/00; G06K9/34; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/045; G06F18/253; Y02T10/40
Inventors: 毛琳, 任凤至, 杨大伟, 张汝波
Owner: DALIAN NATIONALITIES UNIVERSITY