A Video Semantic Segmentation Method Based on Optical Flow Feature Fusion

A feature-fusion and semantic-segmentation technology in the field of video processing. It addresses the low segmentation accuracy, large computational load, and high segmentation latency of existing video semantic segmentation algorithms, with the effects of improving semantic representation, saving computation time, and increasing speed.

Active Publication Date: 2022-08-05
UNIV OF ELECTRONICS SCI & TECH OF CHINA


Problems solved by technology

[0004] First, in autonomous-driving applications, video data contains many complex instances, which lowers the semantic segmentation accuracy of video semantic segmentation algorithms.
[0005] Second, compared with image semantic segmentation, video semantic segmentation must process a much larger amount of data, which increases the computational load of the algorithm and results in high segmentation latency.




Embodiment Construction

[0057] As shown in Figure 1, the video semantic segmentation method based on optical flow feature fusion provided by the present invention includes the following steps:

[0058] Step 1: determine whether the current video frame image of the video sequence is a key frame image or a non-key frame image; if it is a key frame image, perform step 2; if it is a non-key frame image, perform step 3.
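The patent text does not specify how key frames are selected. A minimal sketch, assuming a fixed scheduling interval (a common policy in flow-based video segmentation; the interval value here is purely illustrative):

```python
def is_key_frame(frame_index, interval=5):
    """Decide whether a frame is a key frame.

    A fixed interval is an assumed, illustrative policy; the patent
    does not disclose its exact selection rule.
    """
    return frame_index % interval == 0

# Frames 0, 5, 10, ... take the full feature-extraction path (step 2);
# the remaining frames reuse flow-propagated features (step 3).
schedule = [is_key_frame(i) for i in range(8)]
```

Under this policy, most frames skip the expensive backbone, which is where the speed gain of the method comes from.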

[0059] Step 2: extract a high-level semantic feature map of the current video frame image that fuses position-dependent information and channel-dependent information.
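Fusing position-dependent and channel-dependent information can be sketched with a dual-attention computation in the spirit of DANet-style position and channel attention. This NumPy version is a simplified illustration (no learned projection matrices or scaling factors); the patent's exact formulation may differ:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_position_channel_attention(feat):
    """feat: (C, H, W) backbone feature map.

    Returns features enriched with position-wise and channel-wise
    dependencies, fused by element-wise summation.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)               # (C, N), N = H*W

    # Position attention: each spatial location attends to all locations.
    pos_sim = softmax(x.T @ x, axis=-1)      # (N, N) affinity
    pos_out = (x @ pos_sim.T).reshape(c, h, w)

    # Channel attention: each channel attends to all channels.
    ch_sim = softmax(x @ x.T, axis=-1)       # (C, C) affinity
    ch_out = (ch_sim @ x).reshape(c, h, w)

    # Element-wise sum fuses both dependency types with the input.
    return feat + pos_out + ch_out
```

The output has the same shape as the input, so it can drop into the pipeline wherever the backbone features are consumed.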

[0060] Step 3: obtain the high-level semantic feature map of the current video frame image by computing the optical flow field.
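For non-key frames, flow-based methods typically warp the cached key-frame features to the current frame instead of re-running the backbone. A bilinear-sampling sketch of that warping, assuming a flow field pointing from the current frame back to the key frame (the interface is an assumption, not the patent's exact implementation):

```python
import numpy as np

def warp_features(feat, flow):
    """Warp key-frame features to the current frame via optical flow.

    feat: (C, H, W) key-frame feature map.
    flow: (2, H, W) displacements (dx, dy) from the current frame
          back to the key frame. Border samples are clamped.
    """
    c, h, w = feat.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_x = np.clip(xs + flow[0], 0, w - 1)
    src_y = np.clip(ys + flow[1], 0, h - 1)

    # Bilinear interpolation between the four neighbouring samples.
    x0 = np.floor(src_x).astype(int); x1 = np.clip(x0 + 1, 0, w - 1)
    y0 = np.floor(src_y).astype(int); y1 = np.clip(y0 + 1, 0, h - 1)
    wx, wy = src_x - x0, src_y - y0

    return (feat[:, y0, x0] * (1 - wx) * (1 - wy)
            + feat[:, y0, x1] * wx * (1 - wy)
            + feat[:, y1, x0] * (1 - wx) * wy
            + feat[:, y1, x1] * wx * wy)
```

With zero flow this reduces to the identity, which is a handy sanity check when wiring up a pipeline like this.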

[0061] Step 4: upsample the high-level semantic feature map obtained in step 2 or step 3 to obtain the semantic segmentation map.
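The final step turns low-resolution per-class scores into a full-resolution label map. A minimal sketch using nearest-neighbour upsampling for simplicity (the method itself may well use bilinear interpolation):

```python
import numpy as np

def segmentation_map(logits, scale):
    """Upsample per-class logits and take the per-pixel argmax.

    logits: (K, h, w) low-resolution class scores for K classes.
    Returns an (h*scale, w*scale) integer label map.
    """
    up = logits.repeat(scale, axis=1).repeat(scale, axis=2)
    return up.argmax(axis=0)
```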

[0062] The features and performance of the present invention are described in further detail below in conjunction with the embodiments.

...



Abstract

The invention discloses a video semantic segmentation method based on optical flow feature fusion, comprising the following steps. Step 1: determine whether the current video frame image of a video sequence is a key frame image or a non-key frame image; if it is a key frame image, perform step 2; if it is a non-key frame image, perform step 3. Step 2: extract a high-level semantic feature map of the current video frame image that fuses position-dependent information and channel-dependent information. Step 3: obtain the high-level semantic feature map of the current video frame image by computing the optical flow field. Step 4: upsample the high-level semantic feature map obtained in step 2 or step 3 to obtain a semantic segmentation map. The method integrates the optical flow field with the attention mechanism, improving both the speed and accuracy of video semantic segmentation.

Description

Technical Field

[0001] The invention relates to the technical field of video processing, and in particular to a video semantic segmentation method based on optical flow feature fusion.

Background Technique

[0002] With the growing market demand for automotive active safety and intelligence, more and more companies and research institutions have begun to focus on the research and development of autonomous driving systems. Environment perception technology, as the eyes and ears of an autonomous vehicle, provides support for its behavioral decision-making system. Within autonomous-driving environment perception, fast and accurate semantic segmentation of the real-time video data collected by vehicle cameras is a crucial technology.

[0003] The core problem of semantic segmentation of real driving scenes for autonomous vehicles is to extract road semantic information, and to improve the segmentation speed of th...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V20/70; G06V20/40; G06V20/56; G06V10/26; G06V10/62; G06V10/44; G06V10/74; G06V10/80; G06V10/82
CPC: G06V20/46; G06V20/48; G06V20/49; G06V20/56; G06V10/454; G06V10/267; G06F18/22; G06F18/253
Inventors: 周世杰, 王蒲, 程红蓉, 刘启和, 廖永建, 潘鸿韬
Owner: UNIV OF ELECTRONICS SCI & TECH OF CHINA