
Method for converting infrared video into visible light video in unmanned driving

A video conversion technology for unmanned driving, applied in the field of video conversion, which addresses the problems of small data volume, a large inter-domain gap, and fused outputs that retain the style of the infrared image, to achieve the effects of an increased mutual-information distance, optimized detail generation, and alleviated style drift.

Active Publication Date: 2021-11-23
BEIJING INSTITUTE OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

[0007] As another example, most of the infrared and visible light datasets available online for unmanned driving tasks consist of still images and do not provide continuous, corresponding infrared and visible light video data. Although the VOT2019, FLIR, and KAIST datasets provide corresponding infrared and visible light data of the same scenes, their data volume is small and their collection scenes are limited. VOT2019 provides 60 video clips, but they are infrared and visible light data from surveillance scenes and cannot be effectively applied to unmanned driving video tasks; FLIR contains only a single video clip and cannot provide diverse data; and although KAIST has a large amount of data, its collection scene is limited, the quality of its infrared data is poor, and it lacks diversity.
[0008] For example, among the above-mentioned invention patent applications, one category is limited to visual-effect processing of existing visible light images of poor quality and does not exploit the advantageous information in infrared images; the other category uses image fusion, whose output still retains the style of the infrared image and therefore cannot express semantic information as intuitively as a visible light image.
[0009] In summary, because of the large gap between the two domains, the conversion between infrared and visible light videos in unmanned driving cannot be solved by traditional mathematical methods for color conversion; and because a video must remain consistent in time and space, existing image style transfer methods cannot meet the requirements of video style transfer.

Method used



Examples


Embodiment

[0064] For ease of understanding, this embodiment involves two video domains: a source domain X = {x} and a target domain Y = {y}. A video sequence x in the source domain is a continuous sequence of video frames {x_0, x_1, ..., x_t}, and similarly a video sequence y in the target domain is a continuous sequence of video frames {y_0, y_1, ..., y_s}. Note that the t-th frame of sequence x is denoted x_t and the s-th frame of sequence y is denoted y_s. The goal of the method described in this embodiment is to learn two different mappings between the source domain and the target domain, so that, given a video from either domain, the corresponding video in the other domain can be generated. For example, given an infrared video, the model can generate a visible light video of the corresponding scene by mapping.
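To make the two mappings concrete, the following is a minimal PyTorch sketch of a pair of frame-level generators G: X -> Y and F: Y -> X. The class and variable names, the channel counts, and the shallow network are illustrative assumptions, not the patent's actual architecture; a real implementation would use a much deeper encoder-decoder and process whole frame sequences.

# Hedged sketch of the two cross-domain mappings described in [0064];
# all architectural details below are placeholder assumptions.
import torch
import torch.nn as nn

class FrameGenerator(nn.Module):
    """Toy per-frame translator between domains; real generators are deeper."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, out_ch, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.net(frame)

# G maps infrared (source-domain) frames to visible light; F maps back.
# Infrared is assumed single-channel here, visible light three-channel.
G = FrameGenerator(in_ch=1, out_ch=3)   # X -> Y
F = FrameGenerator(in_ch=3, out_ch=1)   # Y -> X

x_t = torch.randn(1, 1, 128, 128)       # one infrared frame x_t
y_hat = G(x_t)                          # generated visible light frame
x_back = F(y_hat)                       # mapped back to the source domain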

[0065] The method described in this embodiment first builds a model of video style transf...



Abstract

The invention discloses a method for converting an infrared video into a visible light video in unmanned driving. The method comprises the following steps: 1, inputting an infrared source-domain video and a visible light target-domain video, and outputting video frame images; 2, initializing parameters; 3, randomly reading in data; 4, generating corresponding predicted generated video frames; 5, generating corresponding generated video frames; 6, generating visible light prediction frames; 7, calculating a loss function; 8, optimizing the parameters of the generator, the feature extractor MLP, the predictor, and the discriminator; and 9, repeating steps 3-8 until the maximum iteration number N is reached or the model parameters converge. The method optimizes video frame generation in terms of both content and style, yields better model output, ensures good temporal and spatial consistency of the model output, and effectively alleviates common inter-frame problems such as style drift, blur, and flicker.
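Read procedurally, steps 3-9 above amount to an adversarial training loop. The sketch below illustrates that loop in Python; the module interfaces, the placeholder L1/BCE loss terms, and the joint parameter update are assumptions for illustration, since this excerpt does not spell out the actual objective (which, per the summary above, also involves a mutual-information term).

# Hedged skeleton of training steps 3-9 from the abstract; all loss terms
# and module interfaces below are placeholder assumptions.
import torch
import torch.nn as nn

def train(loader, generator, feature_mlp, predictor, discriminator,
          optimizers, max_iters: int) -> None:
    l1 = nn.L1Loss()
    bce = nn.BCEWithLogitsLoss()
    it = 0
    while it < max_iters:                     # step 9: until N or convergence
        for ir, vis in loader:                # step 3: randomly read in data
            fake_vis = generator(ir)          # step 5: generated video frame
            pred_vis = predictor(fake_vis)    # steps 4/6: prediction frames
            logits = discriminator(fake_vis)
            # step 7: placeholder objective -- an adversarial term plus crude
            # stand-ins for the patent's content/style and MI-based terms.
            adv = bce(logits, torch.ones_like(logits))
            consistency = l1(pred_vis, fake_vis)
            feat = l1(feature_mlp(fake_vis), feature_mlp(vis))
            loss = adv + consistency + feat
            # step 8: update generator, feature extractor MLP, predictor, and
            # discriminator; a real GAN implementation would alternate
            # separate discriminator and generator objectives.
            for opt in optimizers:
                opt.zero_grad()
            loss.backward()
            for opt in optimizers:
                opt.step()
            it += 1
            if it >= max_iters:
                return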

Description

Technical Field

[0001] The invention relates to the technical field of video conversion, and in particular to a method for converting infrared video into visible light video in unmanned driving.

Background Technique

[0002] With the development of science and technology, unmanned driving has gradually entered people's lives. Through different on-board sensors, driverless cars can sense the outside world, automatically plan driving routes, and perform intelligent driving control; sensing the environment is one of the important steps of unmanned driving. In real-world scenarios, human vision and visible light sensor imaging are often degraded by poor lighting and extreme weather (such as rain and fog). In such cases, some vehicle navigation and monitoring systems use infrared sensors to assist in collecting visual signals: the principle of thermal imaging enables infrared sensors to obtain good visual signals under the above extreme conditions. However, single-channel infrared thermal imaging is not as easy to understand...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N5/33, H04N19/107, H04N19/174, H04N7/18, G06K9/00
CPC: H04N5/33, H04N19/107, H04N19/174, H04N7/18
Inventor: 李爽, 刘驰, 韩秉峰
Owner: BEIJING INSTITUTE OF TECHNOLOGY