
Video stitching method based on artificial intelligence technology

A technology combining artificial intelligence and video stitching, applied in the field of video stitching, which can solve problems such as the low efficiency of video editing, the low registration rate of edited video, and the resulting degradation of the overall video effect.

Active Publication Date: 2021-09-10
北博(厦门)智能科技有限公司
Cites: 10 · Cited by: 0

AI Technical Summary

Problems solved by technology

However, with the manual editing method, each video to be edited must be previewed by hand, which requires a large investment of equipment and human resources. Not only is editing efficiency low, but human operation errors also lower the registration rate of the edited video, leaving cracks, split layers, and bends that degrade the overall video effect.

Method used



Examples


Embodiment 1

[0068] Referring to figure 1, Embodiment 1 of the present invention is as follows:

[0069] A video stitching method based on artificial intelligence technology, comprising the steps of:

[0070] S1. Obtain the standing position and face orientation of the person to be recognized at the current moment, and predict the standing position and face orientation at the next moment.

[0071] Wherein, in this embodiment, step S1 specifically includes the following steps:

[0072] S11. The camera captures video during the current shooting cycle, and the current standing position and current face orientation of the person to be recognized are obtained from that video.

[0073] S12. Predict the next standing position in the next shooting cycle from the change in the movement track of the current standing position in the video captured during the current shooting cycle; the next standing orientation wi...
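The patent does not specify how the movement track is extrapolated in step S12. A minimal sketch of one plausible choice, linear extrapolation of the tracked standing positions over the current shooting cycle (the function and variable names here are illustrative, not from the patent):

```python
# Hypothetical sketch of step S12: predict the next standing position
# by linearly extrapolating the movement track observed during the
# current shooting cycle. The patent does not name a prediction model;
# linear extrapolation is only one simple possibility.

def predict_next_position(track):
    """track: list of (x, y) standing positions sampled over the
    current shooting cycle, ordered in time."""
    if len(track) < 2:
        # No motion information yet: assume the person stays put.
        return track[-1]
    (x0, y0), (x1, y1) = track[-2], track[-1]
    # Velocity over the last sample interval, applied one step forward.
    return (x1 + (x1 - x0), y1 + (y1 - y0))

current_track = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
print(predict_next_position(current_track))  # (3.0, 1.5)
```

A real system would likely smooth the track (e.g. with a filter) before extrapolating, since raw per-frame detections are noisy.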

Embodiment 2

[0096] Referring to figure 2, the second embodiment of the present invention is as follows:

[0097] Building on the first embodiment, this embodiment uses a deep learning method to continuously train and match the geometric motion model of the video, ultimately obtaining a trained artificial intelligence model. In this embodiment, a convolutional neural network model is used as the artificial intelligence model.

[0098] The convolutional neural network model is a feedforward neural network with strong representation learning ability. Because a convolutional neural network avoids complicated image preprocessing and can take the original image directly as input, it is widely used in image recognition, object recognition, behavior recognition, pose estimation, and other fields. Its artificial neurons respond only to units within their local receptive field, which limits the number of parameters and exploits local structure.

[0099] In this embod...
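Paragraph [0098] credits the CNN's efficiency to local receptive fields and shared weights. A minimal NumPy illustration of that idea, sliding one shared 3×3 kernel over an image so the parameter count stays at 9 regardless of image size (this is a toy demonstration, not the patent's actual network):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over the image ('valid' padding).
    Each output unit responds only to the local patch under the
    kernel, and all units share the same kernel weights."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1., 0., -1.]] * 3)  # simple vertical-edge filter
features = conv2d_valid(image, edge_kernel)
print(features.shape)    # (3, 3)
print(edge_kernel.size)  # 9 parameters, independent of image size
```

A fully connected layer mapping the same 5×5 input to a 3×3 output would need 225 weights; weight sharing is what keeps the convolutional parameter count constant as images grow.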



Abstract

The invention provides a video stitching method based on artificial intelligence technology, comprising the steps of: obtaining the standing position and face orientation of a person to be recognized at the current moment, and predicting the standing position and face orientation at the next moment; selecting a camera and adjusting its shooting angle according to the prediction result, then shooting; acquiring the video streams of a plurality of cameras; and extracting person features from the video streams with a pre-trained artificial intelligence model, then cutting and ordering the streams to obtain a stitched video. By predicting the person's standing position and face orientation, the camera and shooting angle are adjusted in advance, which ensures as far as possible that subsequently shot video contains the facial features and reduces shooting by unnecessary cameras. At the same time, the artificial intelligence model cuts and orders the videos automatically, reducing the investment of equipment and human resources, improving the registration rate of video stitching, and realizing seamless stitching of videos from different cameras or angles, so that a largely complete video effect can be presented.
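The abstract describes a four-stage pipeline: predict, point cameras, capture, then cut and stitch. A schematic sketch of that control flow, in which every function is a hypothetical stub standing in for the patent's actual components:

```python
# Schematic of the pipeline in the abstract. All names are illustrative;
# the prediction, camera control, and AI model are stubbed out.

def predict_pose(frame):
    # Stage 1 stub: estimate standing position and face orientation.
    return {"position": (0, 0), "face_orientation": 90}

def select_cameras(pose):
    # Stage 2 stub: pick cameras whose view covers the predicted pose.
    return ["cam_2", "cam_3"]

def capture(cameras):
    # Stage 3 stub: acquire one stream per selected camera.
    return {cam: f"{cam}_stream" for cam in cameras}

def cut_and_stitch(streams):
    # Stage 4 stub: the AI model would extract person features here,
    # then the segments are cut and ordered into one stitched video.
    return " + ".join(streams[cam] for cam in sorted(streams))

pose = predict_pose(frame=None)
streams = capture(select_cameras(pose))
print(cut_and_stitch(streams))  # cam_2_stream + cam_3_stream
```

The point of the sketch is the data flow: prediction gates camera selection, so cameras that cannot see the person never contribute a stream to the stitching stage.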

Description

Technical field

[0001] The invention relates to the technical field of video stitching, and in particular to a video stitching method based on artificial intelligence technology.

Background technique

[0002] At present, owing to the explosive development of self-media, all kinds of information transmission have shifted from traditional media such as newspapers and TV to various video apps. However, due to factors such as venue, weather, or angle, a video shot from only a single angle or camera position has great limitations. [0003] With the development of video editing technology, at this stage the video to be edited is manually previewed, and then edited and stitched based on a human understanding of it to obtain a multi-angle, multi-camera video. However, with the manual editing method, each video to be edited must be previewed by hand, and a large amount of equipment resources and...

Claims


Application Information

Patent Timeline
No application data available.
Patent Type & Authority: Application (China)
IPC (8): H04N21/44, H04N21/4402, H04N21/442, H04N5/232, G06K9/00
CPC: H04N21/44016, H04N21/440245, H04N21/44218, H04N23/611, H04N23/695
Inventor: 谢衍
Owner: 北博(厦门)智能科技有限公司