318 results about "Scene segmentation" patented technology

Depth extraction method of merging motion information and geometric information

The invention discloses a depth extraction method merging motion information and geometric information, which comprises the following steps: (1) carrying out scene segmentation on each frame of a two-dimensional video image, separating the static background from the dynamic foreground; (2) processing the scene segmentation map by binarization and filtering; (3) generating a geometric depth map of the static background based on the geometric information; (4) calculating the motion vector of the foreground object and converting it into a motion amplitude; (5) linearly transforming the motion amplitude of the foreground object according to its position to obtain a motion depth map; and (6) merging the motion depth map and the geometric depth map, and filtering to obtain the final depth map. The method calculates motion vectors only for the separated dynamic foreground objects, thereby eliminating mismatched points in the background and reducing the amount of calculation. Meanwhile, the motion amplitude of the foreground object is linearly transformed according to its position and merged into the background depth, improving the overall quality of the depth map.
Owner:万维显示科技(深圳)有限公司 (Wanwei Display Technology (Shenzhen) Co., Ltd.)
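The depth-fusion pipeline above can be sketched in a few lines of numpy. The row-based geometric depth model and the position-dependent linear mapping below are illustrative assumptions, not the patent's exact formulas:

```python
import numpy as np

def geometric_depth(h, w):
    """Geometric depth map for the static background: rows lower in the
    frame (closer to the camera) get smaller depth values."""
    rows = np.linspace(1.0, 0.0, h).reshape(-1, 1)  # top = far, bottom = near
    return np.tile(rows, (1, w))

def motion_depth(motion_mag, fg_mask):
    """Linearly transform the foreground motion amplitude by vertical
    position, so faster and lower objects read as closer (smaller depth)."""
    h, _ = motion_mag.shape
    rows = np.linspace(1.0, 0.0, h).reshape(-1, 1)
    depth = rows * (1.0 - motion_mag)  # illustrative linear mapping
    return np.where(fg_mask, depth, 0.0)

def fuse_depth(geo, mot, fg_mask):
    """Merge the motion depth map into the geometric background depth:
    foreground pixels take the motion depth, the rest keep geometry."""
    return np.where(fg_mask, mot, geo)
```

A final smoothing filter over the fused map (step 6) is omitted here for brevity.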

Generative adversarial network-based pixel-level portrait cutout method

The invention discloses a generative adversarial network-based pixel-level portrait cutout method, and solves the problem that massive data sets with huge production costs are needed to train and optimize a network in the field of machine cutout. The method comprises the steps of presetting a generative network and a judgment network in an adversarial learning mode, wherein the generative network is a deep neural network with skip connections; inputting a real image containing a portrait to the generative network to output a person and scene segmentation image; inputting first and second image pairs to the judgment network to output a judgment probability, and determining loss functions of the generative network and the judgment network; adjusting the configuration parameters of the two networks by minimizing the values of their loss functions to finish training the generative network; and inputting a test image to the trained generative network to generate the person and scene segmentation image, randomizing the generated image, and finally inputting the probability matrix to a conditional random field for further optimization. According to the method, the quantity of training images required is reduced, and the efficiency and the segmentation precision are improved.
Owner:XIDIAN UNIV
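The adversarial objective described above reduces to two standard binary cross-entropy terms. This minimal numpy sketch assumes the judgment network already outputs probabilities (not logits) and omits the conditional-random-field post-processing:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Judgment-network loss: push D(real pair) toward 1
    and D(generated pair) toward 0."""
    return -np.log(d_real) - np.log(1.0 - d_fake)

def generator_loss(d_fake):
    """Generative-network loss: minimise -log D(fake),
    i.e. try to fool the judgment network."""
    return -np.log(d_fake)
```

Training alternates the two: one gradient step on the judgment network, one on the generative network, until the judgment probability settles near 0.5.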

Improved Euclidean clustering-based scattered workpiece point cloud segmentation method

The invention provides an improved Euclidean clustering-based scattered workpiece point cloud segmentation method, relating to the field of point cloud segmentation. The method proposes a scene segmentation scheme addressing the inherent disorder and randomness of scattered workpiece point clouds. The specific steps are: preprocessing the point clouds by removing background points with a RANSAC method and removing outliers with an iterative radius filtering method. An information registration method for offline template point clouds provides a parameter selection basis for online segmentation, thereby increasing the online segmentation speed; a strategy of first removing edge points, then performing cluster segmentation, and finally supplementing the edge points is proposed, so that under-segmentation or over-segmentation in the clustering process is avoided; during cluster segmentation, an adaptive neighborhood search radius-based clustering method is proposed, so that the segmentation speed is greatly increased; and surface features of the workpieces are preserved when the edge points are supplemented, so that subsequent pose localization accuracy is improved.
Owner:WUXI XINJIE ELECTRICAL
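Euclidean clustering with an adaptive search radius might look like the following greedy region-growing sketch. Scaling the radius by the local k-nearest-neighbour distance is an assumption standing in for the patent's adaptive-radius rule, and the brute-force distance matrix would be replaced by a k-d tree at real point-cloud sizes:

```python
import numpy as np

def euclidean_cluster(points, base_radius=0.5, k=5):
    """Greedy region growing; the search radius adapts to local density
    (mean distance to up to k nearest neighbours)."""
    n = len(points)
    labels = -np.ones(n, dtype=int)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    # adaptive radius: scale the base radius by local k-NN distance
    knn = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)
    radius = base_radius * knn / (knn.mean() + 1e-9)
    cluster = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        frontier = [seed]
        labels[seed] = cluster
        while frontier:
            p = frontier.pop()
            for q in np.where((d[p] <= radius[p]) & (labels == -1))[0]:
                labels[q] = cluster
                frontier.append(q)
        cluster += 1
    return labels
```

The edge-point removal and supplementation steps would bracket this routine: cluster only interior points, then assign each edge point to its nearest cluster.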

Segmentation and semantic annotation method of geometry grid scene model

Inactive · CN103268635A · solves the difficulty of handling touching objects · Semantic Annotation · Character and pattern recognition · 3D modelling · Cluster algorithm · Automatic segmentation
The invention relates to the technical field of computer graphics, in particular to a segmentation and semantic annotation method for a geometry grid scene model. The method includes the following steps: building a three-dimensional training set, wherein each three-dimensional model in the training set is required to be a single object; automatically segmenting the scene model into multiple objects according to the training set on the basis of a hierarchical clustering algorithm; classifying the segmentation results by extracting the shape characteristics of each segmented object and deciding its class label according to the classification algorithm; and collecting the semantics of the scene model by gathering the class labels of the objects into a semantic label set for the scene model. Compared with the prior art, the method uses known shape knowledge in the training set to assist decision making during automatic segmentation of the scene model. Therefore, the problem that touching objects are difficult to process during scene segmentation is solved, and the semantic annotation of the scene model better fits people's visual perception of scenes.
Owner:BEIJING JIAOTONG UNIV
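The classification step could be as simple as nearest-neighbour matching of shape descriptors against the training set; the patent does not name its classifier, so this numpy sketch is only one plausible instantiation:

```python
import numpy as np

def classify_segments(segment_feats, train_feats, train_labels):
    """Label each segmented object with the class of the training-set
    model whose shape descriptor is closest in Euclidean distance."""
    out = []
    for f in segment_feats:
        idx = np.argmin(np.linalg.norm(train_feats - f, axis=1))
        out.append(train_labels[idx])
    return out
```

The semantic label set of the whole scene is then just the set of labels returned here.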

Automatic driving processing method and apparatus based on scene segmentation and computing device

The invention discloses an automatic driving processing method and apparatus based on scene segmentation, and a computing device. The method comprises the following steps: a current frame image in the video shot and/or recorded by an image collection device is obtained in real time during vehicle driving; the current frame image is input into a second neural network to obtain a scene segmentation result corresponding to the current frame image, wherein the second neural network is obtained by guiding and training with the output data of at least one intermediate layer of a pre-trained first neural network, and the first neural network has more layers than the second neural network; according to the scene segmentation result, a driving route and/or a driving instruction is determined; and autonomous driving control is exerted on the vehicle according to the determined driving route and/or driving instruction. With this method, apparatus and computing device, the trained neural network with a small number of layers achieves fast and accurate calculation of the scene segmentation result, the scene segmentation result is used to accurately determine the driving route and/or the driving instruction, and automatic driving safety is improved.
Owner:BEIJING QIHOO TECH CO LTD
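"Guiding and training" a small network with a large network's intermediate layer is the distillation idea; a sketch of such an objective, assuming mean-squared feature matching plus the usual cross-entropy (the weighting `alpha` is an illustrative choice):

```python
import numpy as np

def guidance_loss(student_feat, teacher_feat, student_logits, labels, alpha=0.5):
    """Guided-training objective: match the small (second) network's
    intermediate features to the large (first) network's, plus the
    ordinary cross-entropy on the segmentation labels."""
    feat_term = np.mean((student_feat - teacher_feat) ** 2)
    # softmax over the student's logits
    probs = np.exp(student_logits - student_logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    return alpha * feat_term + (1 - alpha) * ce
```

At inference time only the small network runs, which is what makes the per-frame segmentation fast enough for driving.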

Video semantic scene segmentation method based on convolutional neural network

The invention discloses a video semantic scene segmentation method based on a convolutional neural network, which is mainly divided into two parts: in the first part, a convolutional neural network is built on the basis of shot segmentation, and semantic feature vectors of video key frames are obtained by using the built convolutional neural network; in the second part, the Bhattacharyya distance between the semantic feature vectors of two shot key frames is calculated by using the temporal continuity of the preceding and following key frames, and the semantic similarity of the shot key frames is obtained by measuring the Bhattacharyya distance. The probability estimates of the different semantics output by the convolutional neural network act as the semantic feature vector of a frame. Considering the time-sequence nature of scene partition over continuous time, shot similarity is compared by combining the semantic features of the two shot key frames and the time-sequence feature distance between the shots, and thus the final scene segmentation result is obtained. The method has a certain universality and achieves a good scene segmentation effect when the training sets are sufficient.
Owner:HUAZHONG UNIV OF SCI & TECH
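Since the semantic feature vectors are probability estimates, the Bhattacharyya distance between two key frames is directly computable; a minimal sketch:

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Distance between two semantic feature vectors treated as probability
    distributions; smaller distance means more similar shots."""
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient in [0, 1]
    return -np.log(bc + 1e-12)
```

Adjacent shots whose key-frame distance falls below a threshold would be merged into one scene; the threshold and the time-sequence weighting are the patent's tunables.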

Road scene segmentation method based on residual network and expanded convolution

The invention discloses a road scene segmentation method based on a residual network and expanded (dilated) convolution. The method comprises: constructing a convolutional neural network in the training stage, the hidden layer of which is composed of ten residual blocks arranged in sequence; inputting each original road scene image in the training set into the convolutional neural network for training to obtain 12 semantic segmentation prediction images corresponding to each original road scene image; and calculating a loss function value between the set formed by the 12 semantic segmentation prediction images corresponding to each original road scene image and the set formed by the 12 one-hot coded images derived from the corresponding real semantic segmentation image, to obtain the optimal weight vector of the convolutional neural network classification training model. In the test stage, prediction is carried out by utilizing the optimal weight vector of the convolutional neural network classification training model, obtaining the predicted semantic segmentation image corresponding to the road scene image to be semantically segmented. The method has the advantages of low computational complexity, high segmentation efficiency, high segmentation precision and good robustness.
Owner:ZHEJIANG UNIVERSITY OF SCIENCE AND TECHNOLOGY
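Expanded (dilated) convolution enlarges the receptive field by spacing the kernel taps apart rather than adding parameters. A one-dimensional numpy sketch of the operation (the patent applies the 2-D analogue inside its residual blocks):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=2):
    """Dilated convolution: kernel taps are spaced `dilation` samples
    apart, so a length-k kernel covers (k-1)*dilation + 1 samples."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out
```

With dilation 1 this reduces to ordinary valid convolution; increasing the dilation widens the context each output sees at the same parameter count.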

Scene-segmentation-based semantic watermark embedding method for video resource

The invention discloses a scene-segmentation-based semantic watermark embedding method for a video resource. A video semantic information set containing content semantic information, control semantic information and optional physical attribute information is first generated; the original video sequence of the video resource is then segmented, and a scene video sequence with higher texture complexity and more dramatic inter-frame change is selected as the target scene video sequence. When the target scene video sequence is compression-coded, the control semantic information and the physical attribute information are embedded into the I frames of each group of pictures (GOP), the content semantic information is embedded into the non-I frames, and a compressed code stream containing semantic watermarks is then generated. The semantic information is represented by means of plain text and mapping codes and then embedded respectively into the non-I frames and I frames of the GOPs of the compressed code of the target scene video sequence, so that the embedding capacity of the semantic watermarks is increased and the robustness is enhanced, without causing a noticeable reduction in the quality of the video resource.
Owner:SOUTHWEST UNIV OF SCI & TECH
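Selecting the target scene by "higher texture complexity and more dramatic inter-frame change" can be sketched by scoring each candidate scene; using per-frame variance for texture and mean absolute frame difference for change is an assumption, since the patent does not state its exact measures:

```python
import numpy as np

def select_target_scene(scenes):
    """Return the index of the scene (a list of grayscale frames) whose
    texture complexity plus inter-frame change is highest."""
    def score(frames):
        texture = np.mean([f.var() for f in frames])            # texture proxy
        change = np.mean([np.abs(frames[i + 1] - frames[i]).mean()
                          for i in range(len(frames) - 1)])      # motion proxy
        return texture + change
    return max(range(len(scenes)), key=lambda i: score(scenes[i]))
```

Busy, fast-changing scenes mask watermark distortion best, which is why the method embeds there.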

Vehicle violation video processing method and device, computer equipment and storage medium

The application relates to a vehicle violation video processing method and device, computer equipment and a storage medium. The method comprises the steps of: acquiring license plate information of a target vehicle and multiple frames of video images in a vehicle violation video; detecting each frame of video image with a target detection model to obtain position information of each vehicle in each frame; determining the driving direction of the target vehicle according to the license plate information of the target vehicle and the position information of each vehicle in each frame; performing scene segmentation on each frame with a segmentation model to obtain segmentation results corresponding to all frames, fusing the segmentation results to determine the final scene information, and determining the guide line type of the lane where the target vehicle is located according to the final scene information; determining whether the guide line type matches the driving direction of the target vehicle; and if so, determining that the target vehicle does not break traffic regulations. Therefore, whether the target vehicle breaks traffic regulations can be determined by using multiple frames of video images, so that the examination accuracy is improved.
Owner:上海眼控科技股份有限公司 (Shanghai Eye Control Technology Co., Ltd.)
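One plausible reading of "fusing the segmentation results" across frames is a pixel-wise majority vote over the per-frame label maps; the patent does not specify its fusion rule, so this numpy sketch is illustrative:

```python
import numpy as np

def fuse_segmentations(per_frame_labels):
    """Fuse per-frame segmentation label maps by pixel-wise majority vote:
    each pixel takes the class most frames agreed on."""
    stack = np.stack(per_frame_labels)           # (frames, H, W) integer labels
    n_classes = stack.max() + 1
    votes = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)                  # (H, W) fused label map
```

Voting across frames suppresses per-frame segmentation noise, which matters when a single misread lane marking could flip the violation decision.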

Image scene segmentation method, device, computing device and computer storage medium

The invention discloses an image scene segmentation method, a device, a computing device and a computer storage medium. The image scene segmentation method is executed on the basis of a trained scene segmentation network. The method comprises the following steps: acquiring a to-be-segmented image; inputting the to-be-segmented image into the scene segmentation network, wherein the scene segmentation network contains at least one convolution layer; scaling the first convolution block of the convolution layer by means of a scale coefficient output by a scale regression layer to obtain a second convolution block, wherein the scale regression layer is an intermediate convolution layer of the scene segmentation network; carrying out the convolution operation of the convolution layer with the second convolution block to obtain the output result of the convolution layer; and outputting the scene segmentation result corresponding to the to-be-segmented image. According to this technical scheme, adaptive scaling of the receptive field is realized; the scene segmentation result can be quickly obtained through the trained scene segmentation network, and the image scene segmentation accuracy and the processing efficiency are improved.
Owner:BEIJING QIHOO TECH CO LTD
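Scaling a convolution block by a regressed coefficient can be pictured as resampling the kernel's spatial support; nearest-neighbour resampling below is a simplification of whatever interpolation the patent uses:

```python
import numpy as np

def scale_kernel(kernel, s):
    """Scale a square conv kernel's spatial support by coefficient s
    via nearest-neighbour resampling, enlarging (s > 1) or shrinking
    (s < 1) the receptive field it covers."""
    k = kernel.shape[0]
    new_k = max(1, int(round(k * s)))
    idx = np.clip((np.arange(new_k) / s).astype(int), 0, k - 1)
    return kernel[np.ix_(idx, idx)]
```

Because the scale coefficient comes from a regression layer inside the same network, each image effectively chooses its own receptive-field size.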

Video processing method and device, electronic equipment and storage medium

Active · CN110213670A · avoids overly long clips · Selective content distribution · Feature vector · Scene segmentation
The invention provides a video processing method and device, electronic equipment and a storage medium. The video processing method comprises the steps of obtaining a to-be-processed video, and dividing the to-be-processed video into a plurality of units of to-be-processed videos; obtaining a scene feature vector and an audio feature vector corresponding to each unit of to-be-processed video; determining scene pre-segmentation points according to the scene feature vectors corresponding to every two adjacent units of to-be-processed videos, and determining audio pre-segmentation points according to the audio feature vectors corresponding to every two adjacent units of to-be-processed videos; performing scene segmentation on the to-be-processed video according to the scene pre-segmentation point, and searching a video clip of which the duration exceeds a set maximum duration threshold from video clips obtained by scene segmentation to serve as a to-be-segmented video clip; and carrying out audio segmentation on the to-be-segmented video clip according to the audio pre-segmentation point to obtain a segmented video clip. According to the invention, the accuracy of splitting is improved, and the requirements of users are better met.
Owner:BEIJING QIYI CENTURY SCI & TECH CO LTD
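The two-stage splitting logic (cut at scene points first, then cut any over-long clip at the audio points inside it) can be sketched in plain Python; the numeric time representation is an assumption:

```python
def split_video(duration, scene_points, audio_points, max_len):
    """Cut at scene pre-segmentation points; any resulting clip longer
    than max_len is further cut at the audio pre-segmentation points
    that fall inside it. Times are seconds from the start."""
    bounds = [0.0] + sorted(set(scene_points)) + [duration]
    clips = []
    for start, end in zip(bounds, bounds[1:]):
        if end - start > max_len:
            inner = [t for t in sorted(audio_points) if start < t < end]
            sub = [start] + inner + [end]
            clips += list(zip(sub, sub[1:]))
        else:
            clips.append((start, end))
    return clips
```

Audio cuts are safer fallbacks than arbitrary ones: they land on pauses rather than mid-sentence, which is why they refine only the clips the scene cuts left too long.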

Method and system for converting two-dimensional video of complex scene into three-dimensional video

The invention discloses a method and a system for converting a two-dimensional video of a complex scene into a three-dimensional video. The method comprises the steps of: carrying out scene segmentation on an input two-dimensional video, segmenting each frame into a foreground part and a background part; judging the type of a foreground target, carrying out three-dimensional reconstruction on the foreground part according to the type of the target; judging whether the background part moves or not, if so, carrying out three-dimensional reconstruction on the background by using a motion-based reconstructing method, if not, carrying out three-dimensional reconstruction on the background by using a tone-based reconstructing method; synthesizing a three-dimensional reconstruction result obtained from the foreground part into a three-dimensional foreground; synthesizing a three-dimensional reconstruction result obtained from the background part into a three-dimensional background; and synthesizing the synthesized three-dimensional foreground and the synthesized three-dimensional background into the three-dimensional video for outputting. The invention can realize high-accuracy three-dimensional conversion on the two-dimensional video of the complex scene, process different types of targets and different types of videos, and generate the three-dimensional video with vivid effect.
Owner:上海易维视科技有限公司 (Shanghai Yiweishi Technology Co., Ltd.)
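The branch between the motion-based and tone-based background reconstructions hinges on detecting whether the background moves; a mean-absolute-difference test with a hypothetical threshold is one simple way to make that judgment:

```python
import numpy as np

def background_moves(frames, threshold=1.0):
    """Decide which background reconstruction to use: motion-based if the
    background pixels change between frames, tone-based otherwise.
    `threshold` is an illustrative tuning parameter."""
    diffs = [np.abs(frames[i + 1] - frames[i]).mean()
             for i in range(len(frames) - 1)]
    return float(np.mean(diffs)) > threshold
```

In the full pipeline this check runs on the background part only, after the foreground has been segmented out of each frame.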

A method for integrating driving scene target recognition and traveling area segmentation

The invention discloses an integrated method for driving scene target recognition and travelable area segmentation. The method comprises the following steps: S1, a shared feature module obtains parameter configuration information and image information from the input vehicle vision system; S2, the target detection network module outputs the target category, the abscissa of the upper left corner of the target, the ordinate of the upper left corner of the target, the width of the target and the height of the target according to the three target detection features of different sizes input in the target detection ROI area; S3, the target detection network module carries out confidence threshold filtering and non-maximum suppression on the target category, the abscissa of the upper left corner of the target, the ordinate of the upper left corner of the target, the width of the target and the height of the target, and merges and outputs a target detection list; S4, the scene segmentation network module outputs a single-channel passable-area binary map corresponding to the scene segmentation features according to the three scene segmentation features of different sizes input in the scene segmentation ROI area. By adopting the invention, the robustness and the accuracy are greatly improved.
Owner:ZHEJIANG LEAPMOTOR TECH CO LTD
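Step S3's confidence filtering and non-maximum suppression on (x, y, w, h) detections follows the standard recipe; the thresholds here are illustrative defaults:

```python
import numpy as np

def nms(boxes, scores, conf_thresh=0.5, iou_thresh=0.5):
    """Confidence-threshold filtering followed by greedy non-maximum
    suppression. boxes: (N, 4) array of (x, y, w, h); scores: (N,)."""
    keep_mask = scores >= conf_thresh
    boxes, scores = boxes[keep_mask], scores[keep_mask]
    order = np.argsort(-scores)          # highest confidence first
    kept = []
    while len(order):
        i = order[0]
        kept.append(i)
        rest = order[1:]
        # intersection-over-union between box i and the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 0] + boxes[i, 2], boxes[rest, 0] + boxes[rest, 2])
        y2 = np.minimum(boxes[i, 1] + boxes[i, 3], boxes[rest, 1] + boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        union = boxes[i, 2] * boxes[i, 3] + boxes[rest, 2] * boxes[rest, 3] - inter
        order = rest[inter / (union + 1e-9) <= iou_thresh]
    return boxes[kept], scores[kept]
```

The surviving boxes, together with their categories, form the merged target detection list that S3 outputs.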