140 results about "Automatic extraction" patented technology

Picture classification system and method

Inactive · CN101635763A · Automatic extraction · Character and pattern recognition · Substation equipment · Synchronized Multimedia Integration Language · Classification methods
The invention discloses a picture classification system applied in communication devices, comprising a setting module, an extracting module, a characteristic identification module and a classifying module. The setting module is used for setting the classes of pictures and the storage locations of the different classes, for defining the basic characteristics of each class, and for setting a characteristic correspondence ratio; the extracting module uses Synchronized Multimedia Integration Language (SMIL) to decode a received MMS and extracts the pictures it contains; the characteristic identification module identifies the characteristics of a picture and compares each identified characteristic with the set basic characteristics: if the degree of correspondence between an identified characteristic and one of the basic characteristics reaches the preset characteristic correspondence ratio, the class to which that basic characteristic belongs is confirmed as the class of the extracted picture; the classifying module stores the picture in the storage location corresponding to the confirmed class. The invention also provides a picture classification method. The invention can automatically classify and store pictures received by the communication device.
Owner:SHENZHEN FUTAIHONG PRECISION IND CO LTD +1
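The threshold-based matching step described above can be sketched as follows. The function names, feature sets and threshold value are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the "characteristic correspondence ratio" matching step.
# CLASS_FEATURES and FEATURE_THRESHOLD are illustrative stand-ins.

def match_ratio(features, reference):
    """Fraction of a class's basic characteristics found among the extracted features."""
    if not reference:
        return 0.0
    return len(set(features) & set(reference)) / len(reference)

FEATURE_THRESHOLD = 0.6  # the patent's preset "characteristic correspondence ratio"

CLASS_FEATURES = {  # basic characteristics defined per class (illustrative)
    "landscape": {"sky", "horizon", "green"},
    "portrait":  {"face", "skin_tone", "eyes"},
}

def classify_picture(features):
    """Return the first class whose match ratio reaches the threshold, else None."""
    for cls, reference in CLASS_FEATURES.items():
        if match_ratio(features, reference) >= FEATURE_THRESHOLD:
            return cls
    return None
```

A picture whose extracted features cover enough of one class's basic characteristics is filed under that class; otherwise it stays unclassified.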

Road extraction method based on the shape characteristics of roads in remote sensing images

Inactive · CN104657978A · Automatic extraction · Overcomes the drawback of extracting only straight road segments · Image enhancement · Image analysis · Minimum bounding rectangle · Gray level
The invention discloses a road extraction method based on the shape characteristics of roads in remote sensing images. The method comprises the following steps: firstly, local regions of the preprocessed remote sensing image are segmented by exploiting the consistency of local gray levels on road surfaces and the large local gray-level difference between the target region and the background; then the segmentation results are combined with area and the shape characteristics of the Feret-box minimum bounding rectangle to obtain straight and curved road sections, region seed growing is performed on the confirmed roads, and most road sections are connected so as to extract the road information. Compared with the prior art, the method not only obtains straight and curved road sections, but also extracts seed points automatically and obtains road networks automatically; road extraction is unaffected by image rotation, the drawback that most methods can extract only straight road sections is overcome, and the method is more intelligent.
Owner:FUZHOU UNIV
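A much-simplified illustration of the pipeline above on a tiny grayscale grid: threshold segmentation, connected components, then an area-and-elongation shape test standing in for the Feret-box check. The threshold and shape parameters are assumptions, not the patent's values.

```python
# Illustrative sketch: segment dark (road-like) pixels, group them into
# 4-connected components, and keep only elongated regions.
from collections import deque

def segment(grid, thresh):
    """Binary mask of pixels darker than thresh (roads assumed darker here)."""
    return [[1 if v <= thresh else 0 for v in row] for row in grid]

def components(mask):
    """4-connected components of a binary mask, as lists of (row, col)."""
    h, w = len(mask), len(mask[0])
    seen, comps = set(), []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and (r, c) not in seen:
                comp, q = [], deque([(r, c)])
                seen.add((r, c))
                while q:
                    cr, cc = q.popleft()
                    comp.append((cr, cc))
                    for nr, nc in ((cr-1, cc), (cr+1, cc), (cr, cc-1), (cr, cc+1)):
                        if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and (nr, nc) not in seen:
                            seen.add((nr, nc))
                            q.append((nr, nc))
                comps.append(comp)
    return comps

def road_like(comp, min_area=4, min_elongation=3.0):
    """Area + bounding-box elongation test, a stand-in for the Feret-box shape check."""
    rs = [r for r, _ in comp]
    cs = [c for _, c in comp]
    height = max(rs) - min(rs) + 1
    width = max(cs) - min(cs) + 1
    elongation = max(height, width) / min(height, width)
    return len(comp) >= min_area and elongation >= min_elongation
```

Thin elongated components survive the shape test while compact blobs are rejected, mirroring how the method separates road sections from background regions before seed growing.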

An application of machine vision understanding in electric power remote monitoring

The invention relates to an application of machine vision understanding in electric power remote monitoring, belonging to the field of data identification. The application includes performing salient region detection on the image; carrying out edge detection; searching for the smallest square contour to obtain the dial positioning result; using Gamma correction combined with a homomorphic filter to enhance the de-noised image; using the Hough transform to find straight lines; applying the Hough transform to the pointer image after the salient region is detected; searching for the maximum Hough value in the transform space by setting a threshold, then inversely transforming it to the color space of the original image to obtain the straight-line equation of the pointer edge; transforming the line coordinates into a coordinate system whose origin is the center point of the dial; and using the slope of the line on which the pointer lies to obtain the pointer's angle of inclination, then calculating the corresponding meter reading from the correspondence between angle and reading. The application can be applied to remote data acquisition, automatic analysis and operation management of substations.
Owner:SHANGHAI MUNICIPAL ELECTRIC POWER CO +1
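The final angle-to-reading step can be sketched as a linear interpolation over the dial's sweep. The sweep range and scale span below are assumed values for illustration; the patent only states that the reading follows from the angle-reading correspondence.

```python
# Hypothetical sketch: pointer line direction -> inclination angle -> meter reading.
import math

def pointer_angle(dx, dy):
    """Angle (degrees) of the pointer line's direction vector."""
    return math.degrees(math.atan2(dy, dx))

def angle_to_reading(angle_deg, angle_min=-45.0, angle_max=225.0,
                     reading_min=0.0, reading_max=10.0):
    """Linear angle-to-reading map over an assumed 270-degree dial sweep."""
    frac = (angle_deg - angle_min) / (angle_max - angle_min)
    return reading_min + frac * (reading_max - reading_min)
```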

Combined navigation method based on INS (inertial navigation system)/GPS (global positioning system)/SAR (synthetic aperture radar)

A combined navigation method based on an INS (inertial navigation system)/GPS (global positioning system)/SAR (synthetic aperture radar) comprises the following steps: an INS unit, a GPS receiver and an SAR sensor are mounted in parallel at appropriate positions on an unmanned aerial vehicle; image matching is performed, and the INS position navigation deviation is obtained and input into a filter as an observed quantity, where it is filtered jointly with the other observed quantities; the GPS measurement deviation is obtained, likewise input into the filter, and filtered jointly with the other observed quantities; the filter integrates the GPS measurement deviation with the INS position navigation deviation obtained through image matching, and a navigation error estimate is calculated. Topographic features are automatically and reliably extracted from the SAR image, a map image is formed from the digital map, and the SAR topographic features are automatically predicted; topographic deviation is accurately estimated and integrated; capture and matching are automatically initialized; and the fault tolerance, autonomy and reliability of the navigation are high.
Owner:HENAN POLYTECHNIC UNIV
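The patent does not give its filter equations, but the core idea of combining the two deviation observations can be illustrated with a generic inverse-variance fusion, which is how a Kalman-style filter weights independent measurements. This is a sketch under that assumption, not the patent's actual filter.

```python
# Illustrative inverse-variance fusion of two scalar deviation observations,
# e.g. the SAR-image-matching INS deviation and the GPS measurement deviation.

def fuse(obs_a, var_a, obs_b, var_b):
    """Weight each observation by the inverse of its variance and combine."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    estimate = (w_a * obs_a + w_b * obs_b) / (w_a + w_b)
    variance = 1.0 / (w_a + w_b)  # fused estimate is more certain than either input
    return estimate, variance
```

The fused variance is always smaller than either input variance, which is why jointly filtering the INS and GPS deviations yields a better navigation error estimate than either source alone.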

Vanishing point based method for automatically extracting and classifying ground movement measurement image line segments

The invention relates to a vanishing-point-based method for automatically extracting and classifying line segments from ground mobile-measurement images. The method comprises the following steps: firstly, analysing the attributes of parallel ground-object characteristic lines of a target and calculating the vanishing points of the target image on the basis of the image sequences acquired by ground mobile measurement; secondly, extracting characteristic line segments along image edges according to the Hough transform extremum in the polar-coordinate parameter domain and the continuous change of image greyscale; then classifying the extracted planar line segments using the vanishing points; and finally integrating the classified line segments by the total least squares method under conditions on their planar adjacency and overlap, so as to extract the image characteristic line segments corresponding to multiple parallel ground-object lines. The invention can improve the indoor processing capability of a mobile measurement system and promote research on automatic recognition, mapping, three-dimensional reconstruction and the like of many ground objects (such as buildings and roads).
Owner:TONGJI UNIV
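The classification step can be sketched as follows: a segment is assigned to a vanishing point when its supporting line passes close to that point. The distance tolerance is an illustrative assumption.

```python
# Hedged sketch of vanishing-point-based segment classification.
import math

def point_line_distance(px, py, x1, y1, x2, y2):
    """Distance from (px, py) to the infinite line through (x1, y1)-(x2, y2)."""
    dx, dy = x2 - x1, y2 - y1
    return abs(dy * (px - x1) - dx * (py - y1)) / math.hypot(dx, dy)

def classify_segment(segment, vanishing_points, tol=5.0):
    """Index of the first vanishing point consistent with the segment, else None."""
    x1, y1, x2, y2 = segment
    for i, (vx, vy) in enumerate(vanishing_points):
        if point_line_distance(vx, vy, x1, y1, x2, y2) <= tol:
            return i
    return None
```

Segments consistent with the same vanishing point correspond to the same family of parallel lines in object space, which is what makes the later total-least-squares integration per class meaningful.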

Calligraphy font library automatic restoration method and system based on style migration

Pending · CN110570481A · Automatic extraction · Improves cases with large deformation · Texturing/coloring · Neural architectures · Code module · Discriminator
The invention provides a calligraphy font library automatic restoration method and system based on style migration. The method comprises the following steps: the input fonts and a standard style font are set; the input font image is fed into a coding module, which obtains latent feature information; a conversion module converts the feature information into feature information of the standard style font; a decoding module processes it to obtain a generated font image; the input font image and the generated font image are fed into a discriminator, which outputs the probability that the generated font image is a real standard-style font; similarly, the input font image and the standard-style font image are fed into the discriminator to obtain the probability that the standard-style font image is a real standard-style font; and finally the loss functions of the generator and the discriminator are obtained from the two probabilities. An optimizer adjusts the generator and the discriminator according to the loss functions until both converge, yielding a trained generator; a complete font library of standard-style fonts can then be obtained with the trained generator.
Owner:CHINA UNIV OF GEOSCIENCES (WUHAN)
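The two discriminator probabilities feed standard adversarial losses. The patent does not specify the exact formulation; the sketch below assumes the usual binary cross-entropy GAN losses.

```python
# Hedged sketch: GAN-style losses from the two discriminator probabilities.
import math

def discriminator_loss(p_real, p_fake):
    """BCE pushing p_real -> 1 (real standard-style font) and p_fake -> 0."""
    return -(math.log(p_real) + math.log(1.0 - p_fake))

def generator_loss(p_fake):
    """Non-saturating generator loss pushing p_fake -> 1."""
    return -math.log(p_fake)
```

Training alternates between the two losses until both converge, at which point the generator alone suffices to produce the restored font library.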

Three-dimensional human model joint center extraction method

The invention discloses a joint-center extraction method for a three-dimensional human model, comprising the following steps: the surface mesh information of the three-dimensional human model is imported, and the characteristic points at the extremities of the model are extracted from it; on the basis of these extremity characteristic points, the limbs of the model are segmented, and the approximate direction of each limb is computed; a group of parallel planes perpendicular to the approximate direction of a limb is used to intersect the model, yielding a group of sectional profiles; a circularity function is computed over the profile group to obtain a sequence of circularity values; the local minima of the circularity sequence are computed, by which the deformed profiles in the group are determined; and the centroid of each deformed profile is computed and taken as the joint center at the corresponding position of the model. The method can be applied to a three-dimensional human model in an arbitrary posture, and the extraction precision of the joint centers is comparatively high.
Owner:BEIJING ZONGYUAN TECHNOLOGY CO LTD (北京宗元科技有限公司)
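The circularity function and local-minimum search described above can be sketched as follows. Circularity here is taken as 4πA/P² (1.0 for a perfect circle); the patent's exact definition may differ.

```python
# Illustrative sketch: circularity of a cross-section and the local-minimum
# search that flags deformed (joint-region) profiles along a limb.
import math

def circularity(area, perimeter):
    """4*pi*A / P**2: equals 1.0 for a circle, smaller for deformed profiles."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def local_minima(seq):
    """Indices of strict local minima in a sequence of circularity values."""
    return [i for i in range(1, len(seq) - 1)
            if seq[i] < seq[i - 1] and seq[i] < seq[i + 1]]
```

Cross-sections near joints deviate most from circular, so dips in the circularity sequence mark the profiles whose centroids are taken as joint centers.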

Adhesive tape sticking device

The invention discloses an adhesive tape sticking device, which comprises a rack, a carrier turntable, a tape feeder, a tape sticking unit, a component discharge unit and a controller. Specifically, the tape feeder feeds the adhesive tape; the tape sticking unit is disposed at the tape discharge end of the feeder, and a suction nozzle on it can pick up a piece of tape. The carrier turntable is located on the rack and can convey the carrier on it to a position directly below the suction nozzle, where the nozzle applies the tape. The component discharge unit can send the carrier of a taped product into a discharge channel for unloading, and the controller controls the operation of each component. The device realizes automatic peeling and application of adhesive tape and automatic collection of taped products, achieving fully automatic operation with no need for manual work. Tape application is fast, and the applied tape is accurately positioned and firmly attached. Manpower and space are saved, production efficiency is improved, and production cost is reduced.
Owner:KUNSHAN FULIRUI ELECTRONICS TECH

Code function taste detection method based on deep semantics

The invention relates to a code function taste detection method based on deep semantics, and belongs to the technical field of automatic software refactoring. The method extracts semantic features and numeric features from textual information and structured information, and includes model training and model testing. Model training comprises code function representation A, structured feature extraction A and code taste classification A; model testing comprises code function representation B, structured feature extraction B and code taste classification B. Code function representations A and B are based on an attention mechanism and an LSTM neural network; structured feature extractions A and B are based on a convolutional neural network; and code taste classifications A and B use a multi-layer perceptron, giving a function-level, deep-learning-based code taste detection method. With a short detection time, the detection results achieve a high recall rate and accuracy.
Owner:BEIJING INSTITUTE OF TECHNOLOGY
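The final classification stage is a multi-layer perceptron over the fused features. Below is a framework-free forward pass of such an MLP; the layer sizes and weights are illustrative stand-ins, not the trained model.

```python
# Illustrative MLP forward pass: one ReLU hidden layer + sigmoid output,
# interpreted here as P(function exhibits the code taste).
import math

def mlp_forward(features, w_hidden, b_hidden, w_out, b_out):
    """features: fused semantic + structural feature vector."""
    hidden = [max(0.0, sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    logit = sum(w * h for w, h in zip(w_out, hidden)) + b_out
    return 1.0 / (1.0 + math.exp(-logit))
```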

Expression recognition method for optimizing convolutional neural network based on improved particle swarm optimization algorithm

The invention relates to an expression recognition method that optimizes a convolutional neural network with an improved particle swarm optimization algorithm. The method constructs a convolutional neural network suitable for expression recognition, combines the particle swarm optimization algorithm with the crossover and mutation operators of a genetic algorithm to form a hybrid particle swarm algorithm, and uses this hybrid algorithm to optimize the constructed network, addressing the problems of gradient disappearance and of falling into local optima during training, so that the network converges faster and with higher accuracy. The method comprises the following steps: (1) preprocessing an expression data set with gray-level normalization and scale normalization; (2) constructing a convolutional neural network suitable for expression recognition; (3) improving the particle swarm optimization algorithm with the crossover and mutation operators of the genetic algorithm; (4) optimizing the parameters of the convolutional neural network with the improved particle swarm optimization algorithm; and (5) training and testing the optimized convolutional neural network on the preprocessed expression data set.
Owner:JILIN UNIV
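The hybrid optimizer idea (step 3 above) can be sketched as standard particle-swarm updates plus a genetic-style crossover/mutation step. The sketch optimizes a toy sphere function rather than CNN hyperparameters, and all constants (inertia, learning factors, crossover rate) are assumptions.

```python
# Minimal hybrid PSO sketch: velocity/position updates plus occasional
# crossover with the global best and a small Gaussian mutation.
import random

def sphere(x):
    """Toy objective standing in for the CNN's validation loss."""
    return sum(v * v for v in x)

def hybrid_pso(dim=2, n_particles=10, iters=100, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            # genetic-style step: crossover with gbest, then a small mutation
            if rng.random() < 0.1:
                cut = rng.randrange(dim)
                pos[i] = pos[i][:cut] + gbest[cut:]
                pos[i][rng.randrange(dim)] += rng.gauss(0.0, 0.1)
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=sphere)[:]
    return gbest
```

Because personal bests are only replaced on improvement, the global best never worsens; the crossover/mutation step injects diversity that helps escape local optima, which is the motivation stated in the patent.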