193 results about "Receptive field" patented technology

A sensory space can be the space surrounding an animal, such as an area of auditory space that is fixed in a reference frame based on the ears and therefore moves with the animal as it moves, or it can be a fixed location in space that is largely independent of the animal's position (as with place cells). Receptive fields have been identified for neurons of the auditory system, the somatosensory system, and the visual system.

Image semantic segmentation method based on a deep fully convolutional network and a conditional random field

The invention provides an image semantic segmentation method based on a deep fully convolutional network and a conditional random field. The method comprises the following steps: establishing a deep fully convolutional semantic segmentation network model; carrying out structured prediction of pixel labels with a fully connected conditional random field; and carrying out model training, parameter learning and image semantic segmentation. In this method, dilated (expansion) convolution and a spatial pyramid pooling module are introduced into the deep fully convolutional network, and the label prediction map output by the network is further refined by the conditional random field. The dilated convolution enlarges the receptive field while keeping the resolution of the feature map unchanged; the spatial pyramid pooling module extracts contextual features of regions at different scales from the convolutional local feature map, providing the label prediction with relations between different objects and connections between objects and the features of regions at different scales; and the fully connected conditional random field further optimizes the pixel labels according to the feature similarity of pixel intensities and positions, so that a semantic segmentation map with high resolution, accurate boundaries and good spatial continuity is generated.
Owner:CHONGQING UNIV OF TECH
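
As an illustration of the two receptive-field mechanisms described above, the sketch below combines a 3x3 dilated convolution (padding equal to the dilation rate keeps the feature-map resolution unchanged) with a small spatial-pyramid-pooling head. The channel counts, dilation rate and pooling grid sizes are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedSPPBlock(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, pool_sizes=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps H x W unchanged for a 3x3 kernel
        self.dilated = nn.Conv2d(in_ch, out_ch, 3, padding=2, dilation=2)
        self.pool_sizes = pool_sizes
        self.reduce = nn.Conv2d(out_ch * (1 + len(pool_sizes)), out_ch, 1)

    def forward(self, x):
        feat = F.relu(self.dilated(x))
        h, w = feat.shape[2:]
        pyramid = [feat]
        for s in self.pool_sizes:
            ctx = F.adaptive_avg_pool2d(feat, s)  # context over an s x s grid of regions
            pyramid.append(F.interpolate(ctx, size=(h, w), mode='bilinear',
                                         align_corners=False))
        return self.reduce(torch.cat(pyramid, dim=1))

x = torch.randn(1, 256, 64, 64)
print(DilatedSPPBlock()(x).shape)  # torch.Size([1, 256, 64, 64]) -- resolution preserved
```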

Vehicle front trafficability analysis method based on a convolutional neural network

Inactive · CN103279759A · High resolution · Avoids effects on target recognition · Character and pattern recognition · Neural network · Image resolution
The invention discloses a method for analyzing the trafficability in front of a vehicle based on a convolutional neural network. The method comprises the following steps: first, a camera mounted at the front of the vehicle collects a large number of images of actual driving environments; the images are pre-processed with a Gamma correction function; and the convolutional neural network is trained. Because the images are pre-processed with a nonlinear Gamma correction, the influence of strongly varying illumination on object recognition is avoided and the effective image resolution is improved. A geometric normalization step reduces the resolution differences caused by varying distances between the recognized objects and the camera. The adopted convolutional neural network, LeNet-5, can extract implicit features with class-discriminating ability through a simple extraction process. LeNet-5 combines local receptive fields, weight sharing and subsampling to ensure robustness to simple geometric deformations, reduce the number of training parameters and simplify the network structure.
Owner:DALIAN UNIV OF TECH
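
The Gamma pre-processing step can be sketched in a few lines; the gamma value and the uint8 image format below are illustrative assumptions rather than the patent's settings.

```python
import numpy as np

def gamma_correct(image_uint8, gamma=0.6):
    """Map pixel intensities through a power law to compress strong illumination changes."""
    x = image_uint8.astype(np.float32) / 255.0   # normalise to [0, 1]
    y = np.power(x, gamma)                       # gamma < 1 brightens dark regions
    return (y * 255.0).clip(0, 255).astype(np.uint8)

frame = (np.random.rand(480, 640) * 255).astype(np.uint8)  # stand-in for a camera frame
print(gamma_correct(frame).shape, gamma_correct(frame).dtype)
```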

Contour extraction method based on brightness characteristic and contour integrity

Inactive · CN104484667A · Effective filtering · Preserves preliminary contour information · Image enhancement · Image analysis · Pattern recognition · Nearest neighbor
The invention discloses a contour extraction method based on brightness characteristics and contour integrity, belonging to the intersection of computer vision and pattern recognition. The method aims to extract the complete contour of an object from a complex environmental background. It comprises the steps of obtaining a maximum energy response map; applying non-classical receptive field suppression modulated by the brightness characteristic; extracting the object contour and performing post-processing such as adaptive high/low thresholding based on a probability model; and connecting contour breakpoints based on nearest-neighbor orientation consistency and contour integrity. A Gabor filter is used to simulate the response of the classical receptive field of simple cells in human vision, yielding the maximum Gabor energy response map; the brightness characteristics of the image are used to suppress this response map and reject texture and other non-target contours; the resulting target contour is post-processed with adaptive high/low thresholding based on the probability model; and breakpoints of the contour are connected based on nearest-neighbor orientation consistency and contour integrity. The object contour can thus be extracted well, and a completely connected contour map is obtained.
Owner:HUAZHONG UNIV OF SCI & TECH
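
A hedged sketch of the first step above: a bank of Gabor filters simulates the classical receptive field of simple cells, and the maximum response over orientations gives the energy map. The OpenCV kernel size and filter parameters below are assumed values, not the patent's.

```python
import cv2
import numpy as np

def max_gabor_energy(gray, n_orientations=8, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        # last positional argument is psi (phase offset) = 0
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0)
        responses.append(np.abs(cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)))
    return np.max(np.stack(responses, axis=0), axis=0)  # maximum energy over orientations

img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in for a grayscale image
print(max_gabor_energy(img).shape)
```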

Method for extracting features of natural images based on dispersion-constrained non-negative sparse coding

The invention discloses a method for extracting features of natural images based on dispersion-constrained non-negative sparse coding, which comprises the following steps: partitioning an image into blocks; reducing dimensions by means of 2D-PCA; applying non-negative processing to the image data; initializing a wavelet feature basis based on 2D-Gabor; defining the ratio between the intra-class and inter-class dispersion of the sparse coefficients; training a DCB-NNSC feature basis; and performing image recognition based on the DCB-NNSC feature basis. The method can imitate the receptive field characteristics of neurons in the V1 area of the human primary visual system to effectively extract local features of the image; it extracts features with clearer directionality and edge characteristics than the standard non-negative sparse coding algorithm; the constraint on the ratio between intra-class and inter-class dispersion of the sparse coefficients makes intra-class feature coefficients cluster more tightly while increasing the inter-class distance as much as possible; and it improves recognition performance in image recognition.
Owner:SUZHOU VOCATIONAL UNIV
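
The generic non-negative sparse coding step can be sketched with multiplicative updates, as below. The dispersion-ratio constraint and the 2D-PCA/2D-Gabor initialization described above are not reproduced, and the matrix sizes and sparsity weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(64, 200)))       # 200 non-negative image patches of dimension 64
k, lam, eps = 32, 0.1, 1e-9                  # dictionary size, sparsity weight, stabiliser
W = np.abs(rng.normal(size=(64, k)))         # non-negative feature basis (dictionary)
H = np.abs(rng.normal(size=(k, 200)))        # non-negative sparse coefficients

for _ in range(100):
    H *= (W.T @ V) / (W.T @ W @ H + lam + eps)           # sparse, non-negative codes
    W *= (V @ H.T) / (W @ H @ H.T + eps)                 # non-negative dictionary update
    W /= np.linalg.norm(W, axis=0, keepdims=True) + eps  # keep basis vectors unit norm

print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))     # relative reconstruction error
```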

Real-time image semantic segmentation method based on a lightweight fully convolutional neural network

The invention discloses a real-time image semantic segmentation method based on a lightweight fully convolutional neural network. The method comprises the following steps: 1) constructing a fully convolutional neural network using the design elements of lightweight neural networks, the network comprising three stages, namely a feature extension stage, a feature processing stage and a comprehensive prediction stage, where the feature processing stage uses a multi-receptive-field feature fusion structure, a multi-size convolution fusion structure and a receptive field amplification structure; 2) at the training stage, training the network on a semantic segmentation data set, using the cross-entropy function as the loss function, the Adam algorithm for parameter optimization, and an online difficult-sample retraining strategy during training; and 3) at the test stage, inputting a test image into the network to obtain the semantic segmentation result. By adjusting the network structure and adapting it to the semantic segmentation task while controlling the scale of the model, a high-precision real-time semantic segmentation method suitable for running on mobile platforms is obtained.
Owner:NANJING UNIV
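
A minimal sketch of a multi-receptive-field feature fusion structure of the kind described above: parallel 3x3 branches with different dilation rates are concatenated and fused, so a single block aggregates several receptive-field sizes. The channel count and dilation rates are assumptions, not the patent's lightweight design.

```python
import torch
import torch.nn as nn

class MultiRFBlock(nn.Module):
    def __init__(self, channels=64, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        feats = [torch.relu(b(x)) for b in self.branches]   # one receptive field per branch
        return self.fuse(torch.cat(feats, dim=1)) + x       # residual keeps the block light

print(MultiRFBlock()(torch.randn(1, 64, 32, 32)).shape)     # torch.Size([1, 64, 32, 32])
```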

Deep learning network construction method and system applicable to semantic segmentation

The invention discloses a deep learning network construction method and system applicable to semantic segmentation. Building on deconvolution-based semantic segmentation, and considering that a conditional random field is very good at edge optimization, the conditional random field is formulated as a recurrent network and fused into the deconvolution network, and the whole model is trained end to end, so that parameter learning in the convolutional network and the recurrent network interact and a better integrated network is trained. Through joint training of the deconvolution network and the conditional random field, accurate detail and shape information is obtained, solving the problem of inaccurate image edge segmentation. The strategy of combining multi-scale input with multi-scale pooling addresses the problem, caused by a single receptive field in semantic segmentation, that large targets are over-segmented or small targets are missed. By extending the classic deconvolution network and using joint training with the conditional random field together with multi-feature information fusion, the accuracy of semantic segmentation is improved.
Owner:HUAZHONG UNIV OF SCI & TECH
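
The multi-scale-input idea used to counter a single receptive field can be sketched as follows: the same network is run on several rescaled copies of the image and the upsampled class scores are averaged. The tiny stand-in "network", the scale set and the class count are assumptions for illustration; the patent's deconvolution network and CRF-as-RNN step are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Conv2d(3, 21, 3, padding=1)          # stand-in for a segmentation network (21 classes)

def multi_scale_logits(image, scales=(0.5, 1.0, 1.5)):
    h, w = image.shape[2:]
    outputs = []
    for s in scales:
        x = F.interpolate(image, scale_factor=s, mode='bilinear', align_corners=False)
        y = net(x)                            # per-scale class scores
        outputs.append(F.interpolate(y, size=(h, w), mode='bilinear', align_corners=False))
    return torch.stack(outputs).mean(dim=0)   # fuse predictions across scales

print(multi_scale_logits(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 21, 128, 128])
```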

Text detection method, system, device and medium based on multi-receptive-field deep features

The invention discloses a text detection method, system, device and medium based on multi-receptive-field deep features. The method comprises the following steps: obtaining a text detection database and using it as the network training database; building a multi-receptive-field deep network model; inputting natural-scene text images and the corresponding ground-truth text box coordinates from the training database into the model for training; computing a segmentation mask with the trained model to obtain a segmentation result, and converting the segmented regions into regressed text box coordinates; and collecting statistics on the text box sizes in the training database, designing a text box filtering condition, and screening out target text boxes according to that condition. The method makes full use of the feature learning capability and classification performance of the deep network model, combines the characteristics of image segmentation, offers high detection accuracy, high recall and strong robustness, and achieves a good text detection effect in natural scenes.
Owner:SOUTH CHINA UNIV OF TECH
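
A hedged sketch of the post-processing described above: a binary segmentation mask is converted into text box coordinates, and boxes failing a size filter are discarded. The minimum width/height thresholds are illustrative values; the patent derives its filter from statistics of the training boxes.

```python
import cv2
import numpy as np

def mask_to_boxes(mask, min_w=8, min_h=8):
    mask_u8 = (mask > 0).astype(np.uint8)
    # OpenCV 4 return signature assumed: (contours, hierarchy)
    contours, _ = cv2.findContours(mask_u8, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)      # from segmented region to box coordinates
        if w >= min_w and h >= min_h:         # text box filtering condition
            boxes.append((x, y, x + w, y + h))
    return boxes

mask = np.zeros((100, 200), np.uint8)
mask[40:60, 30:150] = 1                       # stand-in for a predicted text region
print(mask_to_boxes(mask))                    # [(30, 40, 150, 60)]
```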

Method for detecting P300 electroencephalogram based on convolutional neural network

The invention discloses a method for detecting the P300 component in electroencephalogram (EEG) signals based on a convolutional neural network, which serves as a brain-computer interface classification algorithm and can effectively address the small-sample problem of conventional classification algorithms while improving classification accuracy. Drawing on ideas from image recognition, the method makes full use of the local receptive field and weight sharing of the convolutional neural network, treating a typical P300 EEG sample as a feature image. Sample features are extracted through successive convolutions and mapped through downsampling; as feature extraction and feature mapping are repeated, the sample features become more compact, while the local receptive field and weight sharing greatly reduce the number of network weight parameters and the computational complexity, facilitating wider adoption of the algorithm. Experimental results show that the method effectively improves classification accuracy, increases system stability, and has good application prospects.
Owner:SHANDONG UNIV
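
A small sketch of the kind of network described above: one P300 epoch (electrodes x time samples) is treated as a feature image and passed through convolutions with local receptive fields, shared weights and downsampling before a binary classifier. All layer sizes here are assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class P300Net(nn.Module):
    def __init__(self, n_channels=8, n_samples=240):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(n_channels, 1)),           # spatial filter across electrodes
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=(1, 11), padding=(0, 5)),  # temporal local receptive field
            nn.ReLU(),
            nn.AvgPool2d((1, 4)),                                    # downsampling / feature mapping
        )
        self.classifier = nn.Linear(16 * (n_samples // 4), 2)        # P300 vs. non-P300

    def forward(self, x):                                            # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

epochs = torch.randn(4, 1, 8, 240)                                   # 4 epochs, 8 electrodes, 240 samples
print(P300Net()(epochs).shape)                                       # torch.Size([4, 2])
```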

Target detection method and system based on fusion of different-scale receptive field feature layers, and medium

The invention provides a target detection method, system and medium based on the fusion of feature layers with receptive fields of different scales. The method comprises the following steps: a data volume increasing step, in which a labeled training data set is augmented to increase its data volume and the training images are resized to match the model input size, yielding the augmented training data set; and a target detection network model building step, in which a classic network model is taken as the backbone of the target detector and the lateral connections in the feature pyramid network (FPN) are replaced with dense connections to obtain a densely connected FPN target detection network model. This overcomes the defect that existing target detection models use the feature information of only part of the feature layers to detect target objects; by fusing feature layers with several different receptive fields through the dense FPN connections, the feature information needed for detecting objects across multiple scale ranges can be obtained, improving the feature extraction capability and target detection performance of the detector.
Owner:SHANGHAI UNIV
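
The idea of fusing feature layers with different receptive fields can be sketched as below: every pyramid level is projected, resized to a common resolution and concatenated (a dense fusion across levels) instead of using a single lateral connection per level. The channel numbers and number of levels are assumptions, not the patent's densely connected FPN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseLevelFusion(nn.Module):
    def __init__(self, in_channels=(64, 128, 256), out_ch=128):
        super().__init__()
        self.laterals = nn.ModuleList(nn.Conv2d(c, out_ch, 1) for c in in_channels)
        self.fuse = nn.Conv2d(out_ch * len(in_channels), out_ch, 3, padding=1)

    def forward(self, feats):                 # feats: list, high- to low-resolution
        target = feats[0].shape[2:]
        resized = [F.interpolate(l(f), size=target, mode='nearest')
                   for l, f in zip(self.laterals, feats)]
        return self.fuse(torch.cat(resized, dim=1))   # every level contributes to the output

feats = [torch.randn(1, 64, 64, 64), torch.randn(1, 128, 32, 32), torch.randn(1, 256, 16, 16)]
print(DenseLevelFusion()(feats).shape)        # torch.Size([1, 128, 64, 64])
```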

Remote sensing image region of interest detection method based on integer wavelets and visual features

The invention discloses a method for detecting regions of interest in remote sensing images based on integer wavelets and visual features, belonging to the technical field of remote sensing image target recognition. The method is implemented as follows: 1, performing color synthesis, filtering and noise-reduction preprocessing on a remote sensing image; 2, converting the preprocessed RGB remote sensing image into the CIE Lab color space to obtain brightness and color feature maps, and transforming the L component with integer wavelets to obtain a direction feature map; 3, constructing a difference-of-Gaussians filter to simulate the retinal receptive field of the human eye, performing cross-scale combination with a Gaussian pyramid to obtain brightness and color feature saliency maps, and performing wavelet coefficient screening and cross-scale combination to obtain a direction feature saliency map; 4, synthesizing a main saliency map using a feature competition strategy; and 5, thresholding the main saliency map to obtain the region of interest. The method increases the detection accuracy for regions of interest in remote sensing images while lowering the computational complexity, and can be applied to environmental monitoring, urban planning, forestry surveys and other fields.
Owner:BEIJING NORMAL UNIVERSITY
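
Step 3's center-surround model can be sketched with a difference-of-Gaussians filter, the classic approximation of a retinal receptive field; the two sigma values below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(channel, sigma_center=2.0, sigma_surround=8.0):
    center = gaussian_filter(channel.astype(np.float32), sigma_center)
    surround = gaussian_filter(channel.astype(np.float32), sigma_surround)
    return center - surround   # positive where local contrast exceeds its neighbourhood

luminance = np.random.rand(256, 256).astype(np.float32)   # stand-in for the L component
print(difference_of_gaussians(luminance).shape)
```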

Adversarial-based lightweight network semantic segmentation method

The invention relates to an adversarial lightweight network semantic segmentation method, which addresses the problems of low prediction accuracy, low processing speed and difficulty in meeting real-time prediction requirements. From the perspective of improving both the speed and the precision of semantic segmentation, a lightweight adversarial semantic segmentation method is provided. The method comprises the following steps: first, a lightweight asymmetric encoder-decoder segmentation network is constructed by reducing the number of channels to improve the network's information acquisition capability, using asymmetric convolutions to reduce the parameters in skip connections, and using dilated convolutions and channel shuffling to enlarge the receptive field of the feature maps; then, following the adversarial idea, a discriminator network judges segmented images against the ground-truth semantic labels, a discriminator loss and a segmentation loss are designed, and the segmentation network and the discriminator are updated alternately by back-propagation until the discriminator cannot distinguish the labels generated by the segmentation network from the real labels, thereby achieving semantic segmentation of the image. By combining the lightweight model with the adversarial idea, relatively high segmentation precision is achieved while the real-time performance of the segmentation network is ensured.
Owner:BEIJING UNIV OF TECH
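
A hedged sketch of the lightweight building block described above: a 3x3 convolution factorized into 3x1 and 1x3 asymmetric convolutions, plus a dilated 3x3 branch that enlarges the receptive field. The channel count and dilation rate are assumptions, not the patent's encoder-decoder design.

```python
import torch
import torch.nn as nn

class LightAsymBlock(nn.Module):
    def __init__(self, ch=64, dilation=2):
        super().__init__()
        self.asym = nn.Sequential(                      # fewer parameters than a full 3x3
            nn.Conv2d(ch, ch, (3, 1), padding=(1, 0)),
            nn.ReLU(),
            nn.Conv2d(ch, ch, (1, 3), padding=(0, 1)),
            nn.ReLU(),
        )
        self.dilated = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation)

    def forward(self, x):
        return torch.relu(self.asym(x) + self.dilated(x) + x)   # residual fusion of both branches

print(LightAsymBlock()(torch.randn(1, 64, 56, 56)).shape)       # torch.Size([1, 64, 56, 56])
```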

Point cloud data classification method based on deep learning

Active · CN110197223A · Guaranteed affine transformation invariance · Excellent segmentation effect · Character and pattern recognition · Point cloud · Data set
The invention discloses a point cloud data classification method based on deep learning. The method proposes a multi-scale point cloud classification network. First, a multi-scale local-region division algorithm is proposed based on the requirements of completeness, adaptivity, overlap and multi-scale coverage of local regions, and multi-scale local regions are obtained by taking the point cloud and features of different levels as input. Then the multi-scale point cloud classification network is constructed, comprising a single-scale feature extraction module, a low-level feature aggregation module, a multi-scale feature fusion module and other components. The network closely follows the working principle of convolutional neural networks, with the basic characteristic that the local receptive field grows and the degree of feature abstraction increases with the scale and depth of the network. The method achieves classification accuracies of 94.71% and 91.73% on the standard public data sets ModelNet10 and ModelNet40 respectively, which is at a leading or comparable level among similar work, verifying its feasibility and effectiveness.
Owner:BEIFANG UNIV OF NATIONALITIES
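
The multi-scale local-region division can be illustrated with a simple ball query: for each centroid, the neighbours within several radii are gathered, giving overlapping local regions whose size plays the role of a receptive field. The radii and point counts are assumed values, not the patent's settings.

```python
import numpy as np

def multi_scale_regions(points, centroids, radii=(0.1, 0.2, 0.4)):
    """points: (N, 3), centroids: (M, 3) -> {radius: list of neighbour-index arrays, one per centroid}."""
    d = np.linalg.norm(points[None, :, :] - centroids[:, None, :], axis=-1)  # (M, N) distances
    return {r: [np.nonzero(d[i] <= r)[0] for i in range(len(centroids))] for r in radii}

pts = np.random.rand(1024, 3).astype(np.float32)
ctr = pts[np.random.choice(len(pts), 32, replace=False)]        # sampled region centroids
regions = multi_scale_regions(pts, ctr)
print({r: len(groups[0]) for r, groups in regions.items()})     # neighbours of the first centroid per scale
```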

Behavior recognition method and device and storage medium

The invention discloses a behavior recognition method and device and a storage medium. In this scheme, a video to be detected is acquired and a plurality of candidate windows are added to it; based on a feature extraction network, three-dimensional feature maps of the video containing the candidate windows are generated at multiple time-domain scales; the time-domain scale matching the video clip in a candidate window is determined, the three-dimensional feature map corresponding to that scale is obtained, and a local feature map corresponding to the video clip is derived from it; and behavior recognition is performed on the local feature map with a preset behavior recognition network to determine the behavior category corresponding to the behavior features in the video clip. Because the feature extraction network produces three-dimensional feature maps of the video at multiple time-domain scales, the receptive field of the classifier can adapt to behavior features of different durations, improving the accuracy of behavior recognition across multiple time spans.
Owner:TENCENT TECH (SHENZHEN) CO LTD
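
A minimal sketch of extracting three-dimensional feature maps at several time-domain scales: the same clip passes through 3D convolutions with different temporal strides, so a later classifier can pick the scale whose receptive field matches the clip length. The strides and channel sizes are illustrative assumptions, not the patent's feature extraction network.

```python
import torch
import torch.nn as nn

class MultiTemporalScale(nn.Module):
    def __init__(self, in_ch=3, out_ch=32, temporal_strides=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=(s, 1, 1), padding=1)
            for s in temporal_strides
        )

    def forward(self, clip):                 # clip: (batch, channels, frames, H, W)
        return [torch.relu(b(clip)) for b in self.branches]   # one 3D feature map per temporal scale

clip = torch.randn(1, 3, 16, 56, 56)
for fmap in MultiTemporalScale()(clip):
    print(fmap.shape)                        # temporal lengths 16, 8 and 4
```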