
117 results about "Reduce training parameters" patented technology

Improved CNN-based facial expression recognition method

The invention provides an improved CNN-based facial expression recognition method and relates to the field of image classification and recognition. The method comprises the following steps: s1, acquiring a facial expression image from a video stream using the JDA algorithm, which integrates face detection and alignment; s2, correcting the face pose in the real environment based on the image obtained in step s1, removing background information irrelevant to the expression, and applying scale normalization; s3, training a convolutional neural network model on the normalized facial expression images obtained in step s2, and obtaining and storing the optimal network parameters; s4, loading the CNN model with the optimal network parameters obtained in step s3 and performing feature extraction on the normalized facial expression images from step s2; s5, classifying and recognizing the facial expression features obtained in step s4 with an SVM classifier. The method has high robustness and good generalization performance.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
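
A minimal sketch of the final two stages (s4 and s5): a trained CNN is used only as a feature extractor and an SVM classifies the resulting vectors. The tiny network, input size and seven-class label set below are illustrative assumptions, not the patented model.

```python
# Hedged sketch: CNN feature extraction followed by SVM classification (steps s4-s5).
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FeatureCNN(nn.Module):
    """Small stand-in for the trained expression CNN; outputs one feature vector per face."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(64 * 12 * 12, feat_dim)   # assumes 48x48 normalized faces

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def extract_features(model, faces):
    """faces: (N, 1, 48, 48) tensor of normalized expression images."""
    model.eval()
    with torch.no_grad():
        return model(faces).numpy()

# Usage sketch: fit the SVM on CNN features, then classify new faces.
cnn = FeatureCNN()                                    # in practice: load the stored optimal parameters
train_feats = extract_features(cnn, torch.randn(100, 1, 48, 48))
train_labels = torch.randint(0, 7, (100,)).numpy()    # 7 basic expressions (illustrative)
svm = SVC(kernel="rbf").fit(train_feats, train_labels)
pred = svm.predict(extract_features(cnn, torch.randn(5, 1, 48, 48)))
```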

A retinal blood vessel image segmentation method based on a multi-scale feature convolutional neural network

The invention belongs to the technical field of image processing and aims to realize automatic extraction and segmentation of retinal blood vessels, improve the anti-interference ability against factors such as blood vessel shadow and tissue deformation, and achieve a higher average accuracy of the vessel segmentation result. The invention relates to a retinal blood vessel image segmentation method based on a multi-scale feature convolutional neural network. Firstly, the retinal images are appropriately pre-processed, including adaptive histogram equalization and gamma brightness adjustment; at the same time, to address the shortage of retinal image data, data augmentation is carried out and the experiment images are cropped and divided into blocks. Secondly, a multi-scale retinal vessel segmentation network is constructed: atrous spatial pyramid pooling is introduced into a convolutional neural network with an encoder-decoder structure, and the parameters of the model are optimized over many iterations to realize automatic pixel-level segmentation of retinal blood vessels and obtain the retinal blood vessel segmentation map. The invention is mainly applied to the design and manufacture of medical devices.
Owner:TIANJIN UNIV
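
The key architectural idea is atrous (dilated) spatial pyramid pooling inside an encoder-decoder segmentation network. The sketch below shows one plausible arrangement; channel counts, dilation rates and patch size are assumptions.

```python
# Hedged sketch: atrous spatial pyramid pooling (ASPP) between an encoder and a decoder
# for pixel-level vessel segmentation on image patches.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Parallel dilated convolutions capture multi-scale context, then are fused.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class VesselSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.aspp = ASPP(32, 32)
        self.decoder = nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, patch):               # patch: (N, 1, H, W) preprocessed retinal block
        return torch.sigmoid(self.decoder(self.aspp(self.encoder(patch))))

mask = VesselSegNet()(torch.randn(2, 1, 48, 48))   # (2, 1, 48, 48) vessel probability map
```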

Intelligent detection and quantitative recognition method for defect of concrete

The invention discloses an intelligent detection and quantitative recognition method for concrete defects. According to the method, impact echo signal samples are acquired from a concrete test piece and subjected to noise reduction and characteristic value extraction so as to construct a recognition model whose analysis components include feature extraction, defect inspection, defect diagnosis, and defect quantification and positioning; the model is then used to detect and recognize the concrete to be inspected. Aimed at the shortcomings of conventional concrete defect detection technology, and based on theoretical analysis, numerical simulation and model testing, the method employs advanced signal processing and artificial intelligence technology and fully exploits the characteristic information of the test signal, thereby establishing a model for intelligent rapid detection and classified recognition based on wavelet analysis and an extreme learning machine. The model has good classification and recognition performance, realizes intelligent, rapid and quantitative recognition and evaluation of the variety, properties and extent of concrete defects, and further improves the innovation and application level of non-destructive testing technology for concrete defects.
Owner:ANHUI & HUAI RIVER WATER RESOURCES RES INST
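
A rough sketch of the wavelet-plus-extreme-learning-machine pipeline the abstract names: wavelet sub-band energies summarize each impact-echo signal and a basic ELM classifies them. The wavelet family, decomposition level, hidden-layer size and the three defect classes are illustrative assumptions.

```python
# Hedged sketch: wavelet-energy features from an impact-echo signal fed to a basic ELM.
import numpy as np
import pywt

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Relative energy of each wavelet sub-band, a compact signature of the signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

class ELM:
    """Single-hidden-layer ELM: random input weights, output weights solved by least squares."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y_onehot       # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Usage sketch: each row of X is the feature vector of one impact-echo measurement.
X = np.stack([wavelet_energy_features(np.random.randn(1024)) for _ in range(20)])
y = np.eye(3)[np.random.randint(0, 3, 20)]             # 3 illustrative defect classes
print(ELM().fit(X, y).predict(X))
```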

BERT-BiGRU-IDCNN-CRF named entity identification method based on attention mechanism

Pending · CN112733541A · Solve the problem of not being able to characterize polysemy · Improve the shortcomings of ignoring local features · Natural language data processing · Neural architectures · Feature vector · Named-entity recognition
The invention discloses a BERT-BiGRU-IDCNN-CRF named entity recognition method based on an attention mechanism. The method comprises the steps of: training a BERT pre-training language model on a large-scale unlabeled corpus; on the basis of the trained BERT model, constructing a complete BERT-BiGRU-IDCNN-Attention-CRF named entity recognition model; constructing an entity recognition training set and training the complete entity recognition model on it; and inputting the corpus to be subjected to entity recognition into the trained entity recognition model and outputting the named entity recognition result. According to the method, the feature vectors extracted by the BiGRU and IDCNN neural networks are combined, overcoming the defect that local features are ignored when the BiGRU network extracts global context features; meanwhile, an attention mechanism is introduced to allocate weights to the extracted features, so that features playing a key role in entity recognition are enhanced, irrelevant features are weakened, and the recognition effect of named entity recognition is further improved.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
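
A hedged sketch of how BiGRU and IDCNN feature vectors can be fused and re-weighted by attention before emission scoring. BERT encoding and CRF decoding are omitted, and all dimensions and module names are assumptions rather than the patented architecture.

```python
# Hedged sketch: fusion of BiGRU (global) and IDCNN (local) features with token attention.
import torch
import torch.nn as nn

class BiGRUIDCNNAttention(nn.Module):
    def __init__(self, bert_dim=768, hidden=128, n_tags=9):
        super().__init__()
        self.bigru = nn.GRU(bert_dim, hidden, batch_first=True, bidirectional=True)
        self.idcnn = nn.Sequential(                       # iterated dilated convolutions
            nn.Conv1d(bert_dim, 2 * hidden, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(2 * hidden, 2 * hidden, 3, padding=2, dilation=2), nn.ReLU(),
        )
        self.attn = nn.Linear(4 * hidden, 1)              # per-token attention score
        self.emit = nn.Linear(4 * hidden, n_tags)         # emission scores for a CRF layer

    def forward(self, bert_out):                          # bert_out: (B, T, 768) BERT vectors
        g, _ = self.bigru(bert_out)                       # (B, T, 2*hidden) global context
        c = self.idcnn(bert_out.transpose(1, 2)).transpose(1, 2)  # (B, T, 2*hidden) local
        h = torch.cat([g, c], dim=-1)                     # fuse global + local features
        w = torch.softmax(self.attn(h), dim=1)            # attention weights over tokens
        return self.emit(h * w)                           # (B, T, n_tags) emissions

emissions = BiGRUIDCNNAttention()(torch.randn(2, 16, 768))
```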

Dynamic time sequence convolutional neural network-based license plate recognition method

Active · CN108388896A · Reduce training parameters · Solve the problem of low accuracy rate and wrong recognition results · Character and pattern recognition · Neural architectures · Short-term memory · Leak detection
The invention discloses a dynamic time-sequence convolutional neural network-based license plate recognition method. The method comprises the following steps: reading an original license plate image; carrying out license plate angle correction to obtain a to-be-recognized license plate image; inputting the to-be-recognized license plate image into a previously designed and trained convolutional neural network to obtain a feature image, which contains all the features of the license plate, together with time sequence information; and carrying out character recognition, in which the feature image is fed, on the basis of the time sequence information of the last layer, into a long short-term memory layer of the network to obtain a classification result, which is decoded with the CTC algorithm to obtain the final license plate character result. The method recognizes visual patterns directly from original images using convolutional neural networks, with self-learning and correction; once trained, the network can be reused repeatedly, and a single recognition takes milliseconds, so the method can be applied to scenes requiring real-time license plate recognition. The dynamic time-sequence long short-term memory layer combined with CTC-based decoding effectively avoids recognition errors such as missed and repeated detections, improving the robustness of the algorithm.
Owner:浙江芯劢微电子股份有限公司
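
A minimal sketch of the recognition head the abstract describes: CNN feature maps are read column-by-column as a time sequence by an LSTM layer and trained with CTC loss. The backbone, plate size and vocabulary below are illustrative assumptions.

```python
# Hedged sketch: CNN features -> bidirectional LSTM over the width axis -> CTC loss.
import torch
import torch.nn as nn

class PlateRecognizer(nn.Module):
    def __init__(self, n_classes=68):                 # e.g. digits + letters + province codes + blank
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.lstm = nn.LSTM(64 * 8, 128, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(256, n_classes)

    def forward(self, img):                           # img: (N, 3, 32, 96) corrected plate
        f = self.cnn(img)                             # (N, 64, 8, 48)
        f = f.permute(0, 3, 1, 2).flatten(2)          # (N, 48, 64*8): width becomes time axis
        seq, _ = self.lstm(f)
        return self.fc(seq).log_softmax(-1)           # (N, T, n_classes) for CTC decoding

model = PlateRecognizer()
log_probs = model(torch.randn(4, 3, 32, 96)).permute(1, 0, 2)   # (T, N, C) as CTCLoss expects
targets = torch.randint(1, 68, (4, 7))                           # 7 plate characters per sample
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((4,), 48),
                           target_lengths=torch.full((4,), 7))
```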

Vehicle track prediction method and device

The invention is applicable to the technical field of intelligent transportation and provides a vehicle track prediction method. The method comprises the following steps: acquiring history track data of multiple vehicles in a preset time period and preprocessing the history track data to obtain a corresponding spatio-temporal graph sequence, wherein the sequence comprises spatio-temporal graphs corresponding to the moments arranged sequentially within the preset time period, and each spatio-temporal graph comprises nodes corresponding to at least three vehicles; and inputting the spatio-temporal graph sequence into a trained prediction model to obtain the predicted driving track of each vehicle, wherein the prediction model is obtained by training a Long Short-Term Memory (LSTM) network on sample spatio-temporal graphs corresponding to the sample track data of multiple sample vehicles in the same time period and on the sample driving track of each sample vehicle. The invention also provides a vehicle track prediction device and terminal equipment. The prediction precision and flexibility of the prediction model are improved and its robustness is enhanced, making it better suited to driverless driving.
Owner:SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI
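
A hedged sketch of the overall idea: each frame's vehicles form a small graph, node features are aggregated over the graph, and an LSTM predicts the next position of each vehicle. Feature sizes and the simple adjacency-based aggregation are assumptions, not the patented model.

```python
# Hedged sketch: spatio-temporal graph sequence -> neighbour aggregation -> LSTM prediction.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, node_feat=2, hidden=64):
        super().__init__()
        self.embed = nn.Linear(node_feat, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)                  # predicted (x, y) for the next step

    def forward(self, coords, adj):
        # coords: (T, V, 2) positions of V vehicles over T frames; adj: (T, V, V) graphs
        h = torch.relu(self.embed(coords))               # (T, V, hidden)
        h = torch.bmm(adj, h)                            # aggregate neighbour information per frame
        h = h.transpose(0, 1)                            # (V, T, hidden): one sequence per vehicle
        seq, _ = self.lstm(h)
        return self.out(seq[:, -1])                      # (V, 2) next-step prediction per vehicle

pred = TrajectoryPredictor()(torch.randn(10, 5, 2),
                             torch.softmax(torch.randn(10, 5, 5), -1))
```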

Data model dual-drive GFDM receiver and method

The invention discloses a data-model dual-drive GFDM receiver and method. The method comprises the steps of: respectively constructing a channel estimation neural network and a signal detection neural network; taking the real-valued form of a matrix comprising the transmitted pilot information, together with the received time-domain pilot vector, as the input of the channel estimation neural network, and outputting an estimate of the frequency-domain channel state information; obtaining an equivalent channel matrix, taking the real-valued form of the equivalent channel matrix and the received time-domain signal vector as the input of the signal detection neural network, and outputting an estimate of the GFDM symbol; establishing a demapping neural network which takes the estimated GFDM symbol output by the signal detection neural network as input and outputs an estimate of the original bit information; and comparing the output of the demapping network with a set threshold, and outputting the detection result of the original bit information according to the comparison. The method has the advantages that the training parameters do not change with the data dimension, training is fast, and adaptability to different channel environments is strong.
Owner:SOUTHEAST UNIV
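
A hedged sketch of the dual-drive pattern for one stage (channel estimation): quantities supplied by the signal model (pilot information and the received pilot vector) are converted to real-valued vectors and refined by a small fully connected network. Dimensions, layer sizes and names are illustrative assumptions.

```python
# Hedged sketch: real-valued inputs built from model quantities feed a small estimation network.
import torch
import torch.nn as nn

def to_real(x):
    """Stack real and imaginary parts so complex vectors can feed a real-valued network."""
    return torch.cat([x.real, x.imag], dim=-1)

class EstimatorNet(nn.Module):
    """Shared template for the channel-estimation and signal-detection networks."""
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, x):
        return self.net(x)

K = 64                                                    # illustrative number of subcarriers
pilot_info = torch.randn(K, dtype=torch.cfloat)           # model-provided pilot information
rx_pilots = torch.randn(K, dtype=torch.cfloat)            # received time-domain pilot vector
ce_net = EstimatorNet(in_dim=4 * K, out_dim=2 * K)        # outputs real+imag channel estimate
h_est = ce_net(torch.cat([to_real(pilot_info), to_real(rx_pilots)]))
```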

Glaucoma fundus image recognition method based on transfer learning

The invention discloses a glaucoma fundus image recognition method based on transfer learning, which comprises the following steps: 1, obtaining a glaucoma data set and preprocessing the glaucoma fundus images; 2, constructing a convolutional neural network R-VGGNet; 3, loading the preprocessed training data set into the R-VGGNet model for iterative training and feature extraction; 4, inputting the extracted features into a softmax classifier to complete glaucoma classification and recognition and obtain the final recognition model; 5, loading the test data set into the final recognition model and outputting the corresponding classification accuracy. The method introduces the idea of transfer learning: the weight parameters obtained by training a VGG16 network on the ImageNet data set are used, the first 13 layers are frozen and the weights of the last 3 layers are released, the glaucoma data set is used to train the fully connected layers and the Softmax classifier, and feature extraction and classification are carried out after fine-tuning. The method meets the requirements of deep learning and effectively improves the recognition rate of glaucoma fundus images.
Owner:SHANGHAI MARITIME UNIVERSITY
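
A minimal sketch of the described transfer-learning step, assuming torchvision's ImageNet-pretrained VGG16: the 13 convolutional layers are frozen and the fully connected layers plus softmax output are retrained on the two glaucoma classes. The hyper-parameters and dummy batch are illustrative.

```python
# Hedged sketch: freeze VGG16 conv layers, retrain the classifier head for 2 classes.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

for p in vgg.features.parameters():        # the 13 convolutional layers: frozen
    p.requires_grad = False

vgg.classifier[6] = nn.Linear(4096, 2)     # last FC layer re-sized to glaucoma / normal

optimizer = torch.optim.Adam(
    (p for p in vgg.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()          # softmax classification over the two classes

# One illustrative training step on a fundus-image batch (N, 3, 224, 224).
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(vgg(images), labels)
loss.backward()
optimizer.step()
```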

Skeleton CT image three-dimensional segmentation method based on multi-view separation convolutional neural network

The invention belongs to the technical field of image processing and provides a three-dimensional CT image segmentation method based on a multi-view separation convolutional neural network, mainly concerning the three-dimensional automatic segmentation of the skeleton in CT images by a novel convolutional neural network. The method aims to solve the problems that a neural network using three-dimensional convolution has too large a model, occupies too much running memory, and cannot run on graphics cards with small video memory or on embedded devices. Meanwhile, in order to improve the network's ability to exploit three-dimensional spatial context information, a multi-view separation convolution module is introduced: context information is extracted from multi-view sub-images of the three-dimensional image using multiple two-dimensional convolutions and fused at multiple levels, so that the extraction and fusion of multi-view and multi-scale context information are realized and the segmentation precision of the skeleton in the three-dimensional CT image is improved. The average accuracy of the improved network structure is obviously improved, and the number of model parameters is obviously reduced.
Owner:HUAQIAO UNIVERSITY
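
A hedged sketch of a multi-view separation convolution block: three planar convolutions, one per orthogonal view of the CT volume, stand in for a full 3D convolution and their outputs are fused by a 1x1x1 convolution. Channel counts are illustrative assumptions.

```python
# Hedged sketch: per-view planar convolutions replacing a full 3D convolution.
import torch
import torch.nn as nn

class MultiViewConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Each branch convolves within one plane only (kernel size 1 along the third axis),
        # so it behaves like a 2D convolution applied slice-by-slice from that view.
        self.axial    = nn.Conv3d(in_ch, out_ch, (1, 3, 3), padding=(0, 1, 1))
        self.coronal  = nn.Conv3d(in_ch, out_ch, (3, 1, 3), padding=(1, 0, 1))
        self.sagittal = nn.Conv3d(in_ch, out_ch, (3, 3, 1), padding=(1, 1, 0))
        self.fuse     = nn.Conv3d(3 * out_ch, out_ch, 1)

    def forward(self, volume):                       # volume: (N, C, D, H, W) CT block
        views = [self.axial(volume), self.coronal(volume), self.sagittal(volume)]
        return torch.relu(self.fuse(torch.cat(views, dim=1)))

out = MultiViewConv(1, 16)(torch.randn(1, 1, 32, 32, 32))   # (1, 16, 32, 32, 32)
```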

Human body behavior recognition method and system based on human body skeleton

Active · CN111950485A · Solve the distance problem · Solve the problem of not being able to make a connection · Character and pattern recognition · Neural architectures · Human body · Human behavior
The invention discloses a human body behavior recognition method and system based on the human body skeleton. The method comprises the steps of: obtaining the behavior movement of the human body skeleton together with the corresponding skeleton point coordinates, inter-frame coordinate differences of the skeleton points and skeleton features, and constructing a training set; sequentially training a graph convolution network and a human-part-based attention mechanism network on the training set, and constructing a behavior recognition model from the trained graph convolution network and attention mechanism network; and recognizing the human skeleton to be recognized with the behavior recognition model and outputting the human behavior action. Using data such as the three-dimensional coordinates of human skeleton joint points, the inter-frame coordinate differences and the skeleton features, a graph convolution network is taken as the main body and a human-part-based attention mechanism network assists in finding the more discriminative skeleton points; human behavior actions are then classified and recognized, and the recognition precision is improved.
Owner:中科人工智能创新技术研究院(青岛)有限公司
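
A minimal sketch of the two components named in the abstract: a graph-convolution step over the skeleton joints followed by an attention layer that re-weights joints before classification. The joint count, feature dimension and number of action classes are illustrative assumptions.

```python
# Hedged sketch: skeleton graph convolution plus joint-level attention for action classification.
import torch
import torch.nn as nn

class SkeletonGCNAttention(nn.Module):
    def __init__(self, n_joints=25, in_feat=9, hidden=64, n_actions=60):
        super().__init__()
        self.gcn = nn.Linear(in_feat, hidden)               # shared joint-wise transform
        self.attn = nn.Linear(hidden, 1)                     # per-joint attention score
        self.cls = nn.Linear(hidden, n_actions)

    def forward(self, x, adj):
        # x: (N, V, in_feat) joint coords + inter-frame differences + skeleton features
        # adj: (V, V) normalized skeleton adjacency matrix
        h = torch.relu(adj @ self.gcn(x))                    # graph convolution step
        w = torch.softmax(self.attn(h), dim=1)               # attention over joints/parts
        return self.cls((h * w).sum(dim=1))                  # (N, n_actions) class scores

logits = SkeletonGCNAttention()(torch.randn(2, 25, 9), torch.eye(25))
```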

Human motion recognition method based on Lie group features and a convolutional neural network

Active · CN109614899A · The description is accurate and valid · Overcome the shortcomings of manual feature extraction · Character and pattern recognition · Human body · Somatosensory system
The invention relates to a human motion recognition method based on Lie group features and a convolutional neural network, and belongs to the field of computer pattern recognition. The method comprises the following steps. S1, data acquisition: human skeleton information is extracted with the Microsoft Kinect somatosensory device, and the motion information of an experimenter is acquired. S2, extracting Lie group features: a Lie group skeleton representation is adopted that models the relative three-dimensional geometric relationship between the limbs of the human body through rigid limb transformations; human body actions are modeled as a series of curves on the Lie group, and these curves in the Lie group space are then mapped, via the logarithm map and the correspondence between the Lie group and its Lie algebra, into curves in the Lie algebra space. S3, feature classification: the Lie group features and the convolutional neural network are fused; the convolutional neural network is trained with the Lie group features so that it learns and classifies them, thereby realizing human body action recognition. The invention can obtain a good recognition effect.
Owner:北京陟锋科技有限公司
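
A small sketch of the mathematics behind step S2: the relative rotation between two limbs (an SO(3) element) is mapped into the Lie algebra by the logarithm map, producing a flat feature vector a CNN can consume. The bone-alignment construction below is a standard Rodrigues-style formula, used here purely for illustration.

```python
# Hedged sketch: SO(3) logarithm map turning a relative limb rotation into a Lie-algebra feature.
import numpy as np

def so3_log(R, eps=1e-8):
    """Logarithm map SO(3) -> so(3): returns the rotation vector (axis * angle)."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < eps:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * w / (2.0 * np.sin(angle))

def relative_rotation(bone_a, bone_b):
    """Rotation aligning unit bone vector a with unit bone vector b (valid when a != -b)."""
    a, b = bone_a / np.linalg.norm(bone_a), bone_b / np.linalg.norm(bone_b)
    v, c = np.cross(a, b), np.dot(a, b)
    vx = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)          # Rodrigues-style construction

# Usage sketch: one Lie-algebra feature per limb pair per frame, stacked into a CNN input.
feature = so3_log(relative_rotation(np.array([1.0, 0, 0]), np.array([0, 1.0, 0])))
```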

Idle traffic light intelligent control method based on reinforcement learning

The invention relates to an idle traffic light control method based on reinforcement learning. The method comprises the following steps: using a SlimYOLOv3 model to sense the environment, analyze the scene, recognize all vehicle-type targets in the scene, and locate the targets by defining a bounding box around each target; and training a traffic light control agent with a DQN-based reinforcement learning method: a) defining an action space, in which the traffic light selects actions randomly with a certain probability and otherwise follows a greedy strategy; b) defining a state space, wherein the road state observed at any moment is the number of vehicles in different intervals in each direction, and the observed state value is a six-dimensional vector; c) defining a reward function, wherein penalty weights are specified for the three interval road sections and the reward value is the sum of the road-section penalty weights; and d) using the DQN-based reinforcement learning method to learn the strategy that maximizes the reward value, obtaining a high-performance traffic light control agent.
Owner:TIANJIN UNIV
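
A minimal sketch of steps a) to d): a six-dimensional road-state observation is fed to a small Q-network, actions are chosen epsilon-greedily, and the reward is a penalty-weighted sum over the three road intervals. The state/action sizes and the penalty weights are illustrative assumptions.

```python
# Hedged sketch: Q-network, epsilon-greedy action selection and a penalty-weighted reward.
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim=6, n_actions=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, state):
        return self.net(state)                     # Q-value per traffic-light action

def select_action(qnet, state, epsilon=0.1, n_actions=4):
    """Epsilon-greedy action selection for the traffic-light agent."""
    if random.random() < epsilon:
        return random.randrange(n_actions)         # explore
    with torch.no_grad():
        return int(qnet(state).argmax())           # exploit the learned policy

def reward(counts, weights=(-1.0, -2.0, -3.0)):
    """Reward = sum of penalty-weighted vehicle counts over the three road intervals."""
    return sum(w * c for w, c in zip(weights, counts))

qnet = QNetwork()
action = select_action(qnet, torch.randn(6))
r = reward((3, 5, 2))
```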