30 results about "accurate recognition rate" patented technology

Underwater target feature extraction method based on convolutional neural network (CNN)

The invention provides an underwater target feature extraction method based on a convolutional neural network (CNN). 1) The sampling sequence of the original radiation noise signal is divided into 25 consecutive parts, each containing 25 sampling points; 2) the samples of the j-th data segment are normalized and centred; 3) a short-time Fourier transform is applied to obtain a LoFAR graph; 4) the resulting vector is assigned to an existing 3-dimensional tensor; 5) the obtained feature vector is fed to a fully-connected layer for classification and the error against the label data is computed; if the loss error is below an error threshold, network training stops, otherwise step 6 is entered; 6) the network parameters are adjusted layer by layer from back to front by gradient descent, and the procedure returns to step 2. Compared with traditional convolutional neural network algorithms, the method applies a multi-dimensional spatial-information weighting operation to the feature-map layer, compensating for the spatial information lost when the fully-connected layer performs one-dimensional vectorization.
Owner:HARBIN ENG UNIV
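A minimal sketch of the pipeline the abstract describes, assuming a LoFAR-style spectrogram tensor fed to a small CNN with a fully-connected classifier; the segmentation sizes, network shape, learning rate and error threshold are illustrative assumptions, not values taken from the patent.

```python
import numpy as np
from scipy.signal import stft
import torch
import torch.nn as nn

def make_lofar_tensor(signal, n_segments=25, seg_len=25):
    """Split the radiation noise signal into segments, centre/normalize each,
    and stack STFT magnitudes into a 3-D tensor (sizes are illustrative)."""
    segs = signal[:n_segments * seg_len].reshape(n_segments, seg_len)
    segs = (segs - segs.mean(axis=1, keepdims=True)) / (segs.std(axis=1, keepdims=True) + 1e-8)
    _, _, Z = stft(segs, nperseg=8, axis=1)   # short-time Fourier transform per segment
    lofar = np.abs(Z)                          # LoFAR-style magnitude map, shape (segments, freq, time)
    return torch.tensor(lofar, dtype=torch.float32).unsqueeze(0)  # add batch dimension

class SmallCNN(nn.Module):
    """Toy CNN: the 25 segments are treated as input channels (an assumption)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(25, 16, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d((4, 4)))
        self.fc = nn.Linear(16 * 4 * 4, n_classes)   # fully-connected classification layer

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def train(model, x, y, err_threshold=1e-3, lr=1e-2, max_epochs=500):
    """Stop when the loss drops below the threshold; otherwise back-propagate
    (gradient descent, adjusting parameters from the last layer to the first).
    y: class-index labels as a torch.long tensor."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(max_epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        if loss.item() < err_threshold:
            break
        loss.backward()
        opt.step()
    return model
```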

Behavior identification method based on local joint point track space-time volume in skeleton sequence

Active | CN110555387A | Stable and accurate recognition rate | Increase cost | Character and pattern recognition | Human body | Model extraction
The invention belongs to the technical field of artificial intelligence and discloses a behavior recognition method based on local joint-point trajectory space-time volumes in a skeleton sequence. The method comprises the steps of: extracting local joint-point trajectory space-time volumes from input RGB video data and skeleton joint-point data; extracting image features with a model pre-trained on the RGB video data set; constructing and encoding a codebook for each distinct feature of each joint point in the training set, and concatenating the features of the n joint points into one feature vector; and performing behavior classification and recognition with an SVM classifier. The method fuses hand-crafted features with deep-learning features: local features are extracted by a deep-learning method, and the fusion of multiple features achieves a stable and accurate recognition rate. Because the features are extracted from the 2D human skeleton estimated by a pose-estimation algorithm and from the RGB video sequence, the cost is low and the precision is high, which is of practical significance when applied to real scenes.
Owner:HUAQIAO UNIVERSITY
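A minimal sketch of the per-joint codebook encoding and SVM classification stage described above, assuming bag-of-words histograms over pre-extracted local descriptors; the codebook size and the `LinearSVC` choice are illustrative assumptions, and the trajectory space-time volume extraction itself is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def build_codebooks(train_descriptors, n_words=64):
    """One codebook per joint: cluster that joint's local descriptors.
    train_descriptors[j] is an array of descriptors for joint j (assumed layout)."""
    return [KMeans(n_clusters=n_words, n_init=10).fit(d) for d in train_descriptors]

def encode_sequence(seq_descriptors, codebooks):
    """Encode each joint's descriptors as a normalized bag-of-words histogram and
    concatenate the n joint histograms into one feature vector."""
    parts = []
    for desc, cb in zip(seq_descriptors, codebooks):
        words = cb.predict(desc)
        hist = np.bincount(words, minlength=cb.n_clusters).astype(float)
        parts.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(parts)

def train_classifier(X_train, y_train):
    """Fit a linear SVM on the encoded training sequences (action labels in y_train)."""
    return LinearSVC().fit(X_train, y_train)
```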

Intelligent detection system for monitoring real-time state of iron tower and method thereof

The invention provides an intelligent detection system for monitoring the real-time state of an iron tower, and a method thereof. The system comprises the iron tower together with an iron-tower intelligent monitoring platform, an intelligent monitoring subsystem and a user interaction interface. Four types of sensors are installed on the iron tower: an angle sensor that monitors the tilt angle of the tower, a vibration sensor that monitors its vibration, a Hall current sensor that monitors lightning current, and an electric-leakage detection sensor that detects leakage. A gateway with a gateway monitoring-management and configuration-management system is also deployed on the tower; it collects the sensor data and transmits it to the iron-tower intelligent monitoring platform over an Internet of Things communication protocol. The data uploaded by the sensors can be analysed intelligently so that the state of the tower is monitored and judged in real time, and video acquisition is used to judge in real time whether the tower is in a dangerous situation; compared with manual monitoring, the intelligent monitoring achieves a more accurate recognition rate.
Owner:广州奈英科技有限公司
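A minimal sketch of how a rule-based check over the four sensor channels could look on the monitoring platform; the field names and threshold values are illustrative assumptions, since the abstract does not specify them.

```python
from dataclasses import dataclass

@dataclass
class TowerReading:
    tilt_deg: float      # angle sensor: tower tilt angle
    vibration_g: float   # vibration sensor
    lightning_a: float   # Hall current sensor: lightning current
    leakage_ma: float    # electric-leakage detection sensor

# Illustrative limits only; real limits would come from the tower's design specification.
LIMITS = {"tilt_deg": 2.0, "vibration_g": 0.5, "lightning_a": 100.0, "leakage_ma": 30.0}

def assess(reading: TowerReading) -> list[str]:
    """Return the sensor channels whose readings exceed their limits."""
    return [field for field, limit in LIMITS.items() if getattr(reading, field) > limit]

print(assess(TowerReading(tilt_deg=3.1, vibration_g=0.2, lightning_a=0.0, leakage_ma=5.0)))
# -> ['tilt_deg']
```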

Android application digital certificate verification vulnerability detection system and method

The invention belongs to the field of application-software detection on Android terminals and particularly relates to an Android application digital-certificate verification vulnerability detection system and method. The system comprises a static detection module, a dynamic detection module and a man-in-the-middle proxy module. The static detection module discovers applications that potentially contain digital-certificate verification vulnerabilities from the static code characteristics of vulnerable applications; the dynamic detection module dynamically executes the applications to trigger the vulnerable code; and the man-in-the-middle proxy module launches man-in-the-middle attacks and attempts to decrypt HTTPS traffic to confirm whether a digital-certificate verification vulnerability really exists in an application. By combining dynamic and static detection, the method compensates for the false alarms produced by static detection alone and the low efficiency of dynamic detection alone, achieves effective detection of applications, and addresses the low efficiency of manual auditing and the high cost of large-scale, market-level application detection.
Owner:STATE GRID HENAN ELECTRIC POWER ELECTRIC POWER SCI RES INST +1
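A minimal sketch of the static-detection idea, assuming the APK has already been decompiled to Java sources; the regular-expression signatures (an empty checkServerTrusted body, a HostnameVerifier that always returns true) are common indicators of this vulnerability class and are illustrative assumptions, not the patent's actual feature set.

```python
import os
import re

# Illustrative static signatures of insecure certificate handling in decompiled sources.
PATTERNS = {
    "empty_trust_manager": re.compile(
        r"checkServerTrusted\s*\([^)]*\)\s*(?:throws[^{]*)?\{\s*\}", re.S),
    "allow_all_hostnames": re.compile(
        r"verify\s*\([^)]*\)\s*\{\s*return\s+true\s*;\s*\}", re.S),
    "allow_all_constant": re.compile(r"ALLOW_ALL_HOSTNAME_VERIFIER"),
}

def scan_decompiled_sources(root):
    """Flag .java files under `root` that match any insecure pattern."""
    findings = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.endswith(".java"):
                continue
            path = os.path.join(dirpath, name)
            text = open(path, encoding="utf-8", errors="ignore").read()
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label))
    return findings
```

Applications flagged by such a scan would then be exercised dynamically behind a man-in-the-middle proxy to confirm whether their HTTPS traffic can actually be decrypted.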

Sound recognizing and positioning device and method for unmanned aerial vehicle

The invention discloses a sound recognition and positioning device and method for unmanned aerial vehicles. The device comprises a wireless sound-sensor network and a master controller. The wireless sound-sensor network comprises a plurality of wireless sound-sensor nodes, each mainly composed of seven sound sensors, a signal filtering and amplification module, an ADC module, an MCU and a wireless transmission module. The master controller comprises a decoding module, a sound recognition module, and a sound-source positioning module based on a three-dimensional seven-element sound-sensor array. Environmental sound monitoring and transmission are performed through the wireless sensor network, which is convenient, fast and highly adaptable. For sound recognition, in addition to MFCC feature extraction, the sound is fed into a CNN model and an SVM then decides whether the sound of an unmanned aerial vehicle is present, making the recognition rate more accurate. Finally, a sound-source positioning algorithm based on the three-dimensional seven-element sound-sensor array performs three-dimensional positioning of the unmanned aerial vehicle, making the positioning more accurate.
Owner:JIANGXI UNIV OF SCI & TECH
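A minimal sketch of the MFCC-plus-SVM branch of the recognition step, assuming librosa for feature extraction and mean/std pooling over frames; the patented device additionally passes the audio through a CNN, and that fusion is not shown here.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Mean/std-pooled MFCCs for one recording (pooling choice is an assumption)."""
    y, sr = librosa.load(wav_path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, frames)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

# Usage sketch: X rows are feature vectors, y is 1 for clips containing UAV noise.
# clf = SVC(kernel="rbf").fit(X, y)
# has_uav = clf.predict(mfcc_features(path)[None, :])
```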

Scanning recognition system, scanning recognition method and book scanning equipment

The invention provides a scanning recognition system comprising a scanning unit, a weight acquisition unit, an alarm unit and a storage-and-control unit. The storage-and-control unit is configured to automatically compare pre-stored basic book weight information G with real-time book weight information G' acquired by the weight acquisition unit. According to the sign of the difference between G and G', the book damage category is judged to be a defect or wet damage; when the category is judged to be wet damage, the degree of wet damage is further identified as one of several wet-damage levels according to the magnitude of the difference between G and G', and the signal of the alarm unit is controlled according to the identified wet-damage level. The invention also provides a threshold-based scanning recognition method. Through the principle of weight detection and comparison, wet-damage recognition, alarm and disposal are achieved during book scanning without large-scale hardware development or upgrading of the book scanning equipment, making the approach efficient, accurate and low in cost.
Owner:QINGDAO UNIV OF SCI & TECH
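A minimal sketch of the weight-comparison logic described above, assuming a weight loss indicates a defect (e.g. missing pages) and a weight gain indicates wet damage; the relative thresholds used to grade the wet-damage level are illustrative assumptions, not the patent's values.

```python
def classify_book_damage(g_ref, g_now, thresholds=(0.02, 0.05, 0.10)):
    """Compare pre-stored reference weight g_ref with measured weight g_now.
    Returns (category, wet_damage_level); level is None unless wet damage is found."""
    diff = g_now - g_ref
    if diff < 0:
        return "defect", None            # lighter than reference (assumed: missing/torn pages)
    if diff == 0:
        return "ok", None
    ratio = diff / g_ref                  # relative weight gain from absorbed moisture
    level = sum(ratio >= t for t in thresholds)   # grade 1..3 by illustrative thresholds
    return "wet damage", max(level, 1)

# Usage sketch: drive the alarm unit according to the returned wet-damage level.
print(classify_book_damage(500.0, 530.0))   # -> ('wet damage', 2)
```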
