
35,609 results for "Feature extraction" patented technology

In machine learning, pattern recognition and in image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations. Feature extraction is related to dimensionality reduction.
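The definition above can be made concrete with a small sketch: principal component analysis (PCA) is one standard way to turn an initial set of redundant measurements into a few informative, non-redundant derived features. This is a generic illustration (the function name `pca_features` and the random data are invented here), not the method of any patent below.

```python
import numpy as np

def pca_features(X, k):
    """Reduce raw measurements X (n_samples x n_dims) to k derived features.

    Centers the data, finds the top-k principal directions via SVD, and
    projects onto them -- a classic feature-extraction /
    dimensionality-reduction step.
    """
    Xc = X - X.mean(axis=0)                       # center each measurement
    # SVD of the centered data: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # n_samples x k feature matrix

rng = np.random.default_rng(0)
raw = rng.normal(size=(100, 8))                   # 8 partly redundant measurements
features = pca_features(raw, k=3)                 # 3 informative features
print(features.shape)                             # (100, 3)
```

The projected features are decorrelated and ordered by explained variance, which is what "informative and non-redundant" means in the PCA setting.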

Attention CNNs and CCR-based text sentiment analysis method

The invention discloses an attention CNNs and CCR-based text sentiment analysis method belonging to the field of natural language processing. The method comprises the following steps: (1) training semantic word vectors and sentiment word vectors from the original text data, and building dictionary word vectors from a collected sentiment dictionary; (2) capturing the context semantics of words with a long short-term memory (LSTM) network to resolve ambiguity; (3) extracting local features of the text with a convolutional neural network using convolution kernels of different filter lengths; (4) extracting global features with three different attention mechanisms; (5) performing artificial (hand-crafted) feature extraction on the original text data; (6) training a multimodal uniform regression objective function on the local, global, and artificial features; and (7) predicting sentiment polarity with a multimodal uniform regression prediction method. Compared with methods that use a single word vector or extract only local text features, this method further improves sentiment classification precision.
Owner: CHONGQING UNIV OF POSTS & TELECOMM
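The local-feature step in the abstract above (convolving a sentence's word-vector matrix with kernels of several filter lengths, then max-pooling) can be sketched in a few lines of NumPy. This is a generic TextCNN-style illustration under assumed shapes and names; it is not the patent's actual architecture.

```python
import numpy as np

def conv1d_features(embeddings, kernels):
    """Extract local n-gram features with convolution windows of several widths.

    embeddings : (seq_len, dim) word-vector matrix for one sentence
    kernels    : list of (width, dim) filter matrices, one per filter length
    Returns one max-pooled activation per filter (max-over-time pooling).
    """
    seq_len, dim = embeddings.shape
    feats = []
    for W in kernels:
        width = W.shape[0]
        # slide the window over the sentence and record each activation
        acts = [np.sum(embeddings[i:i + width] * W)
                for i in range(seq_len - width + 1)]
        feats.append(max(acts))                  # keep the strongest response
    return np.array(feats)

rng = np.random.default_rng(1)
sent = rng.normal(size=(10, 4))                  # 10 words, 4-dim word vectors
filters = [rng.normal(size=(w, 4)) for w in (2, 3, 4)]  # three filter lengths
print(conv1d_features(sent, filters).shape)      # (3,)
```

Each filter length captures a different n-gram granularity (bigrams, trigrams, 4-grams here), which is why the abstract combines "convolution kernels with different filtering lengths".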

Unsupervised domain-adaptive brain tumor semantic segmentation method based on deep adversarial learning

The invention provides an unsupervised domain-adaptive brain tumor semantic segmentation method based on deep adversarial learning. The method comprises: building a deep encoding-decoding fully convolutional segmentation network; building a domain discriminator network; pre-training the segmentation network and optimizing its parameters; adversarially training and optimizing the target-domain feature extractor; and automatically segmenting MRI brain tumors in the target domain. The deep encoding-decoding fully convolutional segmentation network predicts pixel labels from high-level semantic features and low-level detail features jointly, while the domain discriminator network guides the segmentation model, through adversarial learning, to learn domain-invariant features and a strongly generalizing segmentation function. This indirectly minimizes the data-distribution difference between the source and target domains, so the learned segmentation system attains the same segmentation precision in the target domain as in the source domain. The cross-domain generalization of fully automatic MRI brain tumor semantic segmentation is thereby improved, realizing precise unsupervised cross-domain-adaptive MRI brain tumor segmentation.
Owner: CHONGQING UNIV OF TECH
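The adversarial structure described above (a domain discriminator that tries to tell source from target features, while the feature extractor is trained to confuse it) reduces to a two-term objective. A minimal sketch, assuming a logistic discriminator and a trade-off weight `lam`; the names and the scalar form are illustrative, not the patent's networks.

```python
import numpy as np

def domain_bce(logits, domain_labels):
    """Binary cross-entropy of the domain discriminator (source=0, target=1)."""
    p = 1.0 / (1.0 + np.exp(-logits))
    return -np.mean(domain_labels * np.log(p)
                    + (1 - domain_labels) * np.log(1 - p))

def adversarial_objective(seg_loss, disc_logits, domain_labels, lam=0.1):
    """Feature-extractor objective: fit segmentation while *confusing* the
    discriminator (note the minus sign), which pushes source and target
    feature distributions to align."""
    return seg_loss - lam * domain_bce(disc_logits, domain_labels)

# A discriminator that cannot separate domains outputs logits near 0,
# i.e. p = 0.5 for every sample, giving BCE = log 2 regardless of labels.
labels = np.array([0.0, 1.0, 0.0, 1.0])
print(domain_bce(np.zeros(4), labels))
```

In practice the sign flip is implemented with a gradient-reversal layer or alternating updates, but the loss geometry is the same: the discriminator minimizes `domain_bce` while the feature extractor maximizes it.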

Wheel type mobile fruit picking robot and fruit picking method

The invention discloses a picking method and a picking robot device for fruits that are roughly apple-sized and near-spherical. The picking robot device comprises a mechanical actuating device, control system hardware, and control system software. The mechanical actuating device comprises a picking mechanical arm, an underactuated manipulator, an electric sliding table, and an intelligent mobile platform; the control system hardware comprises an IPC (industrial personal computer), a motion control card, a data acquisition card, an AHRS (attitude and heading reference system), an encoder, a monocular camera, a binocular camera, a force sensor, a slip sensor, and the like. During operation, the IPC fuses information from the encoder, the AHRS, the monocular camera components, and an ultrasonic sensor so that the mobile platform can navigate and avoid obstacles autonomously. A binocular vision system collects images of mature fruits and obstacles and extracts image features to realize obstacle avoidance for the mechanical arm and fruit positioning. Finally, the IPC fuses the information of the force, slip, and position sensors to grip mature fruits reliably and separate them from their branches.
Owner: NANJING AGRICULTURAL UNIVERSITY
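The binocular fruit-positioning step above rests on the standard stereo disparity-to-depth relation Z = f·B/d. A minimal sketch with invented example numbers (focal length, baseline, and disparity are not taken from the patent):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth of a matched point from binocular disparity: Z = f * B / d.

    disparity_px : horizontal pixel offset of the point between the two views
    focal_px     : camera focal length expressed in pixels
    baseline_m   : distance between the two camera centers in meters
    """
    return focal_px * baseline_m / disparity_px

# e.g. a 700-px focal length, 12 cm baseline, and 20-px disparity
print(stereo_depth(20, 700, 0.12))  # 4.2 (meters)
```

Nearby fruits produce large disparities and thus small, precise depth estimates; this is why the binocular pair, rather than the navigation-oriented monocular camera, handles fruit positioning and arm obstacle avoidance.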