2294 results for "Multi modality" patented technology

Multimodality is a theory that examines the many different modes people use to communicate with each other and to express themselves.

Method & system for multi-modality imaging of sequentially obtained pseudo-steady state data

Methods, protocols and systems are provided for multi-modality imaging based on the pharmacokinetics of an imaging agent. An imaging agent is introduced into a subject and is permitted to collect generally in a region of interest (ROI) in the subject until attaining a pseudo-steady state (PSS) distribution within the ROI. The imaging agent records a first functional state of the ROI at a given point in time. A first image data set is obtained with a first imaging modality during a first acquisition time interval that occurs prior to, or proximate in time with, the PSS time interval. The subject is transferred from the first imaging modality to a second imaging modality during a transfer time interval that overlaps the PSS time interval. Once transfer is complete, a second image data set is obtained with the second imaging modality during a second acquisition time interval that also overlaps the PSS time interval, in which the imaging agent maintains the PSS distribution in the ROI. In accordance with a protocol, the transfer time interval and the second acquisition time interval substantially fall within the PSS time interval. The imaging agent collects in the ROI during an uptake time interval, which may or may not precede the time interval during which the first imaging modality obtains at least a portion of the first image data set. The second image data set is obtained while the imaging agent persists in the ROI at the PSS distribution reflective of the first functional state, even after the ROI is no longer in that functional state. (An illustrative sketch of the timing constraint follows this entry.)
Owner:UNIV ZURICH +1
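
The core of this protocol is a timing constraint: the transfer interval and the second acquisition interval must substantially fall within the PSS window. Below is a minimal sketch of that check in Python, simplifying "substantially within" to full containment; the `Interval` type, the `pss_protocol_ok` function and the minute values are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch: validate the PSS timing constraint described above.
# "Substantially within" is simplified to full containment here.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    start: float  # minutes after agent injection
    end: float

    def contains(self, other: "Interval") -> bool:
        return self.start <= other.start and other.end <= self.end

def pss_protocol_ok(pss: Interval, transfer: Interval, second_acq: Interval) -> bool:
    """Check that subject transfer and the second acquisition both fall
    within the window in which the agent holds its PSS distribution."""
    return pss.contains(transfer) and pss.contains(second_acq)

# Illustrative numbers only: a PSS window roughly 40-90 min post-injection.
pss = Interval(40, 90)
print(pss_protocol_ok(pss, transfer=Interval(45, 55), second_acq=Interval(55, 85)))
```

Running the example prints True because both intervals sit inside the 40-90 minute window; shifting the second acquisition past minute 90 would violate the protocol.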

Multi-modality medical image identification method and device based on deep learning

The invention provides a multi-modality medical image identification method based on deep learning. The method includes the following steps: based on multi-modality medical images of the patient region to be examined, a registration method is used to display the multi-modality medical images in the same three-dimensional space; based on the multi-modality medical images, an R-CNN is used to identify lesion areas in them; according to the coordinates of the lesion areas in the multi-modality medical images, the lesion bodies are displayed in the same three-dimensional space; and according to the image blocks corresponding to the diagnosed lesion areas, a dense sampling method and a CNN are used to obtain occurrence probabilities of various preset disease types. The invention also provides a multi-modality medical image identification device based on deep learning, comprising a multi-modality medical image display module, a lesion area detection module, a lesion body display module and a preset disease-type occurrence probability module. The method and device achieve automatic identification of lesion areas in medical images and provide effective reference data for further diagnosis by doctors. (An illustrative sketch of the two-stage pipeline follows this entry.)
Owner:BEIJING COMPUTING CENT
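
A minimal sketch of the two-stage pipeline described in this abstract, assuming the "R-CNN" stage is torchvision's Faster R-CNN and the patch classifier is a toy CNN. The patent specifies neither architecture, so the disease-type count, patch size and networks here are illustrative assumptions only.

```python
# Hypothetical sketch: detect candidate lesion areas, then classify patches
# cut around them into preset disease types. Untrained weights; illustrative only.
import torch
import torchvision

# Stage 1: lesion-area detection (assumed: 1 lesion class + background).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
detector.eval()

# Stage 2: toy patch classifier over preset disease types (assumed count).
num_disease_types = 5
classifier = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(16, num_disease_types),
)
classifier.eval()

image = torch.rand(3, 256, 256)  # stand-in for one registered modality slice
with torch.no_grad():
    boxes = detector([image])[0]["boxes"]      # candidate lesion areas
    for box in boxes[:3]:                      # dense sampling would expand this
        x0, y0, x1, y1 = box.int().tolist()
        if x1 <= x0 or y1 <= y0:
            continue                           # skip degenerate untrained boxes
        patch = image[:, y0:y1, x0:x1].unsqueeze(0)
        patch = torch.nn.functional.interpolate(patch, size=(64, 64), mode="bilinear")
        probs = classifier(patch).softmax(dim=1)  # per-disease occurrence probability
        print(probs)
```

In the patented method the detection runs per modality on the registered volumes; this sketch collapses that to a single 2D slice to keep the example self-contained.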

Multimodal input-based interactive method and device

The invention provides a smart-glasses device and method for interaction based on multimodal input, bringing the interaction closer to users' natural interaction. The method comprises the steps of obtaining multiple pieces of input information from at least one of multiple input modules; performing comprehensive logic analysis on the input information to generate an operation command, where the operation command has operation elements that at least include an operation object, an operation action and an operation parameter; and executing the corresponding operation on the operation object based on the operation command. The device obtains input information from multiple channels through the input modules, performs the comprehensive logic analysis to determine the operation object, the operation action and the operation parameter, generates the operation command, and executes the corresponding operation. The information is thereby fused in real time, the user's interaction becomes closer to a natural-language style of interaction, and the user's interactive experience is improved. (An illustrative sketch of the fusion step follows this entry.)
Owner:HISCENE INFORMATION TECH CO LTD
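
A minimal sketch of the "comprehensive logic analysis" step, assuming the fusion can be modeled as merging partial cues from several input channels into one command carrying the three stated operation elements. The channel names ("gaze", "speech") and the fill-the-gaps merge rule are hypothetical; the patent leaves the actual fusion logic unspecified.

```python
# Hypothetical sketch: fuse cues from multiple input modules into one
# operation command (object, action, parameter).
from dataclasses import dataclass
from typing import Optional

@dataclass
class OperationCommand:
    obj: Optional[str] = None        # operation object, e.g. what the gaze selects
    action: Optional[str] = None     # operation action, e.g. parsed from speech
    parameter: Optional[str] = None  # operation parameter, e.g. a magnitude

    def complete(self) -> bool:
        return None not in (self.obj, self.action, self.parameter)

def fuse(inputs: dict) -> OperationCommand:
    """Merge cues from multiple input modules; later channels fill gaps left
    by earlier ones (a deliberately simple stand-in for real fusion logic)."""
    cmd = OperationCommand()
    for cues in inputs.values():
        cmd.obj = cmd.obj or cues.get("obj")
        cmd.action = cmd.action or cues.get("action")
        cmd.parameter = cmd.parameter or cues.get("parameter")
    return cmd

# Gaze picks the object; speech supplies the action and parameter.
cmd = fuse({
    "gaze":   {"obj": "photo_03"},
    "speech": {"action": "zoom", "parameter": "2x"},
})
print(cmd, cmd.complete())
```

The point of the structure is that no single channel needs to carry a complete command; the analysis step assembles one from whatever each modality contributes.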

Multi-modal knowledge graph construction method

Status: Pending · Publication No.: CN112200317A · Tags: rich knowledge type, three-dimensional knowledge type, knowledge representation, special data processing applications, feature extraction, engineering
The invention discloses a multi-modal knowledge graph construction method and relates to knowledge engineering in the field of big data. The method is realized through the following technical scheme: first, multi-modal data semantic features are extracted based on a multi-modal data feature representation model; pre-training-based feature extraction models are constructed for texts, images, audio, video and the like, and single-modal semantic feature extraction is completed for each modality. Second, the different types of data are projected into the same vector space on the basis of unsupervised graph embedding, attribute graph embedding, heterogeneous graph embedding and other approaches, realizing cross-modal multi-modal knowledge representation. On this basis, the two knowledge graphs to be fused and aligned are each converted into vector representations; using the obtained multi-modal knowledge representation, the mapping relation between entity pairs across the knowledge graphs is learned from prior alignment data, multi-modal knowledge fusion and disambiguation are completed, the results are decoded and mapped to the corresponding nodes in the knowledge graphs, and a new fused graph with its entities and attributes is generated. (An illustrative sketch of the alignment step follows this entry.)
Owner:10TH RES INST OF CETC
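
A minimal sketch of the alignment step, assuming the single-modal features have already been projected into one shared vector space and a few entity pairs are known a priori. Learning the cross-graph mapping by least squares is an illustrative stand-in for whatever training procedure the patent actually uses; all array shapes and indices are hypothetical.

```python
# Hypothetical sketch: learn a linear map between two graphs' embedding
# spaces from prior alignment pairs, then propose matches by cosine similarity.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
emb_a = rng.normal(size=(10, dim))      # entity embeddings from knowledge graph A
emb_b = rng.normal(size=(12, dim))      # entity embeddings from knowledge graph B
prior_pairs = [(0, 0), (1, 3), (2, 7)]  # known aligned (A-index, B-index) pairs

# Learn the mapping W by least squares on the prior alignment data.
A = np.stack([emb_a[i] for i, _ in prior_pairs])
B = np.stack([emb_b[j] for _, j in prior_pairs])
W, *_ = np.linalg.lstsq(A, B, rcond=None)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Propose an alignment for every entity in A by nearest cosine neighbor in B.
for i, vec in enumerate(emb_a @ W):
    j = max(range(len(emb_b)), key=lambda j: cosine(vec, emb_b[j]))
    print(f"A:{i} -> B:{j}")
```

Proposed matches would then feed the fusion and disambiguation stage, where accepted pairs are decoded and mapped back onto the corresponding graph nodes.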