1286 results about "Artificial neural network" patented technology

Artificial neural networks (ANN) or connectionist systems are computing systems that are inspired by, but not identical to, biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with task-specific rules. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any prior knowledge of cats, for example, that they have fur, tails, whiskers and cat-like faces. Instead, they automatically generate identifying characteristics from the examples that they process.
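
As a minimal illustration of this "learning from labeled examples" idea (a toy sketch with hypothetical data, not code from any patent listed below), a single artificial neuron can be fit to labeled feature vectors by gradient descent, inferring the labeling rule rather than being programmed with it:

```python
# Toy sketch: a single artificial neuron "learns" a binary label purely from
# example feature vectors, with no hand-written rules. Data, sizes, and the
# hidden labeling rule are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                     # 200 examples, 5 features each
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # hidden rule the neuron must infer

w = np.zeros(5)
b = 0.0
lr = 0.1
for _ in range(500):                              # gradient-descent training loop
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid activation
    grad_w = X.T @ (p - y) / len(y)               # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```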

Image modification and detection using massive training artificial neural networks (MTANN)

A method, system, and computer program product for modifying an appearance of an anatomical structure in a medical image, e.g., rib suppression in a chest radiograph. The method includes: acquiring, using a first imaging modality, a first medical image that includes the anatomical structure; applying the first medical image to a trained image processing device to obtain a second medical image, corresponding to the first medical image, in which the appearance of the anatomical structure is modified; and outputting the second medical image. Further, the image processing device is trained using plural teacher images obtained from a second imaging modality that is different from the first imaging modality. In one embodiment, the method also includes processing the first medical image to obtain plural processed images, wherein each of the plural processed images has a corresponding image resolution; applying the plural processed images to respective massive training artificial neural networks (MTANNs) to obtain plural output images, wherein each MTANN is trained to detect the anatomical structure at one of the corresponding image resolutions; and combining the plural output images to obtain a second medical image in which the appearance of the anatomical structure is enhanced.
Owner:UNIVERSITY OF CHICAGO
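
The multi-resolution combination step described in the abstract can be sketched roughly as follows. This is a simplified illustration with a placeholder smoothing function standing in for each trained MTANN, and arbitrary resolutions and image sizes; it is not the patented implementation:

```python
# Sketch of the multi-resolution idea: downsample the input to several
# resolutions, pass each level through its own (here, placeholder) trained
# network, then upsample and combine the per-level outputs.
import numpy as np

def downsample(img, factor):
    """Block-average downsampling by an integer factor."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbour upsampling by an integer factor."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def mtann_at_resolution(img, level):
    """Placeholder for one trained MTANN; here just a smoothing surrogate."""
    k = 2 * level + 1
    pad = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def combine_multiresolution(image, factors=(1, 2, 4)):
    """Run each resolution through its network and average the upsampled outputs."""
    outputs = []
    for level, f in enumerate(factors, start=1):
        low = downsample(image, f)
        processed = mtann_at_resolution(low, level)
        outputs.append(upsample(processed, f)[:image.shape[0], :image.shape[1]])
    return np.mean(outputs, axis=0)

chest_image = np.random.default_rng(1).random((64, 64))   # stand-in for a radiograph
enhanced = combine_multiresolution(chest_image)
print(enhanced.shape)
```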

Medical information extraction system and method based on deep learning and distributed semantic features

Active | CN105894088A | Benefits: avoids floating-point overflow problems; high precision | Topics: neural learning methods, neural networks, learning methods
The invention discloses a medical information extraction system and method based on deep learning and distributed semantic features. The system is composed of a preprocessing module, a language-model-based word vector training module, a massive-medical-knowledge-base reinforcement learning module, and a deep-artificial-neural-network-based medical term entity recognition module. Using a deep learning method, the generation probability of a language model is taken as the optimization objective, and primary word vectors are trained on medical text big data. On the basis of the massive medical knowledge base, a second deep artificial neural network is trained, and the knowledge base is incorporated into the feature learning process of deep learning through deep reinforcement learning, yielding distributed semantic features for the medical field. Chinese medical term entity recognition is then carried out with the deep learning method based on an optimized sentence-level maximum likelihood probability. Because the word vectors are generated from large amounts of unlabeled corpus data, the tedious feature selection and tuning process of medical natural language processing can be avoided.
Owner:神州医疗科技股份有限公司 +1
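
The word-vector-plus-deep-network tagging idea in this abstract can be illustrated with a rough sketch. The vocabulary, tag set, window size, and (untrained) weights below are hypothetical placeholders, and no claim is made that this matches the patented system:

```python
# Sketch: word vectors (standing in for vectors pretrained on medical text)
# are looked up for a context window around each token and fed to a small
# feed-forward network that scores entity tags.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"患者": 0, "糖尿病": 1, "服用": 2, "二甲双胍": 3, "<pad>": 4}   # toy vocabulary
tags = ["O", "DISEASE", "DRUG"]

emb_dim, window, hidden = 8, 1, 16
embeddings = rng.normal(size=(len(vocab), emb_dim))          # stand-in for trained word vectors
W1 = rng.normal(size=((2 * window + 1) * emb_dim, hidden))   # untrained demo weights
W2 = rng.normal(size=(hidden, len(tags)))

def tag_scores(token_ids, position):
    """Concatenate the window of word vectors and run one forward pass."""
    padded = [vocab["<pad>"]] * window + list(token_ids) + [vocab["<pad>"]] * window
    ctx = padded[position:position + 2 * window + 1]
    x = embeddings[ctx].reshape(-1)
    h = np.tanh(x @ W1)                                      # hidden layer
    return h @ W2                                            # one score per tag

words = ["患者", "糖尿病", "服用", "二甲双胍"]
sentence = [vocab[w] for w in words]
for pos, word in enumerate(words):                           # tags are random until trained
    print(word, tags[int(np.argmax(tag_scores(sentence, pos)))])
```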

Method for coding pixels or voxels of a digital image and a method for processing digital images

Inactive | CN101189641A | Benefit: enables auxiliary diagnosis | Topics: character and pattern recognition, image coding, voxels, image processing
A method for coding pixels or voxels of a digital or digitized two-dimensional or three-dimensional image comprises the steps of: providing a digital image consisting of a two-dimensional array of pixels or a three-dimensional array of voxels, each pixel or voxel being defined by at least one variable, such as its intensity in a grey-scale image or the HSV (Hue, Saturation, Value) or RGB values in a colour image; considering each pixel or voxel of the image as a target pixel or voxel and, for each target pixel or voxel, forming a neighborhood from a pixel or voxel window comprising the said target pixel or voxel and a certain number of surrounding pixels or voxels; and, for each target pixel or voxel, generating a vector uniquely associated with the said target pixel or voxel, the components of the said vector being generated as a function of the values of the said target pixel or voxel and of each of the pixels or voxels of the said window. The function of these values corresponds to the characteristic parameters of the numerical matrix representing the pixels or voxels of the said window, or of a transformation of the said numerical matrix. The invention also relates to an image processing method in which image data coded according to the above method are processed by means of a predictive algorithm, for example an artificial neural network.
Owner:BRACCO IMAGING SPA
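
The per-pixel window coding step can be sketched as follows. Here the vector is simply the flattened neighbourhood window, which is one possible choice for the function of the window's numerical matrix, not necessarily the one claimed; sizes are arbitrary placeholders:

```python
# Sketch: build one coding vector per target pixel from its surrounding window.
import numpy as np

def code_pixels(image, radius=1):
    """Return an (H, W, (2r+1)^2) array of per-pixel neighbourhood vectors."""
    padded = np.pad(image, radius, mode="edge")
    h, w = image.shape
    side = 2 * radius + 1
    vectors = np.empty((h, w, side * side), dtype=image.dtype)
    for i in range(h):
        for j in range(w):
            vectors[i, j] = padded[i:i + side, j:j + side].ravel()
    return vectors

gray = np.random.default_rng(2).random((16, 16))   # stand-in grey-scale image
coded = code_pixels(gray, radius=1)
print(coded.shape)   # (16, 16, 9): one 9-component vector per target pixel
# These vectors could then be fed to a predictive algorithm such as an ANN.
```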

Programmable visual chip-based visual image processing system

Disclosed in the invention is a programmable visual chip-based visual image processing system comprising an image sensor and a multilevel parallel digital processing circuit. The image sensor mainly includes a pixel array, an analog preprocessing circuit array, and an analog-to-digital conversion circuit array; the digital processing circuit consists of an M*M pixel-level parallel processing unit array, an M*1 row-level parallel processing unit array, an on-chip artificial neural network, and a dual-core reduced-instruction-set processor subsystem. The system achieves high-quality, high-speed image acquisition and multilevel parallel image processing, and several high-speed intelligent vision applications can be realized by programming. Compared with a traditional imaging system, it offers high speed, high integration, low power consumption, and low cost. The invention further presents an embodiment of the system as well as several high-speed intelligent visual image processing algorithms based on it, including high-speed motion detection, high-speed gesture recognition, and fast face detection; the processing speed can reach 1000 frames per second, meeting the requirement of high-speed real-time processing.
Owner:INST OF SEMICONDUCTORS - CHINESE ACAD OF SCI
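
One of the listed algorithms, high-speed motion detection, can be illustrated in software with a simple frame-difference sketch. The threshold and frame sizes are arbitrary placeholders, and this does not reflect the chip's on-chip parallel implementation:

```python
# Sketch: consecutive frames are differenced and thresholded to flag moving pixels.
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=0.1):
    """Binary mask of pixels whose intensity changed more than the threshold."""
    return np.abs(curr_frame.astype(float) - prev_frame.astype(float)) > threshold

rng = np.random.default_rng(3)
frame_a = rng.random((32, 32))
frame_b = frame_a.copy()
frame_b[10:14, 10:14] += 0.5          # simulate a small moving object
print(int(motion_mask(frame_a, frame_b).sum()), "pixels flagged as moving")
```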

Artificial neural network-based multi-source gait feature extraction and identification method

The invention relates to identity recognition, image processing, and the like, in particular to an artificial neural network-based multi-source gait feature extraction and identification method, which aims to reduce interference from external factors such as complex backgrounds and occlusions, so as to more accurately extract the effective information reflecting the walking characteristics of moving people and improve gait identification accuracy. The technical scheme of the invention comprises the following steps: separately acquiring gait data using a camera and a pyroelectric infrared sensor; extracting skeleton feature parameters and Radon transform peak feature parameters from the image source information acquired by the camera, and, for the pyroelectric infrared source information, converting the acquired voltage signal into frequency-domain feature parameters; fusing the skeleton feature parameters, the Radon transform peak feature parameters, and the frequency-domain feature parameters after dimension reduction and corresponding signal processing; and finally, performing classified identification of the fused features using a BP neural network as the classifier and evaluating the identification effect. The method is mainly applied to identity recognition.
Owner:中电云脑(天津)科技有限公司
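
The feature-fusion and classification stage can be sketched roughly as follows. The feature sizes, the toy dimension-reduction rule, and the (untrained) network weights are hypothetical placeholders rather than the patented method, and no real sensor data is used:

```python
# Sketch: reduce and concatenate skeleton, Radon-transform-peak, and
# pyroelectric frequency-domain features, then score them with a small
# BP-style (multilayer perceptron) classifier.
import numpy as np

rng = np.random.default_rng(4)

def reduce_dim(x, k):
    """Toy dimension reduction: keep the k largest-magnitude components."""
    return x[np.argsort(np.abs(x))[::-1][:k]]

skeleton = rng.normal(size=30)      # stand-in skeleton feature parameters
radon_peaks = rng.normal(size=20)   # stand-in Radon-transform peak features
freq_domain = rng.normal(size=25)   # stand-in pyroelectric frequency features

fused = np.concatenate([reduce_dim(skeleton, 10),
                        reduce_dim(radon_peaks, 10),
                        reduce_dim(freq_domain, 10)])

n_subjects = 5
W1 = rng.normal(size=(fused.size, 16))          # untrained demo weights
W2 = rng.normal(size=(16, n_subjects))
scores = np.tanh(fused @ W1) @ W2               # BP-network forward pass
print("predicted subject:", int(np.argmax(scores)))
```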