622 results about "Model image" patented technology

Pose-invariant face recognition system and process

A face recognition system and process for identifying a person depicted in an input image, along with the person's face pose. The system and process entail locating and extracting face regions belonging to known people from a set of model images, and determining the face pose of each extracted face region. All extracted face regions are preprocessed by normalizing, cropping, categorizing, and finally abstracting them. More specifically, the images are normalized and cropped to show only the person's face, categorized by assigning each to one of a series of face pose ranges, and abstracted, preferably via an eigenface approach. The preprocessed face images are preferably used to train a neural network ensemble whose first stage is a bank of face recognition neural networks, each dedicated to a particular pose range, and whose second stage is a single fusing neural network that combines the outputs of the first-stage networks. Once trained, inputting a face region that has been extracted from an input image and preprocessed (i.e., normalized, cropped, and abstracted) causes exactly one output unit of the fusing portion of the ensemble to become active. The active output unit indicates either the identity of the person whose face was extracted from the input image, together with the associated face pose, or that the person is unknown to the system.
Owner:ZHIGU HLDG
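The preprocessing pipeline above can be sketched in two pieces: binning a face by pose range (the routing into the first-stage bank) and eigenface abstraction via PCA. This is a minimal illustration, not the patented implementation; the pose bins, yaw parameterization, and the number of eigenfaces `k` are assumptions the patent does not specify.

```python
import numpy as np

# Hypothetical yaw-angle pose ranges in degrees; the patent leaves the bins unspecified.
POSE_RANGES = [(-90, -30), (-30, 30), (30, 90)]

def pose_bin(yaw_degrees):
    """Assign a face's yaw angle to one of the pose ranges (first-stage routing)."""
    for i, (lo, hi) in enumerate(POSE_RANGES):
        if lo <= yaw_degrees < hi:
            return i
    raise ValueError("yaw outside modeled range")

def eigenface_abstract(faces, k=4):
    """Project normalized, cropped face vectors onto the top-k eigenfaces (PCA)."""
    faces = np.asarray(faces, dtype=float)
    mean = faces.mean(axis=0)
    centered = faces - mean
    # The right singular vectors of the centered data are the eigenfaces.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]
    # Each face is abstracted to its k projection coefficients.
    return centered @ eigenfaces.T, mean, eigenfaces

rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 64))        # 10 flattened 8x8 face crops (toy data)
coeffs, mean, eig = eigenface_abstract(faces, k=4)
print(coeffs.shape, pose_bin(-45))       # (10, 4) 0
```

In the full system, the coefficient vector would be fed to the recognition network selected by `pose_bin`, and the fusing network would combine the per-range outputs.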

Machine room server remote monitoring method and system

The invention discloses a machine room server remote monitoring method and system. The system comprises a plurality of IP cameras, a switch, and a machine room monitoring center. The IP cameras receive a shooting data acquisition command and photograph the equipment status indicator lights in the machine room to generate a set of modeling image data. The monitoring center acquires and saves the modeling image data in advance, then directs each IP camera to photograph the server cabinets it monitors at a set sampling period, yielding a set of real-time image data. The chromatic value of each status indicator light in the real-time images is compared, one by one, with the chromatic value of the corresponding light in the saved modeling images; abnormal servers are identified from the comparison results and an alarm is raised. The method and system provide accurate, queryable historical image data for server fault analysis, greatly improving the efficiency of fault checking.
Owner:SHENZHEN INFOTECH TECH
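The per-light comparison step can be sketched as follows. Each indicator light's chromatic (here, RGB) value in a live frame is checked against the saved modeling image, and lights deviating beyond a tolerance mark their server as abnormal. The light identifiers and the tolerance value are illustrative assumptions, not taken from the patent.

```python
TOLERANCE = 30  # assumed max per-channel deviation before a light counts as abnormal

def check_lights(model_frame, live_frame, tolerance=TOLERANCE):
    """Return ids of indicator lights whose colour deviates from the modeling image.

    Both arguments map a light id to an (R, G, B) tuple extracted from the frame.
    """
    abnormal = []
    for light_id, model_rgb in model_frame.items():
        live_rgb = live_frame[light_id]
        # Compare chromatic values channel by channel, one light at a time.
        if any(abs(m - l) > tolerance for m, l in zip(model_rgb, live_rgb)):
            abnormal.append(light_id)
    return abnormal

model = {"rack1-srv1": (0, 255, 0), "rack1-srv2": (0, 255, 0)}   # both green
live  = {"rack1-srv1": (0, 250, 5), "rack1-srv2": (255, 40, 0)}  # srv2 turned red
print(check_lights(model, live))  # ['rack1-srv2']
```

A real deployment would extract the RGB tuples from the camera frames at calibrated light positions and trigger the alarm path for each returned id.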

System and method for locating a three-dimensional object using machine vision

This invention provides a system and method for determining the position of a viewed object in three dimensions by applying 2D machine vision processes to each of a plurality of planar faces of the object, thereby refining the object's location. First, a rough pose estimate of the object is derived. This rough estimate can be based on predetermined pose data, or can be derived by acquiring a plurality of planar face poses of the object (using, for example, multiple cameras) and correlating the corners of the trained image pattern, which have known coordinates relative to the origin, with the acquired patterns. Once the rough pose is obtained, it is refined by defining the pose as a quaternion (a, b, c, d) for rotation and three variables (x, y, z) for translation, and employing an iterative weighted least-squares error calculation to minimize the error between the edgelets of the trained model image and the acquired runtime edgelets. The overall refined/optimized pose estimate incorporates data from each camera's acquired images, minimizing the total error between the edgelets of each camera's/view's trained model image and that camera's/view's acquired runtime edgelets. A final transformation of the trained features relative to the runtime features is derived from the iterative error computation.
Owner:COGNEX CORP
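Two building blocks of the refinement step can be sketched concretely: converting the quaternion (a, b, c, d) to a rotation matrix, and a weighted least-squares update for the translation (x, y, z) given a fixed rotation. The patent's full method iterates over edgelet correspondences across all cameras; this sketch uses plain 3D points and a single closed-form translation step, so it is an illustration of the pose parameterization rather than the patented algorithm.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (a, b, c, d) to a 3x3 rotation matrix."""
    a, b, c, d = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(c*c + d*d), 2*(b*c - a*d),     2*(b*d + a*c)],
        [2*(b*c + a*d),     1 - 2*(b*b + d*d), 2*(c*d - a*b)],
        [2*(b*d - a*c),     2*(c*d + a*b),     1 - 2*(b*b + c*c)],
    ])

def weighted_translation_update(model_pts, run_pts, q, weights):
    """Weighted least-squares translation for a fixed rotation.

    Minimizes sum_i w_i * || R @ model_i + t - run_i ||^2 over t, which has the
    closed form: the weighted mean of the residuals (run_i - R @ model_i).
    """
    R = quat_to_rot(q)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    residuals = run_pts - model_pts @ R.T
    return (w[:, None] * residuals).sum(axis=0)

model_pts = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
run_pts = model_pts + t_true                 # runtime points: pure translation
q_identity = np.array([1.0, 0.0, 0.0, 0.0])  # identity rotation
t_est = weighted_translation_update(model_pts, run_pts, q_identity, np.ones(4))
print(t_est)  # recovers t_true
```

In an iterative scheme, the weights would be recomputed from the current edgelet residuals each pass, and the quaternion would be updated jointly with the translation.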

Computerized tomography (CT) image metal artifact correction method, device and computerized tomography (CT) apparatus

The invention provides a computerized tomography (CT) image metal artifact correction method, a corresponding correction device, and a CT apparatus. The method comprises the following steps: a metal projection range caused by an interfering object is determined from the original image corresponding to the original projection data; projection data of the diagnostic object, with the interfering object removed, are obtained from the metal projection data within that range, after which the original projection data are corrected and a model image is constructed from the diagnostic-object projection data; secondary correction is then performed on the original projection data according to the projection data of the model image, and reconstruction is performed from the corrected target projection data under clinical scanning and image-construction conditions to obtain a metal-artifact-free target image, thereby achieving metal artifact correction. Because the method takes the original projection data as the correction object, the spatial resolution and low-contrast performance of the processed image are preserved; and because the original projection data completely contain all information about the interfering object, the introduction of new artifacts is avoided.
Owner:SHANGHAI UNITED IMAGING HEALTHCARE
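The first-pass correction within the metal projection range can be sketched for a single detector row: bins flagged as lying in the metal trace are replaced by values interpolated from the neighbouring, uncorrupted bins. This is a generic linear-interpolation sketch of that step, not the patent's specific correction; the row length and mask are illustrative.

```python
import numpy as np

def correct_projection_row(row, metal_mask):
    """Replace detector bins inside the metal trace by linear interpolation
    from the surrounding uncorrupted bins (first-pass sinogram correction)."""
    idx = np.arange(len(row))
    good = ~metal_mask
    corrected = row.copy()
    # Interpolate only the masked bins from the good bins on either side.
    corrected[metal_mask] = np.interp(idx[metal_mask], idx[good], row[good])
    return corrected

row = np.ones(8)
row[3:5] = 50.0                    # metal spike corrupts detector bins 3 and 4
mask = np.zeros(8, dtype=bool)
mask[3:5] = True
fixed = correct_projection_row(row, mask)
print(fixed)                       # the spike is smoothed back to the background
```

In the described method, the resulting corrected projections would feed the model-image construction, whose forward projections then drive the secondary correction before final reconstruction.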