6741 results about "Sample image" patented technology

Apparatus and method of object tracking

A method of tracking objects on a plane within video images of the objects captured by a video camera. The method includes processing the captured video images so as to extract one or more image features from each object; detecting each object, from the relative positions of the objects on the plane as viewed in the captured video images, by comparing the one or more extracted image features associated with each object with sample image features from a predetermined set of possible example objects which the captured video images may contain; and generating, from the comparison, object identification data which identifies the respective object on the plane. The method further includes generating a three dimensional model of the plane and logging, for each detected object, the object identification data together with object path data. The object path data provides the position of the object on the three dimensional model of the plane, derived from the video images, with respect to time, and relates to the path that each object has taken within the video images. The logging includes detecting an occlusion event in dependence upon whether a first image feature associated with a first of the objects obscures the whole or part of at least a second image feature associated with at least a second of the objects and, if an occlusion event is detected, associating the object identification data for the first object and for the second object with the object path data for both objects and logging the associations. The logging further includes identifying at least one of the objects involved in the occlusion event by comparing the one or more image features associated with that object with the sample image features from the predetermined set of possible example objects, and updating the logged path data after the identification so that the respective path data is associated with the respective identified object.
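
For illustration, the occlusion logging described above can be sketched as a minimal, hypothetical Python outline; it stands in bounding-box overlap for the "image feature obscures" test and a nearest-feature comparison for the identification step, and all names (Track, log_occlusions, resolve_identity) are placeholders rather than the patented implementation.

    from dataclasses import dataclass, field
    import numpy as np

    @dataclass
    class Track:
        object_id: str                                    # identity from the example-object comparison
        path: list = field(default_factory=list)          # (frame, x, y) positions on the plane model
        candidate_ids: set = field(default_factory=set)   # identities logged while an occlusion is unresolved

    def boxes_overlap(a, b):
        """True if box a obscures the whole or part of box b."""
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

    def log_occlusions(tracks, boxes):
        """Associate both identities with both paths whenever two objects' boxes overlap."""
        ids = list(tracks)
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                if boxes_overlap(boxes[ids[i]], boxes[ids[j]]):
                    tracks[ids[i]].candidate_ids.update({ids[i], ids[j]})
                    tracks[ids[j]].candidate_ids.update({ids[i], ids[j]})

    def resolve_identity(track_feature, example_features):
        """After the occlusion, re-identify the object against the example-object features."""
        return min(example_features,
                   key=lambda k: np.linalg.norm(track_feature - example_features[k]))
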
Owner:SONY CORP

Small sample and zero sample image classification method based on metric learning and meta-learning

The invention relates to the fields of computer vision recognition and transfer learning, and provides a small sample and zero sample image classification method based on metric learning and meta-learning. The method comprises the following steps: constructing a training data set and a target task data set; selecting a support set and a test set from the training data set; inputting the samples of the test set and the support set into a feature extraction network to obtain feature vectors; sequentially inputting the feature vectors of the test set and the support set into a feature attention module and a distance measurement module, calculating the category similarity between the test set samples and the support set samples, and updating the parameters of each module using a loss function; repeating the above steps until the parameters of each module's network converge, completing the training of the modules; and passing the picture to be tested and the training pictures in the target task data set sequentially through the feature extraction network, the feature attention module and the distance measurement module, and outputting the category label with the highest category similarity to obtain the classification result for the picture to be tested.
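
As a rough illustration of the episodic, metric-learning training step outlined above, the sketch below uses PyTorch with a prototypical-network-style distance measurement in place of the patented attention and distance modules; FeatureNet, episode_loss and the tensor shapes are assumptions made only for this example.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FeatureNet(nn.Module):
        """Toy feature extraction network for 1x28x28 images."""
        def __init__(self, dim=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                nn.Flatten(), nn.Linear(32 * 16, dim),
            )

        def forward(self, x):
            return self.net(x)

    def episode_loss(model, support_x, support_y, query_x, query_y, n_way):
        """Embed support and query sets, measure distances to class prototypes, return CE loss."""
        z_s, z_q = model(support_x), model(query_x)
        # Class prototypes: mean support embedding per category (the support-set side).
        protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_way)])
        # Negative squared Euclidean distance plays the role of the category similarity.
        sims = -torch.cdist(z_q, protos) ** 2
        return F.cross_entropy(sims, query_y)

    model = FeatureNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    support_x, support_y = torch.randn(10, 1, 28, 28), torch.arange(5).repeat(2)
    query_x, query_y = torch.randn(15, 1, 28, 28), torch.randint(0, 5, (15,))
    loss = episode_loss(model, support_x, support_y, query_x, query_y, n_way=5)
    loss.backward()
    opt.step()
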
Owner:SUN YAT SEN UNIV

Image processing apparatus and method

An image processing apparatus and method generate a three dimensional representation of a scene which includes a plurality of objects disposed on a plane. The three dimensional representation is generated from one or more video images of the scene, which include the objects on the plane and are produced from a view of the scene by a video camera. The method comprises processing the captured video images so as to extract one or more image features from each object, comparing the one or more image features with sample image features from a predetermined set of possible example objects which the video images may contain, and identifying the objects from the comparison of the image features with the stored image features of the possible example objects. The method also includes generating object path data, which includes object identification data identifying each respective object and provides the position of the object on the plane in the video images with respect to time. The method further includes calculating a projection matrix for projecting the position of each of the objects, according to the object path data, from the plane into a three dimensional model of the plane. In this way, a three dimensional representation of the scene, which includes a synthesised representation of each of the plurality of objects on the plane, can be produced by projecting the positions of the objects according to the object path data into the plane of the three dimensional model of the scene, using the projection matrix and a predetermined assumption of the height of each of the objects. Accordingly, a three dimensional representation of a live video image of, for example, a football match can be generated, or tracking information can be included on the live video images. The relative view of the generated three dimensional representation can also be changed, so that the scene can be viewed in the three dimensional representation from a view point at which no camera is actually present to capture video images of the live scene.
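
A minimal sketch of the plane-projection step follows, assuming a 3x3 homography estimated with OpenCV from four known image-to-pitch correspondences serves as the projection matrix; the pitch dimensions, reference points and assumed player height are illustrative values, not taken from the patent.

    import cv2
    import numpy as np

    # Image-pixel positions of four reference points (e.g. pitch corners) ...
    image_pts = np.array([[120, 400], [1800, 410], [1550, 950], [300, 960]], dtype=np.float32)
    # ... and their coordinates in the three dimensional model of the plane (metres).
    plane_pts = np.array([[0, 0], [105, 0], [105, 68], [0, 68]], dtype=np.float32)

    # The projection matrix (a 3x3 homography) from the image plane to the model plane.
    H, _ = cv2.findHomography(image_pts, plane_pts)

    def project_to_plane(track_xy, height=1.8):
        """Project a tracked image position onto the model plane.

        `height` is the predetermined assumed object height used when placing a
        synthesised representation above the plane (z axis of the model).
        """
        pt = np.array([[track_xy]], dtype=np.float32)      # shape (1, 1, 2)
        x, y = cv2.perspectiveTransform(pt, H)[0, 0]
        return float(x), float(y), height

    print(project_to_plane((960, 700)))
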
Owner:SONY CORP

Zero sample image classification method based on combination of variational autoencoder and adversarial network

Status: Active | Publication: CN108875818A | Effects: implements classification; compensates for missing training samples of unknown categories | Topics: character and pattern recognition, physical realisation, classification methods, sample image
The invention discloses a zero sample image classification method based on the combination of a variational autoencoder and an adversarial network. Samples of known categories are input during model training, with the category mapping of the training-set samples serving as a guiding condition; the network parameters are optimised by back propagation through five loss functions: reconstruction loss, generation loss, discrimination loss, divergence loss and classification loss. Pseudo-samples of the corresponding unknown categories are then generated under the guidance of the category mapping of the unknown categories, and a classifier trained on the pseudo-samples is tested on the samples of the unknown categories. Because high-quality samples beneficial to image classification are generated through the guidance of the category mapping, the lack of training samples for unknown categories in the zero sample scene is overcome, and zero sample learning is converted into supervised learning in traditional machine learning. This improves the classification accuracy of traditional zero sample learning, yields an obvious improvement in classification accuracy for generalized zero sample learning, and provides an approach of efficiently generating samples to improve classification accuracy for zero sample learning.
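
The pseudo-sample stage can be sketched as below, assuming the conditional decoder of the variational autoencoder / adversarial model has already been trained on the known categories; the linear "decoder", the attribute dimensions and the scikit-learn classifier are stand-ins used only to show how zero sample learning is turned into supervised learning.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    ATTR_DIM, LATENT_DIM, FEAT_DIM = 85, 32, 256

    # Placeholder for the trained conditional decoder: feature = [z; attribute] @ W.T
    W = rng.normal(size=(FEAT_DIM, LATENT_DIM + ATTR_DIM))

    def generate_pseudo_samples(attribute, n):
        """Draw latent noise, condition on the class attribute (category mapping), decode pseudo-features."""
        z = rng.normal(size=(n, LATENT_DIM))
        cond = np.hstack([z, np.tile(attribute, (n, 1))])
        return cond @ W.T

    # Category mappings (attribute vectors) for two unknown categories.
    unseen_attrs = {0: rng.normal(size=ATTR_DIM), 1: rng.normal(size=ATTR_DIM)}

    # Generate pseudo-samples per unknown category and train a supervised classifier on them,
    # turning zero sample classification into ordinary supervised learning.
    X = np.vstack([generate_pseudo_samples(a, 200) for a in unseen_attrs.values()])
    y = np.repeat(list(unseen_attrs.keys()), 200)
    classifier = LogisticRegression(max_iter=1000).fit(X, y)

    # At test time, real features of unknown-category images would be passed to classifier.predict.
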
Owner:XI AN JIAOTONG UNIV

Multi-angle face alignment method based on deep learning, system thereof, and photographing terminal

The invention discloses a multi-angle face alignment method based on deep learning, a system thereof and a photographing terminal. Face key points and face rotation angles are marked on face sample images, and the face sample images are input to a convolutional neural network for training, which outputs the different face pose types corresponding to interval ranges of the face rotation angles, so that face angle models for the different face pose types are obtained. Regression training is then performed on the face key point coordinates of the face sample images using the face angle models, so that face alignment models corresponding to the different face pose types are obtained. Finally, an image to be detected is input to the face angle models for face angle detection, and the face alignment model of the corresponding angle is called to perform regression prediction. The method has high precision and robustness, and the models obtained through training occupy little storage space, so it is particularly suited to face alignment applications in which the conditions of the photographed person are complex, the precision requirement is high, and the algorithm must occupy little physical space.
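
A minimal sketch of the two-stage inference described above, in PyTorch: a face angle model classifies the rotation-angle interval (pose type), and the face alignment model for that pose type regresses the key point coordinates; the tiny CNNs, the three pose bins and the 68-landmark count are placeholder assumptions, not the patented architectures.

    import torch
    import torch.nn as nn

    POSE_TYPES = ("left", "frontal", "right")   # assumed rotation-angle interval bins
    NUM_LANDMARKS = 68

    def small_cnn(out_dim):
        """Tiny CNN used here for both the angle model and the alignment models."""
        return nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, out_dim),
        )

    # Face angle model: classifies the rotation-angle interval (pose type).
    angle_model = small_cnn(len(POSE_TYPES))
    # One face alignment model (key point regressor) per pose type.
    alignment_models = {p: small_cnn(NUM_LANDMARKS * 2) for p in POSE_TYPES}

    def align(image):
        """Detect the pose type, then call the alignment model of the corresponding angle."""
        with torch.no_grad():
            pose = POSE_TYPES[angle_model(image).argmax(dim=1).item()]
            landmarks = alignment_models[pose](image).view(-1, NUM_LANDMARKS, 2)
        return pose, landmarks

    pose, landmarks = align(torch.randn(1, 3, 112, 112))
    print(pose, landmarks.shape)
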
Owner:XIAMEN MEITUZHIJIA TECH