1003 results about "Candidate image" patented technology

Cross-camera pedestrian detection and tracking method based on deep learning

The invention discloses a cross-camera pedestrian detection and tracking method based on deep learning, which comprises the steps of: carrying out pedestrian detection on an input monitoring video sequence with a trained pedestrian detection network; initializing tracking targets from the target boxes obtained by pedestrian detection, extracting shallow-layer and deep-layer features of the regions corresponding to candidate boxes in the pedestrian detection network, and carrying out tracking; when a target disappears, carrying out pedestrian re-identification, which comprises: after target disappearance information is obtained, finding the images with the highest matching degree to the disappeared target among the candidate images produced by the pedestrian detection network, and continuing tracking; and when tracking ends, outputting the motion tracks of the pedestrian targets under multiple cameras. The features extracted by the method can overcome the influence of illumination variations and viewing-angle variations; moreover, for both the tracking and pedestrian re-identification parts, the features are extracted from the pedestrian detection network; pedestrian detection, multi-target tracking and pedestrian re-identification are organically fused; and accurate cross-camera pedestrian detection and tracking in large-range scenes are achieved.
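The re-identification step reduces to a feature-matching problem: when a tracked target disappears, its stored feature vector is compared against the features of the detector's current candidate boxes, and the best match above a threshold resumes the track. A minimal sketch of that matching logic, assuming features have already been extracted by the detection network (the function names and threshold are illustrative, not from the patent):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def reidentify(lost_target_feat, candidate_feats, min_score=0.7):
    """Return the index of the candidate box that best matches the lost
    target, or None if no candidate is similar enough to resume tracking.

    lost_target_feat : (D,) feature of the disappeared target
    candidate_feats  : list of (D,) features from current detections
    """
    if not candidate_feats:
        return None
    scores = [cosine_similarity(lost_target_feat, f) for f in candidate_feats]
    best = int(np.argmax(scores))
    return best if scores[best] >= min_score else None

# Toy usage with random features standing in for network activations.
rng = np.random.default_rng(0)
target = rng.normal(size=256)
candidates = [rng.normal(size=256) for _ in range(5)]
candidates[3] = target + 0.05 * rng.normal(size=256)   # near-duplicate of the target
print(reidentify(target, candidates))                   # -> 3
```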
Owner:WUHAN UNIV

Perception-based image retrieval

A content-based image retrieval (CBIR) system has a front-end that includes a pipeline of one or more dynamically-constructed filters for measuring perceptual similarities between a query image and one or more candidate images retrieved from a back-end comprised of a knowledge base accessed by an inference engine. The images include at least one color set having a set of properties including a number of pixels each having at least one color, a culture color associated with the color set, a mean and variance of the color set, a moment invariant, and a centroid. The filters analyze and compare the set of properties of the query image to the set of properties of the candidate images. Various filters are used, including: a Color Mask filter that identifies identical culture colors in the images, a Color Histogram filter that identifies a distribution of colors in the images, a Color Average filter that performs a similarity comparison on the average of the color sets of the images, a Color Variance filter that performs a similarity comparison on the variances of the color sets of the images, a Spread filter that identifies a spatial concentration of a color in the images, an Elongation filter that identifies a shape of a color in the images, and a Spatial Relationship filter that identifies a spatial relationship between the color sets in the images.
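The front-end is described as a pipeline of filters, each comparing one property of the query's color sets against those of the candidates. A minimal sketch of such a pipeline, assuming simple histogram and mean-color comparisons stand in for the patent's filters (the function names and equal weighting are illustrative assumptions):

```python
import numpy as np

def color_histogram_filter(query_img, candidate_img, bins=8):
    """Similarity of coarse RGB color distributions (1 = identical)."""
    def hist(img):
        h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return h.ravel() / h.sum()
    return 1.0 - 0.5 * np.abs(hist(query_img) - hist(candidate_img)).sum()

def color_average_filter(query_img, candidate_img):
    """Similarity of mean colors, scaled to [0, 1]."""
    diff = np.linalg.norm(query_img.reshape(-1, 3).mean(0)
                          - candidate_img.reshape(-1, 3).mean(0))
    return 1.0 - diff / np.linalg.norm([255, 255, 255])

def run_pipeline(query_img, candidate_imgs, filters, weights):
    """Rank candidates by a weighted sum of filter similarities (best first)."""
    scores = []
    for cand in candidate_imgs:
        scores.append(sum(w * f(query_img, cand) for f, w in zip(filters, weights)))
    return np.argsort(scores)[::-1]

# Toy usage with random images standing in for retrieved candidates.
rng = np.random.default_rng(1)
query = rng.integers(0, 256, (32, 32, 3))
cands = [rng.integers(0, 256, (32, 32, 3)) for _ in range(4)] + [query.copy()]
print(run_pipeline(query, cands,
                   [color_histogram_filter, color_average_filter],
                   [0.5, 0.5]))  # the exact copy (index 4) ranks first
```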
Owner:RGT UNIV OF CALIFORNIA

Apparatus and method for partial component facial recognition

A method and system for identifying a human being, or verifying two human beings, by partial component(s) of the face, which may include one or more of the left eye(s), right eye(s), nose(s), mouth(s), left ear(s), or right ear(s). A gallery database for face recognition is constructed from a plurality of human face images by detecting and segmenting a plurality of partial face components from each of the human face images, creating a template for each of the plurality of partial face components, and storing the templates in the gallery database. Templates from a plurality of partial face components of the same human face image are linked with one ID in the gallery database. A probe human face image is identified against the gallery database by detecting and segmenting a plurality of partial face components from the probe human face image; creating a probe template for each of the partial face components from the probe human face image; comparing each of the probe templates against a category of templates in the gallery database to generate similarity scores between the probe templates and templates in the gallery database; generating a plurality of sub-lists of candidate images having partial-face-component templates with the highest similarity scores over a first preset threshold; generating, for each candidate image from each sub-list, a combined similarity score; and generating a final list of candidates from those candidates whose combined similarity scores exceed a second preset threshold.
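The matching logic amounts to per-component similarity search followed by two thresholding stages: component-level sub-lists, then a fused score per candidate. A minimal sketch of that fusion, assuming precomputed component templates and a simple average as the combination rule (both thresholds and the averaging are illustrative assumptions, not the patent's exact scheme):

```python
import numpy as np

def component_similarity(t1, t2):
    """Cosine similarity between two component templates."""
    return float(np.dot(t1, t2) / (np.linalg.norm(t1) * np.linalg.norm(t2) + 1e-12))

def identify(probe, gallery, first_thr=0.6, second_thr=0.7):
    """probe   : {component_name: template vector}
    gallery : {image_id: {component_name: template vector}}
    Returns the final candidate list as (image_id, combined_score) pairs."""
    # Stage 1: per-component sub-lists over the first threshold.
    sublists = {}
    for comp, probe_t in probe.items():
        scored = []
        for img_id, templates in gallery.items():
            if comp in templates:
                s = component_similarity(probe_t, templates[comp])
                if s >= first_thr:
                    scored.append((img_id, s))
        sublists[comp] = scored

    # Stage 2: combined score for every candidate appearing in any sub-list,
    # kept only if it clears the second threshold.
    combined = {}
    for scored in sublists.values():
        for img_id, s in scored:
            combined.setdefault(img_id, []).append(s)
    final = [(img_id, float(np.mean(scores)))
             for img_id, scores in combined.items()
             if np.mean(scores) >= second_thr]
    return sorted(final, key=lambda x: x[1], reverse=True)

# Toy usage with random 64-d templates for two gallery identities.
rng = np.random.default_rng(2)
gallery = {"id_A": {"left_eye": rng.normal(size=64), "nose": rng.normal(size=64)},
           "id_B": {"left_eye": rng.normal(size=64), "nose": rng.normal(size=64)}}
probe = {"left_eye": gallery["id_A"]["left_eye"] + 0.05 * rng.normal(size=64),
         "nose": gallery["id_A"]["nose"] + 0.05 * rng.normal(size=64)}
print(identify(probe, gallery))  # id_A should dominate the final list
```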
Owner:INTELITRAC

Hierarchical landmark identification method integrating global visual characteristics and local visual characteristics

The invention discloses a hierarchical landmark identification method integrating global visual characteristics and local visual characteristics. High-dimensional characteristic vectors of landmark images are obtained and are used as the global visual characteristics of the landmark images; the local visual characteristics of the landmark images are obtained; the global visual characteristics and the local visual characteristics are stored by adopting a hierarchical tree-shaped structure, and a visual characteristic set is obtained; each image is characterized according to the visual characteristic set; the images are pre-retrieved according to the global visual characteristics xi, and first candidate images are obtained; the first candidate images are further retrieved according to statistical characteristics vi of local outstanding points, and second candidate images are obtained; and the second candidate images are further retrieved according to a characteristic set yi of the local outstanding points, and final candidate images are obtained and are fed back to a user. By adopting the hierarchical landmark identification method, the images to be identified can be rapidly and accurately retrieved, so the requirement of the user for convenient information acquisition is satisfied; and besides, through removing certain mismatching points, the landmark identification accuracy is improved and the landmark identification complexity is reduced.
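The three retrieval stages form a coarse-to-fine funnel: global vectors xi prune most of the database, the statistics vi of the local outstanding points prune further, and the full local feature sets yi decide the final candidates. A minimal sketch of that funnel, with Euclidean distances standing in for whatever matching the patent actually uses (the stage sizes and distance choices are illustrative assumptions):

```python
import numpy as np

def top_k(query, items, k):
    """Indices of the k items closest to the query (Euclidean distance)."""
    d = [np.linalg.norm(query - it) for it in items]
    return list(np.argsort(d)[:k])

def hierarchical_retrieve(global_q, local_stats_q, local_set_q, db,
                          k1=100, k2=20, k3=5):
    """db: list of dicts with keys 'global' (xi), 'stats' (vi), 'locals' (yi)."""
    # Stage 1: pre-retrieve by global visual characteristics xi.
    idx1 = top_k(global_q, [e["global"] for e in db], k1)
    # Stage 2: re-rank the survivors by local-point statistics vi.
    idx2 = [idx1[i] for i in top_k(local_stats_q,
                                   [db[i]["stats"] for i in idx1], k2)]
    # Stage 3: final ranking by the local feature sets yi
    # (mean nearest-neighbour distance between the two point sets).
    def set_distance(a, b):
        return float(np.mean([min(np.linalg.norm(p - q) for q in b) for p in a]))
    d3 = [set_distance(local_set_q, db[i]["locals"]) for i in idx2]
    return [idx2[i] for i in np.argsort(d3)[:k3]]

# Toy usage: 200 database images with random features; the query is entry 42.
rng = np.random.default_rng(3)
db = [{"global": rng.normal(size=128),
       "stats": rng.normal(size=16),
       "locals": rng.normal(size=(10, 32))} for _ in range(200)]
q = db[42]
print(hierarchical_retrieve(q["global"], q["stats"], q["locals"], db)[:1])  # [42]
```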
Owner:TIANJIN UNIV

Three-dimensional point cloud reconstruction method based on multiple uncalibrated images

The invention discloses a three-dimensional point cloud reconstruction method based on multiple uncalibrated images. The method comprises the following steps: obtaining an image sequence of objects shot at different angles as an input set; obtaining feature matching point pairs of images through feature extraction and matching, and carrying out dense diffusion treatment; selecting feature points of candidate images as seed points to be matched, diffused and filtered towards the peripheral area of the points, and obtaining dense matching point pairs; calibrating the camera, and combining the matching point pairs to obtain the internal and external parameters; recovering three-dimensional model points according to the camera parameters and the matching point pairs; carrying out reconstruction, selecting seed model points to generate initial patches, and diffusing the patches in the grid area to obtain dense patches; and filtering erroneous patches according to constraint conditions to obtain an accurate dense three-dimensional point cloud model. According to the method, a high-precision dense point cloud model can be obtained quickly, model generation is accelerated, matching density and accuracy are increased, and the density and accuracy of the three-dimensional point cloud are improved.
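Once the internal and external camera parameters have been recovered and matching point pairs are available, recovering a three-dimensional model point is a linear triangulation problem. A minimal sketch of two-view DLT triangulation, assuming the projection matrices have already been calibrated; it illustrates only the "recover a three-dimensional model point" step, not the patent's full diffusion and patch pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2 : (3, 4) camera projection matrices (intrinsics @ extrinsics)
    x1, x2 : (2,) pixel coordinates of the matched point pair
    Returns the 3D point in world coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy usage: project a known point with two synthetic cameras, then recover it.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])   # shifted baseline
X_true = np.array([0.2, -0.1, 4.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

print(np.allclose(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)),
                  X_true))  # True
```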
Owner:XIDIAN UNIV

High-resolution remote sensing target extraction method based on multi-scale semantic model

The invention discloses a high-resolution remote sensing target extraction method based on a multi-scale semantic model, and relates to remote sensing image technology. The high-resolution remote sensing target extraction method comprises the following steps: establishing a high-resolution remote sensing ground-object target image data set; performing multi-scale segmentation on images in a training set, and obtaining candidate image area blocks of the target; establishing a semantic model of the target, and calculating the implied category semantic features of the target; performing semantic feature analysis on candidate image blocks at all levels; and finally, calculating a semantic correlation coefficient between the candidate area and the target model, and extracting the target by maximizing the semantic correlation coefficient. The method extracts the target in the high-resolution remote sensing image by comprehensively utilizing multi-scale image segmentation and target category semantic information; the extraction result is accurate, the robustness and applicability are high, the degree of manual involvement is reduced, and the method has practical value in the construction of geographic information systems and digital earth systems.
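The final decision rule is to score each candidate image block against the target's semantic model and keep the block that maximizes the semantic correlation coefficient. A minimal sketch of that scoring step, assuming the implied category semantics are represented as category-probability vectors and using Pearson correlation as the coefficient (both choices are illustrative assumptions, not the patent's exact definitions):

```python
import numpy as np

def semantic_correlation(candidate_sem, target_sem):
    """Pearson correlation between two category-semantic feature vectors."""
    return float(np.corrcoef(candidate_sem, target_sem)[0, 1])

def extract_target(candidate_blocks, target_model):
    """Pick the multi-scale candidate block whose semantic features
    correlate best with the target's semantic model.

    candidate_blocks : list of (block_id, semantic feature vector)
    target_model     : semantic feature vector of the target category
    """
    scores = [(block_id, semantic_correlation(sem, target_model))
              for block_id, sem in candidate_blocks]
    return max(scores, key=lambda s: s[1])

# Toy usage: 4 candidate blocks over hypothetical category distributions.
rng = np.random.default_rng(4)
target_model = np.array([0.7, 0.1, 0.1, 0.1])
blocks = [("blk%d" % i, rng.dirichlet(np.ones(4))) for i in range(3)]
blocks.append(("blk3", np.array([0.65, 0.15, 0.1, 0.1])))
print(extract_target(blocks, target_model))  # blk3 is expected to win
```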
Owner:JIGANG DEFENSE TECH CO LTD

Target tracking method based on difficult positive sample generation

The invention discloses a target tracking method based on difficult positive sample generation. According to the method, for each video in the training data, a variational auto-encoder is used to learn the corresponding manifold, namely a positive sample generation network; the codes obtained after encoding an input image are slightly perturbed, and a large quantity of positive samples are generated. The positive samples are input into a difficult-positive-sample conversion network, in which an agent is trained to occlude the target object with a background image block; the agent adjusts the bounding box continuously so that the samples become difficult to recognize, the purpose of difficult positive sample generation is achieved, and occluded difficult positive samples are output. Based on the generated difficult positive samples, a twin (Siamese) network is trained and used for matching between a target image block and candidate image blocks, and positioning of the target in the current frame is completed until processing of the whole video is finished. According to the target tracking method based on difficult positive sample generation, the manifold distribution of the target is learnt directly from the data, and a large quantity of diversified positive samples can be obtained.
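The tracking stage itself is a matching problem between the target image block and candidate image blocks in the current frame, handled by the trained twin network. A minimal sketch of that matching step, assuming both branches share one embedding function (the random-projection embedding here is a stand-in for the trained network, not the patent's model):

```python
import numpy as np

def embed(patch):
    """Stand-in for one branch of the twin network: a fixed random
    projection of the flattened patch (same 'weights' for both branches)."""
    rng = np.random.default_rng(42)
    W = rng.normal(size=(64, patch.size))
    v = W @ patch.ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def locate_target(target_patch, candidate_patches):
    """Return the index of the candidate block most similar to the target."""
    t = embed(target_patch)
    scores = [float(t @ embed(c)) for c in candidate_patches]
    return int(np.argmax(scores)), scores

# Toy usage: the third candidate is a lightly perturbed copy of the target.
rng = np.random.default_rng(5)
target = rng.normal(size=(16, 16))
candidates = [rng.normal(size=(16, 16)) for _ in range(4)]
candidates[2] = target + 0.05 * rng.normal(size=(16, 16))
idx, _ = locate_target(target, candidates)
print(idx)  # -> 2
```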
Owner:ANHUI UNIVERSITY