1837 results about "Feature detection" patented technology

Feature detection is a process by which specialized nerve cells in the brain respond to specific features of a visual stimulus, such as lines, edges, angles, or movement. These nerve cells fire selectively in response to stimuli that have specific characteristics, e.g., a particular shape, angle, or movement. Feature detection was discovered by David Hubel and Torsten Wiesel of Harvard University, an accomplishment for which they shared the 1981 Nobel Prize. In computer vision, feature detection usually refers to the computation of local image features as intermediate results for making local decisions about the local information content of the image; see also the article on interest point detection. In psychology, feature detectors are neurons in the visual cortex that receive visual information and respond to certain features such as lines, angles, and movements. When the visual information changes, the feature detector neurons quiet down and other, more responsive neurons take over.
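As a toy illustration of the computer-vision sense of feature detection (local image features computed from local pixel neighborhoods), the sketch below scores each pixel with a Sobel gradient response and keeps the strong ones. This is a deliberately minimal pure-Python example; practical systems use optimized detectors (Harris, SIFT, SURF, ORB) from libraries such as OpenCV.

```python
# Toy local feature detection: Sobel gradient response per pixel,
# computed in pure Python on a 2-D grayscale image (nested lists).

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def edge_response(img, y, x):
    """Squared gradient magnitude at pixel (y, x)."""
    gx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    return gx * gx + gy * gy

def detect_edges(img, threshold):
    """Return (y, x) coordinates whose edge response exceeds the threshold."""
    h, w = len(img), len(img[0])
    return [(y, x) for y in range(1, h - 1) for x in range(1, w - 1)
            if edge_response(img, y, x) > threshold]

# A 5x5 image with a vertical step edge between columns 1 and 2:
image = [[0, 0, 9, 9, 9]] * 5
print(detect_edges(image, 100))  # detections cluster along the step edge
```

The "local decision" here is the threshold test: each pixel is classified as feature/non-feature using only its 3x3 neighborhood.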

Automatic mask design and registration and feature detection for computer-aided skin analysis

Active · US20090196475A1 · Avoiding skin regions not useful or amenable · Character and pattern recognition · Diagnostic recording/measuring · Diagnostic Radiology Modality · Nose
Methods and systems for automatically generating a mask delineating a region of interest (ROI) within an image containing skin are disclosed. The image may be of an anatomical area containing skin, such as the face, neck, chest, shoulders, arms or hands, among others, or may be of portions of such areas, such as the cheek, forehead, or nose, among others. The mask that is generated is based on the locations of anatomical features or landmarks in the image, such as the eyes, nose, eyebrows and lips, which can vary from subject to subject and image to image. As such, masks can be adapted to individual subjects and to different images of the same subjects, while delineating anatomically standardized ROIs, thereby facilitating standardized, reproducible skin analysis over multiple subjects and/or over multiple images of each subject. Moreover, the masks can be limited to skin regions that include uniformly illuminated portions of skin while excluding skin regions in shadow or hot-spot areas that would otherwise provide erroneous feature analysis results. Methods and systems are also disclosed for automatically registering a skin mask delineating a skin ROI in a first image captured in one imaging modality (e.g., standard white light, UV light, polarized light, multi-spectral absorption or fluorescence imaging, etc.) onto a second image of the ROI captured in the same or another imaging modality. Such registration can be done using linear as well as non-linear spatial transformation techniques.
Owner:CANFIELD SCI
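The abstract above builds a mask from detected anatomical landmarks. A minimal sketch of that final step, assuming landmark detection has already happened upstream, is to rasterize the polygon the landmarks delineate with even-odd ray casting; this is an illustration of the idea, not Canfield's patented method.

```python
# Hypothetical sketch: binary ROI mask from landmark points, by testing
# each pixel center against the landmark polygon (even-odd ray casting).

def point_in_polygon(x, y, polygon):
    """Even-odd rule: count polygon-edge crossings to the right of (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

def landmark_mask(width, height, landmarks):
    """1 for pixels whose center lies inside the landmark polygon, else 0."""
    return [[1 if point_in_polygon(x + 0.5, y + 0.5, landmarks) else 0
             for x in range(width)] for y in range(height)]

# Example: a square ROI delineated by four landmark points.
mask = landmark_mask(6, 6, [(1, 1), (5, 1), (5, 5), (1, 5)])
```

A real skin-analysis mask would additionally carve out shadowed or hot-spot regions, as the abstract describes, e.g. by AND-ing this mask with an illumination-uniformity mask.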

Real-time panoramic image stitching method of aerial videos shot by unmanned plane

Active · CN102201115A · Realize the transformation relationship · Quickly achieve registration · Television system details · Image enhancement · Global Positioning System · Time effect
The invention discloses a real-time panoramic stitching method for aerial video shot by an unmanned aerial vehicle (UAV). The method comprises: using a video capture card to acquire the images transmitted in real time from the UAV to a base station over microwave channels, selecting key frames from the image sequence, and applying image enhancement to the key frames. In the stitching stage, feature detection and inter-frame matching are first performed on the image frames with the robust SURF (speeded-up robust features) detector; the accumulated error from chained multiplications is then reduced by adopting frame-to-mosaic image transformations, frames that are not adjacent in time but are spatially adjacent along the flight path are identified from the UAV's GPS (global positioning system) position information, the frame-to-mosaic transformations are optimized, and the image overlap regions are determined, thereby achieving image fusion and panorama construction in real time, with stitching carried out while the UAV is still flying. In the image transformation step, both temporally adjacent frames in the field of view and spatially adjacent frames are used to optimize the transformations and obtain an accurate panorama. The stitching method has good real-time performance, is fast and accurate, and meets the requirements of application scenarios in many fields.
Owner:HUNAN AEROSPACE CONTROL TECH CO LTD
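The abstract's motivation for frame-to-mosaic registration is that naively chaining per-frame homographies multiplies their errors into the final transform. A minimal sketch of that chaining (the thing the patent avoids), using plain 3x3 matrix products over translation-only homographies, is:

```python
# Sketch of accumulated frame-to-frame transform chaining (the baseline the
# frame-to-mosaic approach improves on); not the patented implementation.

def matmul3(a, b):
    """Multiply two 3x3 matrices (homographies) given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def frame_to_frame_chain(pairwise):
    """Compose pairwise homographies H_0->1, H_1->2, ... into H_0->n.
    Any error in an individual H_i is multiplied into the total."""
    total = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    for h in pairwise:
        total = matmul3(total, h)
    return total

# Two pure translations: frame 1 shifted by (10, 0), frame 2 by (5, 2) more.
h01 = [[1, 0, 10], [0, 1, 0], [0, 0, 1]]
h12 = [[1, 0, 5], [0, 1, 2], [0, 0, 1]]
h02 = frame_to_frame_chain([h01, h12])  # translation accumulates to (15, 2)
```

In a frame-to-mosaic scheme, each new frame would instead be registered directly against the growing mosaic (and, per the abstract, against GPS-determined spatial neighbors), so the composed transform is continually re-estimated rather than allowed to drift.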

Improved method of RGB-D-based SLAM algorithm

Inactive · CN104851094A · Matching result optimization · High speed · Image enhancement · Image analysis · Point cloud · Estimation methods
Disclosed in the invention is an improved RGB-D-based simultaneous localization and mapping (SLAM) algorithm. The method comprises two parts, a front end and a back end. The front end performs feature detection and descriptor extraction, feature matching, motion transformation estimation, and motion transformation optimization. The back end initializes a pose graph from the 6-D motion transformations obtained by the front end, performs closed-loop detection to add loop-closure constraints, optimizes the pose graph with a nonlinear error-function optimization method to obtain globally optimal camera poses and the camera trajectory, and then reconstructs the three-dimensional environment. In this invention, feature detection and descriptor extraction are carried out with the ORB method, and feature points with invalid depth information are filtered out; bidirectional feature matching is carried out with a FLANN-based KNN method, and the matching result is refined using a homography transformation; a precise set of inlier matching pairs is obtained with an improved RANSAC motion-estimation method; and the speed and precision of point-cloud registration are improved with a GICP-based motion transformation optimization method.
Owner:XIDIAN UNIV
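The RANSAC step above fits a motion model to feature matches while rejecting outliers. The sketch below shows the standard RANSAC loop for a deliberately simple 2-D translation model (one match suffices as the minimal sample); the patent's improved variant, and a full 6-D motion model, are not reproduced here.

```python
# Illustrative RANSAC for robust motion estimation between matched points.
# Model: pure 2-D translation, so the minimal sample is a single match.

import random

def ransac_translation(matches, threshold=1.0, iterations=100, seed=0):
    """matches: list of ((x1, y1), (x2, y2)) matched point pairs.
    Returns the translation (dx, dy) with the most inliers, and the inliers."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    best_model, best_inliers = None, []
    for _ in range(iterations):
        (x1, y1), (x2, y2) = rng.choice(matches)   # minimal random sample
        dx, dy = x2 - x1, y2 - y1                  # hypothesized motion
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= threshold
                   and abs(m[1][1] - m[0][1] - dy) <= threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

# Eight consistent matches shifted by (3, -1), plus two gross outliers:
good = [((x, y), (x + 3, y - 1)) for x, y in
        [(0, 0), (1, 5), (2, 2), (4, 7), (5, 1), (6, 6), (7, 3), (8, 8)]]
bad = [((0, 0), (40, 40)), ((1, 1), (-30, 9))]
model, inliers = ransac_translation(good + bad)
```

The inlier set returned here plays the role of the abstract's "precise inlier matching point pairs": it feeds the subsequent refinement step (GICP-based optimization in the patent).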

Method for automatic correction and tiled display of plug-and-play large screen projections

The invention discloses a method for automatic correction and tiled display of plug-and-play large-screen projections. The method comprises the following steps: adaptively generating a checkerboard pattern at a suitable resolution and projecting it with each projector in turn; capturing the projections with cameras and, using a multi-feature detection method based on color and geometry, detecting and identifying the checkerboard features on complicated projection surfaces and under the prevailing illumination conditions; and fitting a Bezier curve function to represent the point correspondence between projector images and camera images. The effective display area of the projection screen is then obtained with a fast approximation method, the correspondence between each projector's content and the display area is determined, the images to be projected are geometrically warped for geometric correction, and per-pixel weights in the projection overlap regions are computed with a distance-based nonlinear weight distribution method to blend edge brightness. Multiple projected images on irregular surfaces can thus be aligned and seamlessly spliced; the whole method is simple and easy to use, largely autonomous, and achieves good seamless splicing performance.
Owner:OCEAN UNIV OF CHINA
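The final blending step above assigns each pixel in an overlap region a weight per projector based on its distance from that projector's overlap boundary. A hedged sketch of one such scheme follows; the power-law falloff and normalization are assumptions for illustration, not the patented formula.

```python
# Assumed distance-based nonlinear edge-blending weights: each projector's
# weight at a pixel grows with the pixel's distance from that projector's
# overlap boundary, with a power-law falloff, normalized to sum to 1.

def blend_weights(distances, power=2.0):
    """distances: the pixel's distance to each projector's overlap edge.
    Returns one normalized weight per projector."""
    raw = [d ** power for d in distances]
    total = sum(raw)
    return [r / total for r in raw]

# A pixel 3 units inside projector A's region and 1 unit inside B's:
wa, wb = blend_weights([3.0, 1.0])
# Projector A dominates (wa = 0.9, wb = 0.1), and the weights sum to 1,
# so the blended overlap brightness matches the non-overlap regions.
```

Normalizing the weights to sum to 1 is what prevents the overlap band from appearing brighter than the rest of the mosaic; the nonlinear falloff makes the hand-off between projectors gradual rather than a visible seam.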