340 results for "Projection model" patented technology

Parallax optimization algorithm-based binocular stereo vision automatic measurement method

Inactive · CN103868460A · Accurate and automatic acquisition · Complete 3D point cloud information · Image analysis · Using optical means · Binocular stereo · Non-targeted
The invention discloses a parallax optimization algorithm-based binocular stereo vision automatic measurement method. The method comprises the steps of: 1, obtaining a corrected binocular view; 2, matching with a stereo matching algorithm, taking the left view as the base map, to obtain a preliminary disparity map; 3, masking the corrected left view so that the target object area keeps its original color and all non-target areas are set to black; 4, acquiring a complete disparity map of the target object area; 5, converting the complete disparity map into a three-dimensional point cloud according to a projection model; 6, reprojecting the three-dimensional point cloud coordinates to compose a coordinate-related pixel map; 7, using a morphology method to automatically measure the length and width of the target object. The method simplifies the binocular measurement procedure; reduces the influence of specular reflection, foreshortening, perspective distortion, low texture and repeated texture on smooth surfaces; realizes automatic, intelligent measurement; widens the application range of binocular measurement; and provides technical support for subsequent robot binocular vision.
Owner:GUILIN UNIV OF ELECTRONIC TECH
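Step 5 of the abstract above, back-projecting a disparity map into a 3D point cloud through the projection model, can be sketched as follows. This is a minimal illustration assuming a rectified pinhole stereo rig with focal length `f`, baseline `baseline` and principal point `(cx, cy)`; the parameter names are placeholders, not the patent's implementation:

```python
import numpy as np

def disparity_to_point_cloud(disparity, f, baseline, cx, cy):
    """Back-project a disparity map to a 3D point cloud using the
    pinhole projection model (rectified stereo assumed)."""
    h, w = disparity.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                  # skip unmatched pixels
    Z = f * baseline / disparity[valid]    # depth from disparity
    X = (us[valid] - cx) * Z / f
    Y = (vs[valid] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)     # (n, 3) point cloud
```

A disparity of 4 px with f = 400 px and a 0.1 m baseline then yields a depth of 10 m for every valid pixel.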

Method for calibrating external parameters of monitoring camera by adopting reference height

Inactive · CN102103747A · Meet application needs · Guaranteed calibration measurement accuracy · Image analysis · Terrain · Horizon
The invention discloses a method for calibrating the external parameters of a monitoring camera using a reference of known height. The method comprises the following steps: describing the vision model based on the far vanishing point and the lower vanishing point, and establishing the related coordinate systems; computing the projected coordinates of the far and lower vanishing points in the image plane from reference plumb-line information; calculating the camera height, overhead view angle and magnification factor from the calibrated reference height; designing a horizon inclination angle calibration tool based on the reference plumb direction; and designing and using perspective projection model-based three-dimensional measurement software with three measurement tools, namely an equal-height change scale, a ground-level distance measurement scale and a field-depth reconstruction framework. The calibration method is simple and convenient to operate, quick to compute and high in measurement precision. The reference can be a pedestrian, a piece of furniture or an automobile, and no special ground mark line is needed. The method allows the camera to be mounted at a low position, even with a slightly upward view angle, as long as the bottom of the reference is clearly visible in the video and the ground-level coordinate system is definite.
Owner:INST OF ELECTRONICS CHINESE ACAD OF SCI
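One piece of the calibration above, recovering the overhead view (tilt) angle from the horizon row in the image, can be sketched under a plain perspective projection model: a camera pitched downward by θ sees the horizon f·tan(θ) pixels above the principal point. This simplified relation is an illustration, not the patent's full vanishing-point procedure:

```python
import math

def tilt_from_horizon(v_horizon, cy, f):
    """Estimate camera pitch (overhead view angle, radians) from the
    image row of the horizon under a pinhole projection model.
    v_horizon: horizon row, cy: principal point row, f: focal (px)."""
    return math.atan2(cy - v_horizon, f)
```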

Picture composing apparatus and method

A picture composing apparatus designed to combine a plurality of images taken by a plurality of image pickup devices. In the apparatus, a first projecting unit projects the plurality of images taken by the image pickup devices onto a projection section in accordance with an image pickup situation of the image pickup devices to generate a plurality of first projected images, and a second projecting unit projects the plurality of first projected images onto a three-dimensional projection model to generate a second projected image. Also included in the apparatus are a virtual image pickup device for virtually picking up the second projected image and an image pickup situation determining unit for determining an image pickup situation of the virtual image pickup device, whereby the second projected image is picked up by the virtual image pickup device in the image pickup situation determined by the pickup situation determining unit to combine the plurality of images taken by the plurality of image pickup devices, thus producing a high-quality composite picture. This apparatus can offer a natural composite picture in which the joints among the images taken by the image pickup devices do not stand out. In addition, when mounted on a motor vehicle, this apparatus allows a driver to easily grasp the surrounding situation and the positional relationship of the motor vehicle with respect to other objects.
Owner:PANASONIC CORP
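The two-stage pipeline described above (per-camera projection, fusion onto a shared 3D projection model, then rendering by a virtual camera) can be summarized as a composition of three stages. All callables here are placeholders standing in for the apparatus's units:

```python
def compose(images, project_first, fuse_onto_model, virtual_render):
    """Two-stage picture composition: each source image is projected
    according to its camera's pickup situation, the first projected
    images are fused onto a 3D projection model, and a virtual camera
    renders the final composite. All stages are placeholder callables."""
    first_projected = [project_first(img) for img in images]
    model = fuse_onto_model(first_projected)
    return virtual_render(model)
```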

Fisheye image correction method of vehicle panoramic display system based on spherical projection model and inverse transformation model

A fisheye lens must be adopted by a vehicle panoramic display system to obtain a 180-degree wide field of view; however, the resulting image is severely deformed. The invention discloses a fisheye image correction method for a vehicle panoramic display system based on a spherical projection model and an inverse transformation model. The method comprises the following steps: (1) determining the area A to be displayed in the aerial view, and establishing a world coordinate system to position the display area; (2) obtaining the coordinates B of the display area in a spherical longitude-latitude mapping coordinate system according to the camera mounting parameters; (3) determining the position C of the coordinates B in the original image collected by the fisheye lens according to the fisheye spherical projection model; (4) establishing a coordinate transformation from C to A through inverse projection transformation; and (5) performing bilinear interpolation at non-integer points to obtain a complete aerial view in a given direction around the vehicle. The algorithm is simple to implement, highly general and suitable for real-time use.
Owner:丹阳科美汽车部件有限公司 (Danyang Kemei Automobile Parts Co., Ltd.)
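Step (5) above, sampling the fisheye source image at the non-integer positions produced by the inverse mapping, is ordinary bilinear interpolation. A minimal sketch (single-channel image, no boundary handling beyond clamping):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a single-channel image at non-integer (x, y) with
    bilinear interpolation, as used when warping fisheye pixels
    into the bird's-eye view."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    x1 = min(x0 + 1, img.shape[1] - 1)          # clamp at borders
    y1 = min(y0 + 1, img.shape[0] - 1)
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bot
```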

Road condition camera calibration method in a traffic monitoring environment

Inactive · CN101118648A · Improve robustness · Meet the requirements of high-precision camera calibration · Television system details · Image analysis · Monitoring system · Model parameters
A road condition camera calibration method for traffic monitoring environments includes the following calibration steps. (1) Vision model description and related coordinate system creation: according to the performance requirements of the monitoring system, the classical Tsai perspective projection model is taken as a reference and amended to reflect the characteristics of road condition imaging, yielding a new vision model, and three coordinate systems are established. (2) Calibration of the camera principal point and scale factor: the optical flow of the monitored image is used as the basic calibration element; using the camera's zoom motion, and taking as constraint the difference between the optical flow field of a reference-frame predicted image and that of a real-time sampled frame, a constraint equation is established by the least squares method, and the principal point coordinates and the actual magnification coefficient of the camera are identified by Powell's direction set method. (3) Calibration object selection and linear parameter estimation. (4) Refinement of the internal and external camera parameters: from all corner points of the monitored image and the corresponding world coordinate points, the Levenberg-Marquardt optimization algorithm is adopted to refine the camera model parameters, completing the camera calibration.
Owner:NANJING UNIV
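The refinement in step (4) minimizes a reprojection residual over corner correspondences. As a hedged illustration, a bare pinhole model with parameters `(f, cx, cy, tz)` stands in here for the full Tsai model; a Levenberg-Marquardt solver would minimize the returned residual vector:

```python
import numpy as np

def reprojection_error(params, world_pts, image_pts):
    """Residuals for parameter refinement: project world corner points
    with the current camera parameters and compare with the observed
    image points. (f, cx, cy, tz) is a hypothetical reduced parameter
    set standing in for the full camera model."""
    f, cx, cy, tz = params
    Z = world_pts[:, 2] + tz                   # depth after translation
    u = f * world_pts[:, 0] / Z + cx
    v = f * world_pts[:, 1] / Z + cy
    return np.concatenate([u - image_pts[:, 0], v - image_pts[:, 1]])
```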

Fish-eye lens correction method and device and portable terminal

The invention is applicable to the digital image processing field and provides a fisheye lens correction method and device and a portable terminal. The method comprises the following steps: carrying out distortion correction on a fisheye image through an equidistant projection model to obtain an initial corrected image whose content is a calibration board; obtaining all corner points of the calibration board from the initial corrected image, re-projecting the corner point coordinates to the fisheye image to obtain rough corner point coordinates of the calibration board in the fisheye image, and taking these corner points as feature points in the fisheye image; and iterating over the feature points in the fisheye image with different parameter values within a preset parameter range, correcting the fisheye lens through the equidistant projection model each time, and taking the parameter value at which the deviation is minimum as the accurate correction value of the fisheye lens. The method reduces early-stage preparation workload and keeps the operation simple; the camera does not need to change position or angle; and the method is wide in application range, does not require setting many parameters, is simple to compute, and achieves an accurate fisheye correction effect.
Owner:ARKMICRO TECH
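The equidistant projection model referenced above maps a ray at incidence angle θ to image radius r = f·θ, whereas a pinhole camera maps it to r = f·tan(θ); correcting a fisheye pixel amounts to rescaling its radius between the two. A minimal per-pixel sketch (valid for θ < π/2, not the patent's iterative parameter search):

```python
import math

def undistort_equidistant(u, v, cx, cy, f):
    """Map a fisheye pixel (u, v) to its rectilinear position under
    the equidistant model r = f*theta (pinhole: r = f*tan(theta))."""
    r = math.hypot(u - cx, v - cy)
    if r == 0:
        return (u, v)                     # principal point is fixed
    theta = r / f                         # incidence angle from r = f*theta
    scale = math.tan(theta) * f / r       # rescale radius to f*tan(theta)
    return (cx + (u - cx) * scale, cy + (v - cy) * scale)
```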

Method for reconstructing a positron emission tomography image

The invention provides a method for reconstructing a positron emission tomography image. The method includes: establishing a probability model of scattered-photon event projection, described by specified model parameters, according to the Compton scattering principle governing scattered-photon event projection; establishing a point spread function of scattered-photon events according to that probability model; establishing a point spread function of non-scattered-photon events according to the probability model and the scattered-photon point spread function; and, by means of an iterative reconstruction algorithm, reconstructing a PET (positron emission tomography) image from both scattered-photon and non-scattered-photon events according to the two point spread functions. The method improves detection efficiency; in clinical application it substantially reduces the radiation dose borne by the examined subject and the operator, shortens detection time, improves utilization efficiency, perfects data sampling, simplifies the detector structure and reduces detector cost.
Owner:INST OF HIGH ENERGY PHYSICS CHINESE ACAD OF SCI
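A common iterative reconstruction algorithm of the kind the abstract refers to is MLEM (maximum-likelihood expectation maximization), where the system matrix is built from the point spread functions. The following is a generic MLEM sketch, not the patent's scatter-aware variant:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Basic MLEM iterative reconstruction.
    A: system (projection) matrix built from point spread functions,
    y: measured counts per detector bin. Returns the image estimate."""
    x = np.ones(A.shape[1])                   # uniform initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        proj[proj == 0] = 1e-12               # avoid division by zero
        x *= (A.T @ (y / proj)) / sens        # multiplicative update
    return x
```

With an identity system matrix the update converges to the measured counts in one iteration, a convenient sanity check.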

Drive assisting system

An object of the present invention is to provide a driving assistance system capable of displaying a picked-up image near a vehicle as a less-distorted image on a monitor.
A driving assistance system according to the present invention comprises an imaging means (2) for picking up a surrounding image of a vehicle (1) on a road surface, an image translating means (3) for executing an image translation by using a three-dimensional projection model (300), which is convex toward the road surface side and whose height from the road surface is unchanged within a predetermined range from a top end portion of the vehicle (1) in the traveling direction, to translate the image picked up by the imaging means (2) into an image viewed from a virtual camera (2a), and a displaying means (4) for displaying the image translated by the image translating means (3). The three-dimensional projection model (300) is configured by a cylindrical surface model (301) that is convex toward the road surface side, and a spherical surface model (302) connected to an end portion of the cylindrical surface model (301). Accordingly, a straight line on the road surface that is parallel to the center axis of the cylindrical surface model (301) is displayed as a straight line, and thus a clear and less-distorted screen can be provided to the driver.
Owner:PANASONIC CORP
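The projection surface described above is flat near the vehicle and curves upward farther away. A hypothetical one-dimensional cross-section of such a surface, with the curved part taken as a circular arc of radius `R`, might look like this (an illustrative parametrization, not the patent's exact geometry):

```python
import math

def model_height(d, flat_range, R):
    """Height of a simplified convex projection surface above the road
    at distance d from the vehicle: zero within flat_range (so nearby
    road detail is undistorted), then rising along a circular arc of
    radius R (the cylindrical cross-section)."""
    if d <= flat_range:
        return 0.0
    t = min(d - flat_range, R)            # clamp at the arc's extent
    return R - math.sqrt(R * R - t * t)
```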

Method for reconstructing target three-dimensional scattering center of inverse synthetic aperture radar

The invention provides a method for reconstructing the target three-dimensional scattering centers of inverse synthetic aperture radar (ISAR). The method comprises the following steps: conducting continuous image formation on the echo data after motion compensation to obtain an ISAR two-dimensional image sequence; conducting horizontal and vertical scaling on the ISAR two-dimensional image sequence to obtain the position coordinates of the scattering centers; extracting the position coordinates of the scattering centers in each frame of the ISAR two-dimensional image sequence, calculating the displacement velocity field of each scattering center between adjacent frames, obtaining estimated values of the third-dimension coordinate by combining the projection equation of an orthographic projection model with the target motion equation, and averaging the multiple estimated values to obtain the final third-dimension coordinate. The reconstruction of the target three-dimensional scattering centers is thus completed directly. The method requires no extra system hardware, can distinguish scattering centers at different heights within the same range-azimuth resolution cell, needs no prior information such as the radar observation angle, and has a relatively small computational load.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
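The final averaging step above can be sketched under a strongly simplified assumption: under an orthographic projection model with the target rotating at a known rate ω about the line of sight, a scatterer's in-image velocity is proportional to its out-of-plane coordinate, so each frame pair gives one estimate z ≈ v/ω and the estimates are averaged. The proportionality used here is an illustration, not the patent's full projection and motion equations:

```python
def estimate_third_coordinate(velocities, omega):
    """Average per-frame-pair estimates of the third-dimension
    coordinate from measured image-plane velocities, assuming the
    simplified orthographic relation v = omega * z."""
    estimates = [v / omega for v in velocities]   # one estimate per pair
    return sum(estimates) / len(estimates)
```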

Method for acquiring characteristic size of remote video monitored target on basis of depth fingerprint

The invention discloses a method for acquiring the characteristic size of a remote video-monitored target on the basis of a depth fingerprint, which comprises the following steps: acquiring a plurality of slice images of a region of interest of the monitored scene of a target video monitoring system by a gated imaging technique, superimposing the slice images to obtain a depth fingerprint of the region of interest, and implanting the depth fingerprint into the target video monitoring system; when a monitored target appears in the region of interest, extracting the monitored target from the background, matching the monitored target against the depth fingerprint, and determining the fingerprint lines to which the foot characteristic lines of the monitored target belong, whose spatial distance information gives the target distance information; and, after acquiring the target distance information, inverting the characteristic size of the target from the pixel length of the characteristic segment to be measured in the monitoring image, according to the mapping relation between the three-dimensional space and the two-dimensional image plane under a perspective projection model. The method solves the problem that a conventional remote video monitoring system can hardly acquire target characteristic size information.
Owner:INST OF SEMICONDUCTORS - CHINESE ACAD OF SCI
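The final inversion step above follows directly from the perspective projection model: a segment of known pixel length at a known distance has real size (pixel length × distance) / focal length. A one-line sketch with hypothetical parameter names:

```python
def feature_size(pixel_length, distance, f):
    """Invert the perspective projection model: a segment measuring
    pixel_length pixels at the given distance (from the depth
    fingerprint) subtends a real-world size of pixel_length*distance/f,
    with f the focal length in pixels."""
    return pixel_length * distance / f
```

For example, a 100 px segment at 20 m with f = 1000 px corresponds to a 2 m feature.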