7126 results about "Image pair" patented technology

Process for roll-to-roll manufacture of a display by synchronized photolithographic exposure on a substrate web

This invention relates to an electrophoretic display or a liquid crystal display and novel processes for its manufacture. The electrophoretic display (EPD) of the present invention comprises microcups of well-defined shape, size and aspect ratio, and the microcups are filled with charged pigment particles dispersed in an optically contrasting dielectric solvent. The liquid crystal display (LCD) of this invention comprises well-defined microcups filled with at least a liquid crystal composition having its ordinary refractive index matched to that of the isotropic cup material. A novel roll-to-roll process and apparatus of the invention permits the display manufacture to be carried out continuously by a synchronized photolithographic process. The synchronized roll-to-roll process and apparatus permits a pre-patterned photomask, formed as a continuous loop, to be rolled in a synchronized motion in close parallel alignment to a web which has been pre-coated with a radiation-sensitive material, so as to maintain image alignment during exposure to a radiation source. The radiation-sensitive material may be a radiation-curable material, in which the exposed and cured portions form the microcup structure. In an additional process step, the radiation-sensitive material may be a positively working photoresist which temporarily seals the microcups. Exposure of a selected subset of the microcups via the photomask image permits selective re-opening, filling and sealing of the microcup subset. Repetition with additional colors permits the continuous assembly of a multicolor EPD or LCD display.
Owner:E INK CALIFORNIA
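
The key constraint in the synchronized exposure described in the abstract above is that the looped photomask must travel at the same surface speed as the coated web so the image stays aligned during exposure. The following is a minimal sketch of that speed-matching check; the roller radius, web speed, loop length and panel pitch are hypothetical illustrative values, not figures from the patent, and the integer-repeat condition is an assumption about the layout rather than a statement from the disclosure.

```python
import math

# Hypothetical process parameters (not from the patent).
web_speed_m_per_s = 0.05          # linear speed of the coated web
mask_roller_radius_m = 0.10       # radius of the roller carrying the looped photomask
mask_loop_length_m = 1.2          # circumference of the continuous photomask loop
panel_pitch_m = 0.30              # repeat distance of one display image on the web

# To keep the mask image aligned with the web during exposure, the mask surface
# speed must equal the web speed, which fixes the roller's angular velocity.
omega_rad_per_s = web_speed_m_per_s / mask_roller_radius_m
print(f"required roller angular speed: {omega_rad_per_s:.3f} rad/s")

# One pass of the mask loop exposes mask_loop_length_m of web, so the loop should
# hold an integer number of panel images for seamless repetition (an assumption).
panels_per_loop = mask_loop_length_m / panel_pitch_m
aligned = math.isclose(panels_per_loop, round(panels_per_loop))
print(f"panels per mask loop: {panels_per_loop:.2f}",
      "(integer -> repeats align)" if aligned else "(adjust pitch or loop length)")
```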

Infrared target instance segmentation method based on feature fusion and a dense connection network

Pending | CN109584248A | Solves the gradient explosion / vanishing gradient problem | Strengthens detection and segmentation capability | Image enhancement | Image analysis | Data set | Feature fusion
The invention discloses an infrared target instance segmentation method based on feature fusion and a dense connection network. The method comprises the steps of: collecting and constructing an infrared image data set required for instance segmentation and obtaining the original known infrared label images; performing image enhancement preprocessing on the infrared image data set; processing the preprocessed training set to obtain a classification result, a bounding-box regression result and an instance segmentation mask result map; performing back propagation in the convolutional neural network with a stochastic gradient descent method according to the prediction loss function and updating the parameter values of the convolutional neural network; selecting a fixed number of infrared training images each time, sending them to the network for processing, and iteratively updating the convolutional network parameters until training reaches the maximum number of iterations; and processing the test set images to obtain the average precision, the time required for instance segmentation and the final instance segmentation result map.
Owner:XIDIAN UNIV
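
As a rough illustration of the iterative update described in the abstract above (fixed-size batches of infrared images fed to the network, back propagation with stochastic gradient descent, stopping at a maximum number of iterations), here is a minimal framework-agnostic sketch in plain NumPy; the tiny linear "network", the loss and the random data are stand-ins, not the patent's dense-connection convolutional architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "dataset": 256 flattened infrared patches with binary mask labels.
X = rng.standard_normal((256, 64))
y = (rng.random(256) > 0.5).astype(float)

# Stand-in single-layer model; the patent uses a dense-connection CNN instead.
w = np.zeros(64)
lr, batch_size, max_iters = 0.1, 16, 500

for it in range(max_iters):
    # Select a fixed number of training samples each iteration.
    idx = rng.choice(len(X), size=batch_size, replace=False)
    xb, yb = X[idx], y[idx]

    # Forward pass: sigmoid prediction, then the gradient of a cross-entropy loss.
    p = 1.0 / (1.0 + np.exp(-xb @ w))
    grad = xb.T @ (p - yb) / batch_size

    # Stochastic gradient descent update (the "back propagation" step).
    w -= lr * grad

# After training, a simple accuracy check stands in for the average-precision evaluation.
p_all = 1.0 / (1.0 + np.exp(-X @ w))
accuracy = np.mean((p_all > 0.5) == y.astype(bool))
print(f"training-set accuracy of the stand-in model: {accuracy:.2f}")
```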

Method and apparatus for motion blur and ghosting prevention in imaging system

A method and apparatus for motion blur and ghosting prevention in an imaging system is presented. A residue image is computed by applying a spatial-temporal filter to a set of absolute image differences of image pairs from the input images. A noise-adaptive pixel threshold is computed for every pixel based on the noise statistics of the image sensor. The residue image and the noise-adaptive pixel threshold are used to create a motion masking map, which marks motion and non-motion pixels for pixel merging. The pixel merging step generates an output image in which motion pixels are handled separately. The resulting output image has no or little motion blur and ghosting artifact, even when the input images have different degrees of motion blur, while the computational complexity remains low. It is preferred that the invention is applied in the Bayer raw domain; the benefit is reduced computation and memory, because only one color component is processed per pixel. Another benefit is higher signal fidelity, because processing in the Bayer raw domain is unaffected by demosaicing artifacts, especially along edges. However, the invention can also be applied in the RGB domain.
Owner:PANASONIC CORP
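
A minimal sketch of the masking-and-merging idea summarised in the abstract above: absolute differences between image pairs are spatially filtered into a residue image, compared against a per-pixel noise-adaptive threshold to build a motion masking map, and only non-motion pixels are averaged across frames. The box filter, the noise model and the merging rule here are illustrative assumptions, not the patent's exact formulas.

```python
import numpy as np

def merge_frames(frames, noise_sigma=2.0, k=3.0):
    """frames: list of same-size grayscale (or Bayer raw) images as float arrays."""
    ref = frames[0]

    # Residue image: spatially smoothed absolute differences against the reference frame.
    residue = np.zeros_like(ref)
    for f in frames[1:]:
        diff = np.abs(f - ref)
        # Crude 3x3 box filter as a stand-in spatial-temporal filter.
        pad = np.pad(diff, 1, mode="edge")
        smoothed = sum(pad[i:i + diff.shape[0], j:j + diff.shape[1]]
                       for i in range(3) for j in range(3)) / 9.0
        residue = np.maximum(residue, smoothed)

    # Noise-adaptive pixel threshold (illustrative: k times an assumed sensor sigma).
    threshold = k * noise_sigma * np.ones_like(ref)

    # Motion masking map: True where the residue exceeds the per-pixel threshold.
    motion = residue > threshold

    # Pixel merging: average all frames where no motion, keep the reference elsewhere.
    merged = np.mean(frames, axis=0)
    merged[motion] = ref[motion]
    return merged, motion

frames = [np.random.rand(32, 32) * 255 for _ in range(4)]
out, mask = merge_frames(frames)
print(out.shape, float(mask.mean()))
```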

Target posture measuring method based on binocular vision under double mediums

The invention discloses a target posture measuring method based on binocular vision under double mediums, mainly solving the problem that the posture of a target spacecraft cannot be measured in an underwater spacecraft simulation experiment. The method specifically comprises the following steps: collecting a target spacecraft image pair with a binocular camera; carrying out Harris corner detection on the collected left image to find the projected feature points; using the epipolar constraint and a pyramid-based fast matching method to find the corresponding projected features in the right image; calculating the three-dimensional coordinates of the corresponding spatial feature points from the projected features of the left and right images; establishing a refraction model and revising the three-dimensional coordinates of the spatial feature points according to the refraction model; and screening feature points lying in the same plane and accurately calculating the posture of the target spacecraft from the three-dimensional coordinates of those feature points. When the method is used in an underwater spacecraft simulation experiment, the three-dimensional coordinates of the feature points on the target spacecraft are accurately calculated through the refraction model, and the posture of the spacecraft can be accurately measured.
Owner:XIDIAN UNIV
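
The step "calculating the three-dimensional coordinates of the corresponding spatial feature points from the projected features of the left and right images" is standard two-view triangulation. Below is a minimal linear (DLT) triangulation sketch; the projection matrices and pixel coordinates are hypothetical, and the refraction correction the patent applies for the air/water interface is not modelled here.

```python
import numpy as np

def triangulate(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one point from a calibrated stereo image pair.

    P_left, P_right: 3x4 camera projection matrices.
    x_left, x_right: (u, v) pixel coordinates of the matched feature point.
    """
    A = np.vstack([
        x_left[0] * P_left[2] - P_left[0],
        x_left[1] * P_left[2] - P_left[1],
        x_right[0] * P_right[2] - P_right[0],
        x_right[1] * P_right[2] - P_right[1],
    ])
    # Solve A X = 0 for the homogeneous 3D point via SVD.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical calibrated stereo rig: identical intrinsics, 0.2 m baseline.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P_l = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

point = triangulate(P_l, P_r, (352.0, 240.0), (312.0, 240.0))
print("triangulated point (m):", point)   # roughly (0.16, 0.0, 4.0)
```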

Real-time dense monocular SLAM method and system based on online learning depth prediction network

The invention discloses a real-time dense monocular simultaneous localization and mapping (SLAM) method based on an online-learning depth prediction network. The method comprises: minimizing the photometric error at high-gradient points to obtain the camera pose of the key frame, and estimating the depth of the high-gradient points by triangulation to obtain a semi-dense map of the current frame; selecting online training image pairs, training and updating a CNN model online with a block-by-block stochastic gradient descent method, and performing depth prediction on the current frame with the trained CNN model to obtain a dense map; carrying out depth scale regression on the semi-dense map of the current frame and the predicted dense map to obtain the absolute scale factor of the depth information of the current frame; and selecting, with an NCC-score voting method, all pixel depth prediction values of the current frame based on the two kinds of projection results to obtain a predicted depth map, on which Gaussian fusion is carried out to obtain the final depth map. In addition, the invention also provides a corresponding real-time dense monocular SLAM system based on an online-learning depth prediction network.
Owner:HUAZHONG UNIV OF SCI & TECH
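
Two of the steps above, recovering an absolute scale factor by comparing the semi-dense SLAM depths with the CNN-predicted dense depths, and Gaussian fusion of per-pixel depth estimates, can be sketched roughly as follows. The least-squares scale fit and inverse-variance fusion used here are common choices and only an assumption about how the patent realises those steps; the variances and toy data are illustrative.

```python
import numpy as np

def regress_scale(semi_dense_depth, predicted_depth, valid_mask):
    """Least-squares scale s minimising ||semi_dense - s * predicted|| on valid pixels."""
    d_slam = semi_dense_depth[valid_mask]
    d_cnn = predicted_depth[valid_mask]
    return float(np.dot(d_cnn, d_slam) / np.dot(d_cnn, d_cnn))

def gaussian_fuse(mu_a, var_a, mu_b, var_b):
    """Fuse two per-pixel Gaussian depth estimates by inverse-variance weighting."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mu = var * (mu_a / var_a + mu_b / var_b)
    return mu, var

# Toy example: the CNN depth is off by an unknown global scale of about 2.5x.
rng = np.random.default_rng(1)
true_depth = rng.uniform(1.0, 5.0, size=(40, 40))
cnn_depth = true_depth / 2.5 + rng.normal(0, 0.01, size=true_depth.shape)
semi_dense = np.where(rng.random(true_depth.shape) < 0.1, true_depth, np.nan)

mask = ~np.isnan(semi_dense)
s = regress_scale(semi_dense, cnn_depth, mask)
print(f"recovered scale factor: {s:.2f}")

# Fuse the rescaled CNN depth with a second (stand-in) depth estimate.
fused_mu, fused_var = gaussian_fuse(s * cnn_depth, 0.04, true_depth, 0.09)
print("fused depth map shape:", fused_mu.shape)
```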

Auxiliary ultrasonic scanning system of robot based on RGB-D sensor

Provided is an auxiliary ultrasonic scanning system for a robot based on an RGB-D sensor. The system comprises a Kinect sensor, the robot, an ultrasonic probe, a marker and a host. The Kinect sensor serves as the visual servo system of the robot; the ultrasonic probe is clamped on the mechanical arm of the robot; the marker is fixed onto the ultrasonic probe; the visual servo system synchronously acquires an RGB color image and a depth image and sends the images to the host; the host then performs image stitching and three-dimensional image reconstruction; from the image pairs acquired by the visual servo system, the host recognizes and locates the marker fixed onto the ultrasonic probe; from the recognition and localization result, the position and posture of the ultrasonic probe are calculated; and the host sends control instructions to the robot so that the mechanical arm is driven to the specified position to carry out the ultrasonic scanning operation. The system has the advantages of reasonable design, reliable performance, a high degree of automation, high detection efficiency and low cost.
Owner:NORTHEAST DIANLI UNIVERSITY
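
Recognizing the marker in the RGB-D image pair yields a set of 3D marker points in camera coordinates; the probe position and posture then follow from the rigid transform between the marker's model points and the measured points. A minimal SVD-based (Kabsch-style) sketch of that pose computation is given below; the square marker geometry and simulated measurement are hypothetical, and the patent may compute the pose differently.

```python
import numpy as np

def rigid_transform(model_pts, measured_pts):
    """Best-fit rotation R and translation t with measured ~ R @ model + t (Kabsch)."""
    mc, sc = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (model_pts - mc).T @ (measured_pts - sc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = sc - R @ mc
    return R, t

# Hypothetical square marker: four corner points in its own frame (metres).
marker_model = np.array([[0, 0, 0], [0.04, 0, 0], [0.04, 0.04, 0], [0, 0.04, 0]], float)

# Simulated measurement: marker rotated 30 deg about z and shifted in front of the camera.
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
measured = marker_model @ R_true.T + np.array([0.1, -0.05, 0.6])

R_est, t_est = rigid_transform(marker_model, measured)
print("estimated probe/marker translation (m):", np.round(t_est, 3))
```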

Multi-line array laser three-dimensional scanning system and method

The invention provides a multi-line array laser three-dimensional scanning system and method. Accurate synchronization and logic control of the system are achieved through an FPGA: a line-laser unit array serves as the projection pattern light source, the FPGA sends trigger signals to a stereoscopic-vision image sensor and to the line-laser unit array, and an upper computer receives the image pairs shot by the stereoscopic-vision image sensor. The laser line-array patterns in the image pairs are encoded, decoded and three-dimensionally reconstructed; three-dimensional reconstruction and matching alignment of the three-dimensional feature points between different moments are conducted on the surface feature points of the object to be measured, and the matching calculation is predicted and error-corrected through an optical tracking technique. The system and method are used for registration and stitching of time-domain laser three-dimensional scanning data; meanwhile, the measurement error grade is evaluated in real time and fed back to an error feedback controller for adjustment, so that laser three-dimensional scanning with low cost, high efficiency, reliability and accuracy is achieved.
Owner:BEIJING TENYOUN 3D TECH CO LTD
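
For a rectified stereo image pair, once a laser-line point has been decoded in the left image and matched to the same encoded line in the right image, its depth follows from the disparity. A minimal depth-from-disparity sketch under that assumption is shown below, using hypothetical calibration values; the encoding/decoding scheme and the optical-tracking error correction described above are not modelled.

```python
import numpy as np

def laser_points_to_3d(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Reconstruct 3D points from matched laser-line pixels on a rectified stereo pair."""
    disparity = u_left - u_right                  # pixels; must be positive
    z = focal_px * baseline_m / disparity         # depth along the optical axis
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return np.stack([x, y, z], axis=-1)

# Hypothetical calibration: 1200 px focal length, 15 cm baseline, 640x480 images.
u_l = np.array([400.0, 410.0, 420.0])   # laser-line samples in the left image
u_r = np.array([340.0, 348.0, 355.0])   # matched samples in the right image
v = np.array([100.0, 150.0, 200.0])

pts = laser_points_to_3d(u_l, u_r, v, focal_px=1200.0, baseline_m=0.15, cx=320.0, cy=240.0)
print(np.round(pts, 3))
```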

Three-dimensional helmet display of augmented reality system

Inactive | CN101661163A | Easy to integrate | Eliminates the two-eye competition phenomenon | Optical elements | Optical axis | Display device
A three-dimensional helmet display of an augmented reality system is characterized by adopting an optical see-through structure that is horizontally symmetrical about the user's nose as the center line. Along the optical axis direction, from the outer side toward the center of the human eye, it is provided in sequence with an image source, a polarization azimuth adjustment mirror with adjustable azimuth, an optical imaging system and a polarization combination mirror. The polarization combination mirror reflects the light from the image source and deflects it by 90 degrees before it enters the human eye; the external light forms a 90-degree angle with the image-source light and enters the human eye through the polarization combination mirror after passing through the polarization mirror with the adjustable azimuth. Two images forming a three-dimensional image pair are respectively transmitted to the image sources on the left and right sides. The polarization azimuth adjustment mirror and the polarization combination mirror constitute an image-source brightness adjustment unit, and the polarization mirror and the polarization combination mirror constitute an external-light brightness adjustment unit. The invention allows the virtual environment and the real environment of the three-dimensional helmet display to reach the best matching state, eliminates the two-eye competition phenomenon, and is used for the augmented reality system.
Owner:HEFEI UNIV OF TECH
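
The brightness-adjustment units above work by rotating the polarization azimuth relative to the polarization combination mirror; for an idealised polarizer pair the transmitted fraction follows Malus's law, I = I0 cos^2(theta). The tiny calculation below illustrates that relation only; it ignores coating losses and assumes ideal components, which the patent does not specify.

```python
import numpy as np

# Malus's law: transmitted intensity fraction through an analyzer at angle theta
# relative to the incoming polarization (idealised, lossless components).
for theta_deg in (0, 30, 45, 60, 90):
    fraction = np.cos(np.radians(theta_deg)) ** 2
    print(f"azimuth {theta_deg:>2} deg -> {fraction:.2f} of the light transmitted")
```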