428 results about "3d vision" patented technology

6-dimensional sensory-interactive virtual keyboard instrument system and realization method thereof

The invention provides a 6-dimensional sensory-interactive virtual keyboard instrument system and a realization method thereof. The system comprises a human body auxiliary device, a 3-dimensional camera device, a 3-dimensional projection device, a signal processing unit and a music collection device. The system has multiple working modes, including a play mode, an optimal fingering prompting mode, a fingering correction mode and a teaching mode; it not only provides a basic playing experience for the user, but also offers fingering prompting, fingering correction and teaching demonstration. The system is based on 3-dimensional vision, hearing, touch and pressure sensing, and the virtual instruments provided include the piano, electronic organ, accordion, organ and other keyboard instruments, so that the user feels an interaction effect just like playing a real instrument. Moreover, the system realizes a powerful teaching function, is simple to use, flexible and space-saving, represents a major breakthrough over conventional keyboard instruments, and has very broad application prospects.
Owner:ZHEJIANG UNIV

Method of estimating the body size and weight of a yak, and corresponding portable computer device

Active CN107180438A · Automatic identification · Real-time calculation of body size indicators · Image enhancement · Image analysis · Body size · Weight estimation
The invention provides a method of estimating the body size of a yak based on 3D vision, including the steps of: acquiring side images of the yak by means of image acquisition equipment; extracting foreground images from the side images; identifying key points of the yak body in the extracted foreground images; and, from the identified key points, automatically extracting the body size information of the yak. The invention also provides a method of estimating the weight of a yak based on 3D vision, which performs the body size estimation method and then takes the extracted body size information as input to a yak weight estimation model to predict the weight value. Alternatively, the invention provides a further weight estimation method based on 3D vision, which acquires side images of the yak, extracts foreground images from them, and directly predicts the weight value from the foreground images by means of a convolutional neural network. The invention also provides a corresponding portable computer device.
Owner:TSINGHUA UNIV
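The measurement pipeline described above (identified key points → body size indicators → weight model) can be sketched as follows. The key-point names, pixel coordinates, scale factor and regression coefficients here are illustrative assumptions only, not values from the patent:

```python
import numpy as np

# Hypothetical key points on the yak's side profile (pixel coordinates);
# names and positions are illustrative, not taken from the patent.
keypoints = {
    "withers": (120, 80), "tail_base": (420, 95),
    "back": (270, 70), "belly": (270, 230),
}

def pixel_dist(a, b):
    """Euclidean distance between two pixel coordinates."""
    return float(np.hypot(a[0] - b[0], a[1] - b[1]))

def body_measures(kp, mm_per_px):
    """Derive body-size indicators (in cm) from the identified key points."""
    body_length = pixel_dist(kp["withers"], kp["tail_base"]) * mm_per_px / 10
    body_depth = pixel_dist(kp["back"], kp["belly"]) * mm_per_px / 10
    return body_length, body_depth

def estimate_weight(length_cm, depth_cm):
    """Toy linear weight model; real coefficients would be fitted on yak data."""
    return 1.2 * length_cm + 2.5 * depth_cm - 80.0

length, depth = body_measures(keypoints, mm_per_px=3.0)
weight = estimate_weight(length, depth)
```

In practice the key points would come from the foreground-extraction and key-point identification steps, and the weight model would be trained on measured yak data rather than fixed coefficients.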

Structured-light parameter calibration method based on a one-dimensional target

The invention belongs to the technical field of measurement, and relates to an improvement of the calibration method for structured light parameters in structured-light 3D vision measurement. The invention provides a calibration method for structured light parameters based on a one-dimensional target. After the sensor is set up, the camera of the sensor takes multiple images of the one-dimensional target in free, non-parallel motion; the vanishing point of a characteristic line on the target is obtained by one-dimensional projective transformation, and the direction vector of the characteristic line under the camera coordinate system is determined from the vanishing point and the camera projection center; the camera coordinates of a reference point on the characteristic line are computed according to the length constraints among the characteristic points and the direction constraint of the characteristic line, yielding an equation of the characteristic line under the camera coordinate system; the camera coordinates of control points on several non-colinear light stripes are obtained from the projective transformation and the equation of the characteristic line, and the control points are then fitted to obtain the parameters of the structured light. The method requires no high-cost auxiliary adjustment equipment, has high calibration precision and a simple process, and can meet the field calibration needs of large-scale structured-light 3D vision measurement.
Owner:BEIHANG UNIV
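The final step above, fitting the structured-light parameters from the 3D control points, amounts to a least-squares plane fit. A minimal sketch, assuming the control points have already been recovered in camera coordinates (the point values below are illustrative and exactly planar so the fit is checkable):

```python
import numpy as np

# Hypothetical control points on several light stripes (camera coordinates,
# mm), lying exactly on the plane z = 500 + 0.1x + 0.2y for illustration.
pts = np.array([
    [0.0, 0.0, 500.0], [50.0, 10.0, 507.0], [-40.0, 25.0, 501.0],
    [30.0, -20.0, 499.0], [-10.0, 40.0, 507.0],
])

def fit_light_plane(points):
    """Fit the plane n·x = d to control points by SVD (total least squares)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]               # direction of least variance = plane normal
    d = float(normal @ centroid)
    return normal, d

n, d = fit_light_plane(pts)
residuals = pts @ n - d           # signed point-to-plane distances
```

This is a simplified stand-in for the patent's full procedure; the SVD fit simply shows how non-colinear control points determine the light plane.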

Automatic groove cutting system and cutting method based on three-dimensional vision and model matching

Active CN108274092A · Solve the problems of poor cutting quality, low efficiency and high cost · Solve the technical difficulties of cutting · Gas flame welding apparatus · Process equipment · Engineering
The invention provides an automatic groove cutting system and cutting method based on three-dimensional vision and model matching, and relates to the technical field of groove processing equipment. The automatic groove cutting system comprises a 3D vision subsystem, a host computer, a motion control system, a cutting robot and a cutting device. The 3D vision subsystem is signal-connected to the host computer, the host computer is signal-connected to the motion control system, the cutting device is arranged on the cutting robot, and the motion control system is electrically connected to the cutting robot. The automatic groove cutting method uses the 3D vision system and image processing software, and, based on an image processing algorithm, realizes the mapping between the robot, the 3D vision system and the workpiece coordinate system by means of a 3D camera intrinsic and extrinsic parameter calibration algorithm, a three-point calibration method and an attitude matching algorithm. Different workpieces and groove types can be cut automatically with good groove quality and high efficiency, improving the automation and intelligence of the groove cutting system.
Owner:BEIJING INSTITUTE OF PETROCHEMICAL TECHNOLOGY +1
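The three-point calibration mentioned above relates the camera and robot coordinate systems through corresponding points. A common way to do this is the Kabsch algorithm, sketched here under the assumption (not stated in the patent) that three non-collinear points are measured in both frames; all numeric values are illustrative:

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch algorithm: find rotation R and translation t such that
    dst[i] ≈ R @ src[i] + t, from corresponding point sets."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

# Three non-collinear calibration points in the camera frame (illustrative)
cam = np.array([[0.0, 0.0, 0.0], [100.0, 0.0, 0.0], [0.0, 100.0, 0.0]])

# The same points in the robot-base frame: rotated 90° about z, then shifted
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
robot = cam @ Rz.T + np.array([500.0, 200.0, 50.0])

R, t = rigid_transform(cam, robot)
```

With the recovered `R` and `t`, any point the 3D vision subsystem measures can be mapped into robot coordinates for cutting-path generation.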

6-dimensional sensory-interactive virtual instrument system and realization method thereof

The invention relates to a 6-dimensional sensory-interactive virtual instrument system and a realization method thereof. The system comprises a human body auxiliary device, a 3-dimensional camera device, a 3-dimensional projection device and a signal processing unit. The human body auxiliary device collects human body sound, contact and pressure information and converts it into signals sent to the signal processing unit; the signal processing unit processes the signals, sends control signals back to the human body auxiliary device to feed back hearing, contact and pressure sensing information, and controls the 3-dimensional projection device to feed back the 3-dimensional visual sensing information. The invention further provides a realization method for the 6-dimensional sensory-interactive virtual instrument, comprising four steps: setup, information collection, signal processing and information feedback. The system effectively realizes man-machine interaction and virtual reality, letting the user feel an interaction effect just like playing a real instrument, and has the advantages of powerful functions, convenient use, an exquisite appearance and low cost.
Owner:ZHEJIANG UNIV

Joint calibration method and apparatus for structured light 3D visual system and linear array camera

Active CN106127745A · Meet the requirements of traffic measurement · Improve efficiency · Image analysis · Camera image · Visual perception
A method and device for joint calibration of a structured light 3D vision system and a line-array camera. The structured light 3D vision system includes an area-array camera and a laser. The method includes: obtaining the conversion relationship between the light-plane coordinate system and the target coordinate system in the structured light 3D vision system, as the first conversion relationship; from the coordinates of selected feature points in the target coordinate system and their coordinates in the line-camera image coordinate system, establishing the conversion relationship between the target coordinate system and the line-camera image coordinate system, as the second conversion relationship; from the first and second conversion relationships, establishing the conversion relationship between the light-plane coordinate system and the line-camera image coordinate system, as the third conversion relationship; and, according to the third conversion relationship, obtaining the line-camera image coordinates corresponding to each point in the light-plane coordinate system, thereby realizing the joint calibration of the structured light 3D vision system and the line-scan camera.
Owner:BEIJING LUSTER LIGHTTECH
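The three conversion relationships above chain by composition: the third is simply the second applied after the first. A minimal sketch with homogeneous transforms; the 2D poses, angles and translations below are illustrative stand-ins, not calibration data from the patent:

```python
import numpy as np

def hom(R, t):
    """Build a homogeneous transform from rotation R and translation t."""
    T = np.eye(len(t) + 1)
    T[:-1, :-1], T[:-1, -1] = R, t
    return T

# First conversion: light-plane frame -> target frame (illustrative 2D pose)
a = np.deg2rad(30)
T1 = hom(np.array([[np.cos(a), -np.sin(a)],
                   [np.sin(a),  np.cos(a)]]), [10.0, 5.0])

# Second conversion: target frame -> line-camera image frame
b = np.deg2rad(-45)
T2 = hom(np.array([[np.cos(b), -np.sin(b)],
                   [np.sin(b),  np.cos(b)]]), [-3.0, 8.0])

# Third conversion: light-plane frame -> line-camera image frame
T3 = T2 @ T1

p_plane = np.array([20.0, 15.0, 1.0])   # point on the light plane (homogeneous)
p_image = T3 @ p_plane                   # identical to T2 @ (T1 @ p_plane)
```

Once `T3` is established, every light-plane coordinate maps directly to a line-camera image coordinate, which is exactly the joint calibration the abstract describes.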

Intelligent handling robotic arm system based on 3D vision and deep learning, and using method

The invention proposes an intelligent handling robotic arm system based on 3D vision and deep learning. The system comprises a vision detection module, a training and learning module, a motion planning module and a control module. The vision detection module collects images of objects and sends them to the control module; the training and learning module collects sample data and builds a database of the objects the robotic arm needs to grasp; the motion planning module comprises a path planning part, which realizes path planning with autonomous path selection and obstacle avoidance for the robotic arm, and a grasping action planning part, which realizes the grasping function; and the control module processes the information transmitted by the vision detection, training and learning, and motion planning modules and sends them the corresponding commands, so that the robotic arm completes path movement and grasping. The invention diversifies the work scenes, makes production and transportation more intelligent, and broadens the application field of the robotic arm.
Owner:SHANGHAI DIANJI UNIV
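The four-module architecture above can be sketched as plain wiring between stub components, with the control module routing information between the other three. All class and method names and return values here are hypothetical illustrations of the data flow, not the patent's design:

```python
from dataclasses import dataclass, field

@dataclass
class VisionDetection:
    def capture(self):
        # Stub: a real module would return detected objects from camera images
        return {"object": "box", "pose": (0.4, 0.1, 0.2)}

@dataclass
class TrainingLearning:
    database: list = field(default_factory=list)
    def grasp_label(self, obs):
        self.database.append(obs)    # accumulate sample data into the database
        return "top_grasp"           # stub for a learned grasp type

@dataclass
class MotionPlanning:
    def plan(self, pose, grasp):
        approach = tuple(c + 0.05 for c in pose)  # stub approach waypoint
        return [approach, pose], grasp

class ControlModule:
    """Routes information between the modules and issues commands."""
    def __init__(self):
        self.vision = VisionDetection()
        self.learner = TrainingLearning()
        self.planner = MotionPlanning()
    def run_cycle(self):
        obs = self.vision.capture()
        grasp = self.learner.grasp_label(obs)
        path, action = self.planner.plan(obs["pose"], grasp)
        return {"path": path, "action": action}

result = ControlModule().run_cycle()
```

The sketch only shows the information flow (vision → learning → planning → control); the actual modules would wrap cameras, trained networks and a motion planner.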

Real-person shoe type copying device and shoe tree manufacturing method based on single-eye multi-angle-of-view robot vision

Inactive CN104573180A · Increase silhouette · Constraints for adding silhouettes · Special data processing applications · 3D modelling · Plane mirror · Image segmentation
The invention discloses a real-person shoe type copying device based on single-eye multi-angle-of-view robot vision. The device comprises a single-eye multi-angle-of-view 3D (three-dimensional) vision box and a computer. The single-eye multi-angle-of-view 3D vision box shoots images of a real person's foot from five different angles of view; the computer performs 3D reconstruction of the foot and automatically generates a 3D printing file. A plane-mirror rectangular bucket-type cavity is formed inside the vision box, consisting of four trapezoidal mirror planes; the mirror body is wide at the top and narrow at the bottom, and the mirror planes face the inside of the cavity. A light source providing uniform soft lighting for the foot is arranged at the lower part of the cavity, and cameras are arranged in the vision box to obtain images of the foot from multiple angles of view according to the reflection principle of the mirror planes. The computer comprises a single-eye multi-angle-of-view 3D vision calibration unit, an image segmentation, conversion and correction unit, a foot surface shape measurement unit and an automatic STL (stereolithography) file generation unit. The invention further discloses a shoe tree manufacturing method based on the single-eye multi-angle-of-view robot vision.
Owner:ZHEJIANG UNIV OF TECH
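The key geometric idea above is that each plane mirror turns the single real camera into an additional virtual viewpoint: the virtual camera is the reflection of the real one across the mirror plane. A minimal sketch (camera position and mirror plane values are illustrative):

```python
import numpy as np

def reflect_across_plane(point, n, d):
    """Mirror a 3D point across the plane n·x = d (n need not be unit length).
    Reflecting the real camera across a mirror gives a virtual viewpoint."""
    n = n / np.linalg.norm(n)
    return point - 2.0 * (point @ n - d) * n

camera = np.array([0.0, 0.0, -300.0])    # real camera position (illustrative)
mirror_n = np.array([1.0, 0.0, 0.0])     # one trapezoidal mirror: plane x = 150
virtual_cam = reflect_across_plane(camera, mirror_n, 150.0)
```

With four mirrors plus the direct view, one camera yields the five angles of view the abstract describes, at which point standard multi-view reconstruction applies.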

Intelligent sensing grinding robot system with anti-explosion function

The invention relates to an intelligent sensing grinding robot system with an anti-explosion function. The system comprises industrial grinding robots, a 3D vision system, a force/position hybrid control system, an anti-explosion device, a conveying system, a shock-absorbing tool rack, industrial grinding robot control cabinets, a main electric control cabinet and dust-proof automatic tool-changing magazines. Electric spindles with an automatic tool-changing function are arranged on the arms of the industrial grinding robots. The conveying system is composed of a sliding table embedded in a ground groove, a sliding block and a cover board. The conveying system, the robot control cabinets, the tool-changing magazines, the anti-explosion device, the 3D vision system and the force/position hybrid control system are all connected to the main electric control cabinet, which controls and dispatches all the devices. Through vision guidance and intelligent force and position sensing, the industrial grinding robots acquire visual, tactile and perceptive capabilities, and better machining quality on large complex surfaces is achieved.
Owner:SHENZHEN BOLINTE INTELLIGENT ROBOT CO LTD
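Force/position hybrid control of the kind mentioned above is classically implemented with a selection matrix that assigns each Cartesian axis to either the force loop or the position loop. A minimal sketch; the gains, setpoints and axis assignment below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Selection matrix S: z axis (normal to the ground surface) is
# force-controlled, x and y remain position-controlled.
S = np.diag([0.0, 0.0, 1.0])
I = np.eye(3)

kp_pos, kp_f = 2.0, 0.01                       # illustrative loop gains
x, x_ref = np.array([0.1, 0.2, 0.05]), np.array([0.1, 0.25, 0.05])  # m
f, f_ref = np.array([0.0, 0.0, 12.0]), np.array([0.0, 0.0, 15.0])   # N

# Commanded velocity: position loop on unselected axes,
# force loop on selected axes (the two loops never fight on one axis).
v_cmd = (I - S) @ (kp_pos * (x_ref - x)) + S @ (kp_f * (f_ref - f))
```

Because `S` and `I - S` project onto complementary axes, the grinding tool can track a path tangentially while regulating contact force normal to the surface.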