2391 results about "Visual recognition" patented technology

Visual or image recognition is a popular research area and technology that finds immense use across many industrial and consumer-focused applications. It is part of the broader field of computer vision and benefits from machine learning (ML) and deep learning (DL) algorithms.

Visual recognition and positioning method for robot intelligent capture application

The invention relates to a visual recognition and positioning method for robot intelligent capture applications. An RGB-D scene image is collected, and a supervised, trained deep convolutional neural network recognizes the category of a target contained in the color image and its corresponding position region. The pose state of the target is analyzed in combination with the depth image, and the pose information needed by the controller is obtained through coordinate transformation, completing visual recognition and positioning. With this method, both recognition and positioning are achieved with a single visual sensor, simplifying the existing target detection process and saving application cost. Because the deep convolutional neural network obtains image features through learning, the method is robust to many kinds of environmental interference, such as random target placement, viewing-angle changes, and illumination or background interference, which improves recognition and positioning accuracy under complicated working conditions. In addition, the positioning method obtains exact pose information on the basis of the determined spatial position distribution of the object, supporting strategy planning for intelligent capture.
Owner:合肥哈工慧拣智能科技有限公司
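The core back-projection step the abstract describes (pixel plus depth, then a coordinate transformation into the robot frame) can be sketched as follows. The intrinsics, pixel location, depth, and camera-to-base transform are illustrative assumptions, not values from the patent.

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel with known depth into camera coordinates (pinhole model)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Assumed intrinsics and a detected target centre (illustrative values only).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
p_cam = pixel_to_camera(u=400, v=300, depth=0.5, fx=fx, fy=fy, cx=cx, cy=cy)

# Assumed camera-to-robot-base transform: identity rotation plus an offset.
T = np.eye(4)
T[:3, 3] = [0.1, 0.0, 0.2]  # camera mounted 10 cm forward, 20 cm up from the base
p_base = (T @ np.append(p_cam, 1.0))[:3]
```

A full system would estimate the rotation as well; the identity rotation here keeps the coordinate-transformation step easy to follow.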

Material object programming method and system

The invention discloses a material object programming method and system, belonging to the field of human-machine interaction. The method comprises the following steps: 1) establishing a material object programming display environment; 2) using an image acquisition unit to shoot the sequence of material object programming blocks placed by a user and upload the image to a material object programming processing module; 3) converting the sequence of blocks into a corresponding functional semantic sequence according to the computer-vision identification modes and the position information of the blocks; 4) determining whether the current functional semantic sequence meets the grammatical and semantic rules of the display environment and, if not, feeding back a corresponding error prompt; 5) having the user replace the corresponding blocks according to the prompt; and 6) repeating steps 2) to 5) until the functional semantic sequence corresponding to the placed blocks meets the rules, finishing the programming task. The method and system lower the barrier that children and novices face when learning programming, have low cost, and are easy to popularize.
Owner:INST OF SOFTWARE - CHINESE ACAD OF SCI
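Steps 3) and 4) above (mapping recognised blocks to semantic tokens, then checking them against grammar rules) can be sketched as a minimal example. The block names, token names, and the two toy rules are hypothetical placeholders for whatever the real system defines.

```python
# Hypothetical mapping from recognised block IDs to semantic tokens.
BLOCK_SEMANTICS = {"start": "BEGIN", "move": "MOVE", "loop": "LOOP", "end": "END"}

def to_semantics(block_sequence):
    """Step 3: convert a recognised block sequence into a functional semantic sequence."""
    return [BLOCK_SEMANTICS[b] for b in block_sequence]

def check_grammar(tokens):
    """Step 4: return error prompts for rules the sequence violates (empty = valid)."""
    errors = []
    if not tokens or tokens[0] != "BEGIN":
        errors.append("program must start with a start block")
    if not tokens or tokens[-1] != "END":
        errors.append("program must finish with an end block")
    return errors

valid = check_grammar(to_semantics(["start", "move", "end"]))      # no errors
invalid = check_grammar(to_semantics(["move", "end"]))             # missing start block
```

In the described system, any non-empty error list would be fed back to the user, who rearranges blocks and triggers another recognition pass (steps 5 and 6).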

Fine granularity vehicle multi-property recognition method based on convolutional neural network

The invention relates to a fine-granularity vehicle multi-property recognition method based on a convolutional neural network, belonging to the technical field of computer visual recognition. The method comprises: designing a neural network structure including convolution, pooling, and fully connected layers, where the convolution and pooling layers are responsible for feature extraction and the classification result is output by computing an objective loss function on the last fully connected layer; training the neural network with a fine-granularity vehicle dataset and a tag dataset under supervised learning, with a stochastic gradient descent algorithm adjusting the weight matrices and offsets; and using the trained neural network model to perform vehicle property recognition. Applied to multi-property recognition of vehicles, the method uses the fine-granularity vehicle dataset and the multi-property tag dataset to obtain a more abstract, high-level representation of the vehicle through the convolutional neural network, and learns latent characteristics reflecting the nature of the vehicle to be recognized from a large quantity of training samples, giving higher extensibility and higher recognition precision.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
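The two training ingredients named above, a loss computed at the last fully connected layer and a stochastic-gradient-descent update of the weights and offsets, can be sketched in isolation. The three-class logits and learning rate are illustrative assumptions; the patent's actual network is far larger.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Objective loss on the final layer: softmax followed by cross-entropy."""
    z = logits - logits.max()                 # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label]), p

def sgd_step(W, b, grad_W, grad_b, lr=0.01):
    """One stochastic-gradient-descent update of a layer's weight matrix and offset."""
    return W - lr * grad_W, b - lr * grad_b

logits = np.array([2.0, 1.0, 0.1])            # scores for 3 hypothetical vehicle properties
loss, probs = softmax_cross_entropy(logits, label=0)

# For softmax + cross-entropy, the gradient w.r.t. the logits is probs - one_hot(label).
grad_logits = probs.copy()
grad_logits[0] -= 1.0
```

Backpropagating `grad_logits` through the network yields the `grad_W`/`grad_b` that `sgd_step` consumes; repeating this over the labelled dataset is the supervised training loop the abstract describes.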

Accurate visual positioning and orienting method for rotor wing unmanned aerial vehicle

Inactive · CN104298248A · Precision hover · Flexible and convenient hovering · Position/course control in three dimensions · Visual field loss · Visual recognition
The invention discloses an accurate visual positioning and orienting method for a rotor-wing unmanned aerial vehicle based on an artificial marker. The method includes the following steps: a marker with a special pattern is installed on the surface of an artificial facility or a natural object; the camera is calibrated; the proportional mapping among the actual size of the marker, the relative distance between the marker and the camera, and the size of the marker in the camera image is established, and the keeping distance between the unmanned aerial vehicle and the marker is set; the unmanned aerial vehicle is guided to the position where it is to hover and adjusted so that the marker pattern enters the camera's field of view, and the visual recognition function is started; a visual processing computer compares the geometric characteristics of the currently shot pattern with the standard pattern through visual analysis, obtains the difference, and transmits it to the flight control computer, which generates a control law so that the unmanned aerial vehicle eliminates deviations in position, height, and course, achieving accurate positioning and orienting hover. The method offers high independence, good stability, and high reliability, and benefits the safe operation of the unmanned aerial vehicle near artificial facilities and natural objects.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
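The proportional mapping the method establishes is the pinhole relation: for a marker of known physical size, its apparent size in pixels determines the camera-to-marker distance. A minimal sketch, with an assumed focal length of 600 px and a 0.5 m marker:

```python
def marker_distance(real_size_m, focal_px, image_size_px):
    """Pinhole proportionality: distance grows as the marker shrinks in the image."""
    return real_size_m * focal_px / image_size_px

# A 0.5 m marker imaged at 100 px with an assumed 600 px focal length is 3.0 m away.
d = marker_distance(0.5, 600.0, 100.0)
```

Comparing `d` against the set keeping distance gives the range error the flight control computer would drive to zero, alongside the position and course deviations obtained from the pattern comparison.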

An online detection and classification device and method for lithium-ion battery pole pieces

The invention relates to an online detection and grading device and method for lithium-ion battery pole pieces. The device acquires image data of a pole piece with an embedded visual recognition device, then analyzes and processes the data to judge whether the appearance quality of the pole piece meets requirements, controlling the actions of a turnover part and a grading part to realize real-time online double-sided detection and grading of electrode pole pieces. The detection speed is high: the front and back sides of one electrode pole piece can generally be inspected within 1 second, increasing production efficiency, whereas manual inspection takes much longer. The detection accuracy is high: the embedded visual detection technology is both accurate and stable, so detection results are highly uniform. The running time is long: the detection system can run uninterrupted for 24 hours, improving productivity and saving labor cost. Finally, online detection of electrode pole pieces reduces the interference inherent in manual operation and further improves productivity.
Owner:HENAN UNIV OF SCI & TECH
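The pass/fail judgment on appearance quality can be sketched with a simple dark-pixel count on a grayscale image; the thresholds and the synthetic defect below are illustrative assumptions, not the patent's actual criteria.

```python
import numpy as np

def pole_piece_ok(gray, dark_thresh=50, max_defect_pixels=20):
    """Flag a pole-piece image as defective when too many pixels are abnormally dark."""
    defect_pixels = int((gray < dark_thresh).sum())
    return defect_pixels <= max_defect_pixels

clean = np.full((100, 100), 180, dtype=np.uint8)   # uniformly bright coating
flawed = clean.copy()
flawed[10:20, 10:20] = 0                           # synthetic 100-pixel dark defect
```

In the described device this boolean would drive the turnover part (to image the other side) and the grading part (to route good and bad pieces).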

Industrial robot visual recognition positioning grabbing method, computer device and computer readable storage medium

The invention provides an industrial robot visual recognition, positioning, and grabbing method, a computer device, and a computer-readable storage medium. The method comprises: performing image contour extraction on an acquired image; when object contour information exists in the contour extraction result, positioning and identifying the target object with an edge-based template matching algorithm; when the target object pose information matches the preset target object pose information, correcting the pose information with a camera calibration method; and performing coordinate-system conversion on the corrected pose information with a hand-eye calibration method. The computer device comprises a controller that implements the method when executing the computer program stored in its memory. A computer program is stored in the computer-readable storage medium and, when executed by the controller, implements the method. The method provides higher recognition and positioning stability and precision.
Owner:GREE ELECTRIC APPLIANCES INC
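The final coordinate-system conversion chains the hand-eye calibration result with the robot's current gripper pose: a point seen by the camera is mapped into the gripper frame and then into the robot base frame. The transforms and target point below are illustrative assumptions (identity rotations, made-up offsets).

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed hand-eye calibration result: camera pose in the gripper frame.
T_gripper_cam = make_T(np.eye(3), [0.0, 0.05, 0.1])
# Assumed current gripper pose in the robot base frame (from the controller).
T_base_gripper = make_T(np.eye(3), [0.3, 0.0, 0.4])

p_cam = np.array([0.0, 0.0, 0.25, 1.0])   # target seen 25 cm in front of the camera
p_base = T_base_gripper @ T_gripper_cam @ p_cam
```

`p_base` is the grab target expressed in the robot base frame; a real system would carry rotations from the corrected pose information through the same chain.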