46 results for patented technology related to "low image quality requirements"

Motion unit layering-based facial expression recognition method and system

The invention discloses a motion unit layering-based facial expression recognition method and system. The method classifies expressions in three layers. It first extracts the area adjacent to the upper part of the nose as the first-layer classification area and coarsely classifies the expression, using detection of the AU9 motion unit as the first-layer classifier's decision criterion. It then extracts the lip area as the second-layer classification area and refines the first-layer result, using detection of the AU25 and AU12 motion units as the second-layer classifier's decision criterion. Finally, it extracts the upper and lower half-face areas as the third-layer classification areas and performs fine classification on the basis of the second-layer result. The invention further provides a system for implementing the method. By extracting features from the representative areas of the expression according to an AU layered structure and combining layer-by-layer random forest classification, the method and system effectively improve expression recognition accuracy and speed, and are particularly suitable for low-resolution images.
Owner:HUAZHONG NORMAL UNIV
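
For illustration, a minimal sketch of layer-by-layer random forest classification in the spirit of the abstract above; the per-region features, the AU detection step, and the confidence-based refinement rule are assumptions for the example, not the patented method:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class LayeredExpressionClassifier:
    """Three random forests, one per facial region layer (nose area, lip area,
    upper/lower half face). Region extraction and AU detection are outside the
    scope of this sketch."""

    def __init__(self, n_layers=3):
        self.layers = [RandomForestClassifier(n_estimators=100) for _ in range(n_layers)]

    def fit(self, region_features, labels):
        # region_features: one feature matrix per layer, all aligned with labels.
        for clf, feats in zip(self.layers, region_features):
            clf.fit(feats, labels)

    def predict(self, region_features):
        # Layer 1 gives a coarse prediction; each later layer adjusts it for
        # samples where it is more confident (a simple stand-in for the
        # patented fine adjustment / precise classification steps).
        proba = self.layers[0].predict_proba(region_features[0])
        pred = np.argmax(proba, axis=1)
        conf = proba[np.arange(len(pred)), pred]
        for clf, feats in zip(self.layers[1:], region_features[1:]):
            p = clf.predict_proba(feats)
            new_pred = np.argmax(p, axis=1)
            new_conf = p[np.arange(len(new_pred)), new_pred]
            override = new_conf > conf
            pred[override] = new_pred[override]
            conf[override] = new_conf[override]
        return self.layers[0].classes_[pred]
```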

Pointer instrument detection and reading identification method based on mobile robot

Active · CN110807355A · Improve detection accuracy · Improve the detection accuracy of small-scale targets · Image enhancement · Image analysis · Engineering · Contrast enhancement
The invention relates to a pointer instrument detection and reading identification method based on a mobile robot. The method comprises the following steps: obtain a deep neural network detection model M for pointer instruments; the mobile robot, carrying a camera, moves to a designated place and captures an original environment image S containing the instrument equipment; S is passed as system input to the model M, which detects whether an instrument is present in S and frames its position; the image inside the frame is cropped and rescaled to a set height without changing the aspect ratio, and the result is denoted J; contrast enhancement is performed on J, and the result is denoted E; local adaptive threshold segmentation is performed on E to obtain an inverse binary image B; pointer extraction is performed based on a probability circle; the central straight line L of the pointer part is extracted as the pointing information of the instrument pointer; and a coordinate system is established based on a probability-circle center projection algorithm and the reading is obtained by the angle method.
Owner:TIANJIN UNIV
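
A rough sketch of the post-detection image pipeline described above (fixed-height resize, contrast enhancement, inverse adaptive thresholding, pointer line extraction, angle-method reading) using OpenCV; the detection model, the probability-circle pointer extraction, and the dial calibration constants are assumptions, not the patented implementation:

```python
import cv2
import numpy as np

def read_meter(crop, min_angle=-45.0, max_angle=225.0, min_value=0.0, max_value=1.6):
    # Resize to a fixed height without changing the aspect ratio (image J).
    h, w = crop.shape[:2]
    target_h = 480
    j = cv2.resize(crop, (int(w * target_h / h), target_h))

    # Contrast enhancement (image E), here via CLAHE on the grayscale image.
    gray = cv2.cvtColor(j, cv2.COLOR_BGR2GRAY)
    e = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(gray)

    # Local adaptive threshold segmentation to an inverse binary image B.
    b = cv2.adaptiveThreshold(e, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                              cv2.THRESH_BINARY_INV, 21, 5)

    # Stand-in for the probability-circle pointer extraction: take the longest
    # Hough line segment as the pointer's central line L.
    lines = cv2.HoughLinesP(b, 1, np.pi / 180, threshold=80,
                            minLineLength=target_h // 4, maxLineGap=10)
    if lines is None:
        return None
    x1, y1, x2, y2 = max(lines[:, 0], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))

    # Angle-method reading: map the pointer angle linearly onto the scale range
    # (min_angle/max_angle/min_value/max_value are hypothetical dial constants).
    angle = np.degrees(np.arctan2(y1 - y2, x2 - x1))
    frac = (angle - min_angle) / (max_angle - min_angle)
    return min_value + frac * (max_value - min_value)
```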

Focal plane detection device for projection lithography

The invention provides a focal plane detection device for projection lithography. In the device, light emitted by an illumination source irradiates an absolute encoded grating; the light modulated by the encoded grating passes through a projection imaging system and is projected and imaged onto the surface to be detected via a first reflector; after being reflected by the surface to be detected, the light enters a focus-detection mark amplification system through a second reflector and is received by a detector. A height change of the surface to be detected changes the absolute encoded grating image received by the detector; by receiving this image with the detector and extracting the absolute codes of the grating image corresponding to the height of the surface, detection of the position height of the surface to be detected is completed. The device adopts an absolute encoded grating instead of the traditional grating slit and shortens the coding cycle by increasing the number of code bits of the absolute encoded grating, so that the focus detection range is enlarged and the focus detection accuracy is improved.
Owner:INST OF OPTICS & ELECTRONICS - CHINESE ACAD OF SCI
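
As an illustration of the absolute-coding idea (more code bits shorten the coding cycle and enlarge the unambiguous detection range), a hypothetical sketch that decodes a Gray-coded grating image into a height value; the bit layout, pixel-to-code mapping, and height step are invented for the example and are not the device's actual encoding:

```python
import numpy as np

def decode_height(grating_image, n_bits=10, height_step_um=0.05):
    # Split the grating image into n_bits vertical bands and threshold each
    # band's mean intensity to recover one code bit per band.
    img = np.asarray(grating_image, dtype=float)
    bands = np.array_split(img, n_bits, axis=1)
    bits = [int(band.mean() > img.mean()) for band in bands]

    # Gray code -> binary integer: with more bits, each code value covers a
    # smaller height interval over a larger total unambiguous range.
    value = bits[0]
    code = value
    for b in bits[1:]:
        value ^= b
        code = (code << 1) | value
    return code * height_step_um
```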

Commodity placement position panorama generation method based on video target tracking

The invention discloses a commodity placement position panorama generation method and device based on video target tracking; the method and device achieve detection, tracking and recognition of commodity positions in a video based on deep learning. By exploiting the facts that the relative positions of static targets are invariant and that the tracking target corresponding to the same commodity appears in multiple video frames, the accuracy of commodity tracking-target position and category identification is improved; the image quality required of the commodity in each video frame is therefore reduced, and the efficiency and quality of panoramic image synthesis are improved. When a video is shot, the shooting conditions need not be strictly constrained; for example, the video can be shot from the up, down, left and right directions, which reduces the requirements on video shooting and improves the efficiency of video data acquisition before panorama synthesis. The generated image can be two-dimensional or three-dimensional as required, so panoramas with more diversified display forms can be generated.
Owner:GUANGZHOU XUANWU WIRELESS TECH CO LTD
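
One idea in the abstract above is that a commodity tracked across many frames can have its category decided by pooling per-frame recognitions, so no single frame needs high image quality. A minimal sketch of such confidence-weighted label fusion; the track IDs and detection tuples are hypothetical and the tracker itself is out of scope:

```python
from collections import Counter, defaultdict

def fuse_track_labels(frame_detections):
    """frame_detections: iterable of (track_id, category, confidence) tuples,
    one per detection across all video frames of the same scene."""
    votes = defaultdict(Counter)
    for track_id, category, confidence in frame_detections:
        # Weight each frame's vote by its detection confidence.
        votes[track_id][category] += confidence
    # Confidence-weighted majority category per tracked commodity.
    return {tid: counter.most_common(1)[0][0] for tid, counter in votes.items()}

# Example: noisy single-frame labels still resolve to "cola" for track 7.
dets = [(7, "cola", 0.9), (7, "juice", 0.4), (7, "cola", 0.8)]
print(fuse_track_labels(dets))  # {7: 'cola'}
```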

Squid freshness identification method based on color space transformation and pixel clustering

The invention discloses a squid freshness identification method based on color space transformation and pixel clustering. The method comprises the steps of: thawing squids, cleaning the thawed squids, and preparing squid samples; spreading a squid sample on a workbench, placing it in the irradiation area of an auxiliary light source, and acquiring images of the sample at different angles with shooting equipment to obtain original squid images; carrying out image preprocessing to obtain test images; carrying out color space transformation and pixel clustering on each test image, extracting its red decay area, computing the ratio of the red decay area to the total surface area of the squid, dynamically analyzing the change in the squid's meat quality, and monitoring the squid's decay rate. By applying image processing, namely color space transformation and pixel clustering analysis of the captured squid images, the method obtains the area of the deteriorated region, computes its ratio to the total surface area, and thereby achieves non-contact, lossless dynamic monitoring and identification of squid freshness under different storage durations and temperatures.
Owner:ZHEJIANG ACADEMY OF AGRICULTURE SCIENCES
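
A sketch of the color space transformation plus pixel clustering ratio computation; the choice of HSV, k=3 clusters, and the redness heuristic for picking the decay cluster are assumptions for illustration, not the patented parameters:

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def decay_ratio(squid_bgr, squid_mask, k=3):
    # Color space transformation: BGR -> HSV.
    hsv = cv2.cvtColor(squid_bgr, cv2.COLOR_BGR2HSV)
    pixels = hsv[squid_mask > 0].astype(float)  # only squid-body pixels

    # Pixel clustering of the body pixels into k color clusters.
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)

    # Pick the cluster whose mean color looks most "red" (low OpenCV hue,
    # high saturation) and treat it as the red decay area.
    means = np.array([pixels[labels == i].mean(axis=0) for i in range(k)])
    redness = means[:, 1] / (means[:, 0] + 1.0)  # saturation / hue
    decay_cluster = int(np.argmax(redness))

    # Ratio of decay-region pixels to total squid surface pixels.
    return float(np.sum(labels == decay_cluster)) / float(len(labels))
```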

Intelligent collection method for machine room equipment display panel data

The invention discloses an intelligent collection method for machine room equipment display panel data, which comprises the following steps: S1, a robot acquires images of a plurality of display panels as a training data set; S2, the training data set is input into an improved master-rcnn algorithm and trained to obtain a text detection model; S3, the robot collects an image of a display panel in real time and inputs it into the text detection model obtained in step S2, all text is automatically marked to obtain detection boxes, and the position coordinates and size information of all detection boxes within the display panel image are output; S4, the RoI image in each detection box is extracted and preprocessed, and the extracted digital skeleton images are kept as a training sample set; S5, an SVM classifier is trained on the training sample set and used to classify and recognize single digits; and S6, the digits are spliced into a character string, which is output to a client for display. Data is collected automatically, labor cost is reduced, and the operation and maintenance efficiency of the data center is improved.
Owner:HANGZHOU YOUYUN TECH CO LTD (杭州优云科技有限公司)
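
A sketch of steps S4–S6 above (ROI preprocessing, SVM digit classification, splicing into a string); the HOG features and Otsu binarization here stand in for the patented digital-skeleton extraction, and the text detection model of steps S1–S3 is out of scope:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def roi_to_feature(roi_gray):
    # Binarize the grayscale ROI (a stand-in for the digital skeleton image
    # of step S4) and describe it with HOG features on a 28x28 digit patch.
    _, binary = cv2.threshold(roi_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    patch = cv2.resize(binary, (28, 28))
    hog = cv2.HOGDescriptor((28, 28), (14, 14), (7, 7), (7, 7), 9)
    return hog.compute(patch).ravel()

def train_digit_svm(train_rois, train_labels):
    # Step S5: train an SVM classifier on the digit sample set.
    X = np.array([roi_to_feature(r) for r in train_rois])
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, train_labels)
    return clf

def read_panel(clf, digit_rois_left_to_right):
    # Step S6: classify each digit ROI and splice predictions into a string.
    feats = np.array([roi_to_feature(r) for r in digit_rois_left_to_right])
    return "".join(str(d) for d in clf.predict(feats))
```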