120 results about "Real time vision" patented technology

Crowd density analysis method based on statistical characteristics

The invention discloses a method for analyzing crowd density based on statistical characteristics. The method comprises the following steps: video input and frame extraction are carried out; mosaic image difference (MID) features are extracted from the video frame sequence, and subtle movements in the crowd are detected; the temporal uniformity of the MID feature sequence is tested; geometric correction is applied to crowds and scenes with an obvious perspective effect, yielding a contribution factor of each pixel to the crowd density on the image plane; and the crowd spatial area is weighted to obtain the crowd density. Compared with prior methods, this method requires neither a reference background nor background modeling, adapts to lighting changes from morning to evening, is highly robust, and is convenient to apply; the mathematical model is simple and effective, the spatial distribution and size of the crowd can be accurately located, and the method is flexible; the computational cost is low, making the method suitable for real-time visual monitoring. The invention can be widely applied to the monitoring and management of public places where crowds gather, such as public transportation, subways, and squares.
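The weighted-density step above can be sketched as follows. This is a minimal illustration, assuming an inverse-linear perspective weighting and a binary foreground mask; the weighting scheme, `horizon_y`, and `ref_y` are assumptions for illustration, not the patent's exact geometric correction.

```python
import numpy as np

def crowd_density(foreground, horizon_y=0.0, ref_y=None):
    # A pixel far from the camera covers more ground area than a nearby
    # one, so rows closer to the horizon get a larger contribution factor.
    # The inverse-linear weighting is an illustrative assumption.
    h, _ = foreground.shape
    if ref_y is None:
        ref_y = h - 1  # bottom row taken as the reference depth
    rows = np.arange(h, dtype=float)
    factor = (ref_y - horizon_y) / np.maximum(rows - horizon_y, 1.0)
    # Weighted sum over the detected crowd area gives the density score.
    return float((foreground.astype(float) * factor[:, None]).sum())
```

With this weighting, a foreground pixel near the top of a 4x4 mask counts three times as much as one on the bottom row.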
Owner:INST OF AUTOMATION CHINESE ACAD OF SCI

Electric power inspection robot positioning method based on multi-sensor fusion

The invention provides an electric power inspection robot positioning method based on multi-sensor fusion. The method comprises the steps of: firstly, preprocessing the data collected by a camera, an IMU and an odometer, calibrating the system, and completing the initialization of the robot system; secondly, extracting key frames and performing back-end optimization on the position, speed and angle of the robot and the gyroscope bias of the IMU by using the real-time visual poses of the key frames, so as to obtain the real-time pose of the robot; then, constructing a key frame database and calculating the similarity between the current frame image and all key frames in the database; and finally, performing closed-loop optimization on the key frames that form a closed loop, and outputting the optimized pose to complete positioning of the robot. The back-end optimization method provided by the invention improves positioning precision; and because closed-loop optimization is added to the visual positioning process, accumulated errors in the positioning process are effectively eliminated, guaranteeing accuracy during long-time operation.
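The similarity search over the key frame database can be sketched as below. This is a hedged illustration assuming a bag-of-words-style histogram descriptor per frame; the descriptor, the cosine metric, and the threshold are all assumptions, since the patent does not specify them here.

```python
import numpy as np

def loop_candidates(current, keyframes, threshold=0.9):
    # Cosine similarity between the current frame's descriptor and every
    # keyframe descriptor in the database; frames scoring above the
    # threshold become loop-closure candidates for the back end.
    q = current / np.linalg.norm(current)
    db = keyframes / np.linalg.norm(keyframes, axis=1, keepdims=True)
    scores = db @ q
    return scores, np.flatnonzero(scores >= threshold)
```

Candidates returned this way would then be verified geometrically before being added as closed-loop constraints.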
Owner:ZHENGZHOU UNIV

Self-customizing, multi-tenanted mobile system and method for digitally gathering and disseminating real-time visual intelligence on utility asset damage enabling automated priority analysis and enhanced utility outage response

A self-customizing, multi-tenanted mobile system digitally gathers and disseminates real-time visual intelligence on utility asset damage, enabling automated priority analysis and enhanced utility outage response. A preferred embodiment may comprise, for example, a communication module (102) to transfer geo-coded damage imaging and associated metadata simultaneously from multiple outage-causing damage locations to dispatchers and operations personnel in the utility control room and in the field. A mobile application (101) is installed onto the first responder's Global Positioning System (GPS) enabled mobile device (104) to send metadata to a multi-tenanted intelligent platform (MTIP) (106). MTIP (106) determines which utility tenant receives the damage report and customizes all aspects of the technical solution: the first responder mobile application (101), the central web portal (103) and the damage viewing application for utility field personnel (110). A central web portal (103), running on a control room personal computer with a Javascript-capable browser or similar environment (105), receives geo-coded damage imaging and associated metadata from the mobile application (101) via the MTIP (106), which automatically analyzes event location, relevance and severity to compute, recommend and communicate event priority. MTIP (106) further analyzes inbound images using computer vision technology and wire geometry algorithms to determine relative risk and event priority of downed wires. The multi-tenanted intelligent platform (106) stores outage-causing damage information and performs damage assessment, enabling dispatchers to respond appropriately. A preferred embodiment enables external users, specifically municipal first responders (fire, police and municipal workers), to report outage-causing damage to the electric grid and provides a simple, easily deployable and secure system.
The system then uses location, severity and role-based rules to dynamically notify appropriate utility personnel (112) via text message, email notification or within the damage viewing application on the field personnel's mobile device (110) so they are best able to respond and repair the damage. A preferred embodiment also speeds and improves communication between municipalities and utilities, and enhances the transparency of utility damage repair leading to outage resolution.
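A rule-based priority computation of the kind described could be sketched as follows. This is a hypothetical rule table: the severity labels, the boolean flags, the weights, and the 1-5 scale are all illustrative assumptions, not the MTIP's actual rules.

```python
def event_priority(severity, downed_wire_risk, near_critical_asset):
    # Hypothetical scoring: base score from reported severity, bumped by
    # the computer-vision downed-wire flag and by proximity to a critical
    # asset (e.g. a hospital feeder), capped at 5. All names and weights
    # here are assumptions for illustration.
    score = {"low": 1, "medium": 2, "high": 3}[severity]
    if downed_wire_risk:
        score += 1
    if near_critical_asset:
        score += 1
    return min(score, 5)
```

Dispatch notifications would then be routed to personnel whose role and location match the computed priority.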
Owner:BOSSANOVA SYST INC

Vision auxiliary control method and device for wire arrangement consistency of optical fiber winding machine

Inactive · CN105841716A · Effects: evenly distributed winding; solves the inability to automatically adjust the wire feed · Classifications: measurement devices; filament handling · Concepts: winding machine; reduction drive
The invention provides a vision auxiliary control device for wire arrangement consistency of an optical fiber winding machine. The vision auxiliary control device comprises a machine base, a rack and a supporting cross beam, and further comprises a gear box body which is arranged on the machine base and can move left and right; a winding main shaft is arranged on the gear box body and moves left and right along with the gear box body; a first servo motor in the gear box body drives a speed reducer to generate rotating motion so as to realize winding; a bracket capable of moving left and right as well as up and down is arranged on the supporting cross beam; a wire-shifting wheel and a camera, which are static relative to each other, are fixedly arranged on the bracket; and the lens of the camera is positioned in front of the winding wheel. The invention further provides a vision auxiliary control method for wire arrangement consistency of the optical fiber winding machine. The vision auxiliary control method and device overcome the shortcoming that conventional optical fiber winding machines cannot automatically adjust the wire-shifting feed amount: real-time vision feedback is added to adjust the feed amount of the wire-shifting wheel, so that optical fiber ring winding is more consistent.
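The vision feedback loop described above can be sketched as a proportional correction. This is a minimal illustration assuming the camera reports the fibre's lateral offset from its ideal landing position in pixels; the gain, the pixel scale, and the sign convention are assumptions, not values from the patent.

```python
def feed_correction(offset_px, px_per_mm, gain=0.5):
    # The camera measures how far the incoming fibre sits from its ideal
    # landing position on the winding wheel (in pixels). The correction
    # moves the wire-shifting wheel to oppose that error. The proportional
    # gain and pixel-to-mm scale are illustrative assumptions.
    error_mm = offset_px / px_per_mm
    return -gain * error_mm  # wire-shifting feed adjustment, in mm
```

In practice such a correction would be applied once per winding revolution, with the gain tuned to avoid overshoot.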
Owner:NORTH CHINA UNIVERSITY OF TECHNOLOGY

Real-time vision target detection method robust to ambient light change

The invention provides a real-time vision target detection method robust to ambient light changes. The method comprises: a step of acquiring a historical image sequence from a video stream; a step of initializing a pixel-level mixed multivariate Gaussian background model; a background modeling step using a spherical K-means expectation-maximization algorithm that updates the parameters of the background model online to adapt to environmental changes whenever a new image frame arrives; a foreground target detection step that applies a statistical framework determined by a color space region, evaluating the latest image frame against the background model within that framework to obtain the pixel region where a foreground target is located; and a step of suppressing detection noise through iterative Bayesian decisions, during which the target contour can be enhanced. The method accurately detects foreground target positions and contours in the visual field in real time, with a high correct detection rate and a low false alarm rate, and resists disturbance of the detection result caused by ambient light changes, making it particularly suitable for intelligent video surveillance systems.
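The online background update can be sketched as below. As a deliberate simplification, this uses one Gaussian per pixel instead of the patent's mixed multivariate model, and the learning rate `lr` and threshold `k` are assumed values.

```python
import numpy as np

def update_background(mean, var, frame, lr=0.05, k=2.5):
    # Per-pixel running Gaussian: a pixel is foreground when it deviates
    # more than k standard deviations from its background mean; background
    # pixels update the model online, so gradual lighting changes are
    # absorbed into the model rather than detected as targets.
    diff = frame - mean
    fg = np.abs(diff) > k * np.sqrt(var)
    bg = ~fg
    mean = np.where(bg, mean + lr * diff, mean)
    var = np.where(bg, (1.0 - lr) * var + lr * diff**2, var)
    return mean, var, fg
```

Because only background pixels feed the update, a slow dawn-to-dusk drift is tracked while a sudden foreground object is flagged.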
Owner:DONGHUA UNIV

Real-time visual target tracking method based on twin convolutional network and long short-term memory network

The invention relates to a real-time visual target tracking method based on a twin convolutional network and a long short-term memory network, which comprises the following steps: firstly, for a video sequence to be tracked, two consecutive frames are taken as the network's input each time; feature extraction is carried out on the two consecutive input frames through a twin convolutional network, appearance and semantic features of different levels are obtained after the convolution operations, and the high- and low-level depth features are combined through fully connected cascading; the depth features are passed to a long short-term memory network containing two LSTM units for sequence modeling, where the LSTM forget gate performs activation screening on target features at different positions in the sequence and the output gate emits the state information of the current target; and finally, a fully connected layer receiving the LSTM output produces the predicted position coordinates of the target in the current frame and updates the search area of the target for the next frame. The tracking speed is greatly improved while a degree of tracking stability and accuracy is guaranteed, so real-time tracking performance is greatly improved.
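The data flow above can be sketched in plain NumPy. This is a shape-level sketch only: a shared linear projection stands in for the twin convolutional branches, a single LSTM cell stands in for the two-unit LSTM, and the random weights, patch size (4x4), and hidden size are all assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell step; gate order in the stacked weights is
    # [input, forget, cell, output].
    n = h.size
    z = W @ x + U @ h + b
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f = sig(z[:n]), sig(z[n:2 * n])
    g, o = np.tanh(z[2 * n:3 * n]), sig(z[3 * n:])
    c = f * c + i * g
    return o * np.tanh(c), c

def track_step(prev_patch, curr_patch, p):
    # Siamese idea: the SAME projection embeds both frames' patches; the
    # concatenated embedding drives one LSTM step, and the hidden state
    # is decoded to an (x, y, w, h) box for the current frame.
    feat = np.concatenate([p["emb"] @ prev_patch.ravel(),
                           p["emb"] @ curr_patch.ravel()])
    p["h"], p["c"] = lstm_step(feat, p["h"], p["c"], p["W"], p["U"], p["b"])
    return p["out"] @ p["h"]

rng = np.random.default_rng(0)
n = 8  # hidden size (assumed)
p = {"emb": rng.standard_normal((n, 16)) * 0.1,
     "W": rng.standard_normal((4 * n, 2 * n)) * 0.1,
     "U": rng.standard_normal((4 * n, n)) * 0.1,
     "b": np.zeros(4 * n),
     "out": rng.standard_normal((4, n)) * 0.1,
     "h": np.zeros(n), "c": np.zeros(n)}
box = track_step(rng.standard_normal((4, 4)), rng.standard_normal((4, 4)), p)
```

Carrying `h` and `c` across calls is what lets the recurrent state smooth the track over the sequence.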
Owner:NANJING UNIV OF POSTS & TELECOMM

Flight simulation training system based on unmanned aerial vehicle and flight simulation method thereof

The invention discloses a flight simulation system based on an unmanned aerial vehicle in the technical field of simulated flight. The flight simulation system comprises a ground driving platform and an unmanned aerial vehicle, wherein the ground driving platform is provided with an instrument simulation module, a maneuvering simulation module, a maneuvering module, a vision module and a ground communication module; the unmanned aerial vehicle is provided with an airborne communication module, a vision acquisition module, a flight data acquisition module and an airborne flight control module; the maneuvering module, the maneuvering simulation module, the ground communication module, the airborne communication module and the airborne flight control module are connected in sequence and transmit simulated unmanned aerial vehicle maneuvering information; the vision acquisition module and the flight data acquisition module are separately connected with the airborne communication module and output real-time vision information and simulated flight information; and the airborne communication module and the ground communication module are connected in sequence with the vision module, outputting recovered video information, and with the instrument simulation module, outputting unmanned aerial vehicle flight information. The flight simulation system can improve the quality of simulated flight and meet the requirements of flight training, recreational flight and research trials.
Owner:SHANGHAI JIAO TONG UNIV

Real-time vision system oriented target compression sensing method

The invention discloses a real-time vision system oriented target compression sensing method, which comprises the following steps: image reconstruction, mixed compression sensing, efficient ViBe target detection, updating, and post-processing. The image reconstruction step comprises: according to the size of the collected image, partitioning the image into 4*4 blocks and converting the obtained image blocks into 16*1 vectors. The mixed compression sensing step comprises: constructing the mixed sampling matrix corresponding to each image block and carrying out sampling compression. The efficient ViBe target detection step comprises: for each pixel in an image block, comparing the pixel value with a sample set to judge whether the pixel belongs to the background. The updating step comprises: according to the detection result, determining the background block areas and target block areas among the image blocks, and obtaining the parameter adjustment information of the mixed sampling matrix for the next frame according to whether each pixel belongs to a background block area or a target block area. The post-processing step comprises: carrying out image optimization processing on each image block of the current frame to obtain the final target image of the current frame.
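The blocking and compression steps can be sketched as follows. A random Gaussian sampling matrix stands in for the patent's mixed (adaptive) sampling matrix, and the measurement count `m = 8` is an assumed value.

```python
import numpy as np

def block_and_compress(image, phi):
    # Split the image into 4x4 blocks, flatten each block into a
    # 16-element vector, and compress every block with the sampling
    # matrix phi (m x 16), yielding one m-dimensional measurement per
    # block. phi here is a random stand-in for the mixed sampling matrix.
    h, w = image.shape
    blocks = (image.reshape(h // 4, 4, w // 4, 4)
                   .transpose(0, 2, 1, 3)
                   .reshape(-1, 16))
    return blocks @ phi.T

rng = np.random.default_rng(0)
phi = rng.standard_normal((8, 16))          # m = 8 measurements per block
y = block_and_compress(np.arange(64.).reshape(8, 8), phi)
```

In the adaptive scheme the patent describes, blocks classified as background would get a smaller `m` in the next frame than blocks containing targets.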
Owner:SUZHOU INST OF TRADE & COMMERCE