52 results about "Active perception" patented technology

Active Perception is where an agent's behaviors are selected in order to increase the information content of the sensor data those behaviors produce in the environment in question. In other words, in order to understand the world we move around and explore it. We sample the world through our eyes, ears, nose, skin, and tongue as we explore, and we construct an understanding of the environment (Perception) on the basis of this behavior (Action). Within active perception, the interpretation of sensor data is inherently inseparable from the behaviors required to capture that data: action (behaviors) and perception (interpretation of sensor data) are tightly coupled. This has been developed most comprehensively with respect to vision (Active Vision), where an agent (animal, robot, human, camera mount) changes position in order to improve its view of a specific object and/or uses movement in order to perceive the environment (e.g. for obstacle avoidance).
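Selecting behaviors to "increase information content" is often made concrete as expected information gain: pick the action (here, a viewpoint) whose observation is expected to reduce the entropy of the agent's belief the most. A minimal sketch; the hypotheses, viewpoints, and likelihood numbers are all invented for illustration:

```python
import math

def entropy(belief):
    # Shannon entropy (in nats) of a discrete belief over hypotheses.
    return -sum(p * math.log(p) for p in belief if p > 0)

def posterior(belief, likelihoods):
    # Bayes update: P(h | o) is proportional to P(o | h) * P(h).
    unnorm = [p * l for p, l in zip(belief, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def expected_info_gain(belief, obs_model):
    # obs_model[h][o] = P(o | h) under this action (viewpoint).
    h0 = entropy(belief)
    gain = 0.0
    for o in range(len(obs_model[0])):
        p_o = sum(belief[h] * obs_model[h][o] for h in range(len(belief)))
        if p_o > 0:
            post = posterior(belief, [obs_model[h][o] for h in range(len(belief))])
            gain += p_o * (h0 - entropy(post))
    return gain

# Toy setup: two hypotheses ("mug" vs "bowl") and two candidate viewpoints.
belief = [0.5, 0.5]
views = {
    "front": [[0.9, 0.1], [0.1, 0.9]],  # discriminative view
    "top":   [[0.5, 0.5], [0.5, 0.5]],  # uninformative view
}
best = max(views, key=lambda v: expected_info_gain(belief, views[v]))
print(best)  # front
```

The uninformative view yields zero expected gain, so the agent moves to the discriminative one, which is the essence of the action/perception coupling described above.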

Article classification and recovery method based on multi-modal active perception

CN111590611A (Active). Tags: material identification, accurate classification, gripping heads, active perception
The invention relates to an article classification and recovery method based on multi-modal active perception, and belongs to the technical field of robot applications. The method comprises the following steps: first, a target detection network model for the target article is built; a grasping pose for the target article is then obtained, and a mechanical arm system is guided to actively grasp the target article in a pinching mode according to that pose. The fingertips of the mechanical arm are fitted with touch sensors, so touch signals from the surface of the target article can be obtained in real time while it is being grasped. Features are extracted from the acquired touch information and fed into a touch classifier that identifies the material of the article, completing its classification and recovery. By exploiting visual and tactile multi-modal information, the method guides a robot to grasp the target article in the most suitable pose based on the visual detection result, collects the touch information, identifies the article's material, and completes classification and recovery; various recyclable articles made of different materials can be identified automatically, giving the method high universality and practical significance.
Owner:北京具身智能科技有限公司
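The grasp-then-feel pipeline in this abstract can be sketched end to end. Every component below (the detector, the tactile readings, the features, and the threshold classifier) is a stub invented for illustration, not the patent's actual models:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    grasp_pose: tuple  # simplified (x, y, z, yaw) pinch pose

def detect(image):
    # Stand-in for the target detection network model.
    return Detection(label="bottle", grasp_pose=(0.3, 0.1, 0.05, 1.57))

def read_tactile(pose):
    # Stand-in for fingertip touch sensors sampled during the pinch grasp.
    return [0.8, 0.75, 0.82, 0.78]  # normalized pressure samples over time

def tactile_features(signal):
    # Simple statistical features of the touch signal.
    mean = sum(signal) / len(signal)
    var = sum((s - mean) ** 2 for s in signal) / len(signal)
    return mean, var

def classify_material(features):
    # Stand-in touch classifier: firm, steady contact is labeled rigid plastic.
    mean, var = features
    return "plastic" if mean > 0.5 and var < 0.01 else "paper"

det = detect(image=None)
material = classify_material(tactile_features(read_tactile(det.grasp_pose)))
print(det.label, "->", material)  # bottle -> plastic
```

The point of the structure is the data flow the abstract describes: vision picks the pose, touch collected during the grasp decides the material.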

Multi-sensor based facing sweeping method and sweeping vehicle

The embodiment of the invention relates to a multi-sensor based facing sweeping method and a sweeping vehicle. The method comprises the following steps: receiving first acquisition information uploaded by a drive sensor; fusing the first acquisition information to obtain first fused information, and judging whether the current sweeping area is a facing area according to the first fused information; when the current sweeping area is not a facing area, generating a first sweeping path according to the first fused information and having the sweeping vehicle sweep along that path; when the current sweeping area is a facing area, receiving second acquisition information uploaded by a driven sensor, fusing the first acquisition information from the drive sensor with the second acquisition information from the driven sensor to obtain second fused information, generating a second sweeping path according to the second fused information, and having the sweeping vehicle sweep along the second path. The method adopts a scheme that fuses active perception and passive perception, so the sweeping vehicle can sweep the area more comprehensively.
Owner:BEIJING ZHIXINGZHE TECH CO LTD
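The two-stage logic (drive sensor alone for ordinary areas, drive plus driven sensor for a facing area) reduces to a small decision flow. A sketch under the assumption, invented here, that "fusion" is an average of scalar density estimates and the facing test is a threshold:

```python
def fuse(readings):
    # Placeholder fusion: average the obstacle/litter-density estimates.
    return sum(readings) / len(readings)

def plan_sweep(drive_readings, read_driven_sensor, facing_threshold=0.5):
    first_fused = fuse(drive_readings)
    if first_fused < facing_threshold:        # not a facing area:
        return ("first_path", first_fused)    # plan from the drive sensor alone
    driven = read_driven_sensor()             # facing area: poll the driven sensor
    second_fused = fuse(drive_readings + driven)
    return ("second_path", second_fused)

path, score = plan_sweep([0.2, 0.3], read_driven_sensor=lambda: [0.9])
print(path)  # first_path
```

Polling the driven sensor lazily, only when the fused drive-sensor data flags a facing area, mirrors the active/passive split the abstract claims.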

Auxiliary monitoring device in flight control system

The invention relates to the field of flight control systems, in particular to an auxiliary monitoring device in a flight control system. The auxiliary monitoring device comprises a core processing module; the core processing module comprises a database unit, a processing unit and a sending unit. Rule data describing violations of flight regulations and unsafe flight control data are stored in the database unit. The processing unit acquires the real-time flight route and attitude information of an aircraft and compares it against the stored rule data; once the real-time route and attitude information matches the rule-violation or unsafe-control data, the matching information is transmitted to the sending unit, and the sending unit transmits it to ground air traffic control. Through the effective combination of the aircraft's flight status information with the system's actively perceived information, the flight is monitored in real time, effective supervision and alarms are provided for flight crews, and flight accidents caused by human operation errors are avoided.
Owner:BEIJING AERONAUTIC SCI & TECH RES INST OF COMAC +1
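The compare-and-alert loop above is straightforward to sketch. The rule records, field names, and limits below are invented for illustration; the patent does not specify its rule format:

```python
# Hypothetical rule database: each record names a telemetry field and a limit.
UNSAFE_RULES = [
    {"name": "max_bank_angle", "field": "bank_deg", "limit": 35.0},
    {"name": "min_altitude",   "field": "alt_m",    "limit": 300.0, "below": True},
]

def check(telemetry):
    # Compare real-time route/attitude values against the stored rule data.
    alerts = []
    for rule in UNSAFE_RULES:
        value = telemetry[rule["field"]]
        violated = value < rule["limit"] if rule.get("below") else value > rule["limit"]
        if violated:
            alerts.append(rule["name"])
    return alerts

def send_to_ground(alerts):
    # Stand-in for the sending unit's downlink to ground air traffic control.
    return {"downlink": alerts}

msg = send_to_ground(check({"bank_deg": 42.0, "alt_m": 900.0}))
print(msg)  # {'downlink': ['max_bank_angle']}
```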

A T-shaped active sensing system and method applied to long material stacking batch number recognition

The invention relates to a T-shaped active perception system and method applied to long-product stacking batch number recognition, and belongs to the technical field of storage and stacking information management for long products on the steel rolling production lines of iron and steel enterprises. The system comprises an RFID tag card, a first RFID non-contact card reader, a motion lifting device, a lifting controller, a wireless router, a first wireless network card, a stacking data management and control terminal, a second wireless network card, a third wireless network card, a second RFID non-contact card reader, a fourth wireless network card and the like. When bundles of long products are hoisted into or out of the warehouse by a travelling crane, the batch information of each bundle is updated in real time. Accurate management of the stacking positions of long-product bundles is thereby realized, along with accurate statistics on warehouse-in and warehouse-out conditions and long-product inventory information. The errors previously caused by ever-changing production site conditions under manual statistics are eliminated, human resource costs are reduced, and the automation and intelligent management level of the long-product finished goods warehouse is improved.
Owner:YUNNAN KUNGANG ELECTRONICS INFORMATION TECH CO LTD

Retail trade system of geographical position-based AR (Augmented Reality) platform

The invention relates to a retail trade system for a geographical position-based AR (Augmented Reality) platform. The system comprises a customer mobile phone terminal (1) and an LBS (Location Based Service) positioning server (2) connected to the customer mobile phone terminal in a wireless manner. A GPS (Global Positioning System) positioning module inside the customer mobile phone terminal (1) sends position information to the LBS positioning server (2); the LBS positioning server (2) sends AR promotional information to an APP on the mobile phone terminal according to the position; the APP meanwhile searches for the dedicated SSID (Service Set Identifier) channel of a portable wifi probe in each surrounding physical store. Once the position is matched, the customer is determined to have entered the store, which serves as evidence of the AR campaign's performance, proving that customers were attracted by the AR promotional information. The method is simple and easy to operate, and helps physical stores conduct interactive promotion and communication with nearby customers directly, without intermediaries; meanwhile, customers can actively learn about a store's promotions on site, stimulating them to enter the store and increasing the sales conversion rate.
Owner:时趣信息科技(上海)有限公司

Camera active sensing method and system

The invention designs an active sensing method and system for video monitoring. The invention provides a high-efficiency cooperative control method for master and slave cameras. A heuristic algorithm is adopted to automatically adjust camera parameters to establish a master-slave camera coordinate mapping model, and a geometric model between a slave camera and a scene is constructed, so that the slave camera can quickly position a target. The invention provides a pedestrian attribute identification method based on target state perception. Pedestrian state judgment is carried out according to the human body key points to guide validity judgment of a pedestrian target attribute recognition result. On the basis, a pedestrian target attribute recognition result is updated based on the multi-frame image sequence, and then a complete pedestrian attribute recognition result is obtained. The invention provides an automatic data labeling method based on master-slave collaboration. The slave camera is utilized to confirm the target detected by the master camera, the master camera periodically generates the background image, the confirmed target and the generated background image are fused to form new annotation data, then the target detection model of the master camera is adjusted and optimized, and the adaptive capacity of the target detection method to a new scene is improved.
Owner:BEIHANG UNIV

Method and device for actively constructing environment scene map by intelligent agent and exploration method

The invention provides a method for an intelligent agent to actively construct an environment scene map based on visual information, an environment exploration method, and intelligent equipment. The method comprises the following steps: collecting the environment scene images and the corresponding environment scene map data set required by the training model; collecting the environment exploration paths of the intelligent agent required by the training model; training an active exploration model with the environment scene images, the corresponding environment scene map data set, and the collected exploration paths; and generating actions with the trained active exploration model, exploring the environment with the generated actions, obtaining 3D semantic point cloud data during exploration, and constructing the environment scene map from the 3D semantic point cloud data. The method overcomes the limitation that traditional computer vision tasks can only passively perceive the environment: by exploiting the agent's capacity for active exploration, it combines perception and motion to achieve active perception, active exploration of the environment, and active construction of the scene map, and it can be applied to various vision tasks.
Owner:TSINGHUA UNIV
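The act-observe-map loop of the final step can be sketched schematically. The policy below (visit the least-seen frontier) and the one-synthetic-point-per-step map are stand-ins invented for illustration, not the trained model or real depth data:

```python
def policy(observation):
    # Stand-in for the trained active exploration model: visit the least-seen frontier.
    return min(observation["frontiers"], key=observation["visits"].get)

def explore(steps=3):
    visits = {"north": 0, "east": 0, "south": 0}
    semantic_map = []                          # accumulated 3D semantic points
    for _ in range(steps):
        action = policy({"frontiers": list(visits), "visits": visits})
        visits[action] += 1
        # Each step would yield labeled depth points; one synthetic point here.
        semantic_map.append({"xyz": (visits[action], 0, 0), "label": action})
    return semantic_map

semantic_map = explore()
print([p["label"] for p in semantic_map])  # ['north', 'east', 'south']
```

The map is a by-product of the action stream, which is the contrast with passive perception the abstract emphasizes.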

Man-machine coupled longitudinal collision avoidance control method

The invention relates to a man-machine coupled longitudinal collision avoidance control method. The method involves a brake-by-wire module, an active sensing module, and an anthropomorphic control module; the anthropomorphic control module comprises a driver model and a deep neural network anthropomorphic decision-making controller. The active sensing module obtains the real-time traffic condition and feeds it to the driver model, which outputs a desired brake deceleration. From basic driver experimental data and the active sensing module's output, a large amount of training data is generated using generative adversarial network technology; the deep neural network is trained on this data to produce a brake collision avoidance controller, whose output is transmitted to the brake-by-wire module to complete braking collision avoidance. The method effectively addresses the narrow range of application and harsh control behavior of existing vehicle collision avoidance controllers, and improves the adaptability of the braking collision avoidance system and the comfort of driver and passengers.
Owner:SOUTHEAST UNIV

Visual and auditory collaborative power equipment inspection system and method

CN114093145A (Pending). Tags: improve active perception of status, enhance attribute synergy cognition, measurement devices, alarms, auditory sense, active perception
The invention discloses a visual and auditory collaborative power equipment inspection system and method. The inspection system comprises a multi-source data acquisition module, an intelligent data analysis module and an audio-visual collaborative monitoring module. The multi-source data acquisition module acquires visual and auditory multi-source data and sends it to the intelligent data analysis module. The intelligent data analysis module receives the acquired data, processes and analyzes it as a whole through data fusion and intelligent analysis, performs equipment fault identification and defect early warning on the multi-source data, and sends the results to the audio-visual collaborative monitoring module, which displays them. By fully exploiting the multi-source data from visual, auditory and other sensors, the system comprehensively improves active perception of equipment state and collaborative cognition of equipment attributes, and improves the efficiency and accuracy of power equipment inspection.
Owner:XUJI GRP
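One common way to combine per-modality evidence like this is weighted late fusion of fault confidences. The scores, weights, and alarm threshold below are made up for illustration; the abstract does not specify the fusion or diagnosis models:

```python
def fuse_scores(visual_score, audio_score, w_visual=0.6, w_audio=0.4):
    # Weighted late fusion of per-modality fault confidences in [0, 1].
    return w_visual * visual_score + w_audio * audio_score

def diagnose(visual_score, audio_score, alarm_threshold=0.5):
    # Fuse the two modalities, then raise a defect early warning if needed.
    score = fuse_scores(visual_score, audio_score)
    status = "fault_warning" if score >= alarm_threshold else "normal"
    return status, round(score, 2)

print(diagnose(visual_score=0.3, audio_score=0.9))  # ('fault_warning', 0.54)
```

Note how an acoustic anomaly can trigger a warning even when the visual channel alone looks benign, which is the cross-modal benefit the abstract claims.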

A hydropower station simulation method and simulation system based on active perception

The invention belongs to the technical field of hydropower station simulation control, and discloses a hydropower station simulation method and simulation system based on active perception. Hydropower station equipment is used as the organizational unit for decoupling the simulation services, and the intelligent perception and active service of hydropower equipment are used as the main line driving the construction of the models and the simulation process. By rationally integrating the service composition and interaction ideas of SOA and EDA, and seamlessly embedding them into the modeling object of the hydropower environment, namely the simulation model of intelligent hydropower equipment, on-demand distribution of device perception information and event-driven service collaboration are realized, completing the construction of the entire hydropower station simulation operating environment. The modeling method uses equipment as the basic unit of system decoupling and model organization, which better matches the operating principle of a real hydropower station, gives a clearer and more natural division of responsibilities, and is more conducive to fine-grained modeling, loose coupling, reusability and scalability, as well as mass customization of hydropower station simulation systems.
Owner:CHINA THREE GORGES UNIV
