2495 results about "Information fusion" patented technology

Simultaneous localization and mapping (SLAM) method for unmanned aerial vehicle based on mixed vision odometers and multi-scale map

The invention discloses a simultaneous localization and mapping (SLAM) method for an unmanned aerial vehicle based on mixed visual odometers and a multi-scale map, and belongs to the technical field of autonomous navigation of unmanned aerial vehicles. In the SLAM method, a downward-looking monocular camera, a forward-looking binocular camera and an airborne computer are carried on the unmanned aerial vehicle platform; the monocular camera drives a visual odometer based on the direct method, and the binocular camera drives a visual odometer based on the feature point method; the mixed visual odometer fuses the outputs of the two odometers to construct a local map for positioning and obtain the real-time pose of the unmanned aerial vehicle; the pose is fed back to the flight control system to control the position of the unmanned aerial vehicle; and the airborne computer transmits the real-time pose and the collected images to a ground station, which plans the flight path in real time according to the constructed global map and sends waypoint information to the unmanned aerial vehicle, so that autonomous flight is achieved. The method achieves real-time pose estimation and environmental perception of the unmanned aerial vehicle in GPS-denied environments and greatly raises the intelligence level of the unmanned aerial vehicle.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
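The patent states that the mixed visual odometer fuses the outputs of the direct-method monocular odometer and the feature-based stereo odometer, but does not specify the fusion rule. Below is a minimal sketch of one common choice, inverse-covariance weighting of the two position estimates; the function fuse_vo_poses and the covariance values are illustrative assumptions, not the patented algorithm.

```python
import numpy as np

def fuse_vo_poses(p_mono, cov_mono, p_stereo, cov_stereo):
    """Fuse position estimates from the direct-method monocular VO and the
    feature-based stereo VO with inverse-covariance (information) weighting.
    The weighting rule is an assumption; the patent only states that the two
    odometer outputs are fused to build the local map."""
    info_mono = np.linalg.inv(cov_mono)
    info_stereo = np.linalg.inv(cov_stereo)
    cov_fused = np.linalg.inv(info_mono + info_stereo)
    p_fused = cov_fused @ (info_mono @ p_mono + info_stereo @ p_stereo)
    return p_fused, cov_fused

# Example: 3-D position from each odometer with (assumed) diagonal covariances.
p1, c1 = np.array([1.00, 2.00, 5.00]), np.diag([0.04, 0.04, 0.09])
p2, c2 = np.array([1.05, 1.95, 5.10]), np.diag([0.01, 0.01, 0.04])
pose, cov = fuse_vo_poses(p1, c1, p2, c2)
```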

Multimodal input-based interactive method and device

The invention aims to provide an intelligent glasses device and method for interaction based on multimodal input that bring the interaction closer to users' natural interaction. The method comprises the steps of obtaining multiple pieces of input information from at least one of multiple input modules; performing comprehensive logic analysis on the input information to generate an operation command, wherein the operation command has operation elements that at least include an operation object, an operation action and an operation parameter; and executing the corresponding operation on the operation object based on the operation command. The device and method obtain input information from multiple channels through the input modules, subject it to comprehensive logic analysis, determine the operation object, the operation action and the operation parameter to generate the operation command, and execute the corresponding operation based on that command, so that the information is fused in real time, the users' interaction becomes closer to a natural-language interaction mode, and the interactive experience of the users is improved.
Owner:HISCENE INFORMATION TECH CO LTD
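As a rough illustration of the operation-command structure the abstract describes (operation object, action and parameter derived from several input channels), here is a hedged sketch; the OperationCommand dataclass, build_command and the channel-to-element mapping are hypothetical, not the patented logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OperationCommand:
    operation_object: str                        # what to act on, e.g. "photo_gallery"
    operation_action: str                        # what to do, e.g. "open"
    operation_parameter: Optional[str] = None    # how, e.g. a gesture-derived option

def build_command(voice_text: str, gaze_target: Optional[str],
                  gesture: Optional[str]) -> Optional[OperationCommand]:
    """Toy comprehensive-logic analysis: the voice channel supplies the action,
    gaze supplies the object, and a gesture supplies an optional parameter."""
    action = voice_text.split()[0].lower() if voice_text else None
    obj = gaze_target or (voice_text.split()[-1] if voice_text else None)
    if not (action and obj):
        return None  # incomplete input: wait for more modalities
    return OperationCommand(obj, action, gesture)

cmd = build_command("open gallery", gaze_target="photo_gallery", gesture="swipe_left")
```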

Multifunctional V2X intelligent roadside base station system

The invention claims a multifunctional V2X intelligent roadside base station system. The system comprises roadside sensing equipment, an MEC server, a high-precision positioning service module, a multi-source intelligent roadside sensing information fusion module and a 5G/LTE-V communication module. An intelligent roadside device integrating C-V2X communication, environmental perception, target recognition, high-precision positioning and the like is designed, which solves the problem that multi-device information fusion and integration are inconvenient in intelligent transportation. The system designs a C-V2X intelligent roadside system architecture and a target-layer multi-source information fusion method. Combined with roadside multi-source cooperative environment sensing, real-time traffic scheduling of the intersection is realized by the traffic scheduling module in the MEC server, communication and high-precision positioning services are provided for vehicle driving, and the fused target information is finally broadcast to other vehicles and pedestrians through a C-V2X RSU (LTE-V2X/5G V2X and the like) in an application-layer standard data format, so that driving and traffic safety are improved.
Owner:CHONGQING UNIV OF POSTS & TELECOMM
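The abstract mentions target-layer multi-source fusion and broadcasting the fused targets over a C-V2X RSU. The sketch below shows one plausible shape for that step, assuming simple distance gating between radar and camera detections; fuse_targets, the gate value, the message fields and the rsu_send placeholder are assumptions rather than the patented design.

```python
import json
import math

def fuse_targets(radar_targets, camera_targets, gate_m=2.0):
    """Target-level fusion: associate each radar detection with the nearest
    camera detection inside a distance gate and merge their attributes."""
    fused = []
    for r in radar_targets:
        match = min(camera_targets,
                    key=lambda c: math.dist((r["x"], r["y"]), (c["x"], c["y"])),
                    default=None)
        if match and math.dist((r["x"], r["y"]), (match["x"], match["y"])) < gate_m:
            fused.append({**match, **r, "source": "radar+camera"})
        else:
            fused.append({**r, "source": "radar"})
    return fused

def broadcast(rsu_send, fused):
    # rsu_send stands in for the C-V2X RSU transmit API (assumed interface).
    rsu_send(json.dumps({"msg_type": "roadside_targets", "targets": fused}).encode())

radar = [{"x": 10.2, "y": 3.1, "speed": 8.4}]
camera = [{"x": 10.6, "y": 3.0, "class": "car"}]
fused = fuse_targets(radar, camera)
```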

Power transmission line fault reason identification method based on high and low frequency wavelet feature association

The invention discloses a power transmission line fault reason identification method based on high and low frequency wavelet feature association, which comprises the following steps: S1, extracting fault phase current samples of N fault types and building a sample database; S2, performing high-frequency and low-frequency data sampling on a fault type T from the sample database and performing wavelet decomposition on each; S3, performing feature extraction on the high-frequency and low-frequency wavelet coefficients; S4, building an association relationship model of the fault type T; S5, determining the association relationship models of all N fault types; S6, performing wavelet decomposition on sample data whose fault reason is to be tested and extracting a feature vector; and S7, substituting the feature vector of the sample whose fault reason is to be tested into the association relationship models of the N fault types and judging the fault type of the sample data. The fault phase current is analyzed by combining wavelet transform theory with information entropy, which not only effectively analyzes abruptly changing signals but also achieves information fusion and improves identification accuracy.
Owner:CHINA SOUTHERN POWER GRID COMPANY +1
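One way to combine wavelet decomposition with information entropy, as the abstract describes, is to compute the wavelet energy entropy of each sub-band and use it as a feature. A minimal sketch using PyWavelets follows; the wavelet family, decomposition level and feature layout are assumptions, not the patent's actual choices.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_entropy_features(signal, wavelet="db4", level=4):
    """Decompose a fault-phase current signal and compute per-sub-band energy
    and wavelet energy entropy, returning them as one feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    entropies = -p * np.log2(p + 1e-12)
    return np.concatenate([energies, entropies])

# Synthetic 50 Hz current with noise, sampled at 10 kHz for 0.1 s.
t = np.arange(0, 0.1, 1 / 10_000)
current = np.sin(2 * np.pi * 50 * t) + 0.3 * np.random.randn(t.size)
features = wavelet_entropy_features(current)
```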

Composite machine tool digital twinning monitoring system

The invention discloses a composite machine tool digital twinning monitoring system, which relates to the technical field of digital twins. The system creates a digital twin monitoring system using a machine tool information module, realizes multi-platform, multi-data and multi-interface communication based on an OPC UA transmission interface, and constructs a man-machine interaction module. A multi-field data acquisition module acquires multi-source heterogeneous data in real time with different types of sensors, processes the acquired data based on information fusion technology to form twin data, and forwards the twin data to a modeling calculation module; the modeling calculation module forms a digital twin of the composite machine tool, driven by the twin data in combination with constraint, prediction and decision rules; and a personalized decision module monitors and manages the physical machine tool equipment in real time by reconstructing and optimizing the machine tool monitoring twin model in real time. The system simplifies the monitoring of composite machine tool operation, improves the monitoring precision of the machine tool system, and realizes active predictive maintenance of composite machine tool operation.
Owner:GUILIN UNIV OF ELECTRONIC TECH
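A loose sketch of the information-fusion step that turns multi-source sensor samples into one twin-data record is shown below; the TwinData fields and the averaging rule are illustrative assumptions and do not reflect the patent's actual data model.

```python
from dataclasses import dataclass
from statistics import mean
import time

@dataclass
class TwinData:
    """Fused snapshot forwarded to the modeling/calculation module (assumed fields)."""
    timestamp: float
    spindle_speed_rpm: float
    spindle_temp_c: float
    vibration_rms: float

def fuse_samples(speed_samples, temp_samples, vib_samples) -> TwinData:
    # Minimal fusion step: reduce the latest window of each heterogeneous
    # sensor stream to one twin-data record.
    return TwinData(
        timestamp=time.time(),
        spindle_speed_rpm=mean(speed_samples),
        spindle_temp_c=mean(temp_samples),
        vibration_rms=mean(v * v for v in vib_samples) ** 0.5,
    )

twin = fuse_samples([8020, 7990, 8010], [41.2, 41.5], [0.12, 0.15, 0.11])
```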

Human action recognition method based on two-channel infrared information fusion

Inactive · CN102799856A · Guaranteed correct recognition rate · Character and pattern recognition · Frequency spectrum · Principal component analysis
The invention discloses a human action recognition method based on two-channel infrared information fusion, comprising the following steps: collecting a human motion video image through an infrared camera and collecting a human motion voltage signal through a pyroelectric infrared sensor; performing feature extraction on the collected video image and voltage signal respectively, obtaining human contour energy from the video image and a spectral signature from the voltage signal; performing principal component analysis on the human contour energy and the spectral signature respectively; fusing the principal component analysis results at the feature layer; and classifying and identifying the fused features with a support vector machine combined with the corresponding data in the human infrared action database. The method makes full use of the multi-level human action information in the infrared image and fuses the human direction information in the output of the pyroelectric infrared sensor, so that different human actions in different directions can be classified and identified and an accurate action recognition rate is ensured.
Owner:TIANJIN UNIV
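The pipeline described here (per-channel principal component analysis, feature-layer fusion, SVM classification) can be outlined with scikit-learn as follows; the feature dimensions, component counts and random stand-in data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Stand-in features: rows are samples; columns are hypothetical contour-energy
# features from the infrared video and spectral features from the pyroelectric
# sensor. Labels are action classes. All dimensions are assumptions.
rng = np.random.default_rng(0)
contour_energy = rng.random((60, 100))
spectrum = rng.random((60, 40))
labels = rng.integers(0, 4, 60)

# Per-channel PCA, then feature-layer fusion by concatenation, then an SVM.
f_video = PCA(n_components=10).fit_transform(contour_energy)
f_pyro = PCA(n_components=5).fit_transform(spectrum)
fused = np.hstack([f_video, f_pyro])
clf = SVC(kernel="rbf").fit(fused, labels)
pred = clf.predict(fused[:5])
```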

Air-ground cooperative intelligent inspection robot and inspection method

Pending · CN111300372A · Guaranteed all-round · Reduce security operating costs · Manipulator · The Internet · Control engineering
The invention relates to an air-ground cooperative intelligent inspection robot. The robot comprises a robot platform and an unmanned aerial vehicle; the robot platform comprises a vehicle body, wheels, a driving assembly, a mechanical arm, an environment sensing assembly, a communicator, a robot controller and a power supply assembly, and the unmanned aerial vehicle is in communication connection with a base station through the communicator. The inspection method comprises the following steps: air-ground cooperative multi-robot positioning and mapping, specifically sensing and positioning calculation, map creation, and multi-information fusion positioning; and air-ground cooperative tracking and control, specifically unmanned aerial vehicle flight control design, robot platform trajectory tracking control, and unmanned aerial vehicle autonomous landing control. The robot can execute given navigation and inspection tasks in an all-around, all-weather manner; it applies Internet of Things, artificial intelligence and related technologies, integrates environment sensing, dynamic decision making, behavior control and alarm devices, and has autonomous sensing, walking, protection and interactive communication capabilities, so that basic, repetitive and dangerous security work can be completed and security operation costs are reduced.
Owner:TONGJI ARTIFICIAL INTELLIGENCE RES INST SUZHOU CO LTD
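The abstract's multi-information fusion positioning could, for example, blend the ground robot's odometry with a localisation fix observed by the UAV. The snippet below is only a complementary-filter sketch under that assumption; fuse_position and the blending factor are hypothetical.

```python
def fuse_position(odom_xy, uav_fix_xy, alpha=0.8):
    """Complementary-filter fusion: trust the ground robot's odometry for
    short-term motion and the UAV's overhead localisation fix for drift
    correction. The blending factor and two-source setup are assumptions."""
    return tuple(alpha * o + (1.0 - alpha) * u
                 for o, u in zip(odom_xy, uav_fix_xy))

# Odometry says (12.4, 3.1); the UAV's map-based fix says (12.9, 3.0).
fused = fuse_position((12.4, 3.1), (12.9, 3.0))
```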

Attack occurrence confidence-based network security situation assessment method and system

Inactive · CN108306894A · Accurately reflect the security situation · Timely response · Data switching networks · Stream data · Network attack
The invention belongs to the technical field characterized by protocols and discloses an attack occurrence confidence-based network security situation assessment method and system. In the method and system, a machine learning technique is adopted to analyze network stream data and calculate the probability that network streams belong to attack streams; the D-S evidence theory is used to fuse the information of multi-step attacks and obtain the confidence that an attack has occurred; and the network security situation is calculated by integrating situational factors on the basis of security vulnerability information, network service information and host protection strategies, so that the accuracy of the assessment is effectively improved. Because the confidence information of the detection equipment is added to the assessment system, the influence of false negatives and false positives can be effectively reduced. An ensemble learning method is adopted, so the accuracy of the confidence calculation can be improved. A network attack is regarded as a dynamic process, and the information of the multi-step attacks is merged. Because information fusion technology is adopted, network environment characteristics such as vulnerabilities, service information and protection strategies are comprehensively considered.
Owner:XIDIAN UNIV
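Fusing the evidence of multi-step attacks with D-S evidence theory amounts to combining mass functions with Dempster's rule. A self-contained sketch over the frame {attack, normal} follows; the detector names and mass values are illustrative assumptions.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal elements
    are frozensets over the frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, N = frozenset({"attack"}), frozenset({"normal"})
theta = A | N  # total ignorance
# Masses from two attack-step detectors (values are assumptions).
m_ids = {A: 0.6, N: 0.1, theta: 0.3}
m_flow = {A: 0.7, N: 0.2, theta: 0.1}
fused = dempster_combine(m_ids, m_flow)  # fused confidence that an attack occurred
```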

Method for constructing semantic map on line by utilizing fusion of laser radar and visual sensor

The invention relates to a method for constructing a semantic map online by fusing a laser radar and a visual sensor. The method comprises the following steps: acquiring an initialized grid map of the current vehicle, and acquiring ranging data from the laser radar and image data from the visual sensor; performing target detection on the ranging data of the laser radar to obtain multi-attribute information of a plurality of first-class detection targets; performing feature extraction and matching on the image data of the visual sensor to obtain multi-attribute information of a plurality of second-class detection targets; and fusing the multi-attribute information of the first-class and second-class detection targets, importing the fused multi-attribute information of the detection targets into a Redis database, generating a high-dimensional grid map serving as the semantic map, and storing the multi-attribute information of each detection target in the high-dimensional grid map in the form of dynamic database tables. The method can represent the multi-dimensional semantic information of the dynamic and static environments around the vehicle online in real time.
Owner:廊坊和易生活网络科技股份有限公司
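A minimal sketch of writing a fused detection into Redis so the grid map can be queried online is given below; the key layout, field names and the use of a per-cell hash (plus a local Redis server) are assumptions, not the patent's actual schema.

```python
import json
import redis  # redis-py

def store_fused_target(r: "redis.Redis", cell_id: str, target: dict) -> None:
    """Write one fused detection into a per-grid-cell hash so the semantic
    (high-dimensional grid) map can be queried online."""
    key = f"grid:{cell_id}"
    r.hset(key, target["id"], json.dumps(target))

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
fused_target = {
    "id": "obj_17", "class": "pedestrian", "x": 4.2, "y": -1.3,
    "velocity": 1.1, "source": "lidar+camera", "confidence": 0.92,
}
store_fused_target(r, cell_id="128_096", target=fused_target)
```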

Data fusion dynamic vehicle detection method based on millimeter wave radar and machine vision

The invention discloses a data fusion dynamic vehicle detection method based on millimeter-wave radar and machine vision, comprising a millimeter-wave radar data processing module, a vision image processing module and a data fusion processing module. Firstly, through joint sensor calibration, a projection matrix between the millimeter-wave radar and the visual sensor is obtained and the conversion relation between the radar coordinate system and the image coordinate system is established; the acquired millimeter-wave radar data are preprocessed, effective targets are screened, radar detection targets are projected onto the visual image through the conversion relation, and target regions of interest are obtained according to the positions of the projected targets; target information fusion is performed according to the overlap between the regions of interest obtained from the image processing algorithm and the regions of interest detected by the millimeter-wave radar; and finally, whether there is a vehicle in the fused region of interest is verified based on the image processing algorithm. The method effectively detects vehicles ahead, and the system has good environmental adaptability and stability.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
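Two building blocks of the described pipeline, projecting a radar target into the image with the calibration matrix and checking the overlap of regions of interest, can be sketched as follows; the projection matrix values and any overlap threshold are assumptions.

```python
import numpy as np

def project_radar_to_image(P, radar_xyz):
    """Project a radar target (metres, radar frame) into pixel coordinates
    using the 3x4 projection matrix P from joint calibration."""
    u, v, w = P @ np.append(radar_xyz, 1.0)
    return np.array([u / w, v / w])

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) regions of interest,
    used as the overlap condition that gates target-level fusion."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Example projection matrix (assumed values from a hypothetical calibration).
P = np.array([[700.0, 0.0, 320.0, 0.1],
              [0.0, 700.0, 240.0, -0.2],
              [0.0, 0.0, 1.0, 0.0]])
pixel = project_radar_to_image(P, np.array([2.0, 0.5, 20.0]))
```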

Online nondestructive testing (NDT) method and device for comprehensive internal/external qualities of fruits

The invention discloses an online nondestructive testing (NDT) method and device for the comprehensive internal and external qualities of fruits. In the method, a comprehensive quality evaluation model for the fruits is first established by a detection device consisting of a conveying system, a machine vision system, a near-infrared spectroscopy (NIR) system and a grading system, and the fruits move online at a uniform speed on the conveying system; the machine vision system acquires image information of the fruits and extracts their external characteristics; the NIR system acquires spectral information of the fruits; the grading system analyzes the spectral information with a pre-established mathematical model and extracts the internal characteristics of the fruits; and the grading system fuses the internal and external characteristics of the fruits with a pre-established information fusion model to obtain the comprehensive quality grade of the fruits. The method and device can detect the internal and external characteristics of the fruits simultaneously; a DSP high-speed image processing system handles complex image information, which greatly improves the real-time performance of the system; and information fusion technology is used to carry out online real-time detection of the comprehensive quality of the fruits.
Owner:扬州福尔喜果蔬汁机械有限公司
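The fusion of internal (NIR) and external (machine-vision) characteristics into a comprehensive grade might look like the weighted scoring sketch below; the weights, thresholds and score names are illustrative assumptions standing in for the patent's pre-established fusion model.

```python
def grade_fruit(external, internal, w_ext=0.4, w_int=0.6):
    """Fuse external (machine-vision) and internal (NIR) quality scores into a
    comprehensive grade. All weights and thresholds are assumptions."""
    score = (w_ext * (0.5 * external["size_score"] + 0.5 * external["color_score"])
             + w_int * (0.7 * internal["sugar_score"] + 0.3 * internal["acidity_score"]))
    if score >= 0.8:
        return "Grade A"
    if score >= 0.6:
        return "Grade B"
    return "Grade C"

grade = grade_fruit({"size_score": 0.9, "color_score": 0.8},
                    {"sugar_score": 0.85, "acidity_score": 0.7})
```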

System and method for integrally and intelligently controlling water and fertilizer in field based on multi-source information fusion

The invention relates to a system and method for integrated intelligent control of water and fertilizer in a field based on multi-source information fusion. The system comprises a weather forecast inquiry and receiving subsystem, a real-time weather data collection and control subsystem, a cloud computing platform, a central control unit, an irrigation and fertilization control subsystem, an irrigation and fertilization state monitoring system and an online fault detection system. The system is an automatic control system that integrates weather forecast inquiry, crop cloud computing platform inquiry, real-time farmland weather collection, rapid dissolution of solid fertilizer, real-time monitoring and regulation of the mother solution, irrigation and fertilization state monitoring, online fault detection, irrigation and fertilization, and remote intelligent control. Factors such as the weather forecast, the cloud computing platform, real-time weather collection and the growth vigor of the crops during the growth process are comprehensively considered and corresponding irrigation and fertilization decisions are made, so that precise irrigation and precise fertilization are realized; the growth vigor of the crops can be described in real time, and irrigation and fertilization can be performed according to it.
Owner:SHANDONG AGRICULTURAL UNIVERSITY
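As a rough illustration of how weather-forecast, real-time field and crop-growth information could be combined into an irrigation decision, here is a hedged sketch; the linear rule, coefficients and parameter names are assumptions, not the patented decision model.

```python
def irrigation_decision(rain_prob, soil_moisture, target_moisture, growth_stage_coeff):
    """Combine the weather forecast, real-time field moisture and crop growth
    stage into an irrigation amount in millimetres (assumed linear rule)."""
    if rain_prob > 0.7:
        return 0.0  # rain expected: skip irrigation
    deficit = max(0.0, target_moisture - soil_moisture)
    return round(deficit * growth_stage_coeff, 1)

amount_mm = irrigation_decision(rain_prob=0.2, soil_moisture=0.18,
                                target_moisture=0.30, growth_stage_coeff=90.0)
```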

Video super-resolution reconstruction method based on multi-frame fusion optical flow

The invention discloses a video super-resolution reconstruction method based on multi-frame fused optical flow. The method comprises the steps of collecting a data set, constructing a motion compensation network, and constructing a super-resolution reconstruction network. In the multi-frame fusion optical flow network, the intra-frame spatial correlation of the multiple input frames is fully utilized and lost details are compensated; the fused optical flow is used for motion compensation so that the compensated frames are close to the learning target. In the super-resolution reconstruction network, a three-dimensional scale feature extraction layer and a space-time residual module extract image features of the compensated frames, and sub-pixel convolution is used to obtain high-resolution video frames. The multi-frame fusion optical flow network and the video super-resolution reconstruction network are trained end to end at the same time. The space-time information between the acquired video frames expresses the fused features of the video frame information, and high-resolution video frames with good quality are reconstructed. The method can be applied to the technical fields of satellite imagery, video monitoring, medical imaging, military science and technology and the like.
Owner:SHAANXI NORMAL UNIV
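The sub-pixel convolution stage mentioned in the abstract can be sketched in PyTorch as a convolution that expands channels followed by PixelShuffle; the channel counts, scale factor and module name are assumptions rather than the patented network.

```python
import torch
import torch.nn as nn

class SubPixelUpsampler(nn.Module):
    """Minimal sub-pixel convolution stage: a convolution expands channels by
    scale**2, then PixelShuffle rearranges them into a scale-times larger frame."""
    def __init__(self, in_channels=64, scale=4):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 3 * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, features):
        return self.shuffle(self.conv(features))

features = torch.randn(1, 64, 45, 80)     # fused space-time features (assumed shape)
hr_frame = SubPixelUpsampler()(features)  # -> (1, 3, 180, 320)
```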