
2992 results about "Aerial photography" patented technology

Aerial photography (or airborne imagery) is the taking of photographs from an aircraft or other flying object. Platforms for aerial photography include fixed-wing aircraft, helicopters, unmanned aerial vehicles (UAVs or "drones"), balloons, blimps and dirigibles, rockets, pigeons, kites, parachutes, stand-alone telescoping and vehicle-mounted poles. Mounted cameras may be triggered remotely or automatically; hand-held photographs may be taken by a photographer.

System and method for generating lane-level navigation map of unmanned vehicle

The invention relates to a system and method for generating a lane-level navigation map of an unmanned vehicle from multi-source data. The lane-level navigation map comprises an offline global map part and an online local map part. In the offline module, within the target region where the unmanned vehicle runs, original road data are acquired through satellite photos (or aerial photos), vehicle sensors (laser radar and a camera) and a high-precision integrated positioning system (a global positioning system and an inertial navigation system); the original road data are then processed offline, multiple kinds of road information are extracted, and finally the extraction results are fused to generate the offline global map, which is stored in a layered structure. In the online module, while the unmanned vehicle drives autonomously in the target region, road data are extracted from the offline global map according to real-time positioning information, and an online local map centred on the vehicle within a fixed distance range is drawn. The system and method can be applied to fusion sensing, high-precision positioning and intelligent decision-making of the unmanned vehicle.
Owner: Anhui Zhongke Xingchi Autonomous Driving Technology Co., Ltd. (安徽中科星驰自动驾驶技术有限公司)
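The online module described above amounts to cropping the layered global map around the vehicle's real-time position. A minimal sketch of that step, assuming the global map is a dict of layer name to (x, y) points (an illustrative data layout, not one specified in the patent):

```python
import math

def extract_local_map(global_map, vehicle_xy, radius=100.0):
    """Return the subset of map elements within `radius` metres of the vehicle.

    `global_map` is assumed to be a dict mapping a layer name (e.g. "lane",
    "edge") to a list of (x, y) points, mirroring the layered offline map
    described in the abstract. All names here are hypothetical.
    """
    vx, vy = vehicle_xy
    local = {}
    for layer, points in global_map.items():
        # keep only points inside the fixed-distance circle around the vehicle
        local[layer] = [
            (x, y) for (x, y) in points
            if math.hypot(x - vx, y - vy) <= radius
        ]
    return local

# Example: crop a two-layer map around a vehicle at the origin.
gm = {"lane": [(0.0, 0.0), (500.0, 0.0)], "edge": [(10.0, 10.0)]}
local = extract_local_map(gm, (0.0, 0.0), radius=100.0)
```

In a real system the query would run against a spatial index (e.g. an R-tree) rather than a linear scan, but the cropping logic is the same.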

Small target detection method based on feature fusion and deep learning

Inactive · CN109344821A · Tags: scaling, rich information features, character and pattern recognition, network model, feature fusion
The invention discloses a small target detection method based on feature fusion and deep learning, which solves the problems of poor detection accuracy and real-time performance for small targets. The implementation scheme is as follows: extracting a high-resolution feature map through the deeper and stronger ResNet-101 network model; extracting five successively reduced low-resolution feature maps from auxiliary convolution layers to expand the range of feature-map scales; obtaining multi-scale feature maps through a feature pyramid network, in whose structure deconvolution is adopted to fuse the feature-map information of the high-level semantic layers with that of the shallow layers; performing target prediction using feature maps of different scales and fused characteristics; and applying non-maximum suppression to the scores of the multiple predicted boxes and categories to obtain the box position and category information of the final target. The invention ensures high precision of small target detection while meeting real-time detection requirements, can quickly and accurately detect small targets in images, and can be used for real-time detection of targets in aerial photographs taken by unmanned aerial vehicles.
Owner:XIDIAN UNIV
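The final step of the scheme above, non-maximum suppression over the predicted boxes and scores, is a standard procedure; a minimal NumPy sketch (not the patent's own implementation) looks like this:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes.

    Keeps the highest-scoring box, discards boxes overlapping it above
    `iou_thresh`, and repeats on the remainder.
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]            # indices, highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of box i with each remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return keep

# Two overlapping boxes and one distant box: NMS keeps the best of the pair.
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)  # → [0, 2]
```

In a multi-class detector this routine is typically run per class, and production code would use an optimized version such as `torchvision.ops.nms`.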

Real-time panoramic image stitching method of aerial videos shot by unmanned plane

Active · CN102201115A · Tags: realize the transformation relationship, quickly achieve registration, television system details, image enhancement, Global Positioning System, time effect
The invention discloses a real-time panoramic image stitching method for aerial videos shot by an unmanned plane. The method comprises the steps of: using a video acquisition card to receive the images transmitted to the base station in real time by the unmanned plane over microwave channels, selecting key frames from the image sequence, and performing image enhancement on the key frames; in the stitching process, first performing feature detection and inter-frame matching on the image frames with the robust SURF (speeded-up robust features) detection method; then reducing the multiplicative accumulation errors between images through a frame-to-mosaic transformation, determining images that are not adjacent in time but are adjacent in space along the flight path according to the GPS (global positioning system) position information of the unmanned plane, optimizing the frame-to-mosaic transformation relations, and determining the image overlap regions, thereby realizing image fusion and panorama construction with the real-time effect of stitching while flying; and in the image transformation step, optimizing the transformation based on both temporally adjacent and spatially adjacent frames to obtain accurate panoramic images. The stitching method has good real-time performance, is fast and accurate, and meets the requirements of applications in multiple fields.
Owner:HUNAN AEROSPACE CONTROL TECH CO LTD
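The frame-to-mosaic idea above addresses the weakness of naively chaining pairwise homographies, where small registration errors multiply along the sequence. A minimal sketch of the chaining itself, assuming 3x3 homography matrices (illustrative only; the patent additionally re-optimizes each frame's transform against the mosaic and against spatially adjacent frames):

```python
import numpy as np

def frame_to_mosaic_transforms(pairwise_H):
    """Compose pairwise homographies into frame-to-mosaic transforms.

    `pairwise_H[k]` is assumed to map frame k+1 into frame k's coordinates;
    chaining them maps every frame into frame 0's (mosaic) coordinates.
    Pure chaining like this accumulates error multiplicatively, which is
    exactly what re-registering each frame against the mosaic mitigates.
    """
    H = np.eye(3)
    transforms = [H.copy()]            # frame 0 is the mosaic reference
    for H_pair in pairwise_H:
        H = H @ H_pair                 # accumulate: frame k -> mosaic
        H /= H[2, 2]                   # normalise the homography scale
        transforms.append(H.copy())
    return transforms

def translation(tx, ty=0.0):
    """Helper: homography for a pure translation."""
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

# Two pure translations compose into their sum in mosaic coordinates.
Hs = frame_to_mosaic_transforms([translation(1.0), translation(2.0)])
```

In practice the pairwise homographies would come from SURF (or similar) feature matches via `cv2.findHomography`, and the GPS-selected spatially adjacent frames would supply extra constraints for the optimization.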