UAV landing control method and device

A control method and technology for unmanned aerial vehicles, applied in non-electric variable control, altitude or depth control, control/regulation systems, etc., achieving the effect of lowering the operating threshold.

Inactive Publication Date: 2017-03-22
BEIJING BRISKY TECH DEV CO LTD
Cites: 6 | Cited by: 12

AI-Extracted Technical Summary

Problems solved by technology

The accuracy and speed of land...

Method used

It should be noted that, by using high-performance computing chips such as the NVIDIA Tegra X1 and exploiting the advantages of the GPU (Graphics Processing Unit) for image processing, target detection and recognition can reach a real-time rate of 30 frames per second, which significantly increases the speed at which the drone lands.
Through the UAV landing control device provided by the embodiments of the present disclosure, a deep neural network is used to perform target recognition and detection on the candidate images to determine the landing position of the UAV, and the UAV is then controlled to land automatically without human intervention, applying artificial intelligence ...

Abstract

The invention relates to a UAV (unmanned aerial vehicle) landing control method and device. The method comprises: obtaining a first image corresponding to a destination and the predicted coordinate range of the destination; obtaining multiple candidate images covering the predicted coordinate range and determining multiple candidate areas in the candidate images according to category information; determining the matching degree of each candidate area according to the first image; determining the candidate area with the highest matching degree as the landing area; and controlling the landing of the UAV according to the coordinates of the landing area. Through the UAV landing control method and device provided by the invention, target identification and detection are performed on the candidate images, the landing position of the UAV is determined, and the UAV is then controlled to land automatically without human intervention, applying artificial intelligence technology to the UAV field. While the UAV operating threshold is lowered and convenience is provided to the user, the speed and accuracy of UAV landing are improved.

Application Domain

Altitude or depth control

Technology Topic

Uncrewed vehicle · Computer science +1


Examples

  • Experimental program(3)

Example Embodiment

[0038] Example 1
[0039] Figure 1 shows a flowchart of a drone landing control method according to an embodiment of the present disclosure. The method can be applied to a ground station or an unmanned aerial vehicle, which is not limited here. The ground station can communicate wirelessly with the UAV to control the landing of the UAV. As shown in Figure 1, the method may include steps 11 to 15.
[0040] In step 11, the first image corresponding to the destination and the predicted coordinate range of the destination are acquired.
[0041] The first image corresponding to the destination may be an image obtained by photographing the destination in advance, or an image bearing the graphical characteristics of the destination, which is not limited here. For example, as shown in Figure 6a, the destination is an object W, and the first image corresponding to the destination may be an image obtained by photographing the object W in advance, or an image with the graphic characteristics of the object W (4 concentric circles).
[0042] By obtaining the predicted coordinate range of the destination, when the drone needs to be controlled to land, the drone can be controlled to collect images according to the predicted coordinate range of the destination, that is, to collect multiple candidate images covering the predicted coordinate range, so that the image collection range covers the predicted coordinate range of the destination and the collected candidate images therefore include the destination. One possible way to plan such a collection is sketched below.
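The disclosure does not specify how the capture positions are planned; as a loose illustration only, the sketch below generates a simple grid of waypoints whose assumed camera footprints jointly cover a rectangular predicted coordinate range. The function name, coordinate frame, and footprint parameter are all assumptions, not part of the patent.

```python
import numpy as np

def coverage_waypoints(x_min, x_max, y_min, y_max, footprint):
    """Hypothetical lawnmower-style grid of capture positions.

    `footprint` is the assumed ground width (same units as the coordinates)
    imaged by the camera at the planned capture altitude, so that candidate
    images taken at these points jointly cover the whole predicted range.
    """
    xs = np.arange(x_min + footprint / 2.0, x_max + footprint / 2.0, footprint)
    ys = np.arange(y_min + footprint / 2.0, y_max + footprint / 2.0, footprint)
    return [(float(x), float(y)) for y in ys for x in xs]

# Example: a 30 x 20 predicted range with a 10-unit footprint per image
print(coverage_waypoints(0.0, 30.0, 0.0, 20.0, 10.0))
```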
[0043] In step 12, multiple candidate images covering the predicted coordinate range are acquired, and multiple candidate regions are determined in the candidate image according to category information.
[0044] In this embodiment, the drone may have a camera device. After obtaining the predicted coordinate range of the destination, the drone can be controlled to obtain multiple candidate images covering the predicted coordinate range of the destination through the camera device. Category information (categories) can be used to describe the characteristics of the image, and the type of category information can be set according to detection requirements.
[0045] In step 13, the matching degree of each of the to-be-selected regions is determined according to the first image.
[0046] In this embodiment, the degree of matching indicates the degree to which the area to be selected matches the first image. The higher the matching degree of a certain candidate area, the more likely the candidate area contains the destination.
[0047] In step 14, the area to be selected with the highest matching degree is determined as the area to be landed.
[0048] As an example of this embodiment, the multiple candidate regions obtained can first be screened according to a set first threshold and the matching degree of each candidate region, deleting the candidate regions whose matching degree is less than the first threshold. This first screening removes candidate regions whose category information differs from that of the first image, as well as candidate regions with a low degree of matching to the first image, leaving at least one candidate region whose category information is the same as that of the first image and whose matching degree is greater than or equal to the first threshold. Then a method such as NMS (non-maximum suppression) can be applied to these remaining candidate regions as a second screening, and the candidate region with the highest matching degree is obtained and determined as the area to be landed. For example, the region H3 shown in Figure 4d has the highest matching degree, and region H3 is determined as the area to be landed.
[0049] In other examples, the candidate area with the highest matching degree can be directly determined as the area to be landed through one screening, which is not limited here.
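The two-step screening of paragraphs [0048]-[0049] can be sketched as follows. The box format, the first threshold value, and the IoU threshold used inside NMS are assumptions for illustration, not values given in the disclosure.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Plain non-maximum suppression; boxes are (x1, y1, x2, y2) rows."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU between the current best box and the remaining boxes
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou < iou_thresh]
    return keep

def select_landing_region(boxes, match_degrees, first_threshold=0.3):
    """First screening: drop regions whose matching degree is below the
    threshold; second screening: NMS, then keep the highest-scoring survivor."""
    mask = match_degrees >= first_threshold
    boxes, match_degrees = boxes[mask], match_degrees[mask]
    if boxes.shape[0] == 0:
        return None
    keep = nms(boxes, match_degrees)
    best = keep[0]                     # NMS visits boxes in descending score order
    return boxes[best], match_degrees[best]

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.6, 0.2])
print(select_landing_region(boxes, scores))
```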
[0050] In step 15, the drone is controlled to land according to the coordinates of the area to be landed.
[0051] In this embodiment, the landing coordinates of the drone in the area to be landed are determined according to the position information of the candidate image to which the area to be landed belongs and the position information of the area to be landed within that candidate image, and the drone is then controlled to land automatically at the landing coordinates. The landing coordinates may be determined according to the position information of the target center of the area to be landed; for example, the landing coordinates may be the coordinates of the target center.
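Paragraph [0051] maps the target centre of the landing area into landing coordinates using the position information of the candidate image it belongs to. A minimal sketch of one such mapping is shown below, assuming the image's recorded position gives the world location of its top-left corner and that a fixed ground sampling distance (metres per pixel) is known; both assumptions go beyond what the disclosure states.

```python
def landing_coordinates(image_origin_xy, metres_per_pixel, region_box):
    """Convert the centre of the landing region (pixels) into world (x, y).

    image_origin_xy  : assumed world coordinates of the candidate image's
                       top-left corner, from its recorded position information.
    metres_per_pixel : assumed ground sampling distance of the image.
    region_box       : (x1, y1, x2, y2) of the landing region in pixels.
    """
    cx = (region_box[0] + region_box[2]) / 2.0   # target centre, pixel x
    cy = (region_box[1] + region_box[3]) / 2.0   # target centre, pixel y
    return (image_origin_xy[0] + cx * metres_per_pixel,
            image_origin_xy[1] + cy * metres_per_pixel)

# Example: region centred at pixel (224, 224) in an image whose top-left
# corner maps to world point (100.0, 50.0), at 0.05 m per pixel
print(landing_coordinates((100.0, 50.0), 0.05, (200, 200, 248, 248)))
```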
[0052] Figure 2 shows an exemplary flowchart of determining multiple candidate regions in a candidate image according to category information in step 12 of the drone landing control method according to an embodiment of the present disclosure. As shown in Figure 2, determining multiple candidate regions in the candidate image according to category information may include steps 21 to 23.
[0053] In step 21, the candidate image is divided into multiple grids.
[0054] As an example of this embodiment, before grid division is performed on the candidate images, each candidate image may be scaled to the same size, for example to a resolution of 448×448. After scaling, each candidate image can be divided into S×S grid cells; for example, as shown in Figure 4a, the image is divided into 7×7 grids. The value of S is not limited in this disclosure and can be set according to actual detection requirements.
[0055] In step 22, the first grid in the image to be selected is determined according to the category information.
[0056] In step 23, a plurality of candidate regions are determined according to the first grid, wherein each candidate region covers the first grid and each candidate region includes a plurality of grids with the same category information.
[0057] As an example of this embodiment, by predicting the probability that the image in each grid belongs to various category information, the category information of each grid can be determined. The number of types of category information can be set according to detection requirements, for example, the number of types C is 20. Among them, the specific value of C is not limited in the present disclosure, and can be determined according to actual testing requirements.
[0058] As an example of this embodiment, a candidate region may be determined according to the first grid and multiple grids with the same category information; for example, the bounding box A shown in Figure 4c is a candidate region. The candidate region can be a rectangle, an irregular polygon, or another shape. The candidate region may include multiple grids with the same category information. Due to the limitation of the shape of the candidate region, it may also include a certain proportion of grids whose category information differs from that of the first grid.
[0059] As an example of this embodiment, each first grid can predict B candidate regions, and the location information (x, y, w, h) and confidence score of each candidate region can be determined. Here, x represents the offset of the abscissa of the target center relative to the abscissa of the upper-left corner of the candidate image: for example, if the upper-left corner of the candidate image has coordinates (0, 0), the lower-right corner has coordinates (1, 1), and the target center is at the center of the candidate image, then x is 0.5. Similarly, y represents the offset of the ordinate of the target center relative to the ordinate of the upper-left corner of the candidate image; in the same example, y is 0.5. w represents the width of the candidate region, for example as the ratio of the width of the candidate region to the width of the candidate image, and h represents the height of the candidate region, for example as the ratio of the height of the candidate region to the height of the candidate image. The confidence score can be P(A_i)×S_i. The specific value of B is not limited in this disclosure and can be determined according to actual detection requirements.
[0060] As an example of this embodiment, for each candidate image, the confidence that the category information of each grid is the same as each category of category information can be predicted using a tensor containing S×S×(5×B+C) units. The tensor can be a matrix with dimensions S×S×(5×B+C), where S×S is the number of grids into which the candidate image is divided, C is the number of types of category information, and B is the number of candidate regions predicted by each first grid. For example, the confidence that the category information of a certain grid is the same as the third category information (one of the C categories of category information) can be read from the tensor.
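As a hedged illustration of paragraphs [0059]-[0060], the sketch below decodes one grid cell of an S×S×(5×B+C) prediction tensor into a pixel-coordinate box, a confidence score, and the per-category confidences. The per-cell layout (B blocks of x, y, w, h, confidence followed by C category values) is an assumed ordering, not one fixed by the disclosure.

```python
import numpy as np

S, B, C = 7, 2, 20                       # grids per side, boxes per grid, categories
pred = np.random.rand(S, S, 5 * B + C)   # stand-in for the network's output tensor

def decode_cell(pred, row, col, b, img_w=448, img_h=448):
    """Decode box b of grid cell (row, col).

    Per paragraph [0059]: x, y are the target centre's offsets relative to the
    candidate image's upper-left corner (0..1), and w, h are the box's width
    and height as ratios of the image's width and height.
    """
    cell = pred[row, col]
    x, y, w, h, conf = cell[5 * b:5 * b + 5]
    category_conf = cell[5 * B:]                 # C per-category confidences
    cx, cy = x * img_w, y * img_h                # target centre in pixels
    bw, bh = w * img_w, h * img_h
    box = (cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2)
    return box, conf, category_conf

box, conf, category_conf = decode_cell(pred, row=3, col=4, b=0)
print(box, conf, int(category_conf.argmax()))
```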
[0061] Figure 3 shows an exemplary flowchart of step 22 of the drone landing control method according to an embodiment of the present disclosure. As shown in Figure 3, determining the first grid in the candidate image according to the category information may include steps 31 to 33.
[0062] In step 31, the multiple grids are classified according to the category information.
[0063] In step 32, the target center is determined according to the grids with the same category information.
[0064] In step 33, the grid containing the target center is determined as the first grid.
[0065] In this embodiment, when multiple grids are determined to contain the same category information, the location of the target center can be determined based on those grids, and the grid where the target center is located is determined as the first grid. For example, if the multiple grids in the H1 area on the left side of Figure 4b have the same category information, the location of the target center can be determined from the grids in the H1 area, and the grid G1 where that target center is located is the first grid of the H1 area. Likewise, the multiple grids in the H2 area on the right side of Figure 4b have the same category information, so the location of the target center can be determined from the grids in the H2 area, and the grid G2 where that target center is located is the first grid of the H2 area.
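A minimal sketch of steps 31-33 follows, assuming each grid cell has already been assigned a category label and taking the target centre as the mean of the same-category cells' centres; the disclosure does not fix how the centre is computed, so that averaging is an assumption.

```python
import numpy as np

def first_grid(grid_categories, category):
    """Steps 31-33 sketch: among all grid cells labelled `category`, take the
    mean of their cell centres as the target centre and return the (row, col)
    of the grid cell that contains that centre."""
    rows, cols = np.nonzero(grid_categories == category)
    if rows.size == 0:
        return None
    centre_row = (rows + 0.5).mean()
    centre_col = (cols + 0.5).mean()
    return int(centre_row), int(centre_col)

# Example: a 7x7 category map with a cluster of cells labelled 3
cats = np.zeros((7, 7), dtype=int)
cats[2:5, 1:4] = 3
print(first_grid(cats, category=3))   # -> (3, 2)
```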
[0066] In this embodiment, the process of determining the area to be landed in step 12 to step 14 is implemented based on the deep neural network. For example, YOLO (You Only Look Once) is used for target detection and recognition.
[0067] In a possible implementation manner, determining the matching degree of the candidate region according to the first image includes: determining the matching degree of the candidate region according to the category information of the candidate region and the overlap ratio between the candidate region and the first image.
[0068] In a possible implementation manner, determining the matching degree of the candidate region according to the category information of the candidate region and the overlap ratio between the candidate region and the first image includes:
[0069] using Formula 1 to determine the matching degree M_i of the i-th candidate region A_i:
[0070] M_i = P(C_j|A_i) × P(A_i) × S_i    (Formula 1)
[0072] where P(A_i) represents the confidence that the category information of the i-th candidate region is the same as the category information of the first image, P(C_j|A_i) represents the confidence that the category information of the i-th candidate region is the same as the j-th category information C_j, and S_i represents the overlap rate between the i-th candidate region and the first image.
[0073] As an example of this embodiment, when the category information of the i-th candidate region is different from the category information of the first image, P(A_i) is 0; when the category information of the i-th candidate region is the same as the category information of the first image, P(A_i) is 1. That is, when the category information of the i-th candidate region differs from that of the first image, the matching degree M_i of the i-th candidate region is 0. The category information of the i-th candidate region is the same as the category information of the first grid in the i-th candidate region.
[0074] As an example of this embodiment, the overlap rate S_i between the i-th candidate region and the first image can be calculated by Formula 3:
[0075]
[0076] where D_1 is the area of the first image, D_i is the area of the i-th candidate region, and N_i is the area of the overlapping region between the first image and the i-th candidate region.
[0077] As an example of this embodiment, since the candidate region is determined according to the first grid, the category information of most of the grids in the candidate region is the same as the category information of the first grid. Therefore, the category information of the first grid where the target center of the candidate region is located can be used as the category information of the candidate region. Then, in Formula 1, the confidence P(C_j|A_i) that the category information of the i-th candidate region is the same as the j-th category information C_j can be determined from the confidence, recorded in the tensor, that the category information of the first grid of the i-th candidate region is the same as the j-th category information C_j. For example, if the category information of the first grid of the i-th candidate region is the j-th category information C_j, then that confidence is 1; otherwise it is 0.
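Putting paragraphs [0069]-[0077] together, the matching degree of Formula 1 can be sketched as below. P(A_i) and P(C_j|A_i) are treated as 0/1 indicators as described above; the overlap rate S_i is computed here in an assumed intersection-over-union form from D_1, D_i, and N_i, and the exact form of the patent's Formula 3 may differ.

```python
def overlap_rate(area_first, area_region, area_intersection):
    """Assumed overlap rate S_i from D_1 (first image area), D_i (candidate
    region area) and N_i (their intersection area); an IoU-style form is used
    here and the patent's Formula 3 may define it differently."""
    return area_intersection / (area_first + area_region - area_intersection)

def matching_degree(region_category, first_image_category, j_category, s_i):
    """Formula 1: M_i = P(C_j | A_i) x P(A_i) x S_i, with P(A_i) and
    P(C_j | A_i) taken as 0/1 indicators as described in [0073] and [0077]."""
    p_a = 1.0 if region_category == first_image_category else 0.0
    p_c_given_a = 1.0 if region_category == j_category else 0.0
    return p_c_given_a * p_a * s_i

# Example: categories match and the region overlaps the first image by N_i = 80
print(matching_degree(3, 3, 3, overlap_rate(100.0, 120.0, 80.0)))  # ~0.571
```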
[0078] Figure 5 shows a schematic diagram of the structure of the deep neural network in the drone landing control method of an embodiment of the present disclosure. As shown in Figure 5, the deep neural network includes multiple convolutional layers, multiple pooling layers, and a fully connected layer. In one example, on PASCAL VOC (an image data set), the input candidate image is 448×448, S=7, B=2, and there are 20 categories in total (C=20). The candidate image is processed by the deep neural network to output a tensor with dimensions 7×7×30. The tensor can record the confidence that the category information of each grid of the candidate image is the same as each category information. For example, if the category information of grid A is the k-th category information, the confidence that the category information of grid A is the same as the k-th category information is 1, and the confidence that it is the same as any other category information is 0.
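The exact layer configuration is not given in this text, so the following PyTorch sketch is only a toy stand-in for the structure named in [0078]: a stack of convolutional and pooling layers followed by a fully connected layer that emits an S×S×(5×B+C) = 7×7×30 tensor for a 448×448 input. Channel widths and layer counts are assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

S, B, C = 7, 2, 20

class LandingDetector(nn.Module):
    """Toy stand-in: convolution + pooling stages followed by a fully
    connected layer producing an S x S x (5B + C) = 7 x 7 x 30 tensor."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.LeakyReLU(0.1), nn.MaxPool2d(2),    # 448 -> 224
            nn.Conv2d(16, 32, 3, padding=1), nn.LeakyReLU(0.1), nn.MaxPool2d(2),   # 224 -> 112
            nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(0.1), nn.MaxPool2d(2),   # 112 -> 56
            nn.Conv2d(64, 128, 3, padding=1), nn.LeakyReLU(0.1), nn.MaxPool2d(2),  # 56 -> 28
            nn.Conv2d(128, 256, 3, padding=1), nn.LeakyReLU(0.1), nn.MaxPool2d(2), # 28 -> 14
            nn.Conv2d(256, 256, 3, padding=1), nn.LeakyReLU(0.1), nn.MaxPool2d(2), # 14 -> 7
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * S * S, S * S * (5 * B + C)),
        )

    def forward(self, x):
        return self.head(self.features(x)).view(-1, S, S, 5 * B + C)

out = LandingDetector()(torch.zeros(1, 3, 448, 448))
print(out.shape)   # torch.Size([1, 7, 7, 30])
```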
[0079] It should be noted that using high-performance computing chips such as the NVIDIA Tegra X1 and exploiting the advantages of the GPU (Graphics Processing Unit) for image processing enables target detection and recognition to reach a real-time rate of 30 frames per second, which significantly improves the speed of the drone's landing.
[0080] Figures 6a, 6b, and 6c show schematic diagrams of an example of controlling drone landing according to an embodiment of the present disclosure. As shown in Figures 6a, 6b, and 6c, the process of controlling the landing of the drone can include:
[0081] Using the drone landing control method described above, multiple candidate images within the predicted coordinate range of the destination are obtained; according to the first image corresponding to the destination, the deep neural network method is used to perform target detection and recognition on each candidate image, and the area to be landed that best matches the first image is determined in the candidate images. For example, in area F1 of Figure 6a, the object W in area F1 best matches the object in the first image, so area F1 is determined as the area to be landed. The landing coordinates of the UAV in area F1 are then determined according to the position information of the candidate image to which area F1 belongs and the position information of area F1 within that candidate image, and the UAV is controlled to land at the landing coordinates. The landing coordinates can be the coordinates of the center position of area F1. As shown in Figures 6b and 6c, the drone lands on the object W.
[0082] It should be noted that although Embodiment 1 is taken as an example to introduce the example of the drone landing control method as described above, those skilled in the art can understand that the present disclosure should not be limited to this. In fact, the user can flexibly set each step according to personal preference and/or actual application scenarios, as long as it conforms to the technical solution of the present disclosure.
[0083] Through the drone landing control method provided by the embodiments of the present disclosure, a deep neural network is used to perform target recognition and detection on the candidate images and determine the landing position of the drone, and the drone is then controlled to land automatically without human intervention, applying artificial intelligence technology to the field of drones. While lowering the threshold for drone operation and providing convenience to users, it also improves the speed and accuracy of drone landing.

Example Embodiment

[0084] Example 2
[0085] Figure 7 shows a structural diagram of a drone landing control device according to an embodiment of the present disclosure. As shown in Figure 7, the drone landing control device includes: a destination information acquisition module 100, configured to acquire a first image corresponding to a destination and a predicted coordinate range of the destination; a candidate area acquisition module 200, configured to acquire multiple candidate images covering the predicted coordinate range and to determine multiple candidate areas in the candidate images according to category information; a matching degree determination module 300, configured to determine the matching degree of each candidate area according to the first image; a to-be-landed area determination module 400, configured to determine the candidate area with the highest matching degree as the area to be landed; and a landing control module 500, configured to control the landing of the drone according to the coordinates of the area to be landed.
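A rough sketch of how the five modules of Embodiment 2 could be composed into one landing pipeline is shown below; the class and parameter names are illustrative placeholders, not the patent's implementation.

```python
class UAVLandingController:
    """Composes stand-ins for modules 100-500 described in paragraph [0085]."""

    def __init__(self, get_destination, get_candidate_regions, score_regions,
                 pick_landing_region, control_landing):
        self.get_destination = get_destination               # module 100
        self.get_candidate_regions = get_candidate_regions   # module 200
        self.score_regions = score_regions                   # module 300
        self.pick_landing_region = pick_landing_region       # module 400
        self.control_landing = control_landing               # module 500

    def run(self):
        first_image, coord_range = self.get_destination()
        regions = self.get_candidate_regions(coord_range)
        scored = self.score_regions(first_image, regions)
        landing_region = self.pick_landing_region(scored)
        return self.control_landing(landing_region)
```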
[0086] Figure 8 shows an exemplary structural diagram of a drone landing control device according to an embodiment of the present disclosure.
[0087] In a possible implementation, as shown in Figure 8, the candidate area acquisition module 200 includes a grid division sub-module 201, configured to divide the candidate image into multiple grids; a first grid determination sub-module 202, configured to determine the first grid in the candidate image according to the category information; and a candidate area determination sub-module 203, configured to determine a plurality of candidate areas according to the first grid, wherein each candidate area covers the first grid and each candidate area includes multiple grids with the same category information.
[0088] In a possible implementation manner, the first grid determination submodule 202 includes: a grid classification submodule, configured to classify the multiple grids according to the category information; and a target center determination submodule, It is used to determine the target center according to the grids with the same category information; the first grid determination sub-module is used to determine the grid containing the target center as the first grid.
[0089] In a possible implementation manner, the matching degree determination module 300 is configured to determine the matching degree of the candidate area according to the category information of the candidate area and the overlap ratio between the candidate area and the first image.
[0090] In a possible implementation manner, the matching degree determination module 300 uses Formula 1 above to determine the matching degree M_i of the i-th candidate region A_i:
[0091] M_i = P(C_j|A_i) × P(A_i) × S_i    (Formula 1)
[0093] where P(A_i) represents the confidence that the category information of the i-th candidate region is the same as the category information of the first image, P(C_j|A_i) represents the confidence that the category information of the i-th candidate region is the same as the j-th category information C_j, and S_i represents the overlap rate between the i-th candidate region and the first image.
[0094] It should be noted that although Embodiment 2 is taken as an example to introduce the drone landing control device as described above, those skilled in the art can understand that the present disclosure should not be limited to this. In fact, the user can flexibly set each module according to personal preference and/or actual application scenarios, as long as it conforms to the technical solution of the present disclosure.
[0095] Through the drone landing control device provided by the embodiments of the present disclosure, a deep neural network is used to perform target recognition and detection on the candidate images and determine the landing position of the drone, and the drone is then controlled to land automatically without human intervention, applying artificial intelligence technology to the field of drones. While lowering the threshold for drone operation and providing convenience to users, it also improves the speed and accuracy of drone landing.

Example Embodiment

[0096] Example 3
[0097] Figure 9 is a block diagram of a device 800 for controlling drone landing according to an exemplary embodiment. For example, the device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, etc.
[0098] Referring to Figure 9, the device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
[0099] The processing component 802 generally controls the overall operations of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method. In addition, the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
[0100] The memory 804 is configured to store various types of data to support operations in the device 800. Examples of these data include instructions for any application or method operating on the device 800, contact data, phone book data, messages, pictures, videos, etc. The memory 804 can be implemented by any type of volatile or nonvolatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable Programmable read only memory (EPROM), programmable read only memory (PROM), read only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
[0101] The power supply component 806 provides power to various components of the device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
[0102] The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
[0103] The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), and when the device 800 is in an operation mode, such as a call mode, a recording mode, and a voice recognition mode, the microphone is configured to receive external audio signals. The received audio signal may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, the audio component 810 further includes a speaker for outputting audio signals.
[0104] The I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module. The peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
[0105] The sensor component 814 includes one or more sensors for providing the device 800 with status assessments of various aspects. For example, the sensor component 814 can detect the on/off status of the device 800 and the relative positioning of components, such as the display and the keypad of the device 800. The sensor component 814 can also detect a position change of the device 800 or of a component of the device 800, the presence or absence of contact between the user and the device 800, the orientation or acceleration/deceleration of the device 800, and temperature changes of the device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
[0106] The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
[0107] In an exemplary embodiment, the apparatus 800 may be implemented by one or more application specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors, or other electronic components, and used to implement the aforementioned drone landing control method.
[0108] In an exemplary embodiment, there is also provided a non-volatile computer-readable storage medium including instructions, such as the memory 804 including instructions, which can be executed by the processor 820 of the device 800 to complete the aforementioned drone landing control method.


