Image projection method and device, equipment and storage medium
An image-processing technology, applied in image communication, image data processing, instruments, etc., which can solve problems such as reduced AR effect projection efficiency and accuracy, wasted system resources, and poor fit with the real scene, so as to improve projection efficiency and accuracy, improve transferability, and enhance fit
Pending Publication Date: 2019-07-02
APOLLO INTELLIGENT CONNECTIVITY (BEIJING) TECH CO LTD
Cites: 6 · Cited by: 3
AI-Extracted Technical Summary
Problems solved by technology
[0004] However, the existing technology is highly dependent on the display device, and it is difficult to transfer the AR effect to the real-scene images of different display devices. For different display ...
Method used
In a specific embodiment of the present invention, the three-dimensional coordinates of the target object are calculated uniformly in the vehicle coordinate system and the three-dimensional virtual image of the target object is constructed, so that the three-dimensional virtual image of the target object in the three-dimensional virtual scene is projected into the real-scene image of each display device. This avoids repeating the complete process of target object determination, coordinate calculation, and conversion projection for each display device, so that the image enhancement rendering effect of the target object can be migrated to different display devices and the image enhancement effect can be projected accurately.
In the technical solution of this embodiment, the camera is used as a benchmark to determine the three-dimensional coordinates of the target object in the vehicle coordinate system; the equivalent camera model of each display device is used, together with the camera parameters and the equivalent parameters of each equivalent camera model, to determine the conversion relationship between the current actual scene and the three-dimensional virtual scene, so that the three-dimensional virtual image of the target object is projected into the real-scene image of the display device according to that conversion relationship. In the embodiment of the present invention, on the basis of projecting the three-dimensional virtual image of the target object into the real-scene image of the display device, uniformly determining the three-dimensional coordinates of the target object in the vehicle coordinate system improves the transferability of the image enhancement effect across different display devices, avoids the waste of system resources caused by repeated position calculations for different display devices, improves the projection efficiency and accuracy of the image enhancement effect, and enhances the fit between the image enhancement effect and the real scene.
In the technical solution of this embodiment, the camera is used as a reference to determine the three-dimensional coordinates of the target object in the vehicle coordinate system, and the equivalent camera model of the display device is used to project the three-dimensional virtual image of the target object into the real-scene image of the display device. In the embodiment of the present invention, on the basis of projecting the three-dimensional virtual image of the target object into the real-scene image of the display device, by uniformly determining the three-dimensional coordinates of the tar...
Abstract
The embodiment of the invention discloses an image projection method and device, equipment and a storage medium. The method comprises the steps of determining, according to camera parameters, the three-dimensional coordinates in a vehicle coordinate system of a target object in an image collected by a camera; and projecting the three-dimensional virtual image of the target object into the live-action image of each display device according to the three-dimensional coordinates of the target object in the vehicle coordinate system, the camera parameters, and the equivalent parameters of the equivalent camera model of each display device. According to the embodiment of the invention, the three-dimensional virtual image of the target object is projected into the live-action image of the display device while the three-dimensional coordinates of the target object are determined in the vehicle coordinate system in a unified manner. This improves the mobility of the image enhancement effect across different display devices, avoids problems such as the waste of system resources caused by repeated position calculation for different display devices, improves the projection efficiency and accuracy of the image enhancement effect, and enhances the degree of fit between the image enhancement effect and the real scene.
Application Domain
Image data processing · Stereoscopic systems
Technology Topic
Virtual image · Image enhancement +4
Examples
- Experimental program (5)
Example Embodiment
[0028] Example one
[0029] Figure 1 is a flowchart of an image projection method provided in the first embodiment of the present invention. This embodiment is applicable to the situation where a three-dimensional virtual image corresponding to an actual target object is projected onto a display device capable of displaying real scenes during intelligent driving. The method can be executed by an image projection device, which can be implemented by software and/or hardware and is preferably configured on a display device of a smart driving vehicle, such as a central control screen, instrument panel, head-up display, or electronic navigation device. The method specifically includes the following:
[0030] S110: Determine the three-dimensional coordinates of the target object in the image collected by the camera in the vehicle coordinate system according to the camera parameters.
[0031] In the specific embodiment of the present invention, the three-dimensional virtual image corresponding to an object in the actual scene is projected into the actual scene or into a two-dimensional image of the actual scene, so that the actual scene is image-enhanced. The usage scenario is an intelligent driving scenario in which vehicle driving parameters are obtained. In smart driving scenarios, a camera and display devices can be installed in the smart driving vehicle; there is one camera and one or more display devices.
[0032] The camera may be an external camera independent of the display device, or a camera built into the display device, and is used to collect image information of the surrounding environment of the intelligent driving vehicle, especially road environment information in the driving direction. The actual scene images collected can be a single image, multiple images, or video. The camera has internal parameters and external parameters. The internal parameters of the camera can include the field of view (FOV), distortion parameters, resolution, and focal length; the external parameters of the camera can include the height of the camera above the ground, its position, and its attitude angle. The internal and external parameters of the camera can be determined in advance by a camera calibration method and stored. The display device can be a vehicle-mounted display device such as a central control screen, a dashboard, a head-up display, or an electronic navigation device, and is used to display the actual scene or the image of the actual scene collected by the camera. The image can include objects such as roads, indicators, road signs, and obstacles. Correspondingly, the target object can be any object in the image that requires image enhancement, such as a road, a pedestrian, a vehicle, or a road sign.
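As a rough illustration of the parameters listed above, a calibrated camera could be described by a structure such as the following sketch; the field names and units are assumptions made for this example and are not taken from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CameraParams:
    """Hypothetical container for the calibrated camera parameters."""
    # Internal parameters
    fov_deg: float                         # field of view (FOV)
    focal_length_mm: float                 # focal length
    resolution: Tuple[int, int]            # (width, height) in pixels
    distortion: Tuple[float, ...]          # distortion coefficients
    # External parameters, expressed in the vehicle coordinate system
    height_m: float                        # height of the camera above the ground
    position: Tuple[float, float, float]   # mounting position (x, y, z)
    pitch_deg: float                       # attitude angle between the optical axis and the ground
```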
[0033] In this embodiment, the vehicle coordinate system is used to measure the target object uniformly, where the vehicle coordinate system refers to a special coordinate system used to describe the movement of the vehicle. Figure 2 is an example diagram of the vehicle coordinate system. As shown in Figure 2, the vehicle coordinate system used in this embodiment takes the projection point of the camera on the ground as the coordinate origin O; the vehicle traveling direction parallel to the ground is the positive direction of the X axis; the direction parallel to the ground and pointing to the left side of the vehicle, that is, perpendicular to the X axis, is the positive direction of the Y axis; and the direction perpendicular to the ground, that is, perpendicular to the plane formed by the X axis and the Y axis and pointing upward, is the positive direction of the Z axis.
[0034] Specifically, the camera is used to collect images of the surrounding environment of the intelligent driving vehicle, and the images collected by the camera are recognized to determine the target object. The target object can be a designated object to be recognized, or any object that exists in the actual environment. In addition, the target object may also be an object in the driving direction of the vehicle that is extracted from map data according to the current positioning information of the vehicle. Then, according to the camera parameters and the imaging plane of the camera, the three-dimensional coordinates of the target object in the vehicle coordinate system are calculated.
[0035] S120: According to the three-dimensional coordinates of the target object in the vehicle coordinate system, the camera parameters, and the equivalent parameters of the equivalent camera model of each display device, project the three-dimensional virtual image of the target object into the real image of each display device.
[0036] In the specific embodiment of the present invention, the equivalent camera model is used to simulate a three-dimensional virtual scene corresponding to the image acquisition viewing angle at which the display device is located. Like a camera, the equivalent camera model also has internal and external parameters, which are stored in advance. The equivalent camera model needs to be obtained according to the implementation principle of the display device. For a display device that can directly display the images collected by the camera, such as a mobile phone, the equivalent camera model is the camera currently collecting images itself. Correspondingly, the external parameters of the camera can be used directly as the external parameters of the equivalent camera model, and, based on the internal parameters of the camera, the internal parameters of the equivalent camera model can be determined according to the ratio between the resolution of the camera and the resolution of the display device. For a head-up display (HUD), the equivalent camera model is a pinhole imaging model formed by the intersection of the imaging plane and the reverse extension lines of the light rays. Correspondingly, similar to a camera, the internal parameters of the equivalent camera model can be determined according to the optical engine of the head-up display, and the external parameters of the equivalent camera model can be determined according to the installation position of the head-up display.
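For the direct-display case, the resolution-ratio rule above can be sketched as follows, assuming the internal parameters are held in a standard 3x3 pinhole intrinsic matrix; this is an illustrative sketch, not the patent's exact procedure.

```python
import numpy as np

def scale_intrinsics(camera_K: np.ndarray,
                     camera_res: tuple,
                     display_res: tuple) -> np.ndarray:
    """Equivalent-camera intrinsics for a device that directly shows the camera
    image, obtained by scaling the camera intrinsics with the ratio between the
    display resolution and the camera resolution."""
    sx = display_res[0] / camera_res[0]   # horizontal scale factor
    sy = display_res[1] / camera_res[1]   # vertical scale factor
    K = camera_K.astype(float).copy()
    K[0, 0] *= sx   # fx
    K[0, 2] *= sx   # cx
    K[1, 1] *= sy   # fy
    K[1, 2] *= sy   # cy
    return K

# Example: a 1920x1080 camera image shown on a 1280x720 central control screen
camera_K = np.array([[1000.0, 0.0, 960.0],
                     [0.0, 1000.0, 540.0],
                     [0.0, 0.0, 1.0]])
display_K = scale_intrinsics(camera_K, (1920, 1080), (1280, 720))
```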
[0037] Specifically, after the three-dimensional coordinates of the target object in the vehicle coordinate system are determined, a three-dimensional virtual image of the target object in the vehicle coordinate system can be constructed according to the three-dimensional coordinates of each point on the target object. For each display device, the parameters of the Open Graphics Library (OpenGL), which is configured to run on the Graphics Processing Unit (GPU), are used together with the camera parameters and the equivalent parameters of the equivalent camera model of that display device to determine the conversion relationship between the current actual scene collected by the camera and the three-dimensional virtual scene simulated by the equivalent camera model. The three-dimensional coordinates of the target object in the vehicle coordinate system are then converted according to this conversion relationship, the GPU rendering process is started, and the three-dimensional virtual image of the target object in the three-dimensional virtual scene is projected into the real-scene image of each display device.
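The conversion-and-projection step can be pictured with a generic pinhole model: the external parameters of the equivalent camera give a view transform from the vehicle coordinate system into the virtual scene, and the internal parameters project into the display's image. The sketch below is a simplified stand-in for the OpenGL/GPU pipeline described above, with illustrative names only.

```python
import numpy as np

def view_matrix(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """4x4 transform from the vehicle coordinate system into the equivalent
    camera's coordinate system, built from its external parameters (R, t)."""
    view = np.eye(4)
    view[:3, :3] = R
    view[:3, 3] = t
    return view

def project_vehicle_point(p_vehicle, K, view):
    """Project a vehicle-frame 3D point into pixel coordinates of one display's
    equivalent camera (generic pinhole sketch, not the exact OpenGL setup)."""
    p_cam = view @ np.append(np.asarray(p_vehicle, float), 1.0)
    x, y, z = p_cam[:3]
    if z <= 0:                      # point lies behind the equivalent camera
        return None
    uvw = K @ np.array([x, y, z])
    return uvw[:2] / uvw[2]         # pixel (u, v)

# The same vehicle-frame vertex is projected once per display device, each with
# its own equivalent parameters (K_i, view_i), without recomputing the 3D point.
```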
[0038] The technical solution of this embodiment uses the camera as a reference to determine the three-dimensional coordinates of the target object in the vehicle coordinate system, and uses the equivalent camera model of the display device to project the three-dimensional virtual image of the target object into the real-scene image of the display device. In the embodiment of the present invention, on the basis of projecting the three-dimensional virtual image of the target object into the real-scene image of the display device, uniformly determining the three-dimensional coordinates of the target object in the vehicle coordinate system improves the transferability of the image enhancement effect across different display devices, avoids problems such as the waste of system resources caused by repeated position calculations for different display devices, improves the projection efficiency and accuracy of the image enhancement effect, and enhances the fit between the image enhancement effect and the real scene.
Example Embodiment
[0039] Example two
[0040] On the basis of the foregoing Embodiment 1, this embodiment provides a preferred implementation of the image projection method that can determine the conversion relationship between the current actual scene and the three-dimensional virtual scene according to the equivalent camera model of the display device. Figure 3 is a flowchart of an image projection method provided in the second embodiment of the present invention. As shown in Figure 3, the method includes:
[0041] S310: Determine the three-dimensional coordinates of the target object in the vehicle coordinate system according to the camera parameters.
[0042] In the specific embodiment of the present invention, the target object may be an object obtained from an actual scene, or an object obtained from map data according to vehicle positioning information. Therefore, the three-dimensional coordinates of the target object in the vehicle coordinate system are determined according to the camera parameters.
[0043] Optionally, the image collected by the camera is recognized to determine the target object; the three-dimensional coordinates of the target object in the vehicle coordinate system are calculated according to the camera parameters and the imaging plane of the camera.
[0044] In this embodiment, algorithms such as deep learning may be used to recognize objects in the image, so as to determine the objects of interest in the image. The target object can be an object inherent in the actual scene, such as a static object like a road or a road sign, or a static or dynamic object that may appear in the actual scene at any time, such as a pedestrian or a vehicle. This embodiment does not limit the image recognition algorithm; any algorithm that can realize image recognition can be applied in this embodiment. Then, according to the camera parameters and the imaging plane of the camera, the three-dimensional coordinates of the target object in the vehicle coordinate system are calculated.
[0045] Specifically, the coordinates of the target object in the vehicle coordinate system are calculated according to the focal length of the camera, the height of the camera above the ground, the angle between the optical axis of the camera and the ground plane, and the resolution of the imaging plane. Figure 4 illustrates the determination of the three-dimensional coordinates of the target object in the vehicle coordinate system. As shown in Figure 4, the vehicle coordinate system O is composed of the X axis, Y axis, and Z axis; the camera is at position C on the Z axis, at height H above the ground; the imaging plane I contains the image coordinate system O', which is composed of the u axis and the v axis; and the lane line lies on the ground plane, that is, the XOY plane. For a point (x, y, 0) of the vehicle coordinate system on the ground plane, that is, on the XOY plane, the following equivalence relationship can be constructed from its coordinates (u, v) in the image coordinate system (taking u and v relative to the principal point):

u·e_u / f = y / (x·cosθ + H·sinθ),   v·e_v / f = (H·cosθ − x·sinθ) / (x·cosθ + H·sinθ)

where f is the focal length of the camera, e_u × e_v is the physical size of each pixel on the image plane, and θ is the angle between the optical axis of the camera and the lane plane, that is, the XOY plane, such as the pitch angle or inclination angle. Thus the X coordinate value of the target object in the vehicle coordinate system can be obtained as

x = H·(f·cosθ − v·e_v·sinθ) / (f·sinθ + v·e_v·cosθ)

and the Y coordinate value of the target object in the vehicle coordinate system is

y = u·e_u·(x·cosθ + H·sinθ) / f.

In the same way, the Z coordinate value of the target object in the vehicle coordinate system can be obtained according to the same conversion relationship.
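A minimal numeric sketch of this back-projection is given below, under the assumptions stated above (u and v measured from the principal point, v increasing toward the ground); it illustrates the relationship rather than reproducing the patent's exact code.

```python
import math

def ground_point_from_pixel(u, v, f, e_u, e_v, H, theta):
    """Back-project an image point (u, v) onto the ground plane Z = 0 of the
    vehicle coordinate system.
    f: focal length; e_u, e_v: physical pixel size on the image plane;
    H: camera height above the ground; theta: angle between the optical axis
    and the ground plane, in radians."""
    s, c = math.sin(theta), math.cos(theta)
    x = H * (f * c - v * e_v * s) / (f * s + v * e_v * c)  # forward distance X
    y = u * e_u * (x * c + H * s) / f                      # lateral offset Y
    return x, y, 0.0

# Example: a 4 mm lens, 2 um pixels, camera 1.5 m above the ground, pitched down 5 degrees
print(ground_point_from_pixel(u=120, v=80, f=0.004, e_u=2e-6, e_v=2e-6,
                              H=1.5, theta=math.radians(5.0)))
```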
[0046] Optionally, according to the vehicle positioning information, extract the object in the driving direction of the vehicle as the target object from the map data; determine the three-dimensional coordinates of the target object in the vehicle coordinate system according to the camera parameters.
[0047] In this embodiment, a GPS positioning system is usually installed in the vehicle. Accordingly, the positioning information of the vehicle can be obtained in real time, and an object in the driving direction of the vehicle can be extracted from the map data as the target object. The target object may be an object inherent in the actual scene and already loaded into the map database, such as a static object like a road or a road sign. The three-dimensional coordinates of the target object in the vehicle coordinate system are then determined according to the camera parameters and the position of the target object in the map data. As a result, when bad weather impairs the driver's visibility, target object information can be obtained from the map data to enhance the image of the target object in the field of view and assist the driver in understanding the road information.
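A rough sketch of how an object taken from the map data might be expressed in the vehicle coordinate system is given below, assuming a locally planar map frame (east/north offsets) and a known vehicle heading from GPS; the function and argument names are illustrative assumptions, not part of the patent.

```python
import math

def map_to_vehicle(obj_east_north, vehicle_east_north, heading_rad):
    """Convert a map object's local east/north position into the vehicle
    coordinate system (X forward, Y to the left), given the vehicle's GPS
    position and heading (angle of the driving direction from east,
    counter-clockwise). Simplified illustration only."""
    de = obj_east_north[0] - vehicle_east_north[0]   # east offset from the vehicle
    dn = obj_east_north[1] - vehicle_east_north[1]   # north offset from the vehicle
    c, s = math.cos(heading_rad), math.sin(heading_rad)
    x = c * de + s * dn      # distance along the driving direction
    y = -s * de + c * dn     # distance to the left of the vehicle
    return x, y
```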
[0048] S320. Determine the equivalent camera model of each display device and the equivalent parameters of each equivalent camera model according to each display device.
[0049] In the specific embodiment of the present invention, the equivalent camera model is used to simulate a three-dimensional virtual scene corresponding to the image acquisition angle of view where the display device is located. Similar to a camera, the equivalent camera model also has internal and external parameters.
[0050] Optionally, if the display device is a device that directly displays the images collected by the camera, the camera is determined as the equivalent camera model of the display device. Correspondingly, determining the equivalent parameters of each equivalent camera model includes: determining the external parameters of the camera as the external parameters of the equivalent camera model; and, based on the internal parameters of the camera, determining the internal parameters of the equivalent camera model according to the ratio between the resolution of the camera and the resolution of the display device.
[0051] In this embodiment, for a display device capable of directly displaying images collected by a camera, such as a mobile phone and other devices, the equivalent camera model is the camera itself that is currently collecting images. Correspondingly, the external parameters of the equivalent camera model are the external parameters of the camera. Based on the internal parameters of the camera, the internal parameters of the equivalent camera model can be determined according to the ratio between the resolution of the camera and the resolution of the display device.
[0052] Optionally, if the display device is a head-up display, the equivalent camera model of the head-up display is determined to be a pinhole imaging model. Correspondingly, determining the equivalent parameters of each equivalent camera model includes: determining the internal parameters of the equivalent camera model of the display device according to the optical engine of the head-up display; and determining the external parameters of the equivalent camera model of the display device according to the installation position of the head-up display.
[0053] In this embodiment, the equivalent camera model of the head-up display is shown in Figure 5: the virtual image is projected onto the windshield of the vehicle according to the principle of pinhole imaging. The internal parameters of the equivalent camera model can be determined according to the optical engine of the head-up display; the external parameters of the equivalent camera model can be determined according to the installation position of the head-up display.
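As one possible illustration, rough pinhole intrinsics for the head-up display's equivalent camera could be estimated from the optical engine's virtual-image geometry; the inputs and formulas below (virtual-image width and distance, square pixels) are assumptions made for this example, not values given in the patent.

```python
import math
import numpy as np

def hud_equivalent_intrinsics(virtual_image_width_m, virtual_image_distance_m,
                              width_px, height_px):
    """Approximate intrinsic matrix of a HUD's equivalent pinhole camera from
    the optical engine's virtual-image width and projection distance."""
    # Half-angle of the horizontal field of view subtended by the virtual image
    half_tan = 0.5 * virtual_image_width_m / virtual_image_distance_m
    fx = 0.5 * width_px / half_tan   # focal length in pixels
    fy = fx                          # square pixels assumed
    return np.array([[fx, 0.0, width_px / 2.0],
                     [0.0, fy, height_px / 2.0],
                     [0.0, 0.0, 1.0]])

# Example: a virtual image 1.2 m wide, appearing 7.5 m ahead, rendered at 800x480 pixels
K_hud = hud_equivalent_intrinsics(1.2, 7.5, 800, 480)
```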
[0054] S330: Determine the conversion relationship between the current actual scene and the three-dimensional virtual scene according to the camera parameters and the equivalent parameters of each equivalent camera model.
[0055] In the specific embodiment of the present invention, for a display device that can directly display the images collected by the camera, the images collected by the camera are used both to display the real-scene image on the display device and for the projection of the target object; for a head-up display, the images collected by the camera are used only for the projection of the target object. The equivalent camera model is used to simulate a three-dimensional virtual scene corresponding to the real scene displayed by the display device. Therefore, the conversion relationship is determined according to the camera parameters and the equivalent parameters of each equivalent camera model, and the vehicle coordinate system determined on the basis of the camera parameters is converted into the three-dimensional virtual scene simulated by the equivalent camera model, so that the actual scene corresponds to the three-dimensional virtual scene and the three-dimensional virtual scene fits the actual scene completely.
[0056] S340: According to the conversion relationship, convert the three-dimensional coordinates of the target object in the vehicle coordinate system, and project the three-dimensional virtual image of the target object in the three-dimensional virtual scene to the real image of each display device.
[0057] In the specific embodiment of the present invention, the three-dimensional coordinates of the target object are calculated uniformly in the vehicle coordinate system and the three-dimensional virtual image of the target object is constructed, so that, according to the conversion relationship between the current actual scene and the three-dimensional virtual scene, the three-dimensional virtual image of the target object in the three-dimensional virtual scene is projected into the real-scene image of each display device. This avoids repeating the complete process of target object determination, coordinate calculation, and conversion projection for each display device, so that the image enhancement rendering effect of the target object can be transferred to different display devices and the image enhancement effect can be projected accurately.
[0058] As an example, Figure 6 is a sample diagram of the AR projection system framework. As shown in Figure 6, the AR projection system in this embodiment may include a camera, an image recognition engine, a map data input interface, an AR calculation engine, a display device parameter configuration and camera parameter configuration interface, a GPU, and at least one display device. Specifically, the camera collects the actual scene image and transmits it to the GPU as the actual scene image to be rendered with the AR effect. At the same time, the actual scene image can be transmitted to the image recognition engine to identify the target object; alternatively, the target object in the map data can be obtained through the map data input interface. The AR calculation engine then calculates the three-dimensional coordinates of the target object in the vehicle coordinate system and transmits them to the GPU. Finally, based on the camera parameters determined by the parameter configuration interface and the parameters of the different display devices, the GPU determines the correspondence between the actual scene and the three-dimensional virtual scene, renders the AR effect of the target object into the real-scene image, and displays it on the corresponding display device to achieve the image enhancement effect.
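The data flow of Figure 6 can be compressed into a short sketch: the target object's vehicle-frame coordinates are computed once, and only the per-device projection differs. All names below are placeholders for illustration, not components defined by the patent.

```python
def ar_projection_pipeline(frame, detect, to_vehicle_coords, displays):
    """Sketch of the Figure 6 flow, with placeholder callables:
    detect(frame)            -> target objects (image recognition or map data)
    to_vehicle_coords(obj)   -> 3D coordinates in the vehicle coordinate system
    display.project(coords)  -> projection via that display's equivalent camera
    display.render(...)      -> GPU rendering of the AR effect."""
    targets = detect(frame)
    coords = [to_vehicle_coords(obj) for obj in targets]   # computed only once
    for display in displays:
        display.render(frame, [display.project(p) for p in coords])
```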
[0059] As an example, Figure 7 is an example image of the AR image projection effect. As shown in Figure 7, the vehicle includes two display devices, a head-up display and a mobile phone. While the vehicle is driving, navigation route information is obtained from the map data according to the vehicle positioning information, and the determined navigation route is projected onto each display device in the form of image enhancement. Thus, on the head-up display, the navigation route enhances the real scene seen through the windshield, while in the mobile phone navigation, the navigation route enhances the real-scene image. Here, the navigation route is the target object, and its three-dimensional coordinates in the vehicle coordinate system are determined uniformly, so that the enhanced image of the navigation route is projected onto the corresponding display device according to the conversion relationship between the camera and the equivalent camera model of each display device, avoiding repeated calculation of the coordinate information of the navigation route image presented on the head-up display and the mobile phone.
[0060] It is worth noting that the above implementation effects are only exemplary descriptions and do not limit the display effects of the actual solution.
[0061] The technical solution of this embodiment uses the camera as a reference to determine the three-dimensional coordinates of the target object in the vehicle coordinate system, uses the equivalent camera model of the display device together with the camera parameters and the equivalent parameters of each equivalent camera model to determine the conversion relationship between the current actual scene and the three-dimensional virtual scene, and projects the three-dimensional virtual image of the target object into the real-scene image of the display device according to that conversion relationship. In the embodiment of the present invention, on the basis of projecting the three-dimensional virtual image of the target object into the real-scene image of the display device, uniformly determining the three-dimensional coordinates of the target object in the vehicle coordinate system improves the transferability of the image enhancement effect across different display devices, avoids problems such as the waste of system resources caused by repeated position calculations for different display devices, improves the projection efficiency and accuracy of the image enhancement effect, and enhances the fit between the image enhancement effect and the real scene.
Example Embodiment
[0062] Example three
[0063] Figure 8 is a schematic structural diagram of an image projection device provided in the third embodiment of the present invention. This embodiment can be applied to the situation in which a three-dimensional virtual image corresponding to an actual target object is projected onto a display device capable of displaying real scenes during intelligent driving. The device can implement the image projection method described in any embodiment of the present invention. The device specifically includes:
[0064] The AR object coordinate determination module 810 is configured to determine the three-dimensional coordinates of the target object in the image collected by the camera in the vehicle coordinate system according to the camera parameters;
[0065] The AR projection module 820 is used to project the three-dimensional virtual image of the target object into the real-scene image of each display device according to the three-dimensional coordinates of the target object in the vehicle coordinate system, the camera parameters, and the equivalent parameters of the equivalent camera model of each display device.
[0066] Optionally, the AR object coordinate determination module 810 includes:
[0067] The image recognition unit 8101 is configured to recognize the image collected by the camera and determine the target object;
[0068] The coordinate calculation unit 8102 is configured to calculate the three-dimensional coordinates of the target object in the vehicle coordinate system according to the camera parameters and the imaging plane of the camera.
[0069] Optionally, the coordinate calculation unit 8102 is specifically configured to:
[0070] According to the focal length of the camera, the height of the camera from the ground, the angle between the optical axis of the camera and the ground plane, and the resolution of the imaging plane, the coordinates of the target object in the vehicle coordinate system are calculated.
[0071] Optionally, the AR projection module 820 includes:
[0072] The display device equivalent unit 8201 is used to determine the equivalent camera model of each display device and the equivalent parameters of each equivalent camera model according to each display device; wherein, the equivalent camera model is used to simulate a three-dimensional virtual scene;
[0073] The scene conversion unit 8202 is configured to determine the conversion relationship between the current actual scene and the three-dimensional virtual scene according to the camera parameters and the equivalent parameters of each equivalent camera model;
[0074] The AR projection unit 8203 is used to convert the three-dimensional coordinates of the target object in the vehicle coordinate system according to the conversion relationship, and to project the three-dimensional virtual image of the target object in the three-dimensional virtual scene into the real-scene image of the display device.
[0075] Optionally, the display device equivalent unit 8201 is specifically configured to:
[0076] If the display device is a device that directly displays the image collected by the camera, then the camera is determined as the equivalent camera model of the display device;
[0077] Correspondingly, determine the equivalent parameters of each equivalent camera model, including:
[0078] Determining the external parameters of the camera as the external parameters of the equivalent camera model;
[0079] Based on the internal parameters of the camera, the internal parameters of the equivalent camera model are determined according to the ratio between the resolution of the camera and the resolution of the display device.
[0080] Optionally, the display device equivalent unit 8201 is specifically configured to:
[0081] If the display device is a head-up display, determining that the equivalent camera model of the head-up display is a pinhole imaging model;
[0082] Correspondingly, determine the equivalent parameters of each equivalent camera model, including:
[0083] Determining the internal parameters of the equivalent camera model of the display device according to the optical engine of the head-up display;
[0084] According to the installation position of the head-up display, the external parameters of the equivalent camera model of the display device are determined.
[0085] Further, the device further includes a map data acquisition module 830; the map data acquisition module 830 is specifically configured to:
[0086] Extracting the object in the driving direction of the vehicle as the target object from the map data according to the vehicle positioning information;
[0087] According to the camera parameters, the three-dimensional coordinates of the target object in the vehicle coordinate system are determined.
[0088] The technical solution of this embodiment realizes functions such as real-scene image collection, image recognition, map data extraction, target object determination, three-dimensional coordinate calculation, equivalent model determination, scene relationship conversion, and AR effect projection through the cooperation of the functional modules. In the embodiment of the present invention, on the basis of projecting the three-dimensional virtual image of the target object into the real-scene image of the display device, uniformly determining the three-dimensional coordinates of the target object in the vehicle coordinate system improves the transferability of the image enhancement effect across different display devices, avoids problems such as the waste of system resources caused by repeated position calculations for different display devices, improves the projection efficiency and accuracy of the image enhancement effect, and enhances the fit between the image enhancement effect and the real scene.