In order to have a clearer understanding of the technical features, purposes and effects herein, the specific embodiments of the present invention will now be described with reference to the accompanying drawings, in which the same reference numerals denote the same parts. For the sake of brevity of the drawings, the relevant parts of the present invention are schematically shown in each drawing, and do not represent the actual structure as a product. In addition, in order to make the drawings simple and easy to understand, in some drawings, only one of the components having the same structure or function is schematically shown, or only one of them is marked.
 With regard to the control system, functional modules and application programs (APP) are well known to those skilled in the art and can take any appropriate form: hardware or software, a plurality of discretely arranged functional modules, or multiple functional units integrated into one piece of hardware. In its simplest form, the control system may be a controller, such as a combinational logic controller or a microprogrammed controller, as long as the operations described in this application can be implemented. Of course, the control system can also be integrated into a physical device as different modules, which does not deviate from the basic principles and protection scope of the present invention.
 In the present invention, "connection" may include direct connection, indirect connection, communication connection, and electrical connection, unless otherwise specified.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise. It will also be understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, values, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, values, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
 It should be understood that the term "vehicle" or "vehicle's" or other similar terms as used herein generally includes motor vehicles such as passenger cars including sport utility vehicles (SUVs), buses, trucks, and various commercial vehicles; watercraft including a variety of boats and ships; aircraft; and the like, and includes hybrid vehicles, electric vehicles, plug-in hybrid electric vehicles, hydrogen-powered vehicles, and other alternative fuel vehicles (e.g., vehicles using fuels derived from energy sources other than petroleum). As mentioned herein, a hybrid vehicle is a vehicle having two or more power sources, for example a vehicle that is both gasoline-powered and electric-powered.
 Furthermore, the controller of the present disclosure may be embodied as a non-transitory computer readable medium containing executable program instructions to be executed by a processor, a controller, or the like. Examples of computer-readable media include, but are not limited to, ROM, RAM, compact disc (CD)-ROMs, magnetic tapes, floppy disks, flash drives, smart cards, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable medium is stored and executed in a distributed fashion, e.g., by a telematics server or over a controller area network (CAN).
 The invention provides an image processing system applied to the multi-channel cameras of a large vehicle. As shown in Figure 2, it specifically includes: multi-channel cameras, a SoC processor, and a display screen, wherein the multi-channel cameras and the display screen are connected to the SoC processor;
 The multi-channel cameras are used to collect image data around the large vehicle;
 The SoC processor is used to process the images collected by the multi-channel cameras for external devices to call or output to the display screen for display;
 The SoC processor at least includes: a display controller, a second ISP image processor, a video decoder, and a video encoder. The display controller, the second ISP image processor, the video decoder, and the video encoder are independent of each other, and are each connected to the AXI bus and exchange data through it.
 Specifically, there are multiple sub-modules in the SoC processor, and the functions of the sub-modules are independent of and do not interfere with each other;
 The multi-channel cameras at least include: one or more of a vehicle Ethernet camera, an AHD camera, or a camera with a MIPI interface.
 Specifically, there are many types of cameras. In this embodiment, in order to be compatible with cameras having different interfaces, the SoC processor provides a variety of different interfaces and image processing methods, so that the SoC processor can adapt to cameras produced by different suppliers.
 Specifically, for different types of cameras, corresponding communication methods are provided:
 The vehicle Ethernet camera is connected to the SoC processor through the vehicle Ethernet bus; the vehicle Ethernet camera compresses the acquired image and transmits it over the vehicle Ethernet bus to the Ethernet MAC chip in the SoC processor;
 Or the MIPI interface camera is connected to the deserializer through the GMSL bus and then connected to the SoC processor;
 Or the AHD camera forms parallel data after passing through the decoder and is connected to the SoC processor.
 The lens of the camera can be a fisheye lens or a spherical lens to increase the viewing angle and reduce blind spots.
 The camera with a MIPI interface includes at least: an image sensor and a serializer;
 Depending on requirements, the vehicle Ethernet camera, the AHD camera, or the MIPI interface camera may optionally be configured with an ISP image processor.
 Specifically, since the cameras come from different manufacturers, cameras from different manufacturers are configured with different chips: some cameras have ISP image processing chips and some do not. Therefore, if a camera is equipped with an ISP image processing chip, its images do not need to be processed by the image processor in the SoC;
 For a camera with a MIPI interface, because such a camera adopts a proprietary protocol, a dedicated image processor bound to the MIPI interface is needed, so an ISP image processor is generally not set inside a camera with a MIPI interface. Therefore, if a dedicated processor is not used, the image is processed by the second ISP image processor in the SoC processor.
 For the vehicle Ethernet camera or the AHD camera, depending on the production requirements of different manufacturers, some camera models are equipped with ISP image processors and some are not, so the image signals are processed according to the actual situation to avoid wasting resources.
 The data stream of the vehicle Ethernet camera is unpacked by the protocol stack of the SoC processor according to its protocol and then output to the video storage buffer. The video decoding chip extracts the data from the video storage buffer and decompresses it to obtain image data in RAW data format. The video storage buffer is called by the second ISP image processor in the SoC processor.
 The SoC processor also includes: an Ethernet stack unpacking chip, a cross switch chip, a video capture chip, a GPU chip, and a video processing engine chip;
 Among them, the Ethernet stack unpacking chip, the video capture chip, the GPU chip, the video processing engine chip, and the memory are connected through the AXI bus and exchange data over it;
 The crossbar chip is connected to the video capture chip.
 Specifically, each sub-module of the SoC processor is relatively independent, does not interfere with the others, and communicates through the AXI bus. In the prior art, such as designs that integrate the video decoding, video encoding, and video processing engine chips, the function of an individual module cannot be called directly because it is integrated; for example, during video encoding, video processing cannot be performed. In the solution provided by this embodiment, however, since each module works relatively independently and the modules do not affect each other, video decoding and video processing through the video processing engine chip can be performed simultaneously, which improves efficiency.
 The cross switch chip converts the received M channels of input data into N channels of output data.
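The M-to-N conversion performed by the cross switch chip can be illustrated with a minimal sketch. The routing table below is a hypothetical software model for illustration only; in the actual system the routing is configured in hardware:

```python
def crossbar_route(inputs, routing):
    """Map M input channels to N output channels via a routing table.

    inputs:  dict of input-channel id -> data (M entries)
    routing: dict of output-channel id -> input-channel id (N entries,
             hypothetical configuration)
    """
    return {out_ch: inputs[in_ch] for out_ch, in_ch in routing.items()}

# Example: M = 2 camera inputs fanned out to N = 3 outputs.
inputs = {"cam0": "frame_a", "cam1": "frame_b"}
routing = {"display": "cam0", "gpu": "cam0", "encoder": "cam1"}
outputs = crossbar_route(inputs, routing)  # 3 output channels
```

One input channel may feed several outputs, which is how a single camera stream can reach the display and the GPU at the same time.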
 According to actual needs, the images in the second buffer can be selected to enter the GPU or the video processing engine chip or the panoramic image stitching module for processing.
 Specifically, since the image processing system for the multi-channel cameras of a large vehicle provided by the present invention greatly changes the existing image processing system architecture, existing image processing methods cannot handle the multi-channel camera images in the new architecture. If analysis is performed according to the traditional method, it will lead to packet loss, jitter of the multi-channel camera picture, repeated processing of images, or failure to process them. To solve the above problems, based on the image processing system provided by the present invention, an image processing method is proposed, including:
 After the system is powered on, determine whether the camera interface in the SoC processor has camera signal access;
 If a camera signal is accessed, obtain the image processing ID corresponding to the camera ID;
 Specifically, read the attribute value corresponding to the camera ID in the ROM of the system and check whether the attribute value exists; if the attribute value does not exist, prompt the user to set it;
 The attribute values of the camera ID include at least: the image processing ID, which records whether the camera's current image is processed by the ISP processor inside the camera; the image transmission protocol; and the image data type.
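The attribute lookup described above can be sketched as follows. The field names and the ROM table contents are illustrative assumptions, not the actual on-chip layout:

```python
# Hypothetical ROM table: camera ID -> attribute record.
CAMERA_ROM = {
    "CID01": {"isp_in_camera": True,  "protocol": "vehicle-ethernet", "data_type": "h264"},
    "CID02": {"isp_in_camera": False, "protocol": "mipi",             "data_type": "raw"},
}

def get_camera_attributes(camera_id):
    """Return the attribute record for a camera ID, or None to signal
    that the user must be prompted to configure it."""
    return CAMERA_ROM.get(camera_id)

attrs = get_camera_attributes("CID01")
missing = get_camera_attributes("CID99")  # unconfigured camera -> None
```

A `None` result corresponds to the "attribute value does not exist" branch, where the system reminds the user to set it.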
 Specifically, since there is image data captured by multiple cameras, and the data is at different processing stages, in the prior art, without the buffers of this embodiment, the system would need to run a real-time monitoring process and also record the progress of image processing; otherwise the system cannot know which processing steps an image has currently gone through. Real-time monitoring consumes a lot of CPU resources, and the recorded information also occupies storage space and has to be queried and checked on every call. To solve this problem, this embodiment stores image data at different stages by setting up a plurality of different buffers in the memory.
 A shared area is set in a memory, such as the system memory, and selected as the image storage buffer. The image buffer includes: a first buffer for storing the unpacked original image data received by the SoC, as shown in Figure 3 and Figure 4.
 The second buffer is used for the image data after being decompressed by the video decoder and processed by the ISP processor inside the multi-channel camera.
 The third buffer is used for the image data that has been decompressed by the video decoder but not processed by the ISP processor inside the multi-channel camera.
 The fourth buffer is used to store the image data processed by the GPU, the video processing engine chip, or the panorama stitching module.
 Different buffers can be distinguished by adding an identifier to the header file of the corresponding buffer, thereby reducing frequent calls between processes and threads and repeated processing of images, saving system resources.
 Acquire image data and store it in the first buffer; the video decoder acquires the image data from the first buffer and decompresses it;
 The image processing ID is used to record whether the image obtained by the corresponding camera ID is processed by the ISP image processor in the camera.
 Determine, according to the image processing ID, whether the image has passed through the ISP image processor in the camera, and store the image in the preset buffer according to the judgment result.
 If it passes through the ISP image processor in the camera, the image obtained by the corresponding ID camera is stored in the second buffer;
 If it has not passed through the ISP image processor in the camera, the image obtained by the camera with the corresponding ID is stored in the third buffer;
 If there is image data in the third buffer, the second ISP image processor obtains the image data from the third buffer, processes the image data, and stores it in the second buffer.
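The routing of decoded images into the second or third buffer, and the subsequent pass through the second ISP image processor, might be sketched like this. The buffer objects and the tagging stand-in for ISP work are assumptions for illustration:

```python
second_buffer, third_buffer = [], []

def route_decoded_image(image, processed_by_camera_isp):
    """Store an image in the second buffer if the camera's internal ISP
    already processed it, otherwise in the third buffer."""
    (second_buffer if processed_by_camera_isp else third_buffer).append(image)

def run_second_isp():
    """Drain the third buffer through the SoC's second ISP image processor
    (modelled here as a tagging step) into the second buffer."""
    while third_buffer:
        image = third_buffer.pop(0)
        second_buffer.append(f"isp({image})")  # placeholder for real ISP work

route_decoded_image("img_a", processed_by_camera_isp=True)   # -> second buffer
route_decoded_image("img_b", processed_by_camera_isp=False)  # -> third buffer
run_second_isp()  # third buffer drains into second buffer
```

After this step, everything in the second buffer is ISP-processed regardless of which camera it came from, so downstream modules need not distinguish camera types.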
 Specifically, users can obtain panoramic images of multi-channel cameras according to their needs, so as to monitor the surroundings of the vehicle in real time. The specific methods are as follows:
 If a panoramic image needs to be obtained, the system will form a panoramic image after splicing the images captured by multiple cameras, which can be called by external devices or output to the display screen for display through the display controller;
 Specifically, since the multi-channel cameras may come from different manufacturers and use different transmission protocols, the time synchronization of the images captured by the cameras is very important; otherwise the stitched image shakes or is distorted, and the stitching algorithm may even fail because the common area cannot be found. Therefore, the images captured by the multi-channel cameras need to be synchronously corrected. In the prior art, the shooting time of the image is used as the criterion: the shooting time is obtained from the image attributes, and stitching is performed for images with the same time. However, this method only works for cameras using the same protocol. When the cameras transmit based on different protocols, the large differences between the crystal oscillators of their system clocks may make the error of this method too large, so the synchronization result is inaccurate. Therefore, this embodiment provides a panorama image stitching method to solve this problem:
 The panorama image stitching method specifically includes: performing time synchronization on the multi-channel camera images used for panoramic stitching, and obtaining the images captured by the multi-channel cameras at the same moment;
 Open up a synchronization queue cache in the second buffer for storing images captured by multiple cameras at the same time after time synchronization;
 The system calls the panoramic image stitching algorithm, which fetches the images from the synchronization queue cache space for panoramic stitching, and then obtains the panoramic image and stores it in the fourth buffer;
 After starting the panorama stitching, the system creates the same number of queues as the multi-channel cameras in the second buffer space, and each queue stores the image corresponding to the camera ID;
 The images of the multi-channel cameras are synchronized by starting multiple threads;
 Referring to Figure 5, assume that there are 6 cameras in the system, that is, 6 vehicle cameras. In the second buffer, when panorama stitching is detected, 6 queues need to be opened at the same time, and each queue stores, in sequence, the image data captured by its camera, frame by frame. One frame can be divided into multiple small blocks of data. Each queue is numbered in its header file, e.g., CID01 denotes the first camera.
 Obtain the time delay T of the data transmission from each camera to the SoC processor, obtain from the image the second time stamp at which the SoC processor received it, and correct the image through the second time stamp and the time delay T to obtain images with the same corrected time stamp;
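The correction step can be sketched as subtracting each camera's transmission delay T from the timestamp at which the SoC received the frame. This is a simplified model under that assumption; the actual correction in the embodiment may include further terms:

```python
def corrected_capture_time(received_timestamp_ms, delay_T_ms):
    """Estimate the true capture time from the SoC-side receive timestamp
    (the "second time stamp") and the per-camera transmission delay T."""
    return received_timestamp_ms - delay_T_ms

# Two cameras whose frames arrive at different times but were captured together:
# an Ethernet camera with a 50 ms path delay and a MIPI camera with 30 ms.
t1 = corrected_capture_time(received_timestamp_ms=1050, delay_T_ms=50)
t2 = corrected_capture_time(received_timestamp_ms=1030, delay_T_ms=30)
same_moment = (t1 == t2)  # both correct back to the same capture instant
```

Because T is measured per camera, frames transported over different protocols with different latencies can still be aligned to the same capture moment.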
 That is, at this point the images at the same moment are acquired in each queue. Assume that at time t, the images in the shaded data blocks in Figure 5 are from the same moment, which exactly meets the conditions for panorama stitching. In the prior art, they are all taken directly according to the address number, but if the image at time t in CID01 is taken out, the system still needs to query for the image at time t in CID02 to CID06, and the CPU interrupt generated by each query increases the system load. Therefore, this embodiment provides the following solution:
 Take out the corrected images of the same moment from the queues stored for the corresponding cameras, and store them in the synchronization queue cache space;
 The synchronization queue cache space includes a header file and a data block area. The header file is used to record the address of each data block and the time of the images corresponding to the data block, wherein each data block is used to store the number of images required for one panorama stitching operation.
 That is, a synchronization queue cache space is opened up in the second buffer and used specially for panorama stitching. As shown in Figure 6, each large data block in the synchronization queue cache space stores the 6 time-synchronized images, stored in order, which is convenient for the panorama stitching module to call. In this way, the data blocks are stored in units of 6 images, in first-in, first-out order. Once the panorama stitching module knows that there is data in the synchronization queue cache space, the images in each large data block can be stitched without considering other factors, such as whether the image times match or whether the images have been processed by the ISP.
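The synchronization queue cache described above can be sketched as a FIFO of fixed-size blocks, each holding one image per camera in camera order, with a header recording the common capture time. The block size of 6 matches the six-camera example in Figure 5; the class and field names are illustrative:

```python
from collections import deque

NUM_CAMERAS = 6  # matches the six-camera example in Figure 5

class SyncQueueCache:
    """FIFO of data blocks; each block stores one time-synchronized image
    per camera plus a header recording the common capture time."""
    def __init__(self):
        self.blocks = deque()

    def push(self, capture_time, images):
        if len(images) != NUM_CAMERAS:
            raise ValueError("one image per camera is required")
        self.blocks.append({"time": capture_time, "images": list(images)})

    def pop_for_stitching(self):
        # First-in, first-out: the stitching module takes the oldest block
        # and can stitch immediately, without re-checking times or ISP state.
        return self.blocks.popleft()

cache = SyncQueueCache()
cache.push(1000, [f"cam{i}@t1000" for i in range(NUM_CAMERAS)])
block = cache.pop_for_stitching()
```

Because every pushed block is already complete and synchronized, the stitching module's only check is whether the queue is non-empty.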
 The above are only preferred embodiments of the present invention, and the present invention is not limited to the above embodiments. Those skilled in the art can understand that the forms and adjustments described in these embodiments are not limiting. It can be understood that other improvements and changes directly derived or conceived by those skilled in the art without departing from the basic idea of the present invention should be considered to be included within the protection scope of the present invention.