Computer live image rendering method and system based on graphics card
An image-rendering technology in the computer field that solves the problem that existing methods cannot provide high-speed processing capability, with the effects of ensuring picture clarity and video fluency, meeting filter requirements, and improving the user experience.
Examples
Embodiment 1
[0098] 1. Create a Direct3D rendering pipeline and write PixelShader (pixel shader) code that performs color beautification by texture lookup: specifically, the red, green, and blue primary-color values of each screen pixel serve as texture coordinates, and the corresponding color is extracted from the texture.
[0099] 2. Load a color-map texture compatible with GPUImage's LookupTable filter, upload it to video memory, and pass it in as a parameter of the above PixelShader code.
[0100] 3. Capture the camera image and upload it to video memory by writing a DirectShow filter and assembling the media flow graph.
[0101] 4. Execute the Direct3D rendering pipeline so that the PixelShader code processes the camera image and generates the target image.
[0102] 5. Download the target image from video memory to main memory for subsequent encoding and streaming to the live-broadcast server.
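The per-pixel lookup described in steps 1 and 2 above can be sketched on the CPU as follows. This is a hypothetical illustration, not the patent's actual shader code: it assumes the GPUImage-style 512×512 lookup texture, laid out as an 8×8 grid of 64 tiles of 64×64 texels, in which the blue channel selects a pair of adjacent tiles and the red and green channels select the texel within each tile.

```python
# CPU sketch of the LUT color lookup (hypothetical helper, assuming the
# GPUImage LookupTable layout: 512x512 texture = 8x8 grid of 64 tiles,
# each 64x64 texels; blue picks the tile pair, red/green the texel).

def lut_lookup(rgb, lut):
    """rgb: (r, g, b) floats in [0, 1]; lut: 512x512 table, lut[y][x] -> (r, g, b)."""
    r, g, b = rgb
    blue = b * 63.0                    # blue spans the 64 tiles
    tile0 = int(blue)                  # lower tile index
    tile1 = min(tile0 + 1, 63)        # upper tile index
    frac = blue - tile0                # weight between the two tiles

    def sample(tile):
        # Tile position in the 8x8 grid, then red/green offset inside the tile.
        ty, tx = divmod(tile, 8)
        x = tx * 64 + min(int(r * 63.0 + 0.5), 63)
        y = ty * 64 + min(int(g * 63.0 + 0.5), 63)
        return lut[y][x]

    c0, c1 = sample(tile0), sample(tile1)
    # Linearly interpolate between the two mapped colors.
    return tuple(a + (c - a) * frac for a, c in zip(c0, c1))
```

With an identity lookup table, the function returns (approximately) the input color; a stylized table remaps colors toward the desired filter look.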
Embodiment 2
[0104] Taking the Windows system as an example, the basic implementation steps are as shown in Figure 6, including:
[0105] 1. Capture the lens image from the camera device by calling the Windows DirectShow interface and store it in main memory.
[0106] 2. Create two textures in video memory, and transfer the camera image and the color map of the color filter into video memory.
[0107] 3. The software specifies the position at which the lens image is displayed in the preview screen, transfers the position coordinates to video memory, and converts the coordinate system through the vertex shader to obtain the final display coordinates.
[0108] 4. Sample the lens-image texture at the given coordinates to obtain a pixel.
[0109] 5. Perform linear interpolation between the two mapped colors from the color mapping table to obtain an accurate final output.
[0110] 6. Transfer the processed texture from video memory to main memory.
[0111] ...
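The coordinate-system conversion in step 3 can be illustrated with a small sketch. The function below is a hypothetical helper, assuming the usual Direct3D convention of mapping window pixel coordinates (origin at top-left, y pointing down) to normalized device coordinates in [-1, 1] (y pointing up); this is the transform the vertex shader effectively applies.

```python
# Hypothetical sketch of the vertex-shader coordinate conversion:
# preview-window pixel position -> Direct3D normalized device coordinates.

def pixel_to_ndc(px, py, width, height):
    """Map pixel coords (origin top-left, y down) to NDC in [-1, 1] (y up)."""
    x_ndc = (px / width) * 2.0 - 1.0
    y_ndc = 1.0 - (py / height) * 2.0   # flip: screen y grows downward
    return (x_ndc, y_ndc)
```

For a 1280×720 preview, the top-left corner maps to (-1, 1), the center to (0, 0), and the bottom-right corner to (1, -1).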
Embodiment 3
[0114] On the basis of Embodiment 2, between steps 4 and 5, the incoming pixels can be classified with a YUV ellipse model to distinguish the content of the picture. Taking skin differentiation as an example: if a pixel is skin-colored, it receives microdermabrasion (smoothing) and strong whitening; if it is not skin-colored, it receives no microdermabrasion and only weak whitening. Each whitened pixel is then decomposed into its red, green, and blue primary colors: the blue value determines which two of the 64 small blocks in the mapping table the mapped color lies in, and the red and green values determine the position within those blocks.
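The YUV ellipse test above can be sketched as follows. This is a hedged illustration: the BT.601 conversion is standard, but the ellipse center and axis lengths below are illustrative values chosen for the example, not the patent's actual parameters.

```python
# Sketch of the YUV (Cb/Cr) ellipse skin-color test. The ellipse
# parameters (center cx, cy and semi-axes ax, ay) are illustrative
# assumptions, not values taken from the patent.

def rgb_to_ycbcr(r, g, b):
    """BT.601 full-range RGB (0-255) to (Y, Cb, Cr)."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

def is_skin(r, g, b, cx=150.0, cy=115.0, ax=20.0, ay=15.0):
    """True if the pixel's (Cr, Cb) falls inside the skin-tone ellipse."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return ((cr - cx) / ax) ** 2 + ((cb - cy) / ay) ** 2 <= 1.0
```

Pixels classified as skin would then take the strong-whitening LUT path, while the rest take the weak-whitening path.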
[0115] The present invention uses the graphics card of a computer to perform high-performance color beautification by means of Direct3D, defines a color mapping table by means of a LUT (Look-Up Table), and realizes direct access for live-broadcast software based on t...