OpenGL (Open Graphics Library)-based inverted image display processing device and method

A reflection (inverted-image) display technology, applied in image data processing, 3D image processing, instruments, and similar fields. It addresses the problems of an inflexible production process, a heavy artist workload, and the difficulty of handling non-planar object models, and it achieves a departure from the fixed-pipeline mode of operation, good picture quality and fluency, and high efficiency, flexibility, and adaptability.

Active Publication Date: 2012-01-04
SHENZHEN TCL NEW-TECH CO LTD
Cites: 3 · Cited by: 21

AI-Extracted Technical Summary

Problems solved by technology

[0006] Method 3: Similar to Method 2, except that the reflection texture image is taken from the original texture image of the object model. After program processing it acquires a hazy, gradually fading effect, and a reflection texture object is produced; but this consumes CPU resources, is only suited to simple cases, and handling the reflection of non-planar three-dimensional objects is very difficult.
[0007] Among the several existing reflection drawing methods mentioned above, although the first method is universal, it needs to...

Method used

The specific calculation principle of the R, G, B, A factor values is illustrated with code: the reflection direction of the object model is multiplied by the position height, and the result is compared with the reflection depth. If the result is within the reflection depth, an R, G, B, A factor value is calculated and assigned; if the result is greater than the reflection depth, the R, G, B, A factors remain unchanged, all being 1.0. The calculation method is simple, general, fast, convenient, and efficient.
The embodiment of the present invention mainly works as follows: the CPU loads and calculates the object model parameters and obtains the texture map pixel data and/or default vertex color values of the object model; the GPU then sequentially performs vertex position space conversion and clipping, rasterization, and fragment coloring on the object model parameters, generating a reflection image of the object model that is output to a display device for display. The reflection effect of the object model is realized through parameter control, so the simulated reality is more lifelike. The hazy gradient fade depends neither on the original texture image of the object model nor on its format, number of color channels, or color byte length, which saves storage space and reduces CPU computation and memory usage.

Abstract

The invention relates to an OpenGL (Open Graphics Library)-based inverted image display processing device and method. The method comprises the following steps: loading and calculating parameters of an object model, and acquiring texture mapping pixel data and/or default vertex colour values of the object model; carrying out vertex position spatial transformation and clipping operations on the parameters of the object model; rasterizing the parameters after the vertex position spatial transformation and clipping; carrying out fragment colouring on the rasterized parameters according to the texture mapping pixel data, so as to generate an inverted image of the object model; and outputting the inverted image to display equipment for display. The device and method realize the inverted-image effect of the object model through parameter control in the OpenGL drawing process, so the simulated reality is more vivid, storage space is saved, and the computation load of the CPU (central processing unit) and the memory usage are reduced; the GPU gains more efficient flexibility and adaptability, better frame quality and fluency are achieved, and the burden on UI (user interface) designers is alleviated.


Technology Topic

Graphics library · Display processing (+9 more)


Examples

  • Experimental program (1)

Example Embodiment

[0050] The solution of the embodiment of the present invention is mainly as follows: the CPU loads and calculates the object model parameters and obtains the texture map pixel data and/or default vertex color values of the object model; the GPU then sequentially performs vertex position space conversion and clipping, rasterization, and fragment coloring on the object model parameters, generating a reflection image of the object model that is output to the display device for display. The reflection effect of the object model is realized through parameter control, so the simulated reality is more lifelike. The hazy gradient fade depends neither on the original texture image of the object model nor on the format, number of color channels, or color byte length of that image, which saves storage space and reduces CPU computation and memory footprint.
[0051] The invention is realized with OpenGL graphics programming technology and the OpenGL Shading Language (GLSL). The hardware involved includes a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and a display device. The GPU contains hardware entities such as a rasterizer and texture memory, and it runs the vertex shader and fragment shader programs.
[0052] The R, G, B, and A values mentioned in the following embodiments correspond respectively to the red, green, and blue color components and the transparency alpha. The pixel format of an image does not necessarily contain these color components, but it can be assembled into this color component format. When mapping a texture onto the surface of the object model, or when coloring the object model, the height of the object model is passed to the corresponding shader, and the R, G, B, A factor values of each position are calculated from the height and the reflection direction. The fragment shader converts the object model scene into pixels and performs the final pixel color control of the reflection before it is drawn into the color buffer of the frame buffer; combined with OpenGL's built-in blending function, this achieves parameter control and realizes the reflection effect. The invention moves the pixel color calculation onto the GPU for processing, so the operation is flexible and the performance of the GPU is fully utilized. Moreover, the pixels can be processed in a single pass before being output to the frame buffer, which is highly efficient and allows large areas to be processed.
[0053] Specifically, as shown in Figure 1, an embodiment of the present invention provides an OpenGL-based reflection display processing method, including:
[0054] Step S101, loading and calculating the object model parameters, and obtaining the texture map pixel data and/or default vertex color values of the object model;
[0055] This step can be completed in the CPU, which loads and calculates the object model parameters. First, the CPU starts the OpenGL application, loads the OpenGL graphics dynamic library, initializes the application, and calls the API of the OpenGL graphics dynamic library; the API then calls the corresponding hardware driver interface. After the GPU subsequently performs the graphics processing, the resulting reflection image is output to the display device.
[0056] Among them, the object model parameters include: vertex data, the height of the model object, the reflection direction, the reflection depth, light, and fog coordinates, etc.
[0057] Vertex data includes: position information, normals, colors, and texture coordinates.
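For concreteness, the per-vertex data listed above might be laid out on the CPU side as in the following minimal C sketch; the struct and field names are illustrative assumptions, not taken from the patent.

```c
#include <GL/gl.h>

/* Illustrative per-vertex layout for the data listed above
   (names and component counts are assumptions, not from the patent). */
typedef struct {
    GLfloat position[3];  /* position information         */
    GLfloat normal[3];    /* normal vector                */
    GLfloat color[4];     /* default vertex color R,G,B,A */
    GLfloat texcoord[2];  /* texture coordinates          */
} Vertex;
```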
[0058] The object model in this embodiment may be an object model with texture maps or one without texture maps; the object model itself has a default object color, which is the default vertex color value referred to in this embodiment. Depending on whether the object model has a texture map, the subsequent coloring process differs (see step S104 for details).
[0059] In addition to loading and calculating the object model parameters, the CPU also needs to obtain the corresponding pixel color parameters of the object model: for an object model with a texture map, it obtains the texture map pixel data of the object model; for an object model without a texture map, it obtains the default vertex color value of the model; and for an object model with both an object color and a texture map, it may obtain both kinds of data.
[0060] The CPU sends the loaded and calculated object model parameters, together with the acquired texture map pixel data and/or default vertex color values of the object model, to the GPU, which performs the subsequent image processing.
[0061] The acquired texture map pixel data of the object model is stored in the texture memory of the GPU, and the default vertex color value is stored in the pixel operation module of the GPU.
[0062] Step S102, performing vertex position space conversion and clipping operations on the object model parameters;
[0063] This step is completed by the vertex shader program in the GPU, which processes the object model parameters transmitted by the CPU and performs the spatial position data transformation and the R, G, B, A factor calculations.
[0064] Specifically, the world coordinates of the object model parameters are converted into the viewpoint coordinate system; the reflection range of the object model is then determined from the coordinate-converted object model parameters; finally, the color factor values and transparency values of the object model's reflection (i.e. the R, G, B, A factors) are calculated from the height and reflection direction in the object model parameters and used as input to the fragment shader.
[0065] Taking code as an example, the specific calculation principle of the R, G, B, A factor values is as follows: multiply the reflection direction of the object model by the position height, and compare the result with the reflection depth. If the result is within the reflection depth, calculate and assign an R, G, B, A factor value; if the result is greater than the reflection depth, the R, G, B, A factors remain unchanged, all being 1.0. The calculation method is simple, universal, fast, convenient, and efficient.
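The patent does not reproduce the code itself, so the following is only a minimal sketch of such a vertex shader, written as a GLSL source string in the host C program; the uniform names, the d >= 0.0 guard, and the linear fade law are assumptions.

```c
/* Vertex shader sketch of the factor calculation described above
   (uniform names and the linear fade law are assumptions). */
static const char *reflectVertSrc =
    "uniform float uReflectionDepth;\n" /* depth over which the reflection fades   */
    "uniform vec3  uReflectionDir;\n"   /* direction of the reflection             */
    "varying vec4  vFactor;\n"          /* R,G,B,A factors for the fragment shader */
    "void main() {\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "    gl_FrontColor = gl_Color;\n"       /* pass the default vertex color through */
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    /* 'direction multiplied by position height': depth of this vertex
       along the reflection direction */
    "    float d = dot(uReflectionDir, gl_Vertex.xyz);\n"
    "    if (d >= 0.0 && d < uReflectionDepth) {\n"
    /* within the reflection depth: compute a fading factor */
    "        float f = 1.0 - d / uReflectionDepth;\n"
    "        vFactor = vec4(f, f, f, f);\n"
    "    } else {\n"
    /* outside the reflection depth: factors remain unchanged at 1.0 */
    "        vFactor = vec4(1.0);\n"
    "    }\n"
    "}\n";
```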
[0066] Step S103, performing rasterization processing on the object model parameters after the vertex position space conversion and clipping operations;
[0067] Rasterization is the process of converting basic primitives into a two-dimensional image, each pixel of which carries information such as color and depth. The rasterization of a basic primitive therefore consists of two parts: the first determines which integer grid areas in window coordinates are occupied by the primitive; the second assigns a color value and a depth value to each such area. The result is passed to the next stage of GL (fragment operations), where the fragment data is used to update the appropriate areas of the frame buffer. The rasterization operation referred to here is the first part of this work.
[0068] The rasterization processing in this embodiment refers to the rasterization of the object model in the viewpoint coordinate system, specifically performing rasterization processing of points, lines, polygons, bitmaps, and pixel rectangles.
[0069] Step S104, performing fragment coloring processing on the rasterized object model parameters according to the texture map pixel data, generating a reflection image of the object model, and outputting it to a display device for display.
[0070] This step is completed by the fragment shader in the GPU, which performs texture sampling and outputs pixel colors according to the texture coordinate mapping relationship in the object model parameters.
[0071] The frame buffer accepts the pixel color and pixel depth output by the fragment shader, or accepts data output directly by the pixel operation module (for object models without texture maps).
[0072] After that, the texture image data is extracted, the texture dimensions and filter parameters are specified, and the texture object is generated; the reflection image is generated and output to the display device for display. Note that the texture operation data can also be obtained directly from the frame buffer rather than from external texture image data.
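A hedged C sketch of this texture-object step; the dimensions and the RGBA format are illustrative, and glCopyTexImage2D is the standard OpenGL call for taking the pixels straight from the frame buffer, as the paragraph allows.

```c
/* Create a texture object, specify dimensions and filter parameters,
   and source its pixels from image data or from the frame buffer. */
GLuint tex;
GLsizei w = 256, h = 256;    /* texture dimensions (illustrative)   */
const GLvoid *pixels = NULL; /* external texture image data, if any */

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

if (pixels != NULL) {
    /* Texture object from external texture image data. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
} else {
    /* Texture operation data taken directly from the frame buffer. */
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, w, h, 0);
}
```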
[0073] Specifically, as shown in Figure 2, step S102 includes:
[0074] Step S1021, performing coordinate conversion and clipping operations on the object model parameters;
[0075] Step S1022: Determine the reflection range of the object model according to the object model parameters after coordinate conversion and clipping operation;
[0076] Step S1023: Calculate the color factor value and the transparency value of the reflection of the object model.
[0077] As shown in Figure 3, step S104 includes the following steps (a fragment shader sketch covering them follows the list):
[0078] Step S1041, determining whether the object model has a texture map; if yes, go to step S1042; otherwise, go to step S1045;
[0079] Step S1042, sampling and calculating the pixel color of each vertex in the texture map pixel data according to the mapping relationship of the texture coordinates in the object model parameters;
[0080] Step S1043: Calculate the final pixel color value of each vertex according to the pixel color and transparency value of each vertex;
[0081] Step S1044: Paste the final pixel color value of each vertex on the object model to generate a reflection image of the object model, and output it to the display device for display.
[0082] Step S1045: Calculate the final pixel color value of each vertex according to the default vertex color value and transparency value; and proceed to step S1044.
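As referenced above, here is a minimal fragment shader sketch covering steps S1041 to S1045; the uHasTexture switch and the multiplicative application of the factors are assumptions, since the patent does not give the code.

```c
/* Fragment shader sketch of steps S1041-S1045: sample the texture when the
   model has one, otherwise use the default vertex color, then apply the
   reflection factors (the uHasTexture switch is an assumption). */
static const char *reflectFragSrc =
    "uniform sampler2D uTexture;\n"
    "uniform bool      uHasTexture;\n" /* S1041: texture map present?            */
    "varying vec4      vFactor;\n"     /* R,G,B,A factors from the vertex shader */
    "void main() {\n"
    "    vec4 base;\n"
    "    if (uHasTexture) {\n"
    "        base = texture2D(uTexture, gl_TexCoord[0].st);\n" /* S1042 */
    "    } else {\n"
    "        base = gl_Color;\n"                               /* S1045 */
    "    }\n"
    "    gl_FragColor = base * vFactor;\n" /* S1043: final pixel color */
    "}\n";
```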
[0083] In the specific implementation, the shader programs are written in the GLSL language, and the shader program objects are compiled and linked successfully. After the height, reflection direction, and depth parameters of the object model have been set, the vertex shader calculates the R, G, B, A factor values corresponding to the dim fade positions of the reflection and passes them to the fragment shader. The fragment shader samples the texture color, or directly accepts the object color, and, before outputting the final pixel color of the fragment, applies the reflection's R, G, B color factor values and the transparency value alpha to this color, recalculating the final color value. In this way, the final output pixel color of the fragment differs from the original object model color in the reflection, and it is mixed with the existing color in the frame buffer according to the specified mixing method to achieve an ideal and realistic reflection effect.
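A host-side sketch of the compile, link, and mixing sequence this paragraph describes, reusing the two shader source strings sketched earlier; the GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA blend mode and the parameter values are assumptions, since the patent only refers to "the specified mixing method".

```c
/* Compile and link the reflection shaders, set the reflection parameters,
   and enable blending with the existing frame-buffer color. */
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &reflectVertSrc, NULL);
glCompileShader(vs);

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &reflectFragSrc, NULL);
glCompileShader(fs);

GLuint prog = glCreateProgram();
glAttachShader(prog, vs);
glAttachShader(prog, fs);
glLinkProgram(prog);
glUseProgram(prog);

/* Height/depth and direction parameters named in the text (values illustrative). */
glUniform1f(glGetUniformLocation(prog, "uReflectionDepth"), 0.5f);
glUniform3f(glGetUniformLocation(prog, "uReflectionDir"), 0.0f, -1.0f, 0.0f);
glUniform1i(glGetUniformLocation(prog, "uHasTexture"), 1);

/* Mix the reflection fragments with the color already in the frame buffer
   (assumed conventional source-alpha blend). */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
```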
[0084] The following examples illustrate this:
[0085] To produce dynamic reflections, the traditional approach is to design multiple reflection texture images, or to design an algorithm that, starting from the original texture image, recalculates and modifies the texture image every frame to generate texture objects. This multiple-mask design method is impractical: it increases the artist's workload, and the memory consumption and CPU usage are very high, making the program inefficient.
[0086] When the object model and the reflection surface are not completely symmetrical and the object model is tilted, the reflection must be tilted as well and fade gradually; for example, for a box placed on a glass plane, the reflection is likewise inclined. Existing Method 1 can achieve this effect with masks, but the art workload is heavy, the program efficiency is low, one extra patch has to be drawn, and the CPU resource occupancy is high. Existing Method 2 processes the reflection by algorithm; since the texture coordinate mapping relationship between the texture image and the object model is unknown, it is impossible to know which parts of the texture image should be processed and which parts are unused, so this method cannot achieve the above effects.
[0087] The method of the present invention resolves the above difficulties of the known methods. In use, it is only necessary to calculate, from the height and reflection direction of the object model, which parts of the model require gradient processing and the corresponding color visual effect values. The method is simple to use, processes whole blocks at a time, and is efficient and versatile.
[0088] This embodiment requires no extra texture maps and adds no other hardware cost; it saves storage space, greatly reduces memory usage, lightens the burden on the CPU, and frees up more CPU time slices for other tasks, which greatly improves CPU efficiency and reduces the workload of UI designers. Once the parameter control interface functions for the object model's reflection are implemented, the depth, color, and degree of change of a 3D object model's reflection on the user interface can be parameterized. The reflection effect is more realistic, and the UI display is more beautiful and fashionable, thereby enhancing the product's competitiveness.
[0089] As shown in Figure 4, an embodiment of the present invention also provides an OpenGL-based reflection display processing device, including: a main control module 401, a vertex shading module 402, a rasterization module 403, and a fragment shading module 404, wherein:
[0090] The main control module 401 is configured to load and calculate object model parameters, and to obtain the texture map pixel data and/or default vertex color values of the object model;
[0091] The vertex shader module 402 is used to perform vertex position space conversion and clipping operations on the object model parameters;
[0092] The rasterization module 403 is used to perform rasterization processing on the object model parameters after the vertex position space conversion and clipping operations;
[0093] The fragment shading module 404 is configured to perform fragment shading processing on the rasterized object model parameters according to the texture map pixel data and/or default vertex color values, to generate a reflection image of the object model, and output to a display device for display.
[0094] Among the above modules, the main control module 401 can be set in the CPU, while the vertex shading module 402, the rasterization module 403, and the fragment shading module 404 correspond respectively to the vertex shader program, the rasterizer, and the fragment shader program in the GPU.
[0095] First, the main control module 401 in the CPU starts the OpenGL application, loads the OpenGL graphics dynamic library, initializes the application, and calls the API of the OpenGL graphics dynamic library; the API then calls the corresponding hardware driver interface. After the GPU's vertex shading module 402, rasterization module 403, and fragment shading module 404 perform the graphics processing, the resulting reflection image is output to the display device.
[0096] Among them, the object model parameters include: vertex data, the height of the model object, the reflection direction, the reflection depth, light, and fog coordinates, etc.
[0097] Vertex data includes: position information, normals, colors, and texture coordinates.
[0098] The object model in this embodiment may be an object model with texture maps or one without texture maps; the object model itself has a default object color, which is the default vertex color value referred to in this embodiment. Depending on whether the object model has a texture map, the subsequent coloring process differs.
[0099] In addition to loading and calculating the object model parameters, the main control module 401 also needs to obtain the corresponding pixel color parameters of the object model: for an object model with a texture map, it obtains the texture map pixel data of the object model; for an object model without a texture map, it obtains the default vertex color value of the model; and for an object model with both an object color and a texture map, it obtains both kinds of data.
[0100] The main control module 401 sends the loaded and calculated object model parameters, together with the acquired texture map pixel data and/or default vertex color values of the object model, to the vertex shading module 402 of the GPU, and the modules of the GPU perform the subsequent image processing.
[0101] The texture map pixel data of the object model acquired by the main control module 401 is stored in the texture memory in the GPU, and the default vertex color value is stored in the pixel operation module in the GPU.
[0102] After that, the vertex shader module 402 processes the object model parameters transmitted from the main control module 401, and performs spatial position data transformation and R, G, B, and A factor calculations.
[0103] Specifically, the world coordinates of the object model parameters are converted into the viewpoint coordinate system; the reflection range of the object model is then determined from the coordinate-converted object model parameters; finally, the color factor values and transparency values of the object model's reflection (i.e. the R, G, B, A factors) are calculated from the height and reflection direction in the object model parameters and used as input to the fragment shader.
[0104] Taking code as an example (see the vertex shader sketch above), the specific calculation principle of the R, G, B, A factor values is as follows: multiply the reflection direction of the object model by the position height, and compare the result with the reflection depth. If the result is within the reflection depth, calculate and assign an R, G, B, A factor value; if the result is greater than the reflection depth, the R, G, B, A factors remain unchanged, all being 1.0. The calculation method is simple, universal, fast, convenient, and efficient.
[0105] After that, the rasterization module 403 performs rasterization processing on the object model parameters after the vertex position space conversion and clipping operations.
[0106] Rasterization is the process of converting basic primitives into a two-dimensional image, each pixel of which carries information such as color and depth. The rasterization of a basic primitive therefore consists of two parts: the first determines which integer grid areas in window coordinates are occupied by the primitive; the second assigns a color value and a depth value to each such area. The result is passed to the next stage of GL (fragment operations), where the fragment data is used to update the appropriate areas of the frame buffer. The rasterization operation referred to here is the first part of this work.
[0107] The rasterization processing of the rasterization module 403 in this embodiment refers to rasterizing the object model parameters in the viewpoint coordinate system, specifically performing rasterization processing of points, lines, polygons, bitmaps, and pixel rectangles.
[0108] The data processed by the rasterization module 403 is transmitted to the fragment shading module 404, which performs texture sampling and outputs pixel colors according to the texture coordinate mapping relationship in the object model parameters.
[0109] The frame buffer accepts the pixel color and pixel depth output by the fragment shader, or accepts data output directly by the pixel operation module (for object models without texture maps).
[0110] After that, the texture image data is extracted, the texture dimensions and filter parameters are specified, and the texture object is generated; the reflection image is generated and output to the display device for display. Note that the texture operation data can also be obtained directly from the frame buffer rather than from external texture image data.
[0111] Specifically, as shown in Figure 5, the vertex shading module 402 includes: a coordinate conversion unit 4021, a reflection range determination unit 4022, and a factor calculation unit 4023, wherein:
[0112] The coordinate conversion unit 4021 is used for performing coordinate conversion and clipping operations on the object model parameters;
[0113] The reflection range determining unit 4022 is configured to determine the reflection range of the object model according to the object model parameters after coordinate conversion and clipping operations;
[0114] The factor calculation unit 4023 is used to calculate the color factor value and the transparency value of the reflection of the object model.
[0115] As shown in Figure 6, the fragment shading module 404 includes: a texture sampling unit 4041, a pixel color calculation unit 4042, and a coloring unit 4043, wherein:
[0116] The texture sampling unit 4041 is used to sample and calculate the pixel color of each vertex in the texture map pixel data according to the mapping relationship of the texture coordinates in the object model parameters when the object model has a texture map;
[0117] The pixel color calculation unit 4042 is configured to calculate the final pixel color value of each vertex according to the pixel color of each vertex and the transparency value;
[0118] The coloring unit 4043 pastes the final pixel color value of each vertex on the object model, generates a reflection image of the object model, and outputs it to the display device for display.
[0119] Further, the pixel color calculation unit 4042 is also used to calculate the final pixel color value of each vertex according to the default vertex color value and the transparency value when the object model does not have a texture map;
[0120] The coloring unit 4043 is also used to paste the final pixel color value of each vertex on the object model to generate and display a reflection image of the object model.
[0121] In the OpenGL drawing process of this embodiment, the reflection effect of the object model is controlled by parameters, so the simulated reality is more lifelike and the reflection fades with a hazy gradient. The effect depends neither on the original texture picture nor on the format, number of color channels, or color byte length of the original texture image; only one texture picture is needed, or none at all (in which case the object model must have a color). No special processing of reflection texture images is required, and no special algorithm needs to be designed to process reflection texture images to produce a reflection texture. This saves storage space, reduces CPU computation and memory usage, frees up more CPU time for other calculations, changes the fixed-pipeline operation mode of the GPU, and brings the GPU more efficient flexibility and adaptability, so that more of the GPU's capability can be used to achieve better picture quality and smoothness; it also reduces the burden on UI designers.
[0122] The above are only preferred embodiments of the present invention and do not limit its patent scope. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.


