The graphics pipeline
Ever since the early days of real-time 3d, the triangle has been the paintbrush with which scenes have been drawn. Although modern GPUs can perform all sorts of flashy effects to cover up this dirty secret, underneath all the shading, triangles are still the medium in which they work. The graphics pipeline that OpenGL implements reflects this: the host program fills OpenGL-managed memory buffers with arrays of vertices; these vertices are projected into screen space, assembled into triangles, and rasterized into pixel-sized fragments; finally, the fragments are assigned color values and drawn to the framebuffer. Modern GPUs get their flexibility by delegating the “project into screen space” and “assign color values” stages to uploadable programs called shaders. Let’s look at each stage in more detail:
The vertex and element arrays
A rendering job starts its journey through the pipeline in a set of one or more vertex buffers, which are filled with arrays of vertex attributes. These attributes are used as inputs to the vertex shader. Common vertex attributes include the location of the vertex in 3d space, and one or more sets of texture coordinates that map the vertex to a sample point on one or more textures. The set of vertex buffers supplying data to a rendering job is collectively called the vertex array. When a rendering job is submitted, we supply an additional element array, an array of indexes into the vertex array that select which vertices get fed into the pipeline. The order of the indexes also controls how the vertices get assembled into triangles later on.
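As a concrete illustration, here is a minimal sketch of how a host program might fill a vertex buffer and an element buffer. It assumes a GL context already exists and that an extension loader such as GLEW supplies the buffer API; the vertex data and the make_buffer helper are hypothetical, not part of OpenGL itself:

```c
#include <GL/glew.h>  /* assumed: GLEW (or a similar loader) provides glGenBuffers et al. */

/* Hypothetical vertex attributes: one 2d position per vertex. */
static const GLfloat vertex_data[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
    -1.0f,  1.0f,
     1.0f,  1.0f
};
/* Element array: indexes into the vertex array, in assembly order. */
static const GLushort element_data[] = { 0, 1, 2, 3 };

/* Hypothetical helper: create a buffer object and upload an array into it. */
static GLuint make_buffer(GLenum target, const void *data, GLsizeiptr size)
{
    GLuint buffer;
    glGenBuffers(1, &buffer);      /* allocate a buffer name */
    glBindBuffer(target, buffer);  /* bind it to the given target */
    glBufferData(target, size, data, GL_STATIC_DRAW); /* copy the array into GPU-managed memory */
    return buffer;
}
```

The vertex buffer would then be created with make_buffer(GL_ARRAY_BUFFER, vertex_data, sizeof(vertex_data)) and the element buffer with make_buffer(GL_ELEMENT_ARRAY_BUFFER, element_data, sizeof(element_data)).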
Uniform state and textures
A rendering job also has uniform state, which provides a set of shared, read-only values to the shaders at each programmable stage of the pipeline. This allows the shader program to take parameters that don’t change between vertices or fragments. The uniform state includes textures, which are one-, two-, or three-dimensional arrays that can be sampled by shaders. As their name implies, textures are commonly used to map texture images onto surfaces. They can also be used as lookup tables for precalculated functions or as datasets for various kinds of effects.
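A sketch of how uniform state might be supplied, assuming a linked program object and a texture object have been created elsewhere; the uniform names timer and texture0 are hypothetical:

```c
/* Look up the uniforms by name in the (already linked) program. */
GLint timer_loc   = glGetUniformLocation(program, "timer");
GLint texture_loc = glGetUniformLocation(program, "texture0");

glUseProgram(program);
glUniform1f(timer_loc, 42.0f);  /* a plain read-only float parameter */

/* Textures are supplied through texture units: bind the texture to a
 * unit, then point the sampler uniform at that unit's index. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(texture_loc, 0);
```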
The vertex shader
The GPU begins by reading each selected vertex out of the vertex array and running it through the vertex shader, a program that takes a set of vertex attributes as inputs and outputs a new set of attributes, referred to as varying values, that get fed to the rasterizer. At a minimum, the vertex shader calculates the projected position of the vertex in screen space. The vertex shader can also generate other varying outputs, such as a color or texture coordinates, for the rasterizer to blend across the surface of the triangles connecting the vertex.
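To make the inputs and varying outputs concrete, here is a minimal GLSL 1.10 vertex shader, embedded as a C string the way a host program would upload it; the attribute and varying names are illustrative, not prescribed by OpenGL:

```c
static const char *vertex_shader_source =
    "#version 110\n"
    "attribute vec2 position;\n"  /* input: a per-vertex attribute from the vertex array */
    "varying vec2 texcoord;\n"    /* output: a varying value for the rasterizer to blend */
    "void main() {\n"
    "    gl_Position = vec4(position, 0.0, 1.0);\n"  /* projected screen-space position */
    "    texcoord = position * 0.5 + 0.5;\n"         /* derive a texture coordinate */
    "}\n";
```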
Triangle assembly
The GPU then connects the projected vertices to form triangles. It does this by taking the vertices in the order specified by the element array and grouping them into sets of three. The vertices can be grouped in a few different ways:
- Take every three elements as an independent triangle
- Make a triangle strip, reusing the last two vertices of each triangle as the first two vertices of the next
- Make a triangle fan, connecting the first element to every subsequent pair of elements

The diagram shows how the three different modes behave. Strips and fans both require only one new index per triangle in the element array after the initial three, trading the flexibility of independent triangles for extra memory efficiency in the element array. The sketch below shows how each mode maps onto a draw call.
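Assuming an element buffer with count indexes is already bound, the three grouping modes correspond to the mode argument of the same draw call; this is a sketch of the alternatives, not a complete program:

```c
/* Each call consumes the same element array but groups the indexes differently. */
glDrawElements(GL_TRIANGLES,      count, GL_UNSIGNED_SHORT, (void *)0); /* indexes 0-2, 3-5, ... */
glDrawElements(GL_TRIANGLE_STRIP, count, GL_UNSIGNED_SHORT, (void *)0); /* each index past the second adds a triangle */
glDrawElements(GL_TRIANGLE_FAN,   count, GL_UNSIGNED_SHORT, (void *)0); /* the first index is shared by every triangle */
```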
Rasterization
The rasterizer takes each triangle, clips it and discards parts that are outside of the screen, and breaks the remaining visible parts into pixel-sized fragments. As mentioned above, the vertex shader’s varying outputs are also interpolated across the rasterized surface of each triangle, assigning a smooth gradient of values to each fragment. For example, if the vertex shader assigns a color value to each vertex, the rasterizer will blend those colors across the pixelated surface as shown in the diagram.
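Interpolation is fixed-function hardware rather than anything you program through OpenGL, but the blending the rasterizer performs can be sketched in plain C as a barycentric weighted average (ignoring the perspective correction real GPUs also apply):

```c
/* Conceptual sketch only: blend one varying value across a triangle.
 * l0, l1, l2 are the fragment's barycentric coordinates and sum to 1. */
float interpolate_varying(float v0, float v1, float v2,
                          float l0, float l1, float l2)
{
    return l0 * v0 + l1 * v1 + l2 * v2;
}
```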
The fragment shader
The generated fragments then pass through another program called the fragment shader. The fragment shader receives the varying values output by the vertex shader and interpolated by the rasterizer as inputs. It outputs color and depth values that then get drawn into the framebuffer. Common fragment shader operations include texture mapping and lighting. Since the fragment shader runs independently for every pixel drawn, it can perform the most sophisticated special effects; however, it is also the most performance-sensitive part of the graphics pipeline.
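A matching minimal GLSL 1.10 fragment shader, again as a C string; it samples the texture from the uniform state at the coordinate the rasterizer interpolated (names are illustrative):

```c
static const char *fragment_shader_source =
    "#version 110\n"
    "uniform sampler2D texture0;\n"  /* texture from the uniform state        */
    "varying vec2 texcoord;\n"       /* varying, interpolated per fragment    */
    "void main() {\n"
    "    gl_FragColor = texture2D(texture0, texcoord);\n"  /* color output */
    "}\n";
```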
Framebuffers, testing, and blending
A framebuffer is the final destination for the rendering job’s output. In addition to the default framebuffer OpenGL gives you to draw to the screen, most modern OpenGL implementations let you make framebuffer objects that draw into offscreen renderbuffers or into textures. Those textures can then be used as inputs to other rendering jobs. A framebuffer is more than a single 2d image; in addition to one or more color buffers, a framebuffer can have a depth buffer and/or stencil buffer, both of which optionally filter fragments before they are drawn to the framebuffer: depth testing discards fragments from objects that are behind the ones already drawn, and stencil testing uses shapes drawn into the stencil buffer to constrain the drawable part of the framebuffer, “stencilling” the rendering job. Fragments that survive these two gauntlets have their color value alpha blended with the color value they’re overwriting, and the final color, depth, and stencil values are drawn into the corresponding buffers.
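A sketch of setting up an offscreen framebuffer object plus the per-fragment tests described above, assuming GL 3.0+ (or ARB_framebuffer_object), an already-allocated color texture color_tex, and hypothetical width/height values:

```c
GLuint fbo, depth_rb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
/* Color output lands in a texture that later rendering jobs can sample. */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, color_tex, 0);

/* Attach a depth renderbuffer so depth testing has somewhere to work. */
glGenRenderbuffers(1, &depth_rb);
glBindRenderbuffer(GL_RENDERBUFFER, depth_rb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depth_rb);

glEnable(GL_DEPTH_TEST);                           /* discard occluded fragments */
glEnable(GL_BLEND);                                /* blend surviving fragments  */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* standard alpha blending    */
```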
Conclusion
That’s the process, from vertex buffers to framebuffer, that your data goes through when you make a single “draw” call in OpenGL. Rendering a scene usually involves multiple draw jobs, switching out textures, other uniform state, or shaders between passes and using the framebuffer’s depth and stencil buffers to combine the results of each pass. Now that we’ve covered the general dataflow of 3d rendering, we can write a simple program to see how OpenGL makes it all happen. Throughout the course of this tutorial, I’d love to get your feedback—let me know if it’s helping you or if anything doesn’t make sense.
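Pulling the earlier sketches together, a single draw job might look like the following; program, texture, texture_loc, vbo, and ebo are assumed from the setup sketches above, and position_attrib is assumed to come from glGetAttribLocation(program, "position"):

```c
glUseProgram(program);                 /* shaders for the programmable stages  */

glActiveTexture(GL_TEXTURE0);          /* uniform state, including the texture */
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(texture_loc, 0);

glBindBuffer(GL_ARRAY_BUFFER, vbo);    /* describe the vertex array layout */
glVertexAttribPointer(position_attrib, 2, GL_FLOAT, GL_FALSE,
                      2 * sizeof(GLfloat), (void *)0);
glEnableVertexAttribArray(position_attrib);

glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);  /* element array */
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, (void *)0); /* the “draw” call */
```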