Recipes to enhance your code

This section is dedicated to additional Python recipes, which improve the functionality of your program but aren’t fundamental to the learning objectives, so we left them out of the main practicals.

Read and use them when you have spare time outside the practicals, or once you have finished them, to explore further or enhance your project.

Enable transparency in your code

To render transparent objects, alpha blending must be explicitly enabled in the constructor of Viewer by adding these two lines:

GL.glEnable(GL.GL_BLEND)           # enable blending
GL.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA)  # blend incoming color with framebuffer using source alpha

The alpha value given as the fourth component of the fragment color in the fragment shader is then used to blend the incoming fragment color with the color already present in the framebuffer. For this to work, however, make sure you render non-transparent objects first and transparent objects last: otherwise, objects located behind transparent objects will not show up, because the depth test discards the fragments of whatever lies behind the transparent object's geometry.
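
One simple way to enforce this ordering is to sort the draw list so that opaque objects come first. Below is a minimal sketch, assuming each drawable carries a transparent boolean attribute (a hypothetical flag, not part of the practicals' base code):

# draw opaque objects first, then transparent ones
# (assumes a hypothetical 'transparent' attribute on each drawable)
for drawable in sorted(self.drawables, key=lambda d: d.transparent):
    drawable.draw(...)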

As an example, the simple color shader could be modified to account for a global transparency factor by simply passing the object transparency as a uniform:

#version 330 core

uniform float alpha;

in vec3 fragment_color;
out vec4 out_color;

void main() {
    out_color = vec4(fragment_color, alpha);
    // instead of out_color = vec4(fragment_color, 1);

    // alternatively, override only the alpha component afterwards:
    // out_color.a = alpha;
}
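
On the Python side, the alpha uniform then has to be set before the draw call. A minimal sketch, assuming the compiled shader program id is accessible as self.shader.glid (adapt the name to your own code) and using the standard PyOpenGL uniform calls:

GL.glUseProgram(self.shader.glid)                         # bind the shader first
loc = GL.glGetUniformLocation(self.shader.glid, 'alpha')  # locate the uniform
GL.glUniform1f(loc, 0.5)                                  # e.g. 50% opacity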

Time your code for performance

It is quite useful to time your code to understand what is going on. But timing isn't completely trivial: some things happen on the CPU and others on the GPU, so you have to make separate queries for both types of measurements. glfw.get_time() lets you query CPU time, while the OpenGL driver provides an internal GPU elapsed-time measurement through a specific query object protocol, illustrated below. Swapping buffers waits for the OpenGL drawing calls to finish, so its timing is higher than (and includes) the CPU time actually spent issuing OpenGL draw commands.

Thus the suggested code measures three things: the CPU time spent preparing and issuing OpenGL draw commands (measured around the object draw calls), the GPU time spent actually executing these draw commands, and the total frame time between two buffer swaps, which includes waiting for the OpenGL commands to finish before swapping. The latter corresponds to the actual perceived frame rate.

class Viewer:
    ...
    def run(self):
        ...
        query, time = GL.glGenQueries(1)[0], GL.GLuint(0)  # GPU timer query id and result holder

        # main rendering loop
        while not glfw.window_should_close(self.win):
            start_time = glfw.get_time()
            GL.glClear(...)
            GL.glBeginQuery(GL.GL_TIME_ELAPSED, query)

            # draw our scene objects
            for drawable in self.drawables:
                drawable.draw(...)

            # query CPU drawing preparation time and GPU drawing time
            frame_time = 1e3 * (glfw.get_time() - start_time)
            GL.glEndQuery(GL.GL_TIME_ELAPSED)
            GL.glGetQueryObjectui64v(query, GL.GL_QUERY_RESULT, time)

            # flush render commands, and swap draw buffers
            glfw.swap_buffers(self.win)

            # frame swap time query
            swap_time = 1e3 * (glfw.get_time() - start_time)

            # print frame rate stats
            print('  \rGL render %.03fms, CPU %.03fms, frame swap at %.03fms' %
                  (time.value * 1e-6, frame_time, swap_time), end='', flush=True)