How I am handling rendering
Rendering is quite a complex topic, and I’m pretty sure that everyone will give a different answer as to how they handle it in their engine. In the underlying engine of this game, I am using quite a low-level graphics API (Vulkan), which already gives the programmer a lot of control over exactly what is going on at the GPU. Therefore, I didn’t add a whole lot of abstraction over the API, so as to preserve that level of control.
However, one thing that I wanted to make sure of was that, outside of the renderer module, as little Vulkan code as possible would be needed to make the game work (inside the module, however, Vulkan code will be very present, even in parts that don’t directly have to do with the API, like atmosphere, lighting, etc…).
One thing that I thought would be worth sharing is how meshes and vertex bindings / attributes are handled.
Right here is the mesh structure. One thing that I really wanted was the ability for the programmer to easily control things like not having an index buffer, so that vkCmdDraw could be used for more trivially rasterised geometry (which the game will definitely have).
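It looks roughly like this (a simplified sketch: the exact field names, and the enum entries other than BT_VERTEX and BT_INDICES, are just illustrative):

```cpp
#include <vulkan/vulkan.h>

// What usage a buffer serves when rendering the mesh.
typedef enum buffer_type_t {
    BT_VERTEX,     // vertex positions
    BT_NORMAL,     // placeholder example
    BT_UVS,        // placeholder example
    BT_INDICES,
    BT_MAX_VALUE
} buffer_type_t;

// A GPU buffer tagged with its usage.
typedef struct mesh_buffer_t {
    VkBuffer buffer;
    buffer_type_t type;
} mesh_buffer_t;

typedef struct mesh_t {
    // Buffers live at the index of their type: buffers[BT_VERTEX], etc.
    mesh_buffer_t buffers[BT_MAX_VALUE];

    // Stack of the types that were pushed, in push order.
    buffer_type_t buffer_type_stack[BT_MAX_VALUE];
    uint32_t buffer_count;

    uint32_t vertex_count;  // used by vkCmdDraw
    uint32_t index_count;   // used by vkCmdDrawIndexed
} mesh_t;
```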
In this structure, there is an array of mesh buffers, each of which simply contains the Vulkan buffer and what type of buffer it is.
This buffer type basically dictates what usage the buffer will serve when rendering the mesh.
To add a buffer to this structure (say one of type BT_VERTEX, which simply holds the vertex positions), the programmer pushes the type of buffer they want (BT_VERTEX) onto the buffer type stack, which increments the buffer_count variable. The actual buffer object goes into the buffers[] array at index BT_VERTEX, like this: buffers[BT_VERTEX] = mesh_buffer_t{...}.
For this operation, there is a very simple function:
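A minimal sketch of it (push_buffer is an assumed name):

```cpp
// Reserve a slot for a buffer of the given type; the caller fills in the
// actual VkBuffer afterwards.
void push_buffer(mesh_t *mesh, buffer_type_t buffer_type) {
    mesh->buffer_type_stack[mesh->buffer_count++] = buffer_type;
    mesh->buffers[buffer_type].type = buffer_type;
}
```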
This function, however, simply reserves a slot for a mesh_buffer_t that can then be used. To actually fill the mesh_buffer_t object with a GPU buffer containing data, the programmer has to access it. To do that, they request a pointer to the buffer at a given buffer_type_t index with this function:
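Something along these lines (get_mesh_buffer is an assumed name):

```cpp
// Returns the mesh buffer of the given type, or NULL if that type was
// never pushed onto the buffer type stack.
mesh_buffer_t *get_mesh_buffer(mesh_t *mesh, buffer_type_t buffer_type) {
    for (uint32_t i = 0; i < mesh->buffer_count; ++i) {
        if (mesh->buffer_type_stack[i] == buffer_type) {
            return &mesh->buffers[buffer_type];
        }
    }
    return NULL;
}
```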
This will return &mesh->buffers[buffer_type] if the buffer_type has been pushed onto the buffer type stack.
Now, here comes the cool part: from this mesh_t structure, we can create all the necessary binding and attribute information.
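Here is a sketch of how that can work; mesh_binding_info_t, create_mesh_binding_info, and the hard-coded vec3 stride / format are assumptions (a real version would look the stride and format up per buffer type):

```cpp
typedef struct mesh_binding_info_t {
    uint32_t binding_count;
    VkVertexInputBindingDescription bindings[BT_MAX_VALUE];
    uint32_t attribute_count;
    VkVertexInputAttributeDescription attributes[BT_MAX_VALUE];
} mesh_binding_info_t;

// One binding and one attribute per pushed vertex buffer, in push order.
mesh_binding_info_t create_mesh_binding_info(mesh_t *mesh) {
    mesh_binding_info_t info = {};
    for (uint32_t i = 0; i < mesh->buffer_count; ++i) {
        buffer_type_t type = mesh->buffer_type_stack[i];
        if (type == BT_INDICES) continue;  // the index buffer has no vertex binding

        uint32_t b = info.binding_count++;
        info.bindings[b].binding = b;
        info.bindings[b].stride = sizeof(float) * 3;              // assumed vec3
        info.bindings[b].inputRate = VK_VERTEX_INPUT_RATE_VERTEX;

        uint32_t a = info.attribute_count++;
        info.attributes[a].location = a;
        info.attributes[a].binding = b;
        info.attributes[a].format = VK_FORMAT_R32G32B32_SFLOAT;   // assumed vec3
        info.attributes[a].offset = 0;  // one attribute per buffer, tightly packed
    }
    return info;
}
```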
In this function, the binding / attribute information gets created in the order in which you pushed your buffer types onto the buffer type stack.
This binding info structure will then get passed to the function which creates a graphics pipeline for 3D rendering.
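For reference, hooking it up would look something like this (standard Vulkan; only the info field names from the sketch above are assumptions):

```cpp
mesh_binding_info_t info = create_mesh_binding_info(&mesh);

VkPipelineVertexInputStateCreateInfo vertex_input = {};
vertex_input.sType = VK_STRUCTURE_TYPE_PIPELINE_VERTEX_INPUT_STATE_CREATE_INFO;
vertex_input.vertexBindingDescriptionCount = info.binding_count;
vertex_input.pVertexBindingDescriptions = info.bindings;
vertex_input.vertexAttributeDescriptionCount = info.attribute_count;
vertex_input.pVertexAttributeDescriptions = info.attributes;
// This then goes into VkGraphicsPipelineCreateInfo::pVertexInputState.
```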
When it comes to submitting the mesh:
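A sketch of the submission path (submit_mesh, the mesh_render_data_t contents, the uint32 index type, and the vertex-only push constant stage are all assumptions):

```cpp
// Push constant payload; the actual contents don't matter for this sketch.
typedef struct mesh_render_data_t {
    float model[16];  // e.g. a model matrix (illustrative)
} mesh_render_data_t;

void submit_mesh(VkCommandBuffer cmd, mesh_t *mesh,
                 VkPipelineLayout layout, mesh_render_data_t *render_data) {
    vkCmdPushConstants(cmd, layout, VK_SHADER_STAGE_VERTEX_BIT,
                       0, sizeof(mesh_render_data_t), render_data);

    // Bind the vertex buffers in push order, matching the binding info.
    VkBuffer vertex_buffers[BT_MAX_VALUE];
    VkDeviceSize offsets[BT_MAX_VALUE];
    uint32_t vertex_buffer_count = 0;
    for (uint32_t i = 0; i < mesh->buffer_count; ++i) {
        buffer_type_t type = mesh->buffer_type_stack[i];
        if (type == BT_INDICES) continue;
        vertex_buffers[vertex_buffer_count] = mesh->buffers[type].buffer;
        offsets[vertex_buffer_count] = 0;
        ++vertex_buffer_count;
    }
    vkCmdBindVertexBuffers(cmd, 0, vertex_buffer_count, vertex_buffers, offsets);

    if (get_mesh_buffer(mesh, BT_INDICES)) {
        vkCmdBindIndexBuffer(cmd, mesh->buffers[BT_INDICES].buffer,
                             0, VK_INDEX_TYPE_UINT32);
        vkCmdDrawIndexed(cmd, mesh->index_count, 1, 0, 0, 0);
    } else {
        vkCmdDraw(cmd, mesh->vertex_count, 1, 0, 0);
    }
}
```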
mesh_render_data_t contains the push constant information. In this function, depending on whether the mesh contains a BT_INDICES buffer, it will use vkCmdDraw or vkCmdDrawIndexed.
And that’s it!
Here’s some code to create a sphere and submit it.
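Something like this, tying the sketches above together (load_sphere_geometry is a made-up stand-in for whatever actually fills the GPU buffers with sphere data):

```cpp
mesh_t sphere = {};
push_buffer(&sphere, BT_VERTEX);
push_buffer(&sphere, BT_INDICES);
load_sphere_geometry(&sphere);  // fills buffers[], vertex_count, index_count

// At init time, the binding info feeds pipeline creation (see earlier):
mesh_binding_info_t binding_info = create_mesh_binding_info(&sphere);

// Then, every frame, while recording a command buffer:
mesh_render_data_t render_data = {};
submit_mesh(cmd, &sphere, pipeline_layout, &render_data);
```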
The two most important things that this system is currently missing are a way to render instanced geometry, and a way to merge the buffers into one big buffer. Once those get added (if they turn out to be needed), I will definitely write another post about it.