I must admit, I really love game development. It still comes as a shock that something so often frowned upon by the adults around me as a kid is something I feel truly indebted to for its educational value, especially considering that I knew practically nothing about programming before starting.
On the Internet, we often find articles and videos explaining how to set up a 3D scene in which the user can move around and see how the world interacts with objects. However, something I rarely see is how to make your application actually start to look like a game, with a main menu, intermediate menus, and a code structure that makes all of this possible without becoming extremely difficult. To do that, there are a couple of things we are going to have to discuss.
I’m far from an Emacs veteran, but I love this operating system and have spent an unhealthy amount of time configuring the thing. I almost live in it. There are, however, a couple of problems with the default build of Emacs: the keybindings are painful, and I find the constant eye movement quite uncomfortable.
In the previous article, we discussed the implementation side of player movement, prediction errors, and entity interpolation. Terraforming is a much, much more complex issue, one that I spent a very long time hacking at the computer trying to get right. I’ve tried this whole time to find a simpler way of synchronising terraforming between multiple clients, but I’m still dumbfounded by how complex it ended up being.
In the previous article, we discussed the theory behind client-side prediction, error correction, and interpolation. The theory may not sound so bad, but the implementation is particularly taxing. The most difficult part is handling the prediction errors and making sure that when the client corrects them, the server and all clients stay in sync.
In the previous article, we discussed basic client-server models and basic networking concepts. In this article, we are going to talk about basic synchronisation of the world simulation and player movement.
Phew!! Got here! The most exciting part of the project is here: multiplayer architecture. Throughout these articles, I will be covering in a lot of depth how multiplayer will be implemented in this game’s engine. There are a couple of things that will be quite technically difficult, simply due to the nature of the engine: it’s real-time and fast-paced, the terrain (an asset that all entities share) is entirely modifiable, … But do not worry, we will go through these challenges.
The last step to making a decent foundation for a multiplayer game is a system for handling players, or entities. There are a ton of different ways of organising your entities in a game, such as all the different variations of component systems, etc… In this game, we won’t be using a component system for now, as there won’t be that many different types of entities: only players, bullets, and some props.
One very important part of this game is the fact that the terrain is 100% modifiable (players should be able to destroy or add terrain arbitrarily).
Hello, I wanted to address the build architecture of this project. Given that this is a multiplayer game, there will be a server program and a client program. I absolutely want both of these programs to use the same code. The only difference between the two will be that one presents (renders) to the screen while the other is just a standard console application (to which you will be able to send commands via sockets / the network).
Rendering is quite a complex topic, and I’m pretty sure that everyone will give a different answer as to how they handle it in their engine. In the underlying engine of this game, I am using quite a low-level graphics API (Vulkan), which already gives the programmer a lot of control over exactly what is going on over at the GPU. Therefore, I didn’t add a whole lot of abstraction over the API, in order to conserve that level of control.
Hello!! I’ve got something quite exciting to share… Over the past few months, I’ve been playing around with a ton of different things when it comes to game development (link to Github), and thought I might start a project that may help others learn and overcome certain challenges I faced during this time.
I have this artifact in my new project, with the SSR implementation that I provided a while back: the shaders are written in GLSL and are fed to the Vulkan graphics pipeline for rendering the screen-space reflections.
So in June 2019, I was lucky enough to get the opportunity to spend two weeks at Splash Damage for work experience. For someone who really loves game development, this was one of the most magical experiences I’ve ever had in my life. I remember just about everything from it because of how surreal it was.
I haven’t posted in quite some time, as a couple of weeks ago I decided to learn Vulkan, the new low-overhead graphics API developed by Khronos. The reasons I want to learn it are:
Recently, I’ve been trying to make Emacs my main “IDE” for C++, OpenGL etc… because I’m faster on it, and also Visual Studio has been really, really buggy lately (crashing, not updating…). For a while now, I’ve had a mini side project of trying to make Emacs a decent C++ and OpenGL editor. However, it worked terribly.
This was quite hard to implement, mostly because there isn’t much documentation on how to properly implement SSR. The best I found was this
Deferred rendering is a technique in which the program postpones (hence “deferred”) the lighting calculations until after all the geometry has been rendered, so that lighting happens only on the pixels that actually end up on screen. Any geometry behind the player, or culled by the view frustum, never goes through the lighting process, saving a lot of time.
These two effects are not too complicated and can really make the 3D scene look much, much better. For a simple 2D texture, it can turn something like this:
At one point while making the engine, I realised that I was hardcoding everything in C++: all texture loads, entity creation, shader loads, materials…
I therefore decided to use a JSON parser (downloaded from GitHub) to parse files that describe certain objects, like entities, and create them from data.
A good and flexible rendering pipeline is essential for me, as I often like to tweak it. I tried to make one that was relatively decoupled, so that if I move the stages around it still works. The solution I came up with (not a very complicated one) was simply to have a stack of polymorphic render stages.
Ok, here we go. Animations.
For implementing animations, I used once again, ThinMatrix’s tutorial.
Post-processing is super cool! It’s not very hard to implement and makes your game look so much better.
GUIs are not a simple thing to implement. A lot of the tutorials I looked at said implementing them was relatively easy - “it’s just rendering a 2D textured quad”… NO IT’S NOT JUST THAT
Unfortunately, this post will not be too interesting. Although water looks super awesome in games, it was much easier to implement than those damn shadows (which look like they should be the easier of the two).
Shadows: something that needs a lot of things to work, in order to work…
Component Systems… I had some trouble designing the initial version of this component system…
This is my first post. For a long time, I have been extremely interested in the architecture of game engines. I previously tried to make a game (which I have a post about; it’s called Landscaper), but this time I wanted to develop an actual little game engine for me to use when developing games. This engine will address all the issues that I encountered in my experience of making games. It will feature support for:
subscribe via RSS