Mod Support #0 – Introduction

Mod support is a well-loved, much-requested and difficult-to-implement side of game development. It can range from small configuration file changes to full-blown total conversions. In its most extensible form it takes enough experience with games to separate game logic from core logic, embed scripting languages, load models and animations at runtime and merge it all together as seamlessly as possible.

Coming from a modding background before entering software and games development, it’s an area very close to my heart. I feel it’s an amazing tool to help increase your game’s longevity and, more importantly, allow a community to form around your game. In the best situations this community can create amazing additions or changes that breathe new life into your game and, even as its developer, rekindle your interest after you’ve long exhausted yourself.

As we’ll be adding mod support to our current game, we’ll write about what we use and how we go about using it. Since there tends to be little information about the more complex side of things, we’ll try to expose these under-discussed areas to help other developers planning the same thing.

If anyone wants to get in contact with me feel free to reach me at richard@roguevector.com or my Twitter at @CWolf.

Contents

The IOs of Game Netcode #3 – Synchronisation and Locks

This is part of a series of posts revolving around game netcode development, the introduction and links to the other posts can be found here.

 

We’ve already established you need to run your netcode framework on a different thread to your main game’s update/render loop, something which all netcode developers will agree is a requirement for writing good netcode. This means you’re already in danger of the threading and concurrency issues mentioned in the last post. So how do you ensure that you don’t run into these issues? Well, you have a few weapons in your arsenal to tackle the problem. The first one I’m going to discuss is locks.

 

Locks

Think of locks as your most basic weaponry against threading issues. The pistol with infinite ammo you switch back to after your power-up wears off. Locking is a synchronisation mechanism that ensures only one thread can execute a piece of code at any given time. It does this by forcing other threads to wait until the currently active thread is finished with the locked code.

Locks are such a common mechanism that I’m pretty sure all modern languages that support multi-threading also support locking. It’d be pretty disastrous if they didn’t. The main differences are the types of locks, the terminology and how they’re used. You usually have a couple of types available to you, such as object-level locks and method-level locks. Object-level locks are the bread and butter here and more complex locking mechanisms can be built upon them. In C# an object-level lock looks like this:

lock (this) {
	// Your synchronised code
}

In Java you achieve object-level locks using synchronized blocks, which are essentially the same thing:

synchronized (this) {
	// Your synchronised code
}

Unfortunately, in C++ you do not have a simple statement for object-level locking (although the standard library's std::lock_guard, or a small template of your own, can wrap it up). Instead, you need to instantiate a mutex (mutual exclusion) object, part of the C++ standard library, and perform the lock operation on that.

std::mutex mtx; // Declared as a class member

mtx.lock();
// Your synchronised code
mtx.unlock();

These code snippets will all do the same thing when multiple threads hit them at the same time. The first thread to request the lock will get it and be allowed to proceed to execute the code within the lock. All other threads will request the lock, but will be forced to wait (block) until the thread holding the lock leaves the synchronised code, after which another thread will be granted the lock and proceed.

There are also method-level locks, which work like object-level locks except that they operate at the method level, synchronising everything inside a particular method. In C# this is achieved with the following:

[MethodImpl(MethodImplOptions.Synchronized)]
public void SynchronisedMethod() {
	// Your synchronised code
}

In Java, you use the same synchronized keyword as in object-level locking:

public synchronized void synchronisedMethod() {
	// Your synchronised code
}

C++ does not have a mechanism for method-level locking, but this can be easily achieved with a simple object-level lock. The method-level locking is just a convenience mechanism for object-level locking everything in a method.

Now, method-level locking and the examples I’ve shown for object-level locking have a pretty serious issue to take into consideration. A thread can acquire a lock on a method or an object while another thread holds the same lock on a different instance of the same class. This is because these types of locks are handled at the object level (locking against its own instance or a member object). For the most part this is exactly what is required, but sometimes there are situations in which you need to synchronise all instances of a particular class. This is called class-level locking and can be achieved in a couple of ways. You can perform an object-lock on a static object shared by all instances of a specific class, or you can do method-level locking on static methods.
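
For illustration, here's a minimal Java sketch of class-level locking using a shared static lock object (the class and member names are made up for the example):

public class ConnectionPool {

	// One lock object shared by every instance of the class
	private static final Object CLASS_LOCK = new Object();

	public void reserveConnection() {
		synchronized (CLASS_LOCK) {
			// Synchronised across ALL instances of ConnectionPool,
			// not just the instance this method was called on
		}
	}

	// A static synchronized method also gives class-level locking,
	// although it locks on the Class object rather than CLASS_LOCK
	public static synchronized void resetAll() {
		// Your synchronised code
	}

}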

Of course, as with most coding concepts, there is a downside to every up. With locking mechanisms, that is deadlocks.

 

Deadlocks

A deadlock is when a thread is blocked indefinitely by a lock, and deadlocks usually occur when multiple threads execute code that results in nested locks. For example, let’s take the classic bank transfer scenario.

public class Account {

	private double balance;

	public Account(double balance) {
		this.balance = balance;
	}

	public void withdraw(double amount) {
		balance -= amount;
	}

	public void deposit(double amount) {
		balance += amount;
	}

}

public class Bank {

	public void transfer(Account from, Account to, double amount) {
		synchronized (from) {
			synchronized (to) {
				from.withdraw(amount);
				to.deposit(amount);
			}
		}
	}

}

Both accounts are synchronized so that exclusive access is obtained and the program is assured it can perform all operations required for the transfer without blocking. The deadlock arises when a Bank executes two opposing transfers between the same two accounts at the same time (on separate threads).

final Account account1 = new Account(1000);
final Account account2 = new Account(1000);
final Bank bank = new Bank();

new Thread(new Runnable() {
	public void run() {
		bank.transfer(account1, account2, 500);
	}
}).start();

new Thread(new Runnable() {
	public void run() {
		bank.transfer(account2, account1, 500);
	}
}).start();

The result can vary depending on execution order, but if both threads manage to obtain their first lock, a deadlock will occur. This is because neither thread can obtain its second lock until the other thread releases it, resulting in both threads blocking indefinitely.
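
One common way to avoid this particular deadlock (a sketch of one possible fix, assuming each Account has been given a unique id, which the original example doesn't have) is to always acquire the locks in the same order, regardless of which account is the source and which is the destination:

public void transfer(Account from, Account to, double amount) {
	// Always lock the account with the lower id first, so two opposing
	// transfers agree on the locking order and can no longer deadlock
	Account first = from.getId() < to.getId() ? from : to;
	Account second = (first == from) ? to : from;

	synchronized (first) {
		synchronized (second) {
			from.withdraw(amount);
			to.deposit(amount);
		}
	}
}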

The primary reason behind deadlocks is bad software design. Deadlocks can easily be avoided if the developer takes a step back and designs the inter-thread communication first. If the program/system makes use of multiple threads, then the deadlock scenario should be at the forefront of the developer’s mind. I’ve seen a number of poorly written programs where the deadlock scenario isn’t even considered and, for the most part, the program runs fine, but occasionally a deadlock will occur. This happens because the developer throws in a lock here or there to avoid concurrent modification exceptions and suddenly has methods with locks calling other methods with locks, resulting in a nested deadlock. Of course, the way they solve this is to try to ensure the proper execution order by adding more locks, which, as you can guess, just makes things worse.

In my experience, the best implementation of multi-threaded operation consists of very few locks, but in the right places. DESIGN YOUR MULTITHREADING FIRST! 🙂

 

Thread-safe and Concurrent Data Structures

So now you should understand how to synchronise your multithreaded code while avoiding potential deadlocks, which will help you steer clear of concurrency issues when writing well-threaded netcode. Now, I’m going to briefly touch on a set of special data structures that are designed to further help you avoid concurrency issues and deadlocks.

Thread-safe and concurrent data structures are a set of data structures that follow a certain set of rules to ensure that race conditions do not happen. This is done through a variety of methods, such as locking and atomic operations.

They take care of all of the concurrency issues without any of the potential deadlocks, making your life a lot easier. Java, C# and C++ all have a decent set of thread-safe data structures in their standard libraries, too many to go through here, but it’s definitely worth searching their respective documentation and reading up on the specifics.

One thing you need to keep in mind, though, is that even though the add and remove operations are thread-safe, any iterators or enumerators generated from these data structures may not be. In other words, if you loop over all the elements in a thread-safe data structure, the iteration itself may not be thread-safe. So if another thread adds or removes an element while the loop is being processed, you’ll end up with a concurrent modification and a race condition causing unpredictable behaviour. Fortunately, these data structures usually offer some kind of fail-fast iterator or enumerator which throws a concurrent modification exception when it detects that the underlying structure has been modified after the iterator was created, which can then be handled properly by the developer.
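
As a quick Java illustration of that fail-fast behaviour (a sketch; the exact behaviour depends on the collection you pick):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

List<String> players = Collections.synchronizedList(new ArrayList<String>());
players.add("Alice");
players.add("Bob");

// add() and remove() are individually synchronised, but...
for (String player : players) {
	// ...if another thread calls players.add() or players.remove() while
	// this loop is running, the iterator will typically detect the change
	// and throw a ConcurrentModificationException
	System.out.println(player);
}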

You can usually avoid the concurrent modification exception by using the correct concurrent data structure for the job and in the right place. I like to use a concurrent queue for inter-thread communication as it allows me to iterate through the queue in a thread-safe way using peek and pop operations (no need for iterators or enumerators), which I will show an example of in a later post.
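
As a small Java sketch of that queue-based pattern, using ConcurrentLinkedQueue from java.util.concurrent (Command, receivedCommand and handle() are placeholders for your own netcode types and handlers):

import java.util.concurrent.ConcurrentLinkedQueue;

ConcurrentLinkedQueue<Command> incoming = new ConcurrentLinkedQueue<Command>();

// Netcode thread: add received commands to the queue
incoming.offer(receivedCommand);

// Main game thread: drain the queue each update without an iterator
Command next;
while ((next = incoming.poll()) != null) {
	handle(next);
}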

 

This pretty much wraps up this post. It’s a bit longer than the previous ones but I wanted to finish with the multithreading so that I can actually get into some nitty-gritty netcode stuff next post 🙂 Hopefully you can take something away from this that will aid you in your game development, and as always, if you want to chat or pick my brains about anything you can either comment below, catch me on IRC (click Chat above) or fire me a tweet to @Jargon64.

Thanks for reading! 🙂

Interfacing with UI #2 – Scalability & Aspect Ratios

This is part of a series of posts revolving around user interface design and development, the introduction and links to the other posts can be found here.

The first article in this series discussed the different types of UI library that exist and the pros and cons of each. In this article I’ll explain the UI library we selected for our current game in development and how we tackled the scalability and aspect ratio problems of UI design and implementation.

 

Our Library Choice

Daikon Forge GUI

Based on the information discussed in the last article, we decided to go with the Daikon Forge UI framework (DF-GUI) for our game. For all the good and bad, DF-GUI is a raster based library for Unity. Along with all the previously discussed advantages, we felt source code access was crucial so we could maintain ownership over our codebase. While we would have been able to obtain the source code to some of the vector based libraries, the cost was prohibitively expensive. This rules out some of the nicer vector libraries but, as long as it’s planned for, it isn’t a major problem.

It’s important to note that DF-GUI is being redesigned from the ground up for version 2.x. If you intend to buy DF-GUI you will want to either wait until 2.x is released, which is risky if you’re on a schedule, or use another library.

The rest of this article will discuss two approaches to solving scaling and aspect ratio issues and will go into our reasons for selecting the one we did. I’ll try to keep things as generic as possible.

 

Why not use NGUI?

Since this question may pop into a few heads I’ll tackle it straight away. NGUI is a widely used UI library for Unity. It was so widely used that Unity even hired its lead (and sole) developer to help them create uGUI, and NGUI was used as the starting code base (even though apparently it has changed a lot since then).

Without going too deeply into this point, I feel we should touch on it at least a little. We used NGUI 2.x in a previous project spanning seven months. While it’s a powerful UI library, we found we were fighting with it every step of the way. Over the past few months NGUI has been undergoing major redesigns and gaining new features for the 3.x branch. We tried an early version of the 3.x branch and, while the changes were improvements, we felt we were going down the same road as before. Most of the examples are completely out of date and, whilst the NGUI forums are very active, the developer support is usually limited to a single line reply. Needless to say we felt it wasn’t for us, so we decided to look for alternatives and found DF-GUI. In the end, some people are very happy with NGUI so I’d recommend you do your research into it either way.

 

Why not wait for uGUI?

With Unity’s very own uGUI arriving this summer in Unity 4.6, why not wait for it? One rule of thumb is to never wait for a technology to arrive before developing. The technology usually won’t arrive when it’s meant to, and when it does arrive it’ll be, or do, less than you anticipated.

 

The Problem – Scale and Aspect Ratio

Since we are using a raster based library we accept the problems previously discussed, primarily ensuring scaling and aspect ratios don’t destroy a carefully crafted UI. So the main problem breaks down into two parts.

 

Pixel Perfect Scaling & Blurring

Game UIs need to scale. Without scaling you’ll end up playing games that seem to have tiny user interfaces because they were designed for smaller resolutions than you’re currently playing at. The problem when scaling a raster based UI system is that you tend to get blurry images. This is very similar to running a game at a non-native resolution, where the text and UI are slightly blurry. The term I’m using, pixel perfect scaling, is a bit of a misnomer. An image of 200×200 pixels will only be pixel perfect if it stays at 200×200 pixels in screen space. A nine-sliced sprite, however, is a little more flexible: it can scale and remain sharp.

Pixel Perfect Example

 

Aspect Ratios & Stretching

Game UIs need to accommodate the main aspect ratios that exist at the time of creation and in the near future. For us, at the moment, that means 16:9, 16:10, 4:3 and 5:4. The difference in horizontal screen space between 16:9 and 4:3 is fairly sizeable, and this difference can cause some big issues with UI layouts, especially if the UI should maintain a specific user experience.

 

Aspect Ratio Flexible Layouts

To fix these two problems I did a lot of research and came to the conclusion that it’s actually hard to find information on this. There seem to be two approaches: ensure the entire UI can scale and stretch, or adopt a safe zone aspect ratio to allow for non-stretching (which could be extended to support stretching too).

 

Stretchable UI

The focus of this approach is to develop the UI so that it stretches to accommodate the different aspect ratios sensibly. The layout is entirely anchor based and is linked to screen resolution and specific UI elements. The top level UI elements, usually panels, are anchored to other UI panels and screen edges so when the aspect ratio changes the UI grows and shrinks accordingly. If set up well, changing the size fills the blank space that would otherwise appear when changing from a smaller aspect ratio (4:3) to a larger one (16:9).

Here we hit an important consideration. Depending on the UI design, the children of those top level panels may not be intended to be stretched. Plain sprites, as opposed to nine-sliced sprites, look bad when stretched in width only; think of an icon image, for instance. On the other hand, child panels are usually fine to stretch as these tend to be nine-sliced sprites, but again the children of those panels may not stretch well. A mixture of fixed aspect ratio and stretch support is needed.

For us, DF-GUI’s 1.x anchor system isn’t flexible enough to support this approach very well in my opinion. Trying to construct a stretchable UI led to a lot of frustrating days trying to ensure the aspect ratios would stretch while also maintaining the overall user experience. The new NGUI 3.x anchor system would help a lot in this situation as it’s more flexible, but at the time of testing NGUI 3.x there were still some issues that contributed to our choice to go with DF-GUI.

In the end, from our experience, this approach requires more development effort and testing to get right than the safe zone approach below. Even then, a lot of work needs to go into ensuring the UI is well developed for each aspect ratio so there are no large empty spaces within UI panels, which happens when you stretch a UI without having enough content to actually fill it. In this case minimum and maximum sizes would help, but these would have to be computed at runtime as fixed min / max sizes are limiting when factoring in scaling and resolution sizes.

 

Safe Zone UI

The focus of this approach is to develop the UI for the smallest aspect ratio while taking into account the highest resolution. Anchors are used to fit UI elements to the screen edges.

To help explain the logic, below is an example of the UI running at 1280×720 resolution. The implementation consists of two UI containers. The first is always set to maintain the core 4:3 aspect ratio and is coloured green. The second is set to expand to the full resolution of the game and is coloured blue.

Aspect Ratios

To allow for the maximum use of a game resolution that is not 4:3, the core UI (green) is scaled to the largest size possible whilst still maintaining the 4:3 aspect ratio. This is why, in the example, the core UI is 960×720: at a height of 720 pixels, a 4:3 container is 720 × 4/3 = 960 pixels wide. The blue UI container is always at the full resolution of the game.
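
As a small, engine-agnostic sketch of that calculation (written in Java syntax with made-up names, not code from our project):

// Largest 4:3 rectangle that fits inside the current game resolution
static int[] coreContainerSize(int screenWidth, int screenHeight) {
	int coreWidth = screenHeight * 4 / 3;
	if (coreWidth > screenWidth) {
		// Screens narrower than 4:3 (e.g. 5:4) are fitted by width instead
		return new int[] { screenWidth, screenWidth * 3 / 4 };
	}
	return new int[] { coreWidth, screenHeight };
}

// coreContainerSize(1280, 720) returns { 960, 720 }, as in the example above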

To use this layout the rules are:

  • All elements must fit in the core UI container when running in 4:3 aspect ratio.
  • All elements must be created within the core UI container. This maintains a consistent scale and prevents unwanted UI stretching.
  • Any elements that need to be on the edge or corners of the screen must use anchors. Anchors will position the element correctly regardless of aspect ratio by locating the corners / edges of the blue container. Even though anchored elements may be outside the core UI container, they will still be a child of it. This maintains scale and prevents stretching.

 

So what if you want to stretch some UI elements in this layout? You can still do so with a UI library that supports anchors, or by developing your own. In our case, we will probably develop our own unless DF-GUI 2.x introduces a more extensive anchor system.

 

Scaling – Dynamic Fonts

To maintain sharp fonts in the game, the use of dynamic fonts is a must. Traditional fonts are bitmap based and scale badly, causing blurring. You either have lots of font bitmaps at different sizes, which is needless and takes up more space, or you use dynamic fonts. With dynamic fonts, Unity uses the FreeType font rendering engine to create the font texture at runtime. This helps a lot, but a dynamic font set to size 12 will still be small when shown on resolutions larger than the design time resolution. The last step to correctly scale the font is to set the dynamic font size to:

[design time font size] × [scale index]

This scale index is calculated by comparing the design time resolution with the current runtime resolution. For example, a UI designed at a height of 720 pixels running at a height of 1440 pixels gives a scale index of 2, so a size 12 font becomes size 24.

 

Scaling – Sprite Atlases

Since we are using a raster based UI system there will be times when an image will not scale well because it is too small or too large. In this case we will need to use different sprite atlases and swap them in depending on the current resolution. This isn’t a great solution as it involves a whole new set of images at different resolutions, but if the original sprite atlas is of a high enough resolution this may not be an issue for some games. Scaling down a high resolution image is always preferred to trying to scale up a low resolution one.

 

Closing

The two approaches I’ve outlined, along with the surrounding techniques, are almost certainly not the only approaches to scaling and aspect ratios, but I found very little information on this area. These are the approaches I discovered and developed upon. Hopefully this helps some of you out there. If you have similar experiences, or different approaches, I’d love to hear your thoughts. Comment, email or grab me on Twitter at @CWolf.

Thanks for reading.

Interfacing with UI #1 – Structure and Libraries

This is part of a series of posts revolving around user interface design and development, the introduction and links to the other posts can be found here.

During the development of our current game I’ve been tackling the user interface. This post outlines some issues involved with creating a pixel perfect scalable user interface that also handles the different aspect ratios whilst maintaining a consistent look and feel. (A bit of a mouthful, right?). Some of the article will be specific to Unity as this is our current development engine but, hopefully, even if you’re not using Unity you’ll be able to take something useful away.

 

Libraries vs. Bespoke

The first decision that will affect your ability to achieve the best possible UI will be a typical development question.

Shall I use a library or build my own?

 

Each development team approaches this question differently, but there are some key points to consider. There is the usual trade-off between how big your budget is, how many developers are involved on the project and how long the project schedule is. Even if you can afford to dedicate a developer for a few months to develop a custom UI system from scratch, is that really the best use of their time and your money? I’d say it usually isn’t unless you’re planning a lot of revolutionary features that none of the existing libraries provide. This usually isn’t the case.

Now I’ll play devil’s advocate. If you decide to go with a library, what about the features, maintenance, extensibility and future roadmap? Is there much point in using a library that you can’t extend or fix yourself? What if the updates are released further and further apart? Those aren’t ideal situations, so each of these points is worth some consideration. Any one of them might have a major impact on your game.

For us, a two person team, building our own UI system just isn’t viable. It would take a single developer many months of full time development to achieve the functionality that is available in existing libraries. In our case we’re happy with functionality provided by some UI libraries, however, we made sure that the source code was available in case we ever wanted to branch development.

 

Vector vs. Raster (Bitmap)

When selecting or building a UI system a choice will need to be made on what image format will be used. Will the system support vector, raster or both? For this article I’ll assume the difference between vector and raster is known but if not there is a summary here.

There seems to be some debate on whether to use a vector or raster based UI system. When digging deeper into developers’ preferences, two Twitter conversations led to the following comments.

You mean something like Scaleform? Too expensive. NGUI (on which uGUI is based) does a perfect job, if you do it right.

and

Smart UI dev tries to be resolution independent, supports various aspect ratios. With vector based UI, it’s not a problem.

 

The split in opinion comes from the fact that there isn’t a clear right or wrong choice and there very rarely is in software development. Both vector and raster based UI systems have their advantages and disadvantages independent of the file format itself. Straying into Unity specific libraries I’ll try to highlight the differences between the two systems.

 

Raster System (Daikon Forge, NGUI, uGUI)

The raster based libraries that exist for Unity are almost entirely drag and drop or wizard based. This makes for a very designer friendly approach, especially if the designer has limited to no programming experience. Some programmers may become frustrated with such approaches though.

Good: Better effects & depth

Raster art tends to have a much wider range of effects that can be created and applied to them compared to vector art. Very often original files that start out as vectors will eventually be rasterised so effects and textures can be applied to them to give them more depth and smooth blends in colour.

Good: Source code provided

Most, if not all, of the raster libraries provide their source code. This is an extremely important point that cannot be overemphasised. As a general rule of thumb, developers should stay away from making any local edits to a library they are using, however, there will usually be situations where a change will need to be made. With the source code this isn’t a problem but care must be taken to port the change to later versions of the library. Without the source code this turns into a big issue that slows development down and, in the worst case situation, can lead to replacing the library.

Bad: Pixel perfect scaling issues

Pixel perfect scaling for raster libraries can be a real pain. When using libraries like Daikon Forge and NGUI you’re able to turn pixel perfect on with the click of a checkbox; however, this won’t scale yet. You’ll need to combine this with anchors to ensure that the position is correct. From my experience it can take a bit of playing to get things right.

Why do you want a scaled pixel perfect UI? Without being pixel perfect you’ll have a blurry interface at anything except the designed-for aspect ratio and resolution. Without scaling, the UI will seem too small or too large depending on the resolution being used vs. the designed resolution.

If you are developing a game for mobile and desktop platforms then alternative images may be required for different devices based on resolution requirements. If this is the case then it’s common for sprite atlas and texture swapping to be taken into account.

Bad: Aspect ratio issues

Things tend to get worse when scaling with aspect ratios. Your UI design may be thrown completely out the window if you haven’t taken care to incorporate the supported aspect ratios. A major problem with this is UI stretching. For example, making use of anchors to achieve a scaling UI designed for a 4:3 ratio will cause a lot of stretching when playing in a 16:9 resolution unless a lot of care is taken.

Anchor systems differ considerably between Daikon Forge and NGUI (and between NGUI v2 and the new NGUI v3+ anchors), and in our case we developed custom anchors to help fill the gaps in functionality.

 

Vector System (Scaleform, NoesisGUI)

The vector based libraries that exist for Unity are a mixture of third party tool designer based and pure code based systems. A more code based approach may appeal to some developers more than a designer based approach so it’s something to consider along with the following points.

Good: Pixel perfect scaling

The better vector based libraries will handle runtime vector loading instead of build-time vector loading. This means scaling will always be pixel perfect with very little effort from the developer. Compared to raster libraries this could save a fair amount of time and effort. Aspect ratios are still a problem, but removing pixel perfect scaling from the situation makes things easier.

Good: Image sizes

Vector art has smaller file sizes than raster art because vectors are based on mathematical formulas. If you’re building a game for a platform with limited file system space, or there is a requirement to keep the final build size as low as possible, then a vector based system will help. Generally it won’t help in terms of memory usage, as the vector art will use more memory the more complex it is and the library tends to effectively draw a bitmap based on the vector data.

Bad: More expensive

If using a Unity library for vector UI then the prices are generally at least twice as much as the next best raster based UI library. The libraries are around the range of £250 so, for a business, this isn’t too much in reality. If your budget can cover this then it’s not a problem.

Bad: Closed source code

As explained above, not having the source code for a library can cause some major problems later in the development process. You’ll often find the source code is available for these libraries, but usually at a large cost that must be negotiated.

Bad: Cross-platform support issues

Support for multiple platforms, especially Linux, tends to be lacking. Devices like Oculus Rift aren’t supported and NoesisGUI doesn’t support consoles yet.

Neutral: Third party tools for designing layout

Scaleform uses Adobe Flash Studio (and I think a few other tools support it too) for designing the UI and this leads to a flash based UI.

NoesisGUI uses XAML and this can be hand coded or designed using Visual Studio.

While these two points aren’t really bad, they do lead to a reliance on more tools, some of which you have to pay for. If taking the XAML route then this isn’t a problem, as you can hand code the design (or visually design it in Visual Studio) then view the UI in Noesis’ viewer. It’s just more points to be aware of.

 

Closing

So, as you can see there is no real right or wrong choice (or at least that’s my opinion). Whether you develop a bespoke system or use one of the two types of libraries, make sure it matches you and your team’s approach, resources and requirements.

For the next article I’ll go into what UI framework we chose and how we addressed the problems and questions raised above. If you have any questions, want to debate or just to share your experiences – grab me on Twitter at @CWolf.

Thanks for reading! 🙂

Interfacing with UI #0 – Introduction

Over the past few years I’ve been slowly drawn into the user interface tasks that we encounter in our contracts and games. Some areas of UI design and construction have a lot of information written about them whilst other areas have precious little. With the aim of passing knowledge on I’ve decided to outline my experience over a series of UI related articles with the hope that it’ll help others working on user interfaces.

While I still have a lot to learn, I’ve also picked up a lot of tricks that can help. I have a particular wealth of information regarding the different frameworks and libraries that exist out in the wild, especially when it comes to Unity. I’ll be posting the links to the articles here as and when they are written to keep them together in one place.

If anyone wants to get in contact with me feel free to reach me at richard@roguevector.com or my Twitter at @CWolf.

Contents

The IOs of Game Netcode #2 – Threading and Concurrency

This is part of a series of posts revolving around game netcode development, the introduction and links to the other posts can be found here.

 

In the last post of The IOs of Game Netcode (found here), I talked about a few general rules of thumb I usually follow when starting a new netcode framework. Over the next few posts I’m going to go a little deeper into the technical options available to us as netcode developers and what routes I take based on different scenarios. The first topic I’m going to start with is threading, and the resulting issue – concurrency. This post is quite long so I’ve only addressed the issues the developer should keep in mind while working with threads. The solutions to these issues will be covered in the next post 🙂

 

Threading

As any netcode developer will know, the first hurdle you will hit when writing netcode or working with sockets is the threading issue. Normally, you can only execute a single piece of program code at a time. Threading allows you to execute multiple pieces of program code simultaneously by running them on different threads. By default, reading and writing via sockets will block the current execution of program code until it is complete. This isn’t necessarily an issue when writing to a socket, unless you are writing faster than the hardware can handle (network card or modem), but if you’re reading, the operation will block until there is data to be read. One way around this is to check the number of available bytes before reading and, if there are bytes available, only read that much. However, even with this method, the actual reading and writing of bytes will block, even if it’s just for a moment, and you don’t want this in your main game or render loop. Another way around this is to use asynchronous read and write operations. These run in their own threads automatically (provided by the socket library you are using) and pass the data back via a callback when complete. They have their uses, such as web services, but for real-time game netcode they can begin to cause problems as you cannot be sure of the order of transmission and you start to encroach on race condition territory.
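
As a quick Java sketch of the "check what's available before you read" approach (socket setup omitted; this is an illustration rather than production code):

import java.io.InputStream;
import java.net.Socket;

InputStream in = socket.getInputStream();

// Only read what is already buffered so the read can't block waiting for data
int available = in.available();
if (available > 0) {
	byte[] buffer = new byte[available];
	int bytesRead = in.read(buffer, 0, available);
	// ...pass buffer / bytesRead on to your packet parsing
}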

So, in order to effectively read and write across a network with sockets, without causing the main program to hang while it is doing so, you’ll need at least one additional thread to perform the socket operations on. The reason I say at least one additional thread is because when developing your game client, you only really need a single socket to connect to your server. However, for the server, you’ll need a thread for every connecting client to handle each of the socket operations. For those of you who are now thinking “Why not use non-blocking sockets?”, I’m aware of this and it will be covered in a future post, but for the time being I’m focusing on the standard variety of sockets as there’s a lot more to consider with using non-blocking sockets 🙂

Anyway, as soon as you start working with multiple threads, it opens up a whole new can of worms in the form of concurrent modifications.

 

Concurrency

Concurrent modification is when you read the value of a variable in one thread while it is being modified in another. Another example is looping over a collection while a different thread adds or removes an element from the collection. Most languages allow this type of access, with unpredictable results. This is due to two main reasons.

 

Race Conditions

The first is the race condition: you just don’t know which thread is going to access the variable first. There are three scenarios for a race condition:

  • Read & Read – Both threads want to read the value of a variable. It is unknown which thread reads the variable first but it doesn’t matter as it does not change. Both threads read the same value.

  • Read & Write – One thread reads the variable, while the other writes to it. The final value of the variable will always be what is written, but the value read by the reading thread may be that of the variable before the write, or after. This can lead to the aforementioned unpredictable behaviour and potential crashes.

  • Write & Write – Both threads want to write. No read operations are carried out, but that does not eliminate an unpredictable value being read later. This is because the final value of the variable is unknown. It is the value of whichever thread wrote to the variable last.

The above scenarios are very specific and only show two threads accessing a single variable. However, in reality, these threads would be doing more than just reading and writing to a variable. For example, we have a shared (global or static) float variable called currentSpeed accessed by both threads:

Thread 1 – Anti speed-hacking protection

currentSpeed = player.getVelocity().getMagnitude();
if (currentSpeed > MAX_PLAYER_SPEED) {
    player.disconnect();
}

Thread 2 – Find the fastest moving entity

Entity fastestEntity = null;
currentSpeed = 0;
for (Entity entity : entities) {
    if (entity.getVelocity().getMagnitude() > currentSpeed) {
        currentSpeed = entity.getVelocity().getMagnitude();
        fastestEntity = entity;
    }
}
return fastestEntity;

For the record, you should never share a variable between two different tasks like this, but if you did this is how it might play out. For this example we are going to assume that the player is moving at a speed of 4 and there is one other entity in the world moving at a speed of 10:

// Start with Thread 2
Entity fastestEntity = null;
currentSpeed = 0;
for (Entity entity : entities) {
    if (entity.getVelocity().getMagnitude() > currentSpeed) {
// Switch to Thread 1
currentSpeed = player.getVelocity().getMagnitude();
// Switch to Thread 2
        currentSpeed = entity.getVelocity().getMagnitude();
        fastestEntity = entity;
    }
}
// Switch to Thread 1
if (currentSpeed > MAX_PLAYER_SPEED) {
    player.disconnect();
}

In this scenario, the first time currentSpeed is assigned is after the first switch, where it gets set to 4. Before it can test the value of currentSpeed, the process switches to thread 2, where currentSpeed is set to the value of the fastest moving entity’s speed, which is 10. Then the process switches back to thread 1 to perform the test. Oh look, the player is moving at a speed of 10, they must be speed-hacking, better disconnect them!

This occurs because the threads can switch at any point during normal processing, and it is always something you need to keep in mind while working with multiple threads. There are mechanisms to get around these issues, but first…

 

Non-Atomic Operations

The second reason concurrent access can cause unpredictable results is due to non-atomic load and store operations. This is a bit more low level and might be harder to grasp for those who aren’t familiar with CPU architecture. There are a couple of definitions when it comes to the atomicity of an operation. It can refer to a single instruction or an operation of multiple instructions. Essentially, an operation is considered atomic if it completes in a single step relative to other threads. Therefore, a non-atomic operation can also result in a race condition as described above, but for the purposes of this section, we’ll be focusing on single instructions.

When you want to run your game (or program), you need to compile it into machine code first. Every developer knows this. During the compilation process, the compiler reads our source code and optimises it internally before outputting machine code, therefore the machine code doesn’t directly reflect the logic that we’ve defined. Most developers know this. One of the optimisations compilers do is to maximise CPU register usage. General purpose CPU registers typically have a size equal to the bit-architecture of the system. Modern day systems are 64 bit architecture and have 64 bit general purpose CPU registers. If you have two 32 bit integers that have some operation performed on them, the compiler will attempt to optimise the machine code to load them both into the same 64 bit register to perform the operation more efficiently. Some developers know this.

Now, the problem lies in the scenario where you attempt to perform an operation on a data type that has a larger bit requirement than the CPU register can handle, or the register already has some active data in it. The data ends up being split into multiple machine code instructions – and this is what causes the problem. Can you remember when I mentioned that threads can switch at any point in normal processing? Well, this happens at the machine code instruction level. So, a simple variable assignment such as:

long timestamp = 1L;

Can be split into two machine code instructions, with a thread switch right in the middle.

Not many developers know this.

This is the very essence of non-atomic operations. If processing switches to another thread during a multi-instruction load or store, the race condition is the least of your worries. Depending on your operation, you’ll either end up with a torn-read or a torn-write. One thread attempts to write a 64 bit integer to a variable but only gets as far as the first 32 bit store instruction, another thread reads the full 64 bit contents of the variable, then the first thread writes the second 32 bit store instruction. What the second thread ends up reading is one-half correct data, one-half bad data and one-whole big problem.
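
In Java, for example, reads and writes of non-volatile long and double fields are explicitly allowed to be non-atomic, and marking the field volatile is one way to guarantee the whole 64 bit load or store happens as a single step (a tiny sketch):

// A plain long may be written as two separate 32 bit stores, so another
// thread could observe a half-written value (a torn read)
private long timestamp;

// Declaring the field volatile guarantees atomic reads and writes of the
// whole 64 bits (and adds visibility guarantees between threads)
private volatile long safeTimestamp;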

 

Some of you may now be thinking “Holy crap, threads are dangerous, how the hell do programs even function without exploding into a flaming ball of random corruption!?” Well, the answer is yes, they are dangerous, but there are also certain principles you can abide by and mechanisms you can use that prevent this pseudo-random behaviour. However, these topics will be addressed in the next post 😉

Like before, if you have any questions about this post or just want to chat netcode, please comment below or fire me a tweet at @Jargon64.

Thanks for reading! 🙂

IGC Conference for Game Devs 2014

This last week was GDC for a lot of the games community. Not being able to go this year we joined the rest who were eager to talk games and listen to the announcements. Not only that but we noticed the Indie Games Collective in London were hosting a Conference for Games Developers so we instantly signed ourselves up for two free tickets. It was well worth it.


We travelled up to London on Thursday night so we wouldn’t have to travel early on the first day plus we heard rumours of a small get-together at the Loading Bar, a game themed bar in Soho (sadly moved to another location now). With a good few drinks and a game of Cards Against Humanity we closed the night off having a great time.

The venue at London South Bank University was pretty accessible and the first day of talks were enjoyable. We especially liked the chunk that involved Louise James from Generic Evil Business (Holy crap, how good is Twitter?), Andy Esser from Zero Dependency (Carving Your Own Path – Making Tools, Not Games) and James Parker from Opposable Games (Dual-screen vs Second-screen).

Danny Goodayle from Just a Pixel gave his first talk, about Light, then tweeted about setting up a weekly indie Google Hangout. We signed up to take part in that so we’ll see if anything evolves from there.

The evening event was a good chance to talk in a more casual environment (more drinks!) and we both had fun chatting the night away. Sadly, we had to leave after lunch on the second day but all the talks were great too.

The conference was recorded and the videos should be uploaded over the coming week or two. Even though there was nothing groundbreaking, we had a lot of fun and we’ll definitely try to attend any future events organised by the great people that make up the IGC. Special thanks to Byron Atkinson-Jones and Natalie Griffith for getting the event rolling.

The IOs of Game Netcode #1 – A Few Rules of Thumb

This is part of a series of posts revolving around game netcode development, the introduction and links to the other posts can be found here.

 

First of all, welcome to the series! 🙂

This first instalment of The IOs of Game Netcode will cover a few rules that I’ve come to follow whenever I approach a new netcode project. Now, as I’ve been writing this I’ve come to realise that because netcode is mainly backend code, there isn’t going to be a whole lot to look at except large walls of text. I’m afraid I can’t do much about that. Hopefully, you find the posts interesting enough to persevere 🙂 So, without further ado, here are my rules of thumb…

 

Text is BAD

What I’m referring to here is that text-based netcode serialisation is bad and you shouldn’t do it. If you do it, then you are a bad person and should feel bad. Of course, there are exceptions to this rule. The first is web services, which in all fairness is a pretty big exception. Turn-based games can get away with it as well. Also, you may not have a choice if you’re developing a mobile game, where unreliable Internet connections can force you down the RESTful web-service route too. I’ve played a few real-time multiplayer mobile games and they’re alright if you’re on wifi, otherwise it’s a terrible experience.

So, if you don’t fall into one of the above exceptions, you’re going to want to go with object serialisation at least, otherwise you’re going to get a lot of overhead parsing your netcode into usable formats. Now, for a lot of you this is pretty obvious and you may be asking yourself “Why would any sane person try and implement a real-time game’s netcode with text-based serialisation?” Well, for some of the more inexperienced developers out there, it may seem like a good idea to use a flexible text format like XML or JSON for your netcode because “everything can read it” and “other people can write clients to consume it like a web service”. Just save yourself the headache and stop yourself now.

If you haven’t guessed, yes, this is something I chose to do while developing Root Access. This was a long time ago now, and what roped me in was the flexibility: you don’t need to deserialise everything you receive, and different clients can select only what they need. But this was an inexperienced train of thought. Of course you want to deserialise everything; if you don’t, you clearly don’t have efficient netcode. Even though it did work, and pretty well at first, it got progressively slower the more data we shoved through it. Eventually, I went back and recoded all of the netcode to use Java’s object serialisation framework, which took a long time and was a massive headache. So save yourself the trouble and don’t do it! 🙂
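
For anyone who hasn't used it, this is roughly what Java's object serialisation looks like over a socket (a bare-bones sketch; GameCommand and its fields are placeholders and just need to implement Serializable, and socket setup is omitted):

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Sending side
ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
out.writeObject(new GameCommand("MOVE", playerId, x, y));
out.flush();

// Receiving side
ObjectInputStream in = new ObjectInputStream(socket.getInputStream());
GameCommand command = (GameCommand) in.readObject();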

 

Everything is Multiplayer

It is my firm belief that if you are making a game with multiplayer, you should bite the bullet and make your singleplayer multiplayer. What I mean by this is that your singleplayer is actually a private multiplayer session consisting of one player, using the system’s local loopback interface. Of course, this basically means you need to START with your netcode framework before you can make any kind of progress. Yes, this slows down the initial project start quite considerably and also makes developing the singleplayer aspect of the game a bit slower and more effort, but I believe the benefits are definitely worth it.

For starters, you’ll finish your game faster. If your singleplayer and multiplayer modes use exactly the same code, then you’re killing two birds with one stone. All you have to do is flip a switch and you go from singleplayer to multiplayer and vice versa. Writing two separate systems to manage singleplayer and multiplayer will just be a maintenance nightmare down the road. There shouldn’t be any latency issues running singleplayer through the loopback interface. If you are experiencing any kind of noticeable delay while playing a local singleplayer, this just points out that there may be an issue with your netcode design / implementation, forcing you to fix it and resulting in a better multiplayer experience as well! 🙂

Lastly, and this is the big one, you’ll avoid the gargantuan headache that is retrofitting multiplayer into your game. If you focus on singleplayer only to get your game out as soon as possible, with the idea of multiplayer coming much later on, you’re in for a world of hurt. I’ve seen so many game developers do this as well, and it basically appears that they’ve hit a brick wall in terms of progress because they are so busy behind the scenes refactoring, debugging and generally rewriting a huge portion of their game. I was actually impressed when the guys at Mojang managed to retrofit Minecraft’s singleplayer into a multiplayer system to support LAN play, but it required huge changes including the way their singleplayer stored its map and player data. So, if you’re planning multiplayer at ANY point during a game’s development, DO IT FIRST! You’ll thank me later 😉
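
As a rough sketch of the idea (GameServer and GameClient are illustrative names, not classes from our codebase): singleplayer simply starts the normal server locally and connects to it over the loopback interface.

// Singleplayer: start the same server the multiplayer mode uses,
// bound to the loopback interface and limited to one player
GameServer server = new GameServer();
server.start("127.0.0.1", 7777, 1);

// The client then connects exactly as it would for a remote game
GameClient client = new GameClient();
client.connect("127.0.0.1", 7777);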

 

Stand Alone, Together

Now this one may not be as obvious and a lot of people may actually disagree, but I think it’s something that can help a game’s design, development and maintainability. When writing your multiplayer framework for your game, separate the game client and server into separate projects, and also build them as separate binaries. It may be tempting to implement your multiplayer server into your game client as it’s easier to work in a single project, especially if the game doesn’t really require a dedicated server. Do it anyway. The reason I say this is because you’re going to end up with a lot of duplication, particularly in regards to the game data (which is understandable due to the client and server’s own knowledge of things). If you’re not careful you’ll start cross-referencing data without going through the netcode properly and before you know it, your multiplayer isn’t truly multiplayer and it’ll be a development and maintenance nightmare.

If you do separate your client and server into separate projects, but plan to compile and distribute together as one application, then I don’t really see a problem with this. I just prefer to go all the way and build as two separate applications, then you have a dedicated server from the beginning. Then for locally hosted games, you just get the game client to launch the server in the background, telling the server that the player is the host / has admin privileges by providing a player’s token or account ID to the server process. This also works for singleplayer games as mentioned in the Everything is Multiplayer section raised above, just you keep the game private, skip the game lobby screen and launch straight into the game.

What I have suggested so far assumes that your client and server are written in the same language. If they’re not, then you’ll probably never encounter these issues, but there is also an advantage to working in a single language, and that is shared projects. Back in the day, when we were very much Java, Java, Java, we usually started a new game with the same triad of projects: client, server and shared. The client and server were dependent on the shared project, which housed all common functionality between the client and the server. It included the netcode framework, netcode commands and data structure classes. This removed a whole tonne of duplication in the projects and is something that worked very well for us while we worked in a single language. It’s a lot harder to do when you work in Unity and C++ :). Anyway, since those days, I’ve refined my thinking a bit and come to the conclusion that sharing the netcode framework, and potentially the data structure classes, is a bad idea, and that only the netcode commands should be shared.
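
To make the "share only the netcode commands" idea concrete, here's a tiny Java sketch of the kind of class that would live in the shared project (the names are made up for the example):

// Lives in the shared project - both the client and server compile against it
public class ChatMessageCommand implements java.io.Serializable {

	private final String senderName;
	private final String message;

	public ChatMessageCommand(String senderName, String message) {
		this.senderName = senderName;
		this.message = message;
	}

	public String getSenderName() { return senderName; }
	public String getMessage() { return message; }

	// No behaviour lives here - the client and server each decide what to
	// do with the command when they receive it

}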

If you share your netcode framework you have to start writing special cases depending on whether the process using the framework is the server or the client, as well as identifying the source of a netcode command (whether it is the server or a client and how to handle it). I actually saw someone tweet a code snippet that made reference to this exact check recently and, to be honest, I don’t think it’s needed. Of course, there’s always an exception to the rule. If you’re developing a peer-to-peer model, then you may not have a choice, depending on how your client gets its peer list. Anyway, the reason I mention that you shouldn’t be sharing a data model either is because your client should never know everything your server knows. The client should only know a portion of your server’s game data, only what is relevant to it. Also, you should only be sending client-required data for each entity class. Your server versions of the entity classes will contain a lot more, and some of it may be sensitive. Your server’s game data classes will also contain a tonne of functionality regarding how the entity behaves that the client should never need. If it does, then your design is wrong, as your server is not fully authoritative, and then you have a whole other set of problems, such as players being able to headshot everyone in the server with the press of a button or players setting their own stats (yes, I’m looking at you, Battlefield).

Some of you may be thinking “I don’t see the problem, I’ll just create pure abstract data classes with only the common data between the client and server, with no behavioural functionality. Then subclass in each of the client and server projects”. By all means, you can do this. I just find this to be a pain because you end up maintaining three data structures.

 

Anyway, this post has gone on far longer than I thought it would, so I’m wrapping it up now. Sorry for that, I’ll try to keep future posts more concise and maybe provide code snippets to illustrate my points 🙂

If you have any questions or just want to chat netcode, just comment below or fire me a tweet at @Jargon64.

Thanks for reading! 😀

The IOs of Game Netcode #0 – Introduction

I’ve done a lot of coding over the years and learnt a substantial amount, either through reading tutorials, looking at examples, trial and error, reverse engineering, etc. So I’ve come to think I have a wealth of knowledge regarding the subject these days.

For a long time I’ve been thinking that I’d like to give back in some way. After some thought on what I could share, I’ve come to realise that nearly all of the games and other projects I have worked on have had networking implemented in some form or other and, for the most part, this has all been developed from the ground up with very little library use (you can take this as good or bad, probably a bit of both). Also, it hasn’t just been the same type of networking. In fact, it ranges all the way from XML serialisation to binary serialisation, and everything in between.

Now, I don’t consider myself a guru on the subject, but I’d like to think that I have an advanced level of knowledge on it. So, I’ve decided to create a series of posts based on my experience on game netcode development and what I think is the best way of implementing game netcode as well as the issues that I ran into that led me to think this way. Hopefully, some of the game devs out there will find what I will be sharing useful and maybe beneficial to their current projects!

I’ve decided to call the series The Ins and Outs of Game Netcode or shortened The IOs of Game Netcode. Now remember, these are my opinions based on my experiences, so don’t take it as fact. I don’t think there is a single right way to do it, but I’ll be sharing what works for me and how I do it.

This introduction post will also serve as a contents page (with links) for the series. So stay tuned 🙂

Contents

Merry Christmas!

We hope everyone is having a great day so far!

Looking back over the past year it’s amazing to see how much has happened to Rogue Vector. We can’t help but review what has passed and wonder what is to come.

Rogue Vector entered the year as Deadworld Studios. After a trademark dispute we decided to change our name to Rogue Vector. In hindsight it was a very good decision and we feel very settled and happy with the name less than one year in.

As the year went on we worked with Mr Dog Media to develop Banshee. With that wrapped up 7 months later and being released next year, we took some much needed rest.

So, what does the new year bring for us? Well, we’ve started development on our next in-house game. It’s really early days so we don’t have a whole lot of information to give out on it yet but we’ll be starting to expose the development process as we ramp up our work. Needless to say, we’re very excited that we’re finally able to build this game as it’s been a project in our minds for a very long time in one form or another.

So once again, from us here at Rogue Vector, Merry Christmas!

Interfacing with UI #4 – Coherent UI

February 5th, 2016

This is part of a series of posts revolving around user interface design and development, the introduction and links to the other posts can be found here. Last I wrote about user interfaces I discussed the new Unity UI system and I wrote about our process of porting from Daikon Forge to it. That was a year and a half ago and a lot has changed since then. To keep things interesting we decided to move from Unity UI (yet another move?!) to Coherent UI and I’ll explain why we did it. Why Move… Again?!...
