My Small Experience With Unity DOTS (Part-2)


by Rushidan Islam Nirjhor

I’ve been discovering new avenues of game programming as I journey through the bumpy roads of Unity DOTS. The past few months have been quite the experience, stressful at times, ecstatic at others. See, that is the thing about Unity DOTS. Getting something to actually work in this system can be quite challenging at first. But once you do get it to work, the results surely make it worthwhile.

What is DOTS?

DOTS, or the Data-Oriented Technology Stack, is Unity's implementation of data-oriented design. It is composed of three major components: the Entity Component System (ECS), the C# Job System and the Burst Compiler. It is a programming structure where data and the systems operating on that data are kept separate. Here, data is divided into small structs, and a system touches only the specific components it needs and nothing else. Also, since these structs are packed into contiguous, tightly ordered blocks of memory, the CPU enjoys much faster, cache-friendly access when fetching and storing them, compared to chasing the scattered heap references of the classes and objects used in traditional Object Oriented Programming. All this means you get far superior performance out of your CPU, be it in a fully stacked gaming rig or a measly couple-of-years-old mobile device. Even when raw performance is not a big issue, DOTS can also help keep device heat, battery consumption and memory usage down, which ensures a better overall gaming experience, especially on mobile devices.
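
In code terms, the separation looks roughly like this. This is a toy illustration of my own; only IComponentData is actual Unity API here:

using Unity.Entities;

// OOP: data and behaviour live together on one heap-allocated object.
public class EnemyOOP
{
    public float health;
    public void TakeDamage(float amount) { health -= amount; }
}

// DOTS: the component is pure data in a tightly packed struct...
public struct HealthData : IComponentData
{
    public float Value;
}

// ...while the behaviour lives in a separate system that iterates over
// all HealthData components, which are stored contiguously in memory.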

Project Setup

One of the key benefits of DOTS is that it can handle a very large number of objects as entities with great performance on mobile devices, with or without physics simulation. And that was our goal with DOTS in the first place: to have numerous objects on a mobile screen, preferably animated, at an appreciable frame rate. With that in mind I set out to set up a project with the required Unity packages. This is where I faced the first round of difficulties. While the packages I had used up until then were available in the Package Manager as released or preview packages and were relatively easy to find, the DOTS packages, being experimental, are not listed there at all, and there is no “Show Experimental Packages” option. So I had to do my research to figure out which packages were needed and add them manually by name (e.g. com.unity.entities). A specific scripting define symbol (ENABLE_HYBRID_RENDERER_V2) also had to be added in order to activate Hybrid Renderer V2, which is required to display DOTS skinned animations in a SubScene. None of this information was as easy to find as I've come to expect from Unity, and throughout the project I painfully realized that Unity's own documentation and community support for DOTS questions are not as abundant as you'd hope.
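
For reference, adding packages by name boils down to editing Packages/manifest.json directly. A minimal sketch of the kind of entries involved; the version numbers here are illustrative placeholders, and the exact package set depends on which DOTS features you need:

{
  "dependencies": {
    "com.unity.entities": "0.17.0-preview.42",
    "com.unity.rendering.hybrid": "0.11.0-preview.44",
    "com.unity.physics": "0.6.0-preview.3",
    "com.unity.animation": "0.9.0-preview.6"
  }
}

The ENABLE_HYBRID_RENDERER_V2 symbol itself goes under Project Settings > Player > Other Settings > Scripting Define Symbols.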

DOTS Animation

After familiarizing myself with some basic DOTS components and how to get entities working, I stumbled across a major roadblock in trying to implement skinned animation for entities. While components like the Mesh Filter, Mesh Renderer, Rigidbody and Box Collider get automatically converted to their entity component counterparts by adding a simple “Convert To Entity” component, there is no such conversion for the “Skinned Mesh Renderer”. So I started looking for ways to get skinned mesh animation working in DOTS. I did find a way to play skinned animation with the “Animation Graph” component and a “Blend Graph” DOTS animation asset, but I could only play a single preset animation, and even that came after much hassle setting up SubScenes and activating the Hybrid Renderer V2 feature. As the DOTS animation graph asset is still in a very rudimentary, unpolished and buggy state, with little to no information available from Unity or the community, I failed to control it at runtime through script or to swap in different animation clips at runtime. Then I started digging into Unity's DOTS Sample project, which includes animation, and pulled a few code snippets from it with which I could at least play different animations under script control. But I still wasn't able to apply any blending, even though blending was implemented in the sample project; its animation system was very complicated, and I simply couldn't follow it through in the limited time I had. The main problem, though, was that converting animation clips into DOTS-usable data relies on functions from the new Unity animation package that are not available in the Player, meaning they couldn't be used in a build, or at least I couldn't get them to work. So even though I gained some control over playing different animation clips in the Editor, I simply couldn't get them to work on my mobile device (the build failed).

GPU Animation

Instead, I found a very different yet interesting way of implementing skinned animation that is actually unrelated to DOTS: vertex animation driven by skinned animation clip data on the GPU, through shaders. There are many benefits to this over the “Skinned Mesh Renderer” component. Firstly, it is very efficient to animate on the GPU, since the work amounts to displacing vertices in a vertex shader, which is exactly what GPUs are built for. The Skinned Mesh Renderer, on the other hand, is quite a performance-intensive operation on the CPU, as it has to compute the bone transforms and the corresponding vertex deformation all together, which can rapidly bog down the device when used in large quantities. GPU animation, by contrast, doesn't require any bone information at runtime once the animations have been baked into textures. It also allows GPU instancing, which batches all the animated objects into a single draw call, something that is not possible with the Skinned Mesh Renderer. So the performance gains from this method are really quite enormous. With 1000 animated entities on screen in the Editor, for a fairly detailed mesh of 4936 vertices, I went from ~20 fps with the Skinned Mesh Renderer to ~60 fps with GPU vertex animation (even without instancing).

[Comparison: with Skinned Mesh Renderer (left) vs. with GPU animation (right)]

This system works by baking the animation clip data into textures for the GPU to read and work with at runtime. Let me break it down. First, we step through each frame of the animation clip as it plays, plotting the position and normal vectors of every vertex during that frame as pixels into two separate textures, which are saved as assets. The result is that for any particular animation clip we obtain two textures, one holding the position data and the other the normal data of each vertex, with vertices laid out along the horizontal axis and time (frame count) along the vertical axis. Although this baking is a one-time procedure, it is quite an intensive operation, and using a compute shader should help speed up the texture plotting.
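
Here is a rough sketch of that baking step, using Unity's clip sampling and mesh baking APIs; the class name and structure are my own, and saving the textures as assets is left as an editor-script exercise:

using UnityEngine;

public static class AnimationTextureBaker
{
    // Bakes one clip into two textures: one column per vertex,
    // one row per sampled frame.
    public static void Bake(GameObject rig, SkinnedMeshRenderer smr,
                            AnimationClip clip, int frameRate,
                            out Texture2D posTex, out Texture2D normTex)
    {
        int frames = Mathf.CeilToInt(clip.length * frameRate);
        int verts = smr.sharedMesh.vertexCount;

        // RGBAHalf stores signed floating point data; linear (non-sRGB) matters here.
        posTex = new Texture2D(verts, frames, TextureFormat.RGBAHalf, false, true);
        normTex = new Texture2D(verts, frames, TextureFormat.RGBAHalf, false, true);

        var baked = new Mesh();
        for (int f = 0; f < frames; f++)
        {
            // Pose the rig at this frame and capture the deformed mesh.
            clip.SampleAnimation(rig, f / (float)frameRate);
            smr.BakeMesh(baked);

            Vector3[] positions = baked.vertices;
            Vector3[] normals = baked.normals;
            for (int v = 0; v < verts; v++)
            {
                posTex.SetPixel(v, f, new Color(positions[v].x, positions[v].y, positions[v].z));
                normTex.SetPixel(v, f, new Color(normals[v].x, normals[v].y, normals[v].z));
            }
        }
        posTex.Apply();
        normTex.Apply();
    }
}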

Mesh Vertex Position Texture

Mesh Normal Texture

The next step is to write a vertex fragment shader to allow the GPU to use these pre baked textures to manipulate the vertices and normals of the mesh. We can decorate this shader with different parameters for controlling the speed, looping characteristics etc. of the animation. With a few extra lines of code, we can also make this shader GPU instancing enabled. But getting GPU instancing to work with different animations at the same time can be a bit tricky as texture2d is not a per instanced property, so a change of texture would mean separation from the batch. However, by using the different animation textures as a texture2darray and then using the integer index as per instanced property for the shader, it should be possible to play different animations on different objects at the same time while keeping them draw call batched together.
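
On the C# side, the per-instance index can be supplied through a MaterialPropertyBlock. A minimal sketch, assuming the shader declares a float _AnimIndex with UNITY_DEFINE_INSTANCED_PROP and uses it to pick a Texture2DArray slice (the property name and the whole setup are my own illustration):

using UnityEngine;

public class InstancedAnimationDrawer : MonoBehaviour
{
    public Mesh mesh;
    public Material gpuAnimMaterial; // instancing-enabled GPU animation material
    public int count = 500;
    public int clipCount = 4;        // number of slices in the Texture2DArray

    Matrix4x4[] matrices;
    MaterialPropertyBlock props;

    void Start()
    {
        matrices = new Matrix4x4[count];
        var animIndices = new float[count];
        for (int i = 0; i < count; i++)
        {
            matrices[i] = Matrix4x4.TRS(Random.insideUnitSphere * 20f,
                                        Quaternion.identity, Vector3.one);
            animIndices[i] = Random.Range(0, clipCount); // each instance picks a clip
        }
        props = new MaterialPropertyBlock();
        props.SetFloatArray("_AnimIndex", animIndices);
    }

    void Update()
    {
        // All instances in one draw call, each playing its own animation slice.
        Graphics.DrawMeshInstanced(mesh, 0, gpuAnimMaterial, matrices, count, props);
    }
}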

With GPU Instancing ON

Programming in DOTS

Now, on to programming in DOTS. I had attempted to learn DOTS and ECS once before, a couple of years back, when I was still learning the basics of Unity, and at that time it felt so confusing and hard to comprehend that I eventually gave up. But now that I've worked with Unity and C# extensively, the journey was much less frightening this time. As I started to learn the new coding pattern, I found some similarities with the traditional MonoBehaviour approach that made it a little less intimidating. For example, just as we use a MonoBehaviour's Update() method to execute something every frame, there is a similar OnUpdate() method to implement in the JobComponentSystem class used for running or scheduling jobs that operate on data. This class, along with the Entities.ForEach lambda, makes it much easier to run and manage multithreaded code across a large number of entities, which is one of the primary objectives of using DOTS in the first place. Then there is the OnCreate() method of the same class, which can be thought of as analogous to a MonoBehaviour's Awake() or Start(). Also, just as we store data for an object in a MonoBehaviour component, entity data lives in structs implementing the IComponentData interface (although the data doesn't belong to any entity in particular). We can then associate this data with any entity we want, much like attaching MonoBehaviour components to GameObjects.
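
To make the parallels concrete, here is a minimal sketch in the JobComponentSystem style I was using; MoveSpeed is an example component of my own, not anything from the packages:

using Unity.Entities;
using Unity.Jobs;
using Unity.Transforms;

// Pure data, analogous to the fields of a MonoBehaviour component.
public struct MoveSpeed : IComponentData
{
    public float Value;
}

// The behaviour, analogous to what would sit in a MonoBehaviour's Update().
public class MoveSystem : JobComponentSystem
{
    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        float dt = Time.DeltaTime;
        // Burst-compiled, multithreaded iteration over every entity
        // that has both a Translation and a MoveSpeed component.
        return Entities
            .ForEach((ref Translation translation, in MoveSpeed speed) =>
            {
                translation.Value.y += speed.Value * dt;
            })
            .Schedule(inputDeps);
    }
}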

But the similarities sadly end there. You see, components that hold data in the MonoBehaviour world can also hold the methods operating on that data; that is why we can do transform.Translate() or rigidbody.AddForce(). But DOTS follows data-oriented design, meaning that data and the systems working on it have to be kept separate; we cannot do data.DoSomething() here. This was the first thing I had to get accustomed to while coding my DOTS project. How to manage data, how to structure systems for ECS so that they are Burst Compiler compatible, how to access and use data from MonoBehaviours, how to fetch data to and from other entities: these were things I had to actively think about while designing my project. And far too many times I stumbled on errors saying that I had done something that made my system incompatible with the Burst Compiler, or that a task couldn't be parallelised. While a few of these genuinely couldn't be avoided, most could be tweaked or worked around, just by changing the way the system used its data, so that those technologies could still be leveraged. Unity's error messages actually helped a lot in this regard; more often than not they pointed precisely at the cause of the problem and gave an idea of how to solve it. See, that's the thing with DOTS: anything you would do in the normal MonoBehaviour approach is still pretty doable here as well. It just requires a ton of thinking, and a workaround at times.

The other difference that gave me quite a bit of trouble is the fact that entities and systems are quite persistent. Let me explain. You know how GameObjects and the MonoBehaviours attached to them are created on scene load and destroyed when the active scene is unloaded (unless explicitly told not to)? As it turns out, entities and systems don't do that. Entities have to be destroyed manually, and systems have to be explicitly disabled or destroyed when they are no longer needed; they are even created and start executing automatically before any scene loads, unless tagged with [DisableAutoCreation]. And when auto-creation of a system is disabled, it has to be manually added to the update loop for its OnUpdate() to be called. Figuring all this out and managing my systems so that they could be activated or deactivated on level load as required was quite a hassle.
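
A sketch of how that manual lifecycle management can look, assuming the default world (the system and bootstrap names are my own):

using Unity.Entities;
using Unity.Jobs;

[DisableAutoCreation] // stops Unity from creating and running this system on startup
public class LevelOnlySystem : JobComponentSystem
{
    protected override JobHandle OnUpdate(JobHandle inputDeps)
    {
        // level-specific jobs get scheduled here
        return inputDeps;
    }
}

public static class LevelSystemBootstrap
{
    // Call this on level load so the system's OnUpdate() actually runs.
    public static void Enable()
    {
        var world = World.DefaultGameObjectInjectionWorld;
        var system = world.GetOrCreateSystem<LevelOnlySystem>();
        world.GetExistingSystem<SimulationSystemGroup>()
             .AddSystemToUpdateList(system);
    }
}

Setting system.Enabled = false later pauses it again without destroying it.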

Hybrid ECS

Up until now, I've worked on a few physics-based projects involving interactions between large quantities of objects. The more I worked on them, the more accustomed I became to the data/system structure of coding. Even though I still get errors pretty often, I can usually form an idea of the solution or workaround early on. Throughout these projects I've realized that it is much easier and more practical to go for a hybrid ECS approach rather than pure ECS, because it is not absolutely necessary to make everything an entity, and not everything can be one (e.g. the camera). I've found entity conversion most useful for large batches of objects, or for leveraging the performant DOTS physics for the interactions of numerous objects. For operations involving a small number of objects, or where a DOTS implementation would be very complicated, it is wiser to use the traditional MonoBehaviour approach instead. So handling input, or moving a single object around, is better done with MonoBehaviours, with a data link established between them and the entity-driving systems.
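
As an example of such a data link, a MonoBehaviour can push input into a singleton component that the entity systems then read. PlayerInputData and the whole setup here are my own illustration:

using Unity.Entities;
using UnityEngine;

public struct PlayerInputData : IComponentData
{
    public float Horizontal;
    public float Vertical;
}

// The classic MonoBehaviour handles input; DOTS systems consume the data.
public class InputBridge : MonoBehaviour
{
    EntityManager entityManager;
    Entity inputEntity;

    void Start()
    {
        entityManager = World.DefaultGameObjectInjectionWorld.EntityManager;
        inputEntity = entityManager.CreateEntity(typeof(PlayerInputData));
    }

    void Update()
    {
        entityManager.SetComponentData(inputEntity, new PlayerInputData
        {
            Horizontal = Input.GetAxis("Horizontal"),
            Vertical = Input.GetAxis("Vertical")
        });
    }
}

A system on the other side can then simply call GetSingleton<PlayerInputData>() to read the latest input.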

Let me give you an example. In one of my projects, I had to implement a physics-driven skinned rope with two physics objects balancing each other, connected at the two ends of the rope over two supports. I configured the rope joints as a series of Hinge Joints with colliders to generate the behavior, and regulated the masses of the connected objects to keep them balanced. The mechanic also required a large number of entities to attach themselves to the objects, unbalancing their weights and eventually making them fall. Now, although I already had a large number of entities in the scene and systems driving them, configuring the rope, or its interaction with the objects, in ECS would have taken an unnecessarily long time. Instead, what I did was set up the complicated physics of the rope and the weights with traditional Unity physics, use separate DOTS physics on the entities, and design the systems so that these entities could pass data to and from the aforementioned physics system. The entities would collide with visible entity versions of the weights, but would change the mass of the invisible Rigidbodies actually connected to the rope. In turn, the entity versions of the weights would follow the positions of those Rigidbodies as they hung from the rope with their masses changed, faking the effect that the entity weights are themselves driven by physics. This hybrid ECS approach let me build the feature considerably faster while still maintaining good performance on mobile devices.
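
The synchronisation itself boils down to copying transforms across every frame. Roughly, in a much simplified sketch of my own (the entity reference is assigned at spawn time):

using Unity.Entities;
using Unity.Transforms;
using UnityEngine;

// Makes a visible entity "weight" follow its invisible classic-physics counterpart.
public class WeightSync : MonoBehaviour
{
    public Rigidbody ropeWeight; // the invisible body actually hanging from the rope
    public Entity weightEntity;  // the visible entity version, assigned at runtime

    EntityManager entityManager;

    void Start()
    {
        entityManager = World.DefaultGameObjectInjectionWorld.EntityManager;
    }

    void LateUpdate()
    {
        // The entity mirrors the rigidbody's position, faking physics on the entity
        // side; mass changes flow the other way, from the entity systems to the body.
        entityManager.SetComponentData(weightEntity,
            new Translation { Value = ropeWeight.position });
    }
}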

Building on Target Platforms

Another nuance of DOTS is that the usual build system doesn't work for mobile devices; even though a build can be produced, it won't play. What I had to do instead was install the com.unity.platforms package, which allows you to generate assets called Build Configurations for specific target platforms. The build has to be executed from such a Build Configuration asset in order for the game to work.

As you can see, getting into DOTS can be quite a struggle at first, but things do get better as you carry on. That has been the story for me so far. And while the struggle is very real, the results are equally amazing. I never thought that having 10000 physics units bouncing around the screen of a 3-year-old mobile device at a stable 60 fps was possible. DOTS made it possible. And not only that, it opened up so many new avenues for me. From designing reusable systems to harnessing the power of the GPU for animation, DOTS has taught me a lot. Even though this technology has many shortcomings, mostly due to still being in the early phases of development, I'm excited to see what the future holds.