In this second Blood Runs Cold tech postmortem, I talk about the solutions we built for continuous server syncing of our game's player data, and the data-driven balancing toolset that allowed us to juggle 400+ ScriptableObjects with zero merge conflicts.
(This article was originally posted on my personal blog. You can hit me up on Twitter @yankooliveira.)
Last time, I talked a bit about the UI System architecture we used in Blood Runs Cold. The good thing is that it was based on lots of best practices I had worked with in previous projects, which made it a low-risk decision. This time, I’ll cover something that was a bit more unnerving, because it was a cornerstone of the project and completely custom built: the Player Profile schema.
Now this would, in theory, be a very simple topic: you just save some stuff piggybacking on Unity’s serialization and… that’s about it, right? Well… stay a while, and listen.
Almost all the implementation work regarding the Item Database was done by Marcelo Brunken. The whole design wouldn't really be possible without brainstorming with him and Filipe Silvestrim, who also did some improvements in the data storage later on. Grazie mille to Sergio De Natale, the Game Designer who worked with it the most, for not killing all of us in the process.
BRC was my first gig as a lead developer, and I was very fortunate to have an incredible tech team to work with. When I joined the project, it was in pre-production and prototyping was in full swing, but production was supposed to begin about two weeks later. After assessing the situation with the team, we decided that the best thing to do was to throw the prototype code away, since it focused on speed of iteration and proving the concept way (way) more than on maintainability and stability.
Since this was a hidden object game, the Game Designers had a very strong intent that it work offline. However, as a mobile product, it was also important that players could have their data saved and synchronized across devices, not to mention enabling social features later on. That left us in an interesting spot, especially considering the plan was for the programmers to be the first to enter production, since we’d have to build the codebase from scratch, while the designers carried on with pre-production for some time.
There was an external team working on an in-house server solution that we would most likely use, but we only had client-side developers – or, more specifically, the devs who had server-side experience were also our main gameplay programmers. The external team was dividing their time between building the server tech and working closely with another project that, unlike ours, was fully online multiplayer. This meant we’d only get sparse slices of support time. From a company-wide perspective, though, we’d be the perfect test project for them: our game was supposed to work even with no server connection established, unlike the other projects in development at that time. That was the main reason I decided to go with it as our server solution.
Here’s a fun thing about it, though: for security reasons (both to prevent hacking and to guarantee serialization validity), if there’s a major change in the “transport layer” (e.g. changes in the data structures that get sent to the server), the protocol purposely breaks backwards compatibility. This is a lifesaver in many situations, but unlike web games, where you can deploy a new client instantly, mobile has an overhead of at least a couple of days to get a new version live on the stores. Working in that scheme would require a lot of planning and preparation around releases.
So let’s summarize the constraints:
1. The tech team was gearing up to enter production…
    a. …but the game designers would still be in pre-production for a while, so the requirements weren't fully defined
2. The game must work offline…
    a. …but also synchronize with a server
3. We have backend-capable devs…
    a. …but they’re also our gameplay programmers
4. We have in-house server tech…
    a. …but it’s still being built…
    b. …aaand the protocol is not guaranteed to be backwards compatible (see 1.a)
Did I mention this was my 3rd week? 😅
Now that you know the context, I think this solution will make more sense and, hopefully, it will provide some useful insight.
TL;DR: it was as easy as rolling a ball!
PART 1: TRANSMISSION
Life is great when you can go for naïve solutions: your player profile is a data blob and your players press a button to save it. Boom. Solved!
Before you say that’s a horrible solution, remember that no solution is horrible, you’re just not big enough to enforce it: if you develop for iOS, I’m pretty sure you’ve been through Apple’s Snowball Update Effect at least once (“oh geez! A new iOS! Oh, I need a new Xcode as well. Wait, what, I gotta update macOS for that? What do you mean Unity is crashing on High Sierra?!?!”).
The Sims FreePlay does (or did) exactly that: you’d go into the settings screen, press a button, and it’d say “wait a while, we’re saving your stuff!”. They can do that: it’s a great game with a huge franchise behind it. We most likely wouldn't be able to convincingly pull that off.
What GD wanted (and we were interested in the technical challenge as well) was silently and constantly synchronizing your data as you played. This meant that sending a huge JSON blob to the server all the time would not only make the server team cry, but also chomp away at the user’s monthly 3G allowance. It would be especially bad for our most dedicated players: the more you played, the bigger your profile data would likely get, so more of your 3G gets eaten.
Supporting offline play also implied we had to synchronize every step of the way while keeping the saves locally, since you can’t be completely sure the user will be connected at the point where you generate some new data. Doing that meant we would be able to recover on the next boot-up if for some reason the server sync failed completely (e.g. the game crashed mid-transmission).
PART 2: DATA
Independently of how we transmitted this thing, however, there was still the problem of how to break it down into bite-sized chunks while still storing all kinds of information, keeping in mind that if we changed the data structures, we would have to re-generate the communication protocol.
There are two questions you usually have to ask regarding data: “where is it?” and “what is it?”. That means you need an address and the payload itself. The basis of the whole system was this question:
“What if everything was stored in an <id, data> pair?”
To which you would ask:
“But wait, what if ‘data’ is big? Aren't you back to the same problem?”
We would be, yes. That’s why our data would be… always a number.
PLAYER PROFILE
Besides some bureaucratic data, the Player Profile is a big inventory: every piece of information would be an item, and the relations between these items would define the player’s state. The <id, data> pair would be <int, long>, and all the rest would be metadata that doesn't need to be transmitted to the server.
This would solve all the communication issues: not only would the protocol not change even if the structures we had to store changed completely, but we would also be able to easily break the transmission data into very small chunks.
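To make that concrete, here’s a minimal sketch of what the server-synced part of the profile boils down to (the class and method names are mine, not the shipped code):

using System.Collections.Generic;

// The only data the server ever needs to know about the player:
// a set of <int id, long value> pairs.
public class ServerSyncedInventory
{
    private readonly Dictionary<int, long> items = new Dictionary<int, long>();

    public long Get(int itemId)
    {
        long value;
        return items.TryGetValue(itemId, out value) ? value : 0;
    }

    public void Set(int itemId, long value)
    {
        items[itemId] = value;
    }
}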
For synchronizing the local and remote profiles, the solution was a transaction-based schema. We would save the raw data locally, but with every change we’d create a Transaction that represented the delta. Those pending transactions would be saved locally and queued up for transmission in order, which meant the player’s data was always safely stored locally, and the server would reach the same state as soon as the game finished transmitting all of them.
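A hedged sketch of that flow (the naming here is mine; the actual Transaction structure is shown further down):

using System.Collections.Generic;

// Every change is applied locally first, then queued for transmission; a pending
// transaction is only discarded once the server acknowledges it, so a crash
// mid-transmission can be recovered from on the next boot-up.
public class TransactionQueue
{
    private readonly Queue<Transaction> pending = new Queue<Transaction>();

    public void Push(Transaction transaction)
    {
        pending.Enqueue(transaction);
        Persist(); // the pending list is saved to disk alongside the local profile
    }

    public Transaction PeekNextToSend()
    {
        return pending.Count > 0 ? pending.Peek() : null;
    }

    public void OnServerAcknowledged()
    {
        pending.Dequeue(); // the server now matches the local state for this delta
        Persist();
    }

    private void Persist()
    {
        // Left abstract here; locally the game ends up saving a single JSON blob.
    }
}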
The good thing about having this “atomic” layout is that you can make everything extremely data driven: item A can always reference item B by id, and that creates a new relation, which makes new features very easy to implement. We could store all the metadata in ScriptableObjects, and those could even be deployed via Asset Bundles, avoiding going through the submission process if there was a need for a hotfix or rebalancing. At the end of the day, the only thing you can never, ever screw up is the item ids.
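As an illustration (the field names are hypothetical), a piece of item metadata could be as simple as a ScriptableObject that references other items by id only, with the ids as the one sacred part:

using UnityEngine;

// Hypothetical metadata asset: everything the client needs to display or gate an
// item lives here, while the server only ever sees the item's id and its value.
[CreateAssetMenu(menuName = "Items/Location")]
public class LocationItemMetadata : ScriptableObject
{
    public int ItemId;                 // the one field you can never, ever get wrong
    public string DisplayName;
    public Sprite Icon;
    public int UnlockedByItemId = -1;  // relation to another item, expressed by id
}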
Too bad we’re all human.
If you have a system that makes the computer happy, it probably means that it will be prone to human error (or just human annoyance). Imagine this: the GDs usually love working with spreadsheets, so their first approach could be creating one with all the game data.
In our schema, no mistake would be catastrophic except an id change. Now imagine the GD spreadsheet automatically creates a new id for every row… and one day someone deletes a row in the middle by accident.
Enter the Item Database: we would build a tool that would control id creation and make ids completely transparent to the game designers.
We brainstormed a bit about how to do this. The first thing that popped into my mind was an actual SQL database, but that would just move the problem from the server side to the client side: with every data structure change, we’d have to update the tables. Also, if we did this inside Unity, you could be changing multiple ScriptableObjects in any commit, which meant a high chance of merge conflicts.
The winning solution was Marcelo’s: a REST server written in Spring that would control id creation and store every item in an individual JSON file. You could check out the data in the form of ScriptableObjects that would be on .gitignore (making repo conflicts, at least, completely impossible). We would then build a Unity editor tool to make filling in the metadata easy for the GD department.
It was relatively risky because, even if it sounded good, the whole project would hinge on this decision. However, it would take only a couple of days to cook up a proof of concept, so we could always backtrack. While I worked on the player profile and transaction systems, Marcelo got the ItemDatabase server up and running on a spare Mac Mini we had lying around. After the initial tests, the tech team thought it was great, and it opened up so many possibilities, and everything was made of rainbows.
Then I had to go tell the game designers how they would need to work throughout the rest of the project. And it wasn't spreadsheets.
Did I mention this was my first month? 😅
THE PROBLEMS IN THE SOLUTION TO THE PROBLEM IN THE SOLUTION
As I mentioned in the UI article, if you’re building a toolset, the most important thing is working with the people you’re building it for. However, the tech department is sometimes the gatekeeper and needs to guide how things are made, especially when it comes to data and performance.
I called a meeting with all the Game Designers to explain how the system would work and tell them all the cool things they’d be able to do with it. The response ranged from “that seems a bit involved, but could be interesting” to “screw that, I want spreadsheets!”, which is pretty much what I expected. We did compromise on one aspect, though: we created a spreadsheet exporter (with a human-readable option that would replace id fields with the corresponding item names) and an importer for batch changes. It was a big leap for them to base their whole workflow on some crazy idea from the new guy, and I thank them for the vote of confidence (aka: I’m glad they didn't get pissed because they had no option :D).
Over the following months, we built several new item types and created many relations between them. No new features created issues with the schema… until one did: Timers.
We only stored one value, usually a quantity (e.g. the player has 10 units of Item A), but we also encoded a lot of stuff in it, like enums or the occasional flag (remember, it’s all numbers in black boxes). Timers made it necessary to save an additional timestamp value on items that already existed. This could have been done by creating a different item type and having “item pairs” that would carry the extra information, but I felt it was time for us to generalize.
I changed the Player Profile to contain N inventories instead of just one, which was good because we also had a notification feature coming up that needed server-side storage. This also meant that any feature could affect any item we wanted. The whole change took just a day and amounted to turning an array into an array of arrays, plus adding an extra parameter to the Transaction method calls.
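A rough sketch of the generalized profile (illustrative names; PlayerInventoryType appears later in the Transaction snippet, where only its Inventory value is shown, so the other entries here are examples, and a dictionary is used purely for readability):

using System.Collections.Generic;

public enum PlayerInventoryType
{
    Inventory,     // the original, general-purpose inventory
    Timers,        // example: timestamps for items that also exist in Inventory
    Notifications  // example: server-side storage for the notification feature
}

public class PlayerProfile
{
    // One <int id, long value> store per inventory type.
    private readonly Dictionary<PlayerInventoryType, Dictionary<int, long>> inventories =
        new Dictionary<PlayerInventoryType, Dictionary<int, long>>();

    public long GetValue(PlayerInventoryType inventory, int itemId)
    {
        Dictionary<int, long> items;
        if (!inventories.TryGetValue(inventory, out items)) return 0;
        long value;
        return items.TryGetValue(itemId, out value) ? value : 0;
    }

    public void SetValue(PlayerInventoryType inventory, int itemId, long value)
    {
        Dictionary<int, long> items;
        if (!inventories.TryGetValue(inventory, out items))
        {
            items = new Dictionary<int, long>();
            inventories[inventory] = items;
        }
        items[itemId] = value;
    }
}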
We had three instances of the server running: development, staging and master. Our CI would fetch items from the appropriate environment at every build, and a cronjob would back up the items to a git repo every few minutes if there were changes. Every now and then, people would have to manually update their local copy of the Item Database via a menu option inside Unity.
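For context, the Unity-side “pull” could be as simple as an editor menu item that fetches an item’s JSON from the REST service and materializes it as a ScriptableObject asset; the endpoint, paths and hard-coded id below are all hypothetical, since the article doesn’t show the shipped tool:

#if UNITY_EDITOR
using System.Net;
using UnityEditor;
using UnityEngine;

public static class ItemDatabasePull
{
    private const string BaseUrl = "http://itemdb.example:8080/items/"; // hypothetical endpoint
    private const string OutputFolder = "Assets/ItemDatabase/";         // kept on .gitignore

    [MenuItem("Tools/Item Database/Pull Item 42 (example)")]
    private static void PullExampleItem()
    {
        int itemId = 42; // in the real tool this would come from the editor UI
        using (var client = new WebClient())
        {
            string json = client.DownloadString(BaseUrl + itemId);
            var item = ScriptableObject.CreateInstance<Item>(); // Item as used in the drawer snippet below
            JsonUtility.FromJsonOverwrite(json, item);
            AssetDatabase.CreateAsset(item, OutputFolder + "Item_" + itemId + ".asset");
            AssetDatabase.SaveAssets();
        }
    }
}
#endif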
Partial view of the ItemDatabase. You could filter the items in multiple ways, create new ones from scratch (where an id would automatically be assigned upon commit) or copy from an existing item.
A very tiny thing that was crucial to the Item Editor’s success was a property drawer that allowed us to mark fields as “foreign keys”: we simply needed to tag an id field with an attribute and it would automagically turn into an object field in the inspector, onto which you could drag and drop or use the object picker. In the item, we’d have something like
[SerializeField] [ForeignKey(typeof(Unlock))] private int unlockItem = -1;
And here’s the important bit on the PropertyDrawer’s OnGUI:
// Resolve the currently stored id into an Item (if any)...
Item item = null;
if (property.intValue >= 0)
{
    itemDatabase.TryGetItemById(property.intValue, out item);
}

// ...and draw it as an object field of the type declared in the attribute.
position = EditorGUI.PrefixLabel(position, GUIUtility.GetControlID(FocusType.Passive), label);
item = (Item) EditorGUI.ObjectField(position, item, foreignKeyAttribute.ForeignKeyType, false);
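For completeness, here’s a hedged sketch of the plumbing around that snippet (the exact shipped code isn’t in the article): the attribute just carries the target item type, and a PropertyDrawer bound to it does the id-to-object translation shown above.

using System;
using UnityEngine;
#if UNITY_EDITOR
using UnityEditor;
#endif

public class ForeignKeyAttribute : PropertyAttribute
{
    public readonly Type ForeignKeyType;

    public ForeignKeyAttribute(Type foreignKeyType)
    {
        ForeignKeyType = foreignKeyType;
    }
}

#if UNITY_EDITOR
[CustomPropertyDrawer(typeof(ForeignKeyAttribute))]
public class ForeignKeyDrawer : PropertyDrawer
{
    public override void OnGUI(Rect position, SerializedProperty property, GUIContent label)
    {
        ForeignKeyAttribute foreignKeyAttribute = (ForeignKeyAttribute) attribute;
        // The OnGUI body from the snippet above goes here: resolve property.intValue
        // to an Item via the ItemDatabase and draw it as an ObjectField of ForeignKeyType.
    }
}
#endif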
Because the ItemDatabase is a service that is constantly running and accessible by everyone, we didn't want to allow id deletions or changes. I did, however, add a “delete” functionality, where an item would be flagged as unavailable and ignored by the game on startup, which means that for all intents and purposes, it didn't exist anymore.
Item Editor for the Location item type. Notice that you can treat the “Unlocked By” field as an asset, but it’s actually stored internally as an id.
The ids were purely sequential, so there was no relation between id and item type. We could have partitioned ids by range (e.g. all Chapters are 100XX, all Unlocks are 200XX), but we honestly had no idea at that point about most of the item types, or how many items we’d have at the end. Our focus was making the Item Editor as friendly as possible so that wouldn't be a worry - although at the end of the day, you always end up memorizing a ton of ids just by looking at them so often.
The ItemEditor had filters for searching by type, name and id, and you could pull from the remote or push to it. If your item was outdated when pushing, you’d be warned about it and have the option of forcing the push.
Transactions are lists of Added, Removed or Overwritten items in a given inventory. The good thing about centralizing everything is that we can hook up events for when transactions finish or when specific items change, then trigger UI updates and other systems from that if necessary, while still keeping everything decoupled.
using System.Collections.Generic;

[System.Serializable]
public struct TransactionElement
{
    public long Quantity;
    public int ItemId;
}

[System.Serializable]
public class Transaction
{
    public readonly List<TransactionElement> Debits = new List<TransactionElement>();     // removed items
    public readonly List<TransactionElement> Credits = new List<TransactionElement>();    // added items
    public readonly List<TransactionElement> Overwrites = new List<TransactionElement>(); // overwritten values
    public PlayerInventoryType Inventory = PlayerInventoryType.Inventory;
}
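And a hedged sketch of what that centralized hook-up could look like (the event and method names are my own, not the shipped API):

using System;

public class TransactionManager
{
    // Fired after a transaction has been applied to the local PlayerProfile and
    // queued for server sync; UI and other systems subscribe to this instead of
    // talking to each other directly.
    public event Action<Transaction> TransactionCompleted;

    public void Commit(Transaction transaction)
    {
        // ...apply Debits/Credits/Overwrites to the profile and enqueue the
        // transaction for transmission (omitted here)...
        if (TransactionCompleted != null) TransactionCompleted(transaction);
    }
}

A currency counter in the UI, for example, would just subscribe to TransactionCompleted and refresh itself, rather than polling the profile every frame.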
Since everything was based on the Transaction system, we could easily manipulate the PlayerProfile at runtime to create certain situations via our developer cheat commands. We also had the option of dumping and importing player profile snapshots, which made it easier to replicate test cases. If someone noticed any weird behaviour in a build, they could simply open the debug console and e-mail the profile dump to one of the devs for verification.
Locally, the player profile is indeed stored as a big JSON blob. There were plans to revise this, but we never got the time (or need) to do it – as a matter of fact, for most of production we saved all the data directly to a single PlayerPrefs entry, and that only changed near release.
We used Unity's native JsonUtility, and I highly recommend you use it as well. I’ve been on projects where there were (I kid you not) 7 different JSON libraries.
Whenever transactions happen, the inventories are flagged as dirty and the JSON is only regenerated and saved if they’re marked as such.
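A minimal sketch of that save path, assuming a flattened, JsonUtility-friendly representation (JsonUtility can’t serialize dictionaries, so parallel arrays are used here; the key and type names are illustrative):

using UnityEngine;

[System.Serializable]
public class PlayerProfileSaveData
{
    public int[] ItemIds;
    public long[] ItemValues;
}

public static class LocalProfileStorage
{
    private const string PlayerPrefsKey = "player_profile"; // illustrative key
    private static bool isDirty;

    public static void MarkDirty() { isDirty = true; }

    public static void SaveIfDirty(PlayerProfileSaveData data)
    {
        if (!isDirty) return;
        PlayerPrefs.SetString(PlayerPrefsKey, JsonUtility.ToJson(data)); // a single entry, as in most of production
        PlayerPrefs.Save();
        isDirty = false;
    }

    public static PlayerProfileSaveData Load()
    {
        string json = PlayerPrefs.GetString(PlayerPrefsKey, string.Empty);
        return string.IsNullOrEmpty(json)
            ? new PlayerProfileSaveData()
            : JsonUtility.FromJson<PlayerProfileSaveData>(json);
    }
}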
Fun fact: we had a battery drain problem whose cause we couldn't pinpoint. We revised the TransactionManager and PlayerProfile, but couldn't find any evidence. Fast-forward a few weeks: while debugging some network issues, we came to the part of the code that synchronized the pending transactions and realized the profile was being saved every network tick, instead of only when a pending transaction was marked as resolved. Oops!
The whole system obviously has tons of room for improvement: polishing a few rough edges in the ItemEditor; optimizing the full reimports and deletions, which took longer and longer as the ItemDB grew; automating migration of items across environments; defining and documenting a full workflow for branching the DB data; better tools for batch changes (especially because you have to do them in a non-human-readable way)…
That said, for a decision taken so early in the process, and one on which everything depended, I think it was a huge success. The only “major” revision we had to do was changing the Player Profile from 1 inventory to N. We ended up with a bit over 400 items, zero merge conflicts and multiple releases without a single meltdown. GD got used to the workflow and not only didn't have a miserable time, but got a good enough hang of it to help us design the data scheme for some features. Best of all: we could trust them to do all the changes they needed.
The 3 ItemDatabase instances are still busily idling on our spare Mac Mini.