Outriders' launch was plagued by server woes, but according to developer People Can Fly, the actual source of those issues was a bit more complex.
Like many high-profile online games, Outriders was plagued by server woes at launch, but according to developer People Can Fly, the actual source of those issues was more complex than first anticipated.
The studio offered a transparent look at what went wrong and how it was addressed in a short Connectivity Post-Mortem posted to Reddit this week, giving both its community and fellow developers a deeper look at how it dealt with an issue that didn't surface until launch.
"We’re committed to full transparency with you. Today, just as we have been over the past year, So we won’t give you the expected 'server demand was too much for us,'" reads that post. "We were in fact debugging a complex issue with why some metric calls were bringing down our externally hosted database. We did not face this issue during the demo launch earlier this year."
The team goes on to explain that it spent the weekend after launch adding database servers and taking steps to lessen the load on individual servers, measures that "helped us improve the resilience of the database when under extreme loads, but none of them were the 'fix' we were looking for."
"We managed to understand that many server calls were not being managed by RAM but were using an alternative data management method ("swap disk"), which is too slow for the flow of this amount of data," explains the post. "Once this data queued back too far, the service failed. Understanding why it was not using RAM was our key challenge and we worked with staff across multiple partners to troubleshoot this."
The dev team notes that it's still waiting on final confirmation from its partners, but a tweak to how its database cache clearing runs seemed to solve the issue: "We reconfigured the database cache cleanup operations to run more often with fewer resources, which in turn had the desired result of everything generally running at a very comfortable capacity."
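The post doesn't say which database or cache mechanism was involved, so the sketch below is only a generic illustration of the trade-off the fix describes: running smaller, cheaper cleanup passes more frequently rather than one infrequent sweep heavy enough to strain the server. The cache structure, TTL, batch size, and demo data here are all invented for the example.

```python
# Minimal sketch of "run cleanup more often with fewer resources", using a
# toy in-memory cache: each pass evicts only a bounded number of expired
# entries, so no single pass consumes a large burst of CPU or I/O.
import time

CACHE: dict = {}      # key -> (stored_at, value)
TTL_SECONDS = 60.0    # illustrative time-to-live for cached entries


def cleanup_pass(max_evictions: int) -> int:
    """Evict up to max_evictions expired entries and return how many were removed."""
    now = time.monotonic()
    expired = [key for key, (stored_at, _) in CACHE.items()
               if now - stored_at > TTL_SECONDS]
    for key in expired[:max_evictions]:
        del CACHE[key]
    return min(len(expired), max_evictions)


def cleanup_loop(interval_seconds: float = 5.0, batch_size: int = 100) -> None:
    """Frequent, low-cost passes instead of one rare, heavyweight sweep."""
    while True:
        evicted = cleanup_pass(batch_size)
        print(f"evicted {evicted} stale entries")
        time.sleep(interval_seconds)


if __name__ == "__main__":
    # Toy demo: seed a few already-expired entries, then run a single pass.
    now = time.monotonic()
    for i in range(5):
        CACHE[f"player-{i}"] = (now - TTL_SECONDS - 1, {"score": i})
    print(f"evicted {cleanup_pass(max_evictions=100)} stale entries")
```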
Find the full, more technical breakdown in the team's Reddit post here.