How can developers improve the multiplayer experience?

As games increasingly move online and multiplayer, the physical limitations of the Internet become more apparent. The past few years have seen the rise of the games-as-a-service model, where a constant stream of new content is pushed to players to keep them engaged. As research on negativity bias indicates, it takes at least five good experiences to counterbalance a single bad one. Consistently providing a good multiplayer experience is therefore a matter of life and death for games, which will see their retention numbers plummet if developers don’t pay attention to this important aspect.

Although there is both a behavioral and technical component to a good online experience, we will focus on understanding the technical aspects of multiplayer games in this article. What can game developers do on their end to improve the status quo?

What’s the problem?

First things first: the Internet was not created for gaming. It was initially created to share information between scientists, as a constant stream of data transmitted between two parties. Now compare that to the premise of multiplayer gaming: short bursts of data exchanged between multiple users through an intermediate arbiter (the game server). The difference is stark, and the implication is that multiplayer gaming cannot be as efficient in handling data as other modes of transmission. The network it resides on, the Internet, did not take this possibility into account when it was first conceived.

Latency compensation techniques

Many game studios have decided to tackle the problems caused by multiplayer networking by compensating for latency through different means. For FPS (first-person shooter) games, that usually means client-side prediction and server reconciliation. The player’s client predicts the outcome of actions, where players will move, for example, so the game can react faster than the speed at which data can travel across the Internet. We’re talking about fractions of a second, but in most real-time competitive games, those fractions can be the difference between a win and a loss. The server remains the arbiter: if two clients disagree on what happened, it decides, for example, whether a player actually had line of sight to shoot their opponent. Accounting for latency is one way to compensate for the limitations of the network.
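To make the idea concrete, here is a minimal sketch of client-side prediction with server reconciliation in Python. The one-dimensional movement model and all names are illustrative assumptions, not any engine’s actual API.

```python
# Minimal sketch of client-side prediction with server reconciliation.
# All names and the 1-D movement model are illustrative assumptions.

class PredictedClient:
    def __init__(self):
        self.position = 0.0   # simplified one-dimensional player position
        self.pending = []     # inputs not yet acknowledged by the server
        self.sequence = 0

    def apply_input(self, move):
        """Predict locally instead of waiting a full round trip."""
        self.sequence += 1
        self.pending.append((self.sequence, move))
        self.position += move       # local prediction
        return self.sequence        # sent to the server alongside the input

    def reconcile(self, server_position, last_acked_sequence):
        """Server state is authoritative; replay inputs it has not seen yet."""
        self.position = server_position
        self.pending = [(s, m) for s, m in self.pending if s > last_acked_sequence]
        for _, move in self.pending:
            self.position += move   # re-apply unacknowledged inputs

client = PredictedClient()
client.apply_input(1.0)
client.apply_input(1.0)
# Server acknowledges the first input but corrects the position to 0.9:
client.reconcile(0.9, last_acked_sequence=1)
print(client.position)  # 1.9: corrected state plus the replayed second input
```

The key point is that the player sees movement instantly, and the occasional server correction is blended in after the fact rather than blocking input.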

Fighting games are especially latency-sensitive, since timing and decisions taken in fractions of a second are of the utmost importance. For this type of game, rollback netcode has been gathering enthusiasm from many players in the community. In simple terms, rather than pausing to wait for the opponent’s input to arrive over the network, the game keeps simulating using a prediction of that input; when the real input arrives and differs from the prediction, the game rolls its state back and re-simulates with the correct input. Since the opponent’s moves are shown with minimal delay, the player can react faster to dodge or counter them. Bandai Namco recently launched an update to their 2015 title Tekken 7 that includes an updated version of rollback netcode, which has been very well received by the competitive scene and produced a big boost in concurrent players.
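As a rough illustration of the rollback idea, here is a toy simulation loop in Python. The deterministic `simulate` step, the tuple game state, and the repeat-last-input prediction are assumptions for the sketch, not how any particular fighting game implements it.

```python
# Toy rollback loop: keep simulating with predicted remote inputs, then
# roll back and re-simulate when the real input turns out to differ.

def simulate(state, local_input, remote_input):
    """Deterministic step: here, just accumulate both players' inputs."""
    return (state[0] + local_input, state[1] + remote_input)

class RollbackSession:
    def __init__(self):
        self.state = (0, 0)
        self.history = []    # (state_before_frame, local_input, predicted_remote)
        self.last_remote = 0 # common prediction: repeat the last known input

    def advance(self, local_input):
        """Never stall: predict the remote input and keep simulating."""
        predicted = self.last_remote
        self.history.append((self.state, local_input, predicted))
        self.state = simulate(self.state, local_input, predicted)

    def confirm_remote(self, frame, actual_remote):
        """When the real input arrives, roll back and re-simulate if mispredicted."""
        state, local, predicted = self.history[frame]
        if predicted == actual_remote:
            return                              # prediction held, nothing to do
        self.state = simulate(state, local, actual_remote)
        for _, l, p in self.history[frame + 1:]:
            self.state = simulate(self.state, l, p)   # replay later frames
        self.last_remote = actual_remote

session = RollbackSession()
session.advance(1)                       # frame 0, remote input predicted as 0
session.advance(1)                       # frame 1
session.confirm_remote(0, actual_remote=5)  # real input differs: roll back
print(session.state)  # (2, 5)
```

A real implementation bounds how far back it can rewind and re-simulates within a single frame budget, which is why rollback requires a fast, deterministic game simulation.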

Distributed game hosting

Beyond these compensation techniques, reducing latency overall is the holy grail of multiplayer improvement. Although much effort has gone into improving the network side, it generally accounts for little more than 10% of overall player latency. The remaining 90% is lost to the distance between the game server and the players in a match. The advances made possible by cloud and edge computing infrastructure are where improvements in latency can have a substantial effect on the online experience. Looking beyond what bare-metal and cloud providers can offer is a promising way for developers to improve the online experience of a huge number of players.

What does that mean for game developers?

At the end of the day, gaming is a business and game studios need to justify the better experience they can provide with a concrete improvement. In the gaming industry, that is measured in amount of new players, retention numbers, and monetization numbers. We have seen time and time again how an improved online player experience can improve the overall metrics of a game. But first, there has to be a willingness from developers to go beyond the status quo, and look into all options available to create a better experience with their toolset. After all, developers are the ones with the power to be the architects of the future instead of its victims.

What is edge computing, and what are the use cases?

At Edgegap, we’re often asked about edge computing by those that are unfamiliar with the term. Once that’s explained, the next question is what can you do with it? Here’s a quick intro into edge computing and a look at what industries it can benefit today, as well as in the future.

How do you define it?

Edge computing is the ability to store and compute data close to its source of origin. This is sometimes presented as opposed, or at least complementary, to cloud computing, which is the ability to store and compute data usually far from its source of origin, inside distant datacenters. The benefits of edge computing can be cost savings, speed (latency improvement), security, and more.

A point to note is that it is theorized that the true value of edge computing will come from use cases that are currently unknown, since this disruptive new technology will bring innovative new uses in our daily lives.

What will be the impact of 5G, AI and the cloud?

5G: Edge computing can be deployed alongside 5G antennas to enable computing at the base of the reception tower. Many telecom companies are very interested in this technology, especially since they were more or less left behind in the cloud revolution. Data can also be transmitted wirelessly this way, from the point of origin to the edge compute node, much faster and more reliably than before.

AI: AI compute tasks can now be handled closer to the origin of the data, saving costs for the transfer of data and allowing real-time analysis and interaction.

Cloud: The final impact is still unknown, but edge is a disruptor for the cloud. It has the potential to woo some customers away from the big cloud providers, but since their value propositions are different, it might not have a huge impact on their bottom line. With their expertise, cloud providers are also very interested in the edge and have been developing their own offerings in that space.

See our post about Edgegap’s integration with AWS Wavelength:

What issues is edge computing facing?

The main issue for the development of edge computing is the chicken-or-the-egg problem: 

  1. Without a commercially viable use case, no company is willing to spend the very high amount of capital needed to develop a highly distributed edge computing infrastructure. 
  2. Without a highly distributed edge computing infrastructure, many use cases cannot give a good enough value proposition to be commercially viable.

What are some of the use cases of edge computing today? 

IoT is gathering interest from many companies, as data from sensors is analyzed locally for faster and/or cheaper collection and analysis. This is usually very local, or at best regional, in use, and thus does not solve the chicken-or-the-egg problem worldwide.

Gaming is a very interesting prospect, since it is very latency-dependent. Many sub-categories of gaming can find a benefit in edge computing: Multiplayer game servers hosting, Cloud gaming, AR and VR processing, etc.

Other use cases in different industries are currently hypothesized, but are longer term propositions because of the infrastructure changes needed to make them possible: smart city applications, autonomous vehicles, AI virtual assistants, etc.

How can my company use edge computing?

First, run a business analysis to find out whether handling data closer to its point of origin is cost-saving or value-adding for your company. For companies currently storing and analyzing data in the cloud, there is a good chance it will be. Once that case has been made, you can either:

  1. Build your own edge, called on-premise edge. This is expensive and complex from a hardware and software perspective, so few companies have the capital and skills needed to build it on their own.
  2. Work with edge vendors that have access to a distributed edge network. You will need to rent servers for a minimum amount of time, 1 month or sometimes 1 year, in all relevant locations, which can also be quite an investment. There are currently few edge vendors that provide worldwide coverage.
  3. Work with an edge IaaS company. These are companies that have built a solution to provide on-demand infrastructure on a highly distributed network, usually an aggregation of edge and cloud infrastructure. OneEdge, Edgegap’s highly distributed edge network, is such a solution.


Edge computing will be a disruptor in many industries and will create new use cases as it develops. Current applications are constrained by the limited distribution of edge infrastructure, and only a few use cases, such as IoT and gaming, can currently see real benefits. The true potential of edge computing, however, will be unlocked once the “chicken-or-the-egg” problem is solved and it becomes commercially relevant to build highly distributed edge computing infrastructure worldwide. Over time, as more and more technologies become latency-dependent due to the nature of our speed-obsessed society, it might even end up competing with the cloud for the processing and storage of data.

Why today’s multiplayer matchmakers need AI for the upcoming infrastructure trend

TL;DR: Maintaining clusters in thousands of locations and asking game clients to ping all of them and report back is neither technically nor financially viable at that scale.

A short history of matchmaking

Multiplayer games need, well, multiple players for them to happen (duh)! How those players find each other is what a matchmaker is all about. In the early days, matchmaking meant sharing your IP address with friends so they could connect to you (hello, Doom!). Dedicated servers soon appeared, and games would let you choose from a list of available servers, sometimes showing the latency between you and each one. In the early 2000s, the first matchmaker to automate server selection was created. From there, next-gen matchmakers started to incorporate rules to pool players together based on the game’s context: players would face others with a similar character level, the same kind of cars, etc.

There is a balance between applying rules to get a perfect match and the time you will have to wait for another player of your rank to come along. Ask Riot’s League of Legends diamond players, who are used to waiting north of 15 minutes for an opponent of their rank. This streamer waited for 5 hours and even then still hadn’t found anyone to play with. Matchmaking delays are caused not only by game policy rules (and a lack of opponents) but also by the number of data centers the matchmaker can leverage.

Today’s process

The typical flow of a matchmaker is the following:

-Note: this assumes the studio uses a centralized matchmaker. Some older systems use one matchmaker per data center, which makes things even worse.

-The game client asks the matchmaker for a list of available data centers

-The game client does a basic ICMP ping to each of them

-The game client sends the matchmaker a request for a new game, taking a “ticket” and reporting back the latency to each data center

-The matchmaker puts the ticket in a queue along with a timestamp

-From there, game designers can create various rules, but these typically involve matching players within a certain region

-Once pooled, the players for a given match are allocated a game server that is already running on standby

-The players can play together
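The client side of the steps above can be sketched in a few lines of Python. The matchmaker object, its methods, and the ping helper are illustrative stand-ins, not a real matchmaking API.

```python
# Sketch of the client side of the matchmaking flow above.
# The matchmaker class and the ping helper are illustrative assumptions.
import random
import time

def measure_latency_ms(datacenter):
    """Stand-in for an ICMP ping; a real client would time an echo request."""
    return random.randint(10, 120)

def request_match(matchmaker, player_id):
    datacenters = matchmaker.list_datacenters()                     # step 1
    latencies = {dc: measure_latency_ms(dc) for dc in datacenters}  # step 2
    return matchmaker.submit_ticket(player_id, latencies)           # step 3

class FakeMatchmaker:
    """In-memory stand-in so the flow is runnable end to end."""
    def __init__(self):
        self.queue = []

    def list_datacenters(self):
        return ["us-east", "eu-west", "ap-south"]

    def submit_ticket(self, player_id, latencies):
        ticket = {"player": player_id, "latencies": latencies,
                  "queued_at": time.time()}     # step 4: queue with timestamp
        self.queue.append(ticket)
        return ticket

mm = FakeMatchmaker()
ticket = request_match(mm, player_id="p1")
```

Note how the latency picture is entirely self-centric: each client reports only its own pings, which is exactly the limitation discussed in the caveats that follow.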

Why should we change?

Every matchmaker is different, and there are other steps, especially around encryption and such. But this list represents the general idea, and it has been like this for over 15 years. The model works well when you have a limited number of data centers. It does, however, have some caveats:

-You need to warm up (pre-start) instances, therefore you need clusters of running game servers in every location (and teams of people to nurse them). You incur a cost for a service that is not even used. These are called “fleets”.

-Game clients need to ping each data center up front. This means latency is looked at from a self-centric perspective and not at the match level as a whole. Some matchmakers will add up latency between players, but at this stage, that merely considers the sum of latencies instead of the overall experience and fairness.

-Matching times are increased, since you now need to add latency as a rule in matching players instead of focusing on game-centric mechanisms.

-The decision is made once, up front, and cannot be changed after the match starts. If network conditions change, nothing can be done about it.

-Today’s solutions only look at latency, nothing else. Other elements should be taken into account, like the time of day, previous experiences, player context, and many more.
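To illustrate the “sum of latency versus fairness” point from the caveats above, here is a toy comparison in Python. The latency numbers and the spread-based fairness proxy are made-up assumptions for the sketch.

```python
# Toy comparison of two placement criteria: minimizing total latency
# versus minimizing the latency spread between players (fairness).

def total_latency(latencies):
    return sum(latencies)

def latency_spread(latencies):
    """Gap between best- and worst-connected player: a simple fairness proxy."""
    return max(latencies) - min(latencies)

# Per-player latencies (ms) to two candidate datacenters for the same match:
candidates = {
    "dc_a": [10, 10, 90],   # lowest total, but one player lags badly
    "dc_b": [40, 40, 45],   # slightly higher total, far fairer
}

best_by_sum = min(candidates, key=lambda dc: total_latency(candidates[dc]))
best_by_fairness = min(candidates, key=lambda dc: latency_spread(candidates[dc]))
print(best_by_sum, best_by_fairness)  # dc_a dc_b
```

The two criteria pick different datacenters for the same match, which is why looking only at summed latency can leave one player with a much worse experience than the rest.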

When this model was first put together, studios and publishers were buying hardware themselves, hosting machines and networks in a handful of centralized data centers. Simple, centralized, under-control environments. A few years later, cloud providers started to offer something similar without the hassle of managing the infrastructure. They made it easy to have a data center on the other side of the globe without investing too much. This was still highly centralized (AWS today offers 22 data centers you can deploy to around the world), with fast backbones and multiple points of presence to get players quickly into those centralized DCs. Regardless of the speed of those backbones, players still have to travel from their house to those DCs over networks and fiber. Cloud providers argue that they cover metropolitan areas with sub-20ms latency in North America, but how true will that remain as people move away from large cities due to Covid-19 and working from home becomes the norm? If those statements were true, would lag still be the number one problem from a gamer’s perspective?

Edge Computing to supplement the public cloud

Looking at what’s coming, a new type of infrastructure is emerging, called edge computing. Instead of using large server farms, providers are building smaller data centers, closer to users. For example, instead of building a handful of large DCs in the US, they spread a bunch across each state. This process is accelerating as mobile service providers look at these edge nodes to strengthen their 5G networks, and are starting to deploy one at the base of each cell antenna.

This trend is seen around the world; even public clouds realize it could be a threat to their business and have started to partner with carriers to deploy smaller DCs in those networks.

The network can be optimized and a faster path can be found, but nobody can bend the laws of physics. Light in fiber will never travel faster than roughly two-thirds the speed of light in a vacuum, and you will never have a direct fiber link between every pair of points on the planet. This has nothing to do with technology; it is common sense. The remaining solution is to get closer to users.

Back to matchmaking. Today’s architecture may work for leveraging a handful of data centers, maybe 50 or 60. But considering the infrastructure market is clearly heading toward thousands of data centers spread around the world, how will the current model scale? How can you leverage this new capability?

What should you do?

You can leave things as they are: “today’s service is good enough”, “lag is not a priority”, “my matchmaker already takes latency into account”… The reality is, if you don’t innovate, others will. Leveraging this new set of capabilities in evolving infrastructure requires new methods and processes. Today’s matchmakers will have to be reworked, well beyond adding a few DCs. The sheer number of new games launched daily makes it hard for studios and publishers to compete. The number of players for a given game gets smaller as they spread across many other titles. You will not be able to use thousands of locations by keeping a cluster in each one, relying on client-side code, and hoping for the best. Tweaking networks has been done, and today’s gains are negligible compared to what edge computing allows studios to do.

Upcoming infrastructures are complex, each match of your game is different, and nobody controls every network your players will come from. You need a solution that can adapt in real time, learn from what worked and what didn’t using advanced AI mechanisms, and optimize every match as if it were tailor-made for your players.

If you are serious about your game and its future, reach out and we’ll help improve your players’ experience. Contact us at and we will make sure your players have the best experience possible using our cutting-edge technology.

Containerized server hosting: global reach, scalability and cost savings

You’re building an amazing game! You’ve got a winning prototype, and you know for sure this game will be a hit once people start playing. But if you’re a small studio, you may be wondering how you can scale the product quickly, or even afford a global rollout of your new game. Being global might not be top of mind right now, but how can you know if your game would have a great market fit in Korea, or if a famous Brazilian Youtuber will start playing your game as happened for Among Us (Kotaku)?

Read on to see how Edgegap can help you solve the problems of global reach, scaling, and cost savings. 

Global Reach 

Microsoft corporate vice president of cloud gaming Kareem Choudhry believes “gaming is as culturally impactful as music, television, and movies”. He says that “of the world’s 8 billion human beings, over 2 billion are gamers” (via Wired). That’s a huge potential audience to play your game, but how can you afford to reach them?   

If you’re a smaller studio, you won’t have your own global game infrastructure like the big studios. Instead, you’re probably going to deploy your back-end architecture (game servers, databases, etc.) to a public cloud service.

What you need is a distributed on-demand approach to delivering your game server. If you use a service like Arbitrium to determine the best place to launch game servers, you won’t need to spin up the game servers until you actually have players ready to play. Maybe you thought your game would be popular in Northern Europe because you have a Viking character, but it falls flat in that market? You don’t have to pay anything, since the infrastructure there never had a match to deploy. You get a big player boost in South America because a famous Twitch streamer loved your game? No problemo, we’ll make sure to scale on demand.

In the cloud, it’s true that you only pay when you have active virtual machines (VMs) running. However, these machines can take a few minutes to start up, which doesn’t make sense when you have players ready to play. The traditional workaround is to start VMs as you anticipate demand, but you still end up with machines that sit unused for minutes or hours. And you still have to pay for the time it takes the VMs to boot up…

With Edgegap, you never need to worry about paying for unused cloud infrastructure. Even if you’re a small studio, you can make your game available to everyone with our distributed on-demand hosting.

Scalability
Great – now I see there is an option to make my game servers available in more geographic locations, but how can I scale to millions of players? I know anyone who sees this game will want to play it with their friends, but what if they all come to play at the same time?

Big game studios have issues with scaling popular new games, and they have tons of people to manage it! How can I be sure I don’t have scaling problems when I launch? 

This is also something Edgegap can address. In addition to determining within seconds the best datacenter to use to deploy a game server for a new match, Arbitrium will scale your game servers within seconds, and monitor results throughout the duration of the match.

This is possible by using container-based architectures and the power of edge computing. Edgegap is happy to help you migrate your current services to a container-based solution that puts you in a position to be able to scale your games as needed. 

Cost Savings 

Of course, every studio wants to keep costs low. Edgegap helps with that by using a distributed on-demand gaming network. Game servers are launched only when there are players ready to play.

And since Edgegap not only finds the best place to launch servers so that players have a great experience, but can also launch them and monitor game performance during the match, you’re going to have happy players and great reviews. It’s hard to put a price on that, but it surely costs less than dealing with online reviews telling everyone your awesome game was down on launch day!

Ready to learn more? 

Are you trying to figure out if you can achieve global reach with your game, have the resources to scale it so everyone has a great time playing, and do all that while still keeping costs low?  

Edgegap can help you tap into a distributed on-demand gaming network. Don’t hesitate to reach out if you’re looking to benefit from the next generation of gaming infrastructure! Get in touch at 

The Edgegap Team is growing!

At Edgegap, there is nothing we value more than the people behind the organization. That’s why we want you to get to know the Edgegap Team. Our team is currently growing, and we’re excited to introduce you to the newest members!

We’d like to introduce you to Benjamin Denis, who joins us with his extensive background in business development and esports. Benjamin has in-depth knowledge of the gaming and the esports market. After working 4 years with all of the major esports brands in Quebec and initiating multiple esports programs inside colleges and high schools, he’s ready to bring that knowledge to Edgegap’s clients and partners.

His solid track record in business development and his ability to easily explain new or complex ideas will be invaluable for the Edgegap Team. From describing the many benefits of esports to schools and brands to demonstrating the value of edge computing, containers, global scalability and Arbitrium, he is ready to meet any challenge.

As a fervent Super Smash player and fan, he is well aware of how latency issues can significantly reduce the player experience in online games. Just like everyone on the Edgegap Team, he is on a mission to change that for gamers worldwide. A better online experience, no matter where you live. The next-gen gaming infrastructure is at our door!

Are you a game studio that wants to know more, or do you have a unique project that could lead to a case study? Get in his DMs now!

Arbitrium: the deciding factor for multiplayer games

Edgegap Arbitrium was designed to help game studios lower latency, improve fairness, and increase the reach of game titles through the help of edge computing and machine learning.  

Currently, most game servers are hosted in a very traditional way, in centralized data centers. However, many players are usually far from these locations, causing lag issues. Further, you may live in a country where you can’t reach these servers at all! This traditional method of hosting also means that game studios may run game servers that are not well utilized.

Is it possible to use a different type of architecture to host online games? What if you could move the infrastructure to where the players are, and only run game servers when and where players were ready to play?  

Edgegap Arbitrium 

Arbitrium is a Latin legal term meaning the judgement (or decision) of an arbitrator. And that is the role Arbitrium plays: it makes a judgement call on the best location to deploy game servers, based on where the players are located. Launched in early 2019, Arbitrium has access to hundreds of regions worldwide through Edgegap’s edge locations aggregator, OneEdge.

Arbitrium connects via API to the game studio’s matchmaker so it can act as the arbitrator between players waiting to start playing a game and the game services. It will start a game instance as needed and even monitor results throughout the duration of the match. This video gives a brief overview of Arbitrium: 

Fairness and Lag 

Every gaming studio wants its players to have the best possible experience playing their game, and one way to do this is to reduce lag. Lag is the delay between a player’s action and the result of that action in the game. Many studies show that network latency (lag) affects games and player performance. These conditions are observable in most online games because of the traditional method used to host game servers.

Traditionally, game studios use virtual machines (e.g. VMware or OpenStack) either in large datacenters or in the public cloud. The challenge with VMs is that they remain a highly centralized environment, which makes it hard to place a game server close enough to players to get the latency needed to combat lag. They can also be hard to scale quickly; even with reserved capacity, it takes time to boot the VMs and make services available to players. A modern, container-based approach can level up this game very quickly.

Arbitrium addresses the problem of lag with a modern approach. Its patented technology helps choose the best location on the fly to deploy game servers. It does this via API integration with the game studio’s matchmaker server. The matchmaker provides Arbitrium with the IP addresses of the players who have been matched to face each other in a game. Within seconds, Arbitrium uses multiple data points and measurements to decide where the game server should be deployed to provide the best game experience for all players.
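As a hedged illustration of what such a matchmaker integration might pass along, here is a hypothetical payload builder; the field names and structure are assumptions for the sketch and do not reproduce Edgegap’s actual Arbitrium API.

```python
# Hypothetical shape of a matchmaker-to-deployment-service request.
# Field names are illustrative assumptions, not the real Arbitrium API.
import json

def build_deployment_request(match_id, player_ips):
    """Bundle the matched players' IPs so the service can score locations."""
    return json.dumps({
        "match_id": match_id,
        "players": [{"ip": ip} for ip in player_ips],
    })

payload = build_deployment_request("match-42", ["203.0.113.10", "198.51.100.7"])
```

The essential contract is small: the matchmaker hands over the match identity and the player endpoints, and the deployment service answers with a server location and connection details.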

Arbitrium’s Modern Approach

Edgegap’s OneEdge uses edge computing infrastructure to increase the number of locations available for deploying game servers and moves the infrastructure to where the players are. In the image below, a zone delimiter is drawn around the players’ locations, and each available location in that area is given a score by Arbitrium’s proprietary algorithm, which uses a mix of latency, jitter, packet drop, session context, etc. Once the best server location is chosen, the game is launched for these players.
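As an illustration of how such signals can be combined into a single score, here is a toy Python example; the weights and the linear formula are invented for the sketch and are not Arbitrium’s proprietary algorithm.

```python
# Illustrative location score combining latency, jitter, and packet loss.
# Weights and formula are made-up assumptions, not Arbitrium's algorithm.

def location_score(latency_ms, jitter_ms, packet_loss_pct,
                   w_latency=1.0, w_jitter=2.0, w_loss=50.0):
    """Lower is better: penalize instability (jitter, loss) over raw latency."""
    return w_latency * latency_ms + w_jitter * jitter_ms + w_loss * packet_loss_pct

# Candidate locations with measured network conditions for one match:
locations = {
    "seattle":     {"latency_ms": 35, "jitter_ms": 2, "packet_loss_pct": 0.0},
    "los_angeles": {"latency_ms": 28, "jitter_ms": 1, "packet_loss_pct": 0.0},
}
best = min(locations, key=lambda name: location_score(**locations[name]))
print(best)  # los_angeles
```

Weighting jitter and loss more heavily than raw latency reflects the intuition that a stable 40ms connection usually feels better than an unstable 30ms one.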

This white paper evaluates the results of re-processing live game data to determine whether the decisions made by Arbitrium would have reduced lag and improved fairness in real-world matches. The results were stunning: a 36% reduction in latency for relay-based matches and a 66% improvement in fairness. Here’s one example of a match between players on the West Coast. In the original game, the game studio chose Washington State for the relay server. Edgegap suggested LA instead, reducing the average latency, RTT gap, and lag, and making this a much fairer match for both players.


Global Reach and Cost Savings 

Edgegap can also help game studios tap into new geographical markets by providing a better player experience for players who do not live close to the centralized data centers currently being used. Imagine being able to have game services ready to deploy, but only actually fire up a game server when players are ready to play, on any continent. 

This is not possible with the traditional centralized approach to hosting games; you must use a container-based approach to take advantage of edge computing. By moving to containers, you don’t have to build reserve infrastructure and deal with allocation to handle demand for your game. Containers can start in a few seconds, sometimes less. You only deploy a server when you have players ready to start the game, and that saves money.

Additionally, if you wish to launch in a new geography, you don’t need to build out traditional infrastructure in that location. You can deploy a modern container-based server to that location when you have players waiting for your game. This type of on-demand infrastructure opens new markets for games by improving the player experience globally.

Need Help? 

It is possible to take game architecture closer to where the players are, improving lag and fairness while at the same time saving money and opening markets in new geographies. Arbitrium does this by using modern container-based architecture at the edge, and a proprietary algorithm to help game studios find the best place to launch game servers.  

This results in happier players as server instances run in the best location for player experience, as well as cost savings from only running instances when they are needed. Edgegap can help you migrate your current services to a container-based solution that leverages our platform’s strength and helps you keep your players happy. Don’t hesitate to reach out if you’re looking to benefit from the next generation of gaming infrastructure! Get in touch at 

This text was created in collaboration with Gina Rosenthal,

History of Containers: The Future of the Gaming Industry

A lot has been written about the history of containers over the years, and it is still surprising to me that many of the folks we talk to in the gaming industry have not yet started looking for ways to use them in their efforts toward scalability and automation. It’s understandable, however: they focus on what matters to them, which is their games, the fun players have, and keeping the lights on.

In the meantime, however, app and website developers have been making full use of the scalability and cost-saving potential of this technology. I’ll offer my (very humble) perspective on the history of containers, and explain why this is all about to change for game studios in the next few months.

The History of Containers starts with Virtualization

The concept of virtualization is a great one: create a layer of abstraction to use physical resources more effectively. I first heard about it in 2003, when I was at Bell Mobility looking at launching EVDO, from a rep at Sun Microsystems who had this “new tech” he wanted to show us. That was VMware, a company we did not know, which had just launched VirtualCenter and vMotion.

As geeks, we had a lot of interest in the technology, but the entire engineering team had significant doubts. Those doubts arose because we were already pushing our hardware to its limits. CDMA traffic was skyrocketing, people were starting to use phones for data, and we had to grow at a crazy pace. Using some of this precious resource to “split” machines into multiple VMs was not bringing a lot of value. On top of that, application vendors prevented us from moving to VMs for “support reasons.” It did not make much sense and had little value back then.

Challenges of Virtual Machines

We now know what happened to VMware and similar technologies. SDN/NFV, OpenStack and the like became the norm, allowing much more flexible application management through this abstraction layer. Hat tip to the engineers at Amazon who saw this wave coming. Virtual machines brought us ease of use and a plethora of tools to make the lives of sysadmins, developers, and DevOps teams much easier. Servers got much more powerful, so suddenly the heap of resources needed for virtualization was no longer significant compared to the benefits it could bring.

One problem persisted with this virtual layer: the operating system. For each virtual machine, we are forced to package an OS according to the application's requirements. Even for two instances of the same application, you have to store the operating system twice, run every OS component twice, and carry yet another layer of elements that bring nothing to the end user. Solving this problem is the next chapter in the history of containers.

The Solution

Three years after I first heard about VMware, some smart engineers at Google started to work on something called cgroups. Their goal was to isolate resources from applications. This work, in conjunction with namespace isolation, created what we now call containers.

One of the key benefits of containers is that you no longer have to duplicate an entire OS for each instance of a given application. They let you use CPU and memory in an optimized way, so you stop paying for items that do not improve your service. Along with cost savings, containers bring many other benefits: rapid deployment, ease of use for developers, faster testing (plus automation through pipelines), close to unlimited migration, and more.

What is a Container?

Think of a container as a cooking recipe. You list what you need for your application to work, and the end service, the part you actually care about, will be up and running: "I need this kind of OS, this specific release, please include those packages, change X and Y, add my application, and run it like that." Once your recipe is complete, you can create a running container.

The nature of a container is to be stateless, which is key to understanding how to use them. The typical analogy between VMs and containers is "pets vs. cattle." If a VM is a pet and your pet is sick, you bring it to the vet for a checkup (the same way you log in to your VM to fix it). If a container crashes, like one cow in a large herd, you do not try to save it; you replace it with a new one.

The Stateless Nature of Containers

That's where the stateless nature comes into play. Storing information from a container is supported by various mechanisms (mapping a stateful volume into the container, pushing the data outside, and so on). If you write anything inside the container without one of those mechanisms, the information is lost the moment the container is killed or restarted.

Containers are typically not "restarted": they are shut down, and a new one is started when you want the service back. We've had customers who were writing log files inside their virtual machines and retrieving them daily for analytics. Converting them to a container-based solution required changing this process so that logs are pushed out in real time, to prevent losing them. It was not a significant change, and it brought a few benefits on top of leveraging containers, but it shows that what might look like a walk in the park can need some planning.
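As an illustration of that change, here is a minimal Python sketch of shipping log lines out of the container as they are produced, instead of writing them to the ephemeral local disk. The `send` callback is a stand-in for whatever real transport you would use (an HTTP log collector, syslog, fluentd, etc.); it is not a real library API.

```python
from typing import Callable

def make_log_shipper(send: Callable[[str], None]) -> Callable[[str], None]:
    """Return a log function that pushes each line outside the container
    immediately, so nothing is lost when the container is shut down."""
    def log(line: str) -> None:
        send(line)  # in production: POST to a log aggregator, syslog, etc.
    return log

# Demo transport: collect lines in a list. A real container would send
# them to an external endpoint instead of keeping them locally.
collected: list[str] = []
log = make_log_shipper(collected.append)
log("match 42 started")
print(collected)  # ['match 42 started']
```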

The History of Containers is Still Being Written

This new technology brought new capabilities around mobility and scalability, which created a whole new ecosystem. Kubernetes has been a hot topic for the last few years, and we now have a slew of solutions and alternatives built around it.

Note that I have not talked about Docker yet. The reason is that Docker is only one of many container technologies. Here at Edgegap, we typically use Docker as it is the most popular, but we've had to deal with others like LXC, containerd and rkt. Each one has pros and cons, and we've seen that some specific markets prefer one over the others. At Edgegap, we've been using Docker for game elements with a lot of success, and the customers who were already using containers were mainly leveraging Docker as well.

The Debate About Userspace vs Kernel Space

Not everything is pretty around containers; we've heard a few objections over the last two years. One concern we hear a lot involves the core of the technology: containers run in user space rather than kernel space. Kernel space is where the heart of your operating system runs, where resources like memory are managed. User space is where applications typically run. There is a perceived risk in doing so, especially when two applications from different customers share the same environment.

This can be true if the environment is not configured correctly. Whether it is a poorly allocated shared resource (quotas not enforced), a container running with more user rights than needed (still running as root?), sloppy image management, or the virtual network setup, there is a series of best practices that need to be followed. Edgegap is proud to follow them, and we actively track the market to make sure we address 0-day attacks and apply best practices.

Containers for Games?

Major game studios have been using Windows virtual machines for years. Some of them moved to Linux-based images but are still using VMs. Be it on VMware-based or OpenStack-based infrastructure, games that are more than two years old will mainly run in VMs. It has been like that for as long as cloud vendors have been on the market.

Multiple tools emerged over the years to manage those virtual machines. For example, AWS GameTech has a tool to scale your VM fleet up and down based on past traffic. It leverages their highly centralized data centers and can be seen as a patch on a problem that has been around for years.

The history of containers for games is just starting

Google created a project called Agones, a plugin on top of Kubernetes that turns it into a game server manager. This is a step in the right direction, as it helps studios move to containers while keeping existing infrastructure like matchmakers.

The flow of communication remains the same, with game allocations, clusters and so on. The downside is that you still have to use "clusters," i.e. a highly centralized environment: you cannot get closer to your players to provide lower latency, and you have to "reserve" resources and pay for some of them even when they are not used.

The real power of a container is that it can be started only when needed, stopped when it's no longer used, and moved around as if it were a tiny Lego block. Forget "hot/cold" warm-up pools of virtual machines: starting a container takes a few seconds, if not milliseconds.

All the studios we've met in the last two years have told us they were interested in the benefits of container-based technologies. The question is not "if" containers will be added to the toolset of multiplayer game developers worldwide, but "when."

Need help?

At Edgegap, we specialize in container-based solutions like microservices. Our platform and our team help studios provide an improved online experience to players worldwide, increasing retention and monetization for live-service games. We're here to help you migrate your services to a container-based solution and leverage our platform's strengths to get the most out of what containers can bring to your studio.

Edgegap is pleased to announce it is among the first to support AWS Wavelength

We are pleased to announce today that Edgegap was selected to be among the first to support AWS Wavelength on the Verizon 5G network, alongside the likes of Sony Corporation, LG Electronics and Tata Consultancy Services. By integrating its orchestrator with AWS Wavelength, Edgegap was able to successfully deploy game servers on the new AWS edge regions and provide low-latency gaming for players. AWS Wavelength regions will soon be available in production for game studios who leverage Edgegap's solutions, allowing game titles to provide the best player experience through lower latency and higher fairness.

Edgegap is a pioneer in building next-generation infrastructure for global multiplayer gaming titles and esports tournaments, leveraging edge computing to help studios reduce latency, increase player fairness, track and improve experiences, and scale on demand worldwide. “Allowing studios to get a real understanding of players’ experiences and their online services is at the heart of what we do. As global gaming rapidly shifts from centralized to localized environments, Edgegap’s ability to dramatically reduce latencies improves gaming experiences wherever players are located,” said Mathieu Duperré, Edgegap founder and CEO. “With AWS Wavelength on the Verizon 5G edge, we get the ultra-low latency and proximity needed to reduce the distance between servers and end users with a single, elegant solution. By tightly integrating game studio backends and edge computing infrastructures with AWS Wavelength, Edgegap can reduce lag by more than half, increase fairness, improve the total player experience, and help reduce churn and increase revenue for studios.”


Edgegap’s platform measures and selects the best location from its distributed network of edge computing sites based on context. Each time the platform makes a decision, every user is taken into account to minimize latency issues and ensure a fun and fair experience for all players. This lets game studios deliver a consistently positive online experience, maximizing retention and monetization. By adding AWS Wavelength to its existing pool of 220+ regions, Edgegap is on its way to reaching thousands of locations in 2021, increasing its footprint to lower lag even further for players worldwide.

Fairness vs. latency: what really matters for esports?

With prize pools in the millions, tournament organizers have undertaken the challenging task of creating a reliable online environment for professional players to compete in. Will there be push-back from players if studios can't guarantee a low-latency, high-fairness experience in these high-stakes online matches?

We have seen many esports competitions move online recently due to the current pandemic, with varying degrees of success. The 24h Le Mans Virtual comes to mind: a high-profile, high-viewership event that was unfortunately troubled by technical issues. Time will tell what to make of the recently launched PUBG Mobile World League and the upcoming Call of Duty League Playoffs. We have also seen many cancellations, the biggest probably being the EVO Championship Series, the biggest fighting game tournament of the year, which cancelled its online edition after many personalities and game studios pulled out over allegations of sexual misconduct against the championship's co-founder and president.

Against this backdrop of successes and failures, the state of the network connection between players and game servers has been a point of contention between tournament organizers and players. What are the issues at play, and what can be done to improve the state of these competitions?

Why low latency isn’t king

Low latency is great, but a one-sided difference in latency can create an insurmountable advantage for one of the participants. Consider a 1v1 scenario where the average latency of the two players is 60ms. That sounds good in theory; 60ms of latency is quite manageable for most competitive online genres. But the average alone doesn't tell the whole story: both players could have 60ms of latency, or one could have 20ms while the other has 100ms. A difference of 80ms between players makes for a match that is "unfair" by most standards, since the player with the lower latency has a huge advantage in reaction time.
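The arithmetic is simple enough to show directly. A tiny sketch, using the numbers from the example above, makes it clear how the average hides the disparity:

```python
def summarize(rtt_a: float, rtt_b: float) -> tuple[float, float]:
    """Return (average latency, latency gap) for a 1v1 match, in ms."""
    return (rtt_a + rtt_b) / 2, abs(rtt_a - rtt_b)

# Both pairs average 60 ms, but only the first match-up is fair:
print(summarize(60, 60))   # (60.0, 0)
print(summarize(20, 100))  # (60.0, 80)
```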

This can create issues like the one experienced during the Call of Duty League back in April, when Crimsix, one of the game's most accomplished players, complained that the servers were "unplayable" because his team had to play on "neutral" servers. A neutral server, in this case, means a server that does not give an unfair advantage to one team over the other. Since his team is geographically close to one of the CoD game server locations, it was deemed that playing there would confer an unfair advantage over the other teams, so they had to play at a different location that gave them much higher latency. This trade-off is a typical example of the latency vs. fairness problem, and without an understanding of how the two interact, studios and tournament organizations are bound to receive complaints from their most valuable and vocal ambassadors: professional players.


The fairness score

So how do we measure fairness from a network perspective? The simplest way to define the fairness of a match is to take the difference in round-trip time between the two players in a 1v1 scenario. In team-based matchmaking, a team is considered a single entity, and the average latency of the group is compared to that of the other group(s).

If the difference in latency is low, the match is fair, since there is only a small gap between the players' latencies. If it is high, the match becomes unfair from a network perspective, as the gap means one player has a sizable advantage. To take a specific example, if one player has 100ms of latency and the other 20ms, the fairness score is quite high at 80ms, and the match is skewed toward the player with the lower latency. In short, a low fairness score is the sign of a fair match-up.
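Here is a minimal sketch of that fairness score, covering both the 1v1 case and the team case where each side is reduced to its average round-trip time. The function name and the 2v2 latency values are illustrative, not part of any real matchmaker API:

```python
def fairness_score(team_a: list[float], team_b: list[float]) -> float:
    """Difference between each side's average round-trip time (ms).
    A team is treated as a single entity; lower is fairer."""
    avg_a = sum(team_a) / len(team_a)
    avg_b = sum(team_b) / len(team_b)
    return abs(avg_a - avg_b)

# 1v1 example from the text: 100 ms vs 20 ms -> score of 80 ms (unfair).
print(fairness_score([100], [20]))         # 80.0
# 2v2 example: team averages of 50 ms and 45 ms -> score of 5 ms (fair).
print(fairness_score([40, 60], [30, 60]))  # 5.0
```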

Through the looking-glass

Let's take a look behind the scenes with a concrete example from an actual game. Through our work at Edgegap, we had the privilege of collaborating with a leading game studio on a case study into what can be done to create a better online experience for their players. Providing lower latency is a great step forward, but it cannot be the only variable taken into account; compromises must be made to make sure each match is fair for both sides, enhancing the overall experience.

Consider a specific match that was played between a player in New York and another in Ivory Coast. Although it is an "edge case" scenario, it gives insight into what can be done to improve fairness when latency is in play:


We can see that Player 1 first had a low latency of 39ms, which rose to 93ms after the game server location was moved. Player 2's latency, however, went from 176ms to 106ms, so the fairness score dropped from a large 137ms to a meager 13ms, creating a much fairer environment. In that sense, placing the game server roughly halfway between both players makes the most sense. That is why a large global infrastructure footprint is key to providing both lower latency and a fairer environment for all players: each available location can improve the quality of every match.
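That relocation decision can be sketched as picking the candidate location with the smallest fairness score. The RTT values below are the ones from this match, but the location labels are hypothetical:

```python
def pick_fairest(options: dict[str, tuple[float, float]]) -> str:
    """Given per-location (player_1_rtt, player_2_rtt) pairs in ms,
    return the location with the smallest latency gap between players."""
    return min(options, key=lambda loc: abs(options[loc][0] - options[loc][1]))

# NY / Ivory Coast match: moving the server trades Player 1's latency
# for a dramatically better fairness score (137 ms -> 13 ms).
options = {
    "near-player-1": (39, 176),  # fairness score: 137
    "mid-point": (93, 106),      # fairness score: 13
}
print(pick_fairest(options))  # mid-point
```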

Taking fairness into account

There is no one-size-fits-all in video games; what is great for one genre makes no sense in another. Every game, every match and every player is unique and requires a decision based on the unique parameters of the match. This is why policy-based decision-making is the best tool available to create a fair experience.

A policy-based system is quite simple in theory. Take all the parameters that help create a fair experience for the players involved (level, ranking, latency, jitter, fairness, etc.) and weigh them against one another to determine which has priority. Is latency more important than player level? Is fairness more important than player ranking? It all depends on the game, and at first the game designers are best placed to answer these questions. In the long term, however, machine learning will provide better insight into how to create a fairer, more fun experience for all players involved.
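One simple way to realize such a policy is to score each candidate match as a weighted sum of penalties and have the matchmaker pick the lowest score. This is only a sketch; the parameter names and weights below are invented for illustration, and a real policy would be tuned per game:

```python
def policy_score(candidate: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted penalty for a candidate match; lower is better."""
    return sum(weights[key] * candidate[key] for key in weights)

# Illustrative policy: fairness matters twice as much as raw latency,
# and a gap in player ranking is penalized heavily.
weights = {"latency_ms": 1.0, "fairness_ms": 2.0, "ranking_gap": 10.0}
candidates = [
    {"latency_ms": 40, "fairness_ms": 30, "ranking_gap": 1},  # score 110
    {"latency_ms": 60, "fairness_ms": 5, "ranking_gap": 2},   # score 90
]
best = min(candidates, key=lambda c: policy_score(c, weights))
print(policy_score(best, weights))  # 90.0
```

Changing a single weight changes which match is picked, which is exactly the lever the game designers would tune.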

Fairness is a fundamental principle in every competitive title, yet it is rarely taken into account from the network perspective in a game's matchmaker. Beyond simply comparing latency between players, a holistic view of the match is needed to provide the best conditions for the opposing players. Given the meteoric rise of esports betting during the pandemic, fairness becomes especially critical to a healthy competitive scene.

What gets measured gets managed

Starting a match in a state of fairness does not mean it will stay fair for its whole duration. That's why it is important to track the network for each player during a match and react when a player's online experience degrades. Different actions can then be taken: pausing the match, sending a QoS improvement request, or even adding artificial lag for the lower-latency player to ensure a fair competitive environment.
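The last option, artificial lag, can be sketched as delaying everyone to match the slowest player once mid-match fairness degrades past a tolerance. The 30 ms threshold here is an arbitrary illustration, not a recommendation:

```python
def equalize_lag(rtts: dict[str, float], threshold_ms: float = 30.0) -> dict[str, float]:
    """Artificial delay (ms) to add per player so that everyone
    experiences the slowest player's round-trip time. If the gap is
    within the threshold, the match is left untouched."""
    worst = max(rtts.values())
    if worst - min(rtts.values()) <= threshold_ms:
        return {player: 0.0 for player in rtts}  # still fair enough
    return {player: worst - rtt for player, rtt in rtts.items()}

# Mid-match, Player 2's connection degrades: delay Player 1 by 70 ms.
print(equalize_lag({"p1": 25.0, "p2": 95.0}))  # {'p1': 70.0, 'p2': 0.0}
```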

Once data has been accumulated over thousands of matches, patterns emerge, and machine learning can be used to make better decisions in the matchmaker. These can concern specific sites that cannot guarantee a good experience throughout a match, combinations of player ISPs that degrade the experience once a match is launched, and so on. Decision-making can then be adjusted to provide a better gameplay experience over the whole duration of the match.



There are many things to take into account when creating an esports title, and networking has to be a priority for game studios that aspire to release one. The online competitive scene is taking off under the restrictions brought by the current pandemic, but many issues remain to be solved. Studios will have to find ways to reduce latency, improve fairness and gain visibility into the network issues their players are experiencing, both professional and casual, or those players will move on to the next game that manages to get things right.

The team at Edgegap is focused on providing solutions for game studios to lower latency, improve fairness and increase the reach of game titles through edge computing and machine learning. We have developed solutions specifically for esports titles, such as a policy-based decision maker, player network monitoring and control technologies, and network visibility tools to ensure a fun and fair online experience for players. Get in touch at

Edgegap on a NASCAR race car!

We sponsored an official NASCAR Pinty's Series race car over the weekend. The driver, Jocelyn Fecteau, #77, could be seen on live national TV. We've posted his virtual race car with our logo below; good job, Jocelyn!

I'm told lag and fairness are important in those races; will the market start listening? 🙂