New World Game Architecture

Bill Hegazy
5 min read · Jan 13, 2022


How Amazon Games Scaled “New World” Using AWS Services

AWS re:Invent 2021 was fascinating, with a lot of interesting announcements. One of the most interesting parts for me was the last 10–15 minutes of Dr. Werner Vogels' keynote, when he shared the architecture of the online MMORPG New World.

New World is a Massively Multiplayer Online Role-Playing Game (MMORPG) developed by Amazon Games. It's an open-world online game where you create a custom character, level up, craft your weapons and armor, and more.

I have been playing New World since it launched in September 2021. Although there are some bugs in the game 😅, it is fun, with plenty of stuff to do.

This part of the keynote was interesting to me because I have been managing services on AWS for a few years now, so I wanted to share some points from it.

New World lives fully in the cloud (AWS)

Aeternum, which is the name of the game's world, is divided into 14 smaller regions (Windsward, Everfall, Brightwood, etc.).


The game runs 4 EC2 instances in each world (each New World server). Amazon Games calls these EC2 instances "remote entry points," or "REPs".

These EC2 instances act as application routers (Nginx/HAProxy, maybe?) and are the only public-facing instances. This is where security and resilience are handled.

There was no mention of any ALB or NLB, so my guesses are:

  1. Those EC2 instances sit behind an NLB — but then why do the instances have to be public if they are behind a public NLB?
  2. Those EC2 instances sit behind Route 53 using weighted, latency-based, or geolocation routing — but then why didn't Amazon Games leverage an ALB/NLB here and make the instances private?
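If the REPs do run something like Nginx as a plain TCP proxy, the configuration might look roughly like the sketch below. This is purely illustrative: Amazon Games has not published its routing setup, and the upstream addresses and port number are assumptions.

```nginx
# Hypothetical sketch of a REP proxying game traffic to private hub
# instances. All names, IPs, and the port are illustrative only.
stream {
    upstream hubs {
        least_conn;               # spread connections across hubs
        server 10.0.1.10:33435;   # hub instances on private IPs
        server 10.0.1.11:33435;
    }
    server {
        listen 33435;             # public-facing game port (assumed)
        proxy_pass hubs;
    }
}
```

A setup like this would explain why only the REPs need public IPs while the hubs stay private.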


Hubs handle the computation for a portion of the world. There are 7 hubs (EC2 instances).

Hubs are basically where the core game backend servers run.

In a single world (server), the hubs process:

  1. 2,500 players
  2. Around 7,000 AI entities
  3. Hundreds of thousands of objects

Overlaid on top of the Aeternum map is a series of grids. This is where the 7 hubs work together and spread the load.

Each hub picks up 2 pieces of the grid, but the 2 pieces are not adjacent, so if you move from grid cell 1 to grid cell 2 on the map, you will move from hub to hub (instance to instance).
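The keynote doesn't detail the partitioning scheme, but a minimal sketch of "14 cells across 7 hubs, two non-adjacent cells each" could look like this. The cell numbering and the modulo assignment are my assumptions, not the real algorithm.

```python
# Hypothetical grid-to-hub assignment: 14 grid cells spread across
# 7 hubs, each hub owning two cells that are far apart on the grid.
# The real partitioning scheme was not described in the keynote.
from collections import Counter

NUM_HUBS = 7
NUM_CELLS = 14

def hub_for_cell(cell: int) -> int:
    """Assign cell i to hub i % 7, so a hub's two cells are 7 apart."""
    return cell % NUM_HUBS

# Moving from cell 1 to the adjacent cell 2 crosses a hub boundary:
assert hub_for_cell(1) != hub_for_cell(2)

# Each hub ends up with exactly two cells:
counts = Counter(hub_for_cell(c) for c in range(NUM_CELLS))
assert all(n == 2 for n in counts.values())
```

Spreading non-adjacent cells across hubs would help balance load when players cluster in one area of the map.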

All the hub EC2 instances are stateless, which is good: if a few hubs fail, they can be replaced quickly.

Shared Instance Pool

The instance pool is where session-based gameplay happens, such as running an expedition or any other session-based mode in the game. Each session-based game mode claims one or more EC2 instances; when the session is over (a completed expedition, say), the instances return to the shared pool to be used by other players.
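The claim-and-release lifecycle described above can be sketched as a small pool abstraction. This is my own illustration: the class, method names, and instance IDs are invented, and the real system presumably drives EC2 through AWS APIs rather than an in-memory set.

```python
# Minimal sketch of a shared instance pool for session-based modes
# (expeditions, etc.). Everything here is illustrative.
class InstancePool:
    def __init__(self, instance_ids):
        self.available = set(instance_ids)
        self.claimed = set()

    def claim(self, n=1):
        """Claim n instances for a session, or raise if the pool is dry."""
        if len(self.available) < n:
            raise RuntimeError("pool exhausted")
        picked = {self.available.pop() for _ in range(n)}
        self.claimed |= picked
        return picked

    def release(self, instance_ids):
        """Return instances to the pool when the session ends."""
        self.claimed -= set(instance_ids)
        self.available |= set(instance_ids)

pool = InstancePool({"i-aaa", "i-bbb", "i-ccc"})
session = pool.claim(2)          # expedition starts
pool.release(session)            # expedition completes
assert len(pool.available) == 3  # all instances back in the pool
```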


The hubs themselves are stateless: all game state is written to Amazon DynamoDB, at around 800k writes every 30 seconds.
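That headline number works out to a sustained rate worth spelling out. The write-capacity estimate below assumes items of 1 KB or less (one standard write capacity unit per write), which the keynote did not confirm.

```python
# Back-of-the-envelope throughput from the keynote figure:
# ~800,000 writes every 30 seconds.
writes_per_30s = 800_000
writes_per_second = writes_per_30s / 30
print(round(writes_per_second))  # ~26,667 writes/second

# If every item were <= 1 KB, each standard write would cost one
# write capacity unit (WCU), so sustained WCUs would be of the same
# order. Actual item sizes are unknown; this is an assumption.
wcu_estimate = writes_per_second * 1  # 1 WCU per <=1 KB write
print(round(wcu_estimate))
```

Sustaining tens of thousands of writes per second per world is exactly the kind of load DynamoDB's partitioned design is built for.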

Data Analytics

New World logs 23M events per minute, pushed through Amazon Kinesis into Amazon S3 and then analyzed with Amazon Athena and other tools.
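To get a feel for that scale, here is the per-second rate and a rough lower bound on Kinesis Data Streams shards, using the documented limit of 1,000 records per second per shard. This ignores the 1 MB/s per-shard byte limit and any batching or aggregation, so the real shard count could differ substantially.

```python
# The 23M events/minute figure expressed per second, plus a rough
# lower bound on shards by record count alone (1,000 records/s per
# shard is the Kinesis Data Streams ingest limit per shard).
import math

events_per_minute = 23_000_000
events_per_second = events_per_minute / 60
print(round(events_per_second))  # ~383,333 events/second

min_shards = math.ceil(events_per_second / 1_000)
print(min_shards)  # at least ~384 shards by record count alone
```

In practice, producers often batch many events per Kinesis record, which would lower the shard count considerably.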

Multiple Amazon Games teams can use the collected data: a data analyst can discover which wolves have been followed the most, or which paths are most traveled. This data lets game designers figure out how players enjoy the game and change it in near real time based on what they learn.

Non-Core Gameplay

All the non-core gameplay services, such as creating a character, creating a company, and trading, run as serverless microservices using AWS Lambda and Amazon API Gateway.
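As an illustration of the pattern (not Amazon Games' actual code), one such microservice could be a Lambda handler behind API Gateway. The route, payload field, and validation rules below are all my assumptions.

```python
# Hypothetical sketch of a non-core serverless microservice: a Lambda
# handler (API Gateway proxy integration) validating a
# character-creation request. Field names and rules are invented.
import json

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    name = (body.get("characterName") or "").strip()
    if not (3 <= len(name) <= 20):
        return {"statusCode": 400,
                "body": json.dumps({"error": "invalid character name"})}
    # A real service would persist the character (likely to DynamoDB)
    # before responding.
    return {"statusCode": 201,
            "body": json.dumps({"characterName": name})}

# Local invocation with a fake API Gateway event:
resp = handler({"body": json.dumps({"characterName": "Aeternaut"})}, None)
assert resp["statusCode"] == 201
```

Keeping these flows off the game hubs means a spike in, say, trading-post traffic never competes with the real-time simulation for compute.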


There are many MMORPGs that live on AWS or other cloud providers, but I have never come across anything similar to New World, with an architecture that is this scalable yet this simple. As Dr. Werner said:

“This is truly an MMORPG born in the cloud, and it would only have been possible to actually run this in the cloud.”

Over to you

What do you think of the New World game architecture? And what would you change if you were the one maintaining this massive game?

Like this post? Consider following me on Medium (billhegazy). If you have questions or would like to reach out, add me on LinkedIn.

Originally published on January 13, 2022.