How to Succeed at Container Migrations on AWS
When making significant architectural changes, it is essential to assess the impact on supportability. With a change like moving legacy applications to containers, or upgrading your existing containerized applications and microservices, it is easy to focus only on the technical impact. However, being ready to support and maintain the new infrastructure, along with the operational impact, should be at the forefront of the migration process.
So your organization has some old-school application stacks that you’re ready to move to a container platform. We want to avoid the “one size fits all” approach as much as we want to avoid the “latest and greatest” approach.
Embrace doing it your own way. This can mean partial, hybrid, or staggered migration strategies that fit your budget, your roadmap, and most importantly - your ability to support the result. As we said before, we want to be careful with the one-size-fits-all mentality, because some applications genuinely benefit from partial or staggered migrations.
Let’s say you have an application that keeps track of market-based pricing and is built entirely in house. It’s a bit of a bespoke application that encodes all of the nuances of your pricing team’s strategy and the culture you have built over time. However, it was built in an “old school” manner and runs as a monolith on an autoscaling fleet of Linux EC2 instances. We have an excellent candidate for containerization, but is our team prepared to rip the band-aid off? To get a holistic view, we need to ask:
1. What are the common support needs for this application? What does our cloud team spend most of their time doing for this particular AWS-hosted application? Some examples:
   a. Adding and maintaining storage (infrastructure)
   b. Observability and alerting maintenance (app / infrastructure)
   c. Database issues (app)
   d. Scaling issues (infrastructure)
2. Will the support needs drastically change when moving to containers? (The answer is almost always yes, unless everything you identified in step 1 was application specific.)
3. Is my cloud infrastructure team prepared for this type of migration?
4. Are any other support teams prepared to immediately support this application post-migration?
In this scenario, you may be tempted to go straight to the most sophisticated container architecture you can, rather than risk having to re-migrate later. Why spin up Docker running directly on your Linux EC2s when you can go straight to Amazon Elastic Kubernetes Service (EKS)? Isn’t EKS bigger, better, newer, faster, and so on? Let’s take a look.
Although something like K8s for container orchestration may be a bit more “advanced”, it is much more of an overhaul and replatforming than leveraging part of your existing infrastructure. If your team is prepared to support this, and you have the budget to run parallel stacks until a full cutover, jumping from a legacy monolith to a fully managed K8s service can be a huge benefit. That is a big “if”, however: your team has to be ready to support it. Ask the following questions.
Are we comfortable with K8s networking and setting up ingress controllers?
Are we ok with some of the infrastructure being “obscured” meaning it is fully managed, or would it benefit our scenario to have more control down to the server level? Down to the container platform level?
Are we comfortable with other K8s specifics like storage classes, advanced deployment tooling like Helm charts, and other nuances?
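To make those questions concrete, here is a minimal sketch of the kind of objects your team would be inspecting day to day, assuming the official kubernetes Python client and a kubeconfig that already points at a cluster; nothing in it is specific to the scenario above.

```python
# Minimal readiness sketch using the official `kubernetes` Python client
# (pip install kubernetes). Assumes a kubeconfig already pointing at the
# cluster; the output is simply whatever exists in your cluster.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in a pod

# Storage classes: what actually backs PersistentVolumeClaims here?
for sc in client.StorageV1Api().list_storage_class().items:
    print(f"StorageClass {sc.metadata.name}: provisioner={sc.provisioner}")

# Ingresses: is an ingress controller routing traffic, and for which hosts?
for ing in client.NetworkingV1Api().list_ingress_for_all_namespaces().items:
    hosts = [rule.host for rule in (ing.spec.rules or [])]
    print(f"Ingress {ing.metadata.namespace}/{ing.metadata.name}: hosts={hosts}")
```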
Doing a simpler Docker-on-Linux migration in this scenario would let you leverage all of your existing Linux EC2s, storage, and networking. The only new thing your team would be expected to support is Docker administration. In the EKS scenario, your team would need to be prepared for container administration and much more. It’s easy to get into the mindset that you’re moving from one legacy setup to another, but really you’re iterating on your infrastructure just like you do on the application itself.
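For contrast, here is a rough sketch of what that “Docker administration” on your existing EC2s amounts to, using the Docker SDK for Python; the image name, container name, and port are hypothetical placeholders, not anything from the scenario above.

```python
# Rough sketch of running one containerized service on an existing Linux EC2
# host with the Docker SDK for Python (pip install docker). Image name, port,
# and container name are hypothetical.
import docker

client = docker.from_env()  # talks to the local Docker daemon on the host

container = client.containers.run(
    "pricing-service:latest",          # hypothetical image built from the monolith
    name="pricing-service",
    detach=True,
    ports={"8080/tcp": 8080},          # expose the app on the host as before
    restart_policy={"Name": "always"}, # survive daemon and host restarts
)
print(container.id, container.status)
```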
Let’s say you opted to break your monolithic application into some microservices running in Docker containers on Linux EC2s. Maybe it wasn’t the latest and greatest, but it was low cost, highly supportable, and most of all it was fast to move. Now that your team is comfortable with Docker administration, and your container images are ready to run anywhere, maybe it’s time to move onto a container platform. There are plenty of options, but let’s look at ECS and EKS. We’ll discuss how to choose between the two, and why we would want to move to one at all.
First, the obvious advantage of moving to a managed service is less maintenance. On a container platform, we no longer have to worry about OS-level patching, and we can keep our containers truly separate, not just separate on the same machine. In the first example, our team was already comfortable handling those things for the time being. It doesn’t make sense to move to a service to avoid maintenance only to introduce new maintenance. For example, in our first scenario our team may not have been comfortable troubleshooting container images, much less K8s networking. Now that we have spent some time with Docker administration, it makes more sense to leave behind Linux patching and pick up K8s networking. We want a smooth transition between what we are dropping and what we are picking up.
Now for choosing between ECS and EKS. Time for the simplest answer: do you want to continue with essentially the same support model from scenario one but drop the EC2-specific needs like Linux patching? ECS may be a good candidate. ECS also offers EC2 or Fargate (serverless compute) launch types, so you can choose how much control you keep over the infrastructure versus how much AWS handles. Do you expect more containerization complexity, and do you want K8s-specific controls and portability to other Kubernetes platforms? EKS may be a good candidate here, though more commonly it is the choice when K8s is already in place somewhere else. One significant draw for EKS is that Kubernetes is a standardized tool. EKS is simply the AWS way of hosting Kubernetes deployments. If your configuration runs, it runs, meaning you can drop your K8s deployment into any private cloud with K8s installed and configured, or use another public cloud’s K8s service, with little to no modification.
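As a hedged sketch of what the ECS-on-Fargate model looks like in practice, the boto3 calls below register a task definition and run it on Fargate. The cluster name, image URI, IAM role, and subnet are placeholders, and switching the launch type to EC2 is the knob for taking back control of the underlying hosts.

```python
# Hedged sketch of ECS on Fargate with boto3. Cluster name, image URI, IAM
# role, and subnet are placeholders; region and networking will differ.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a task definition. With the Fargate launch type, AWS owns the host
# OS, so there are no instances left to patch.
task_def = ecs.register_task_definition(
    family="pricing-service",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="2048",     # 2 vCPU
    memory="8192",  # 8 GB, mirroring the sizing in the cost example below
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "pricing-service",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/pricing-service:latest",
        "portMappings": [{"containerPort": 8080}],
        "essential": True,
    }],
)

# Run the task. launchType="EC2", backed by your own container instances, is
# the alternative when you want control down to the server level.
ecs.run_task(
    cluster="pricing-cluster",
    launchType="FARGATE",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],
        "assignPublicIp": "ENABLED",
    }},
)
```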
Let’s nuance the discussion a bit further to suit your organization in particular. Is this application going to scale not only in workload size but in complexity? Many lean towards Kubernetes-native tooling and EKS in this situation. Scaling your compute can mean scaling your network and underlying infrastructure, and EKS can be better for highly complex containerized deployments because it has orchestration tools you won’t have in ECS. If your application’s maintenance and scaling don’t require much more than the container itself, or you’re pairing with other AWS services or outside infrastructure, ECS can be a better option.
Consider the size and scope of your current teams. If your organization already has a dedicated networking team, they would be expected to learn K8s-specific networking (it is indeed specific to K8s), whereas a team that is new to AWS in general may be better suited to start with ECS. The same goes for other specialized teams such as database engineering, data engineering, and so forth. ECS is something a generalist infrastructure team would typically be responsible for deploying, but be careful of taking the same approach with EKS; it has many specialized components that are better suited to distributed support teams. Make sure the support expectations are clear in either scenario. What types of events warrant a support response? Are there basic triage actions already documented? Which team supports which category of events? What is their escalation path, and who ultimately owns the operation of the deployments? Having answers to these support questions before building any infrastructure helps ensure the success of the application stack when something breaks.
Now for budgeting and cost. Here is a sample breakdown of a small deployment. Two EKS clusters running one task for 12 hours a day on Fargate with 2 vCPUs, 40 GB of storage, and 8 GB of memory cost $189.34/month, or $2,272.08/year. Fargate can always be swapped for EC2, trading the serverless compute model for managed compute instances that you run yourself. The exact same deployment on ECS would only cost you $43.43/month, or $520.08 a year. The difference is simply that EKS clusters have a base cost per cluster, whereas on ECS you’re only charged for the Fargate compute. In either case you would also pay for ECR or some other container registry, since you need a way of fetching container images, but at around $10/month per 100 GB of storage that is still much cheaper even with large container images. There are various other costs you can incur, such as data transfer and DNS names in Route 53 ($0.50/month, but highly variable depending on traffic).
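To show where figures like these come from, here is a back-of-the-envelope sketch. The per-hour rates are assumed us-east-1 list prices at the time of writing, so treat the AWS pricing calculator as the source of truth.

```python
# Back-of-the-envelope version of the comparison above. The rates below are
# assumed us-east-1 list prices (Fargate: $0.04048 per vCPU-hour, $0.004445
# per GB-hour of memory; EKS: $0.10 per cluster-hour) and may have changed.
HOURS_PER_MONTH = 730      # the EKS cluster fee accrues around the clock
TASK_HOURS = 12 * 30       # one task running 12 hours a day

vcpus, memory_gb = 2, 8
fargate_compute = TASK_HOURS * (vcpus * 0.04048 + memory_gb * 0.004445)
eks_cluster_fee = 2 * HOURS_PER_MONTH * 0.10   # two clusters, billed hourly

print(f"ECS on Fargate (compute only): ~${fargate_compute:.2f}/month")
print(f"EKS on Fargate:                ~${fargate_compute + eks_cluster_fee:.2f}/month")
# Prints roughly $42 vs $188/month; the small gap from the figures above is
# mostly the 40 GB of task storage, which this sketch ignores.
```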
Although moving to containers, or upgrading your existing container setup to the most advanced architecture available, can be a huge benefit, it may be better for your organization to take a staggered approach. Take a look at how you can leverage your existing infrastructure and create a hybrid architecture with room to grow. Your team (and budget) may thank you. Container platforms pay off when they are leveraged properly.