How to REALLY scale Magento on AWS
Cloud platforms like AWS are a great fit for Magento merchants because they can scale infrastructure dynamically. This helps retailers keep their sites running during peak shopping periods like Black Friday and Cyber Monday.
Would you be surprised to know that even platforms like AWS have limits on how fast you can scale?
If you’re a merchant who sees dramatic traffic spikes during peak periods (think 20x or more), scaling AWS can still be challenging. Fortunately, AWS’ Elastic Load Balancer can help.
AWS’ Elastic Load Balancer is a managed load balancing service that allows you to:
- Offload SSL requests from your servers.
- Configure custom health checks and cookie persistence.
- Automatically add new EC2 instances to meet growing demand.
- Balance across multiple availability zones within a region to provide redundancy. (An ELB is regional; cross-region failover needs DNS-level routing, such as Route 53.)
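As a rough sketch, the first three capabilities above map onto the AWS CLI something like this. The load balancer name, Auto Scaling group name, health-check path, and sizing values are all hypothetical placeholders, not settings from this post:

```shell
# Health check: poll a status page on each instance (path is a placeholder).
aws elb configure-health-check \
  --load-balancer-name my-magento-elb \
  --health-check Target=HTTP:80/health_check.php,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=4

# Cookie persistence: keep a shopper pinned to one instance for 5 minutes.
aws elb create-lb-cookie-stickiness-policy \
  --load-balancer-name my-magento-elb \
  --policy-name magento-sticky \
  --cookie-expiration-period 300

# Auto Scaling: instances launched by this group register with the ELB automatically.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name magento-web-asg \
  --launch-configuration-name magento-web-lc \
  --min-size 2 --max-size 20 \
  --availability-zones us-east-1a us-east-1b \
  --load-balancer-names my-magento-elb
```

The right thresholds and group sizes depend entirely on your catalog, caching setup, and traffic profile, so treat these values as a starting point for your own load testing.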
Out of the box, an Elastic Load Balancer can only scale its capacity up by about 50% every five minutes. For those used to bare metal infrastructure, even that pace is remarkable.
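To see why that rate can still fall short, here is a back-of-the-envelope sketch. It assumes capacity compounds at 50% per five-minute window, which is a simplification of how ELB scaling actually behaves:

```python
import math

GROWTH = 1.5        # ~50% capacity growth per window (figure from the post)
INTERVAL_MIN = 5    # minutes per scaling window

def minutes_to_scale(multiplier):
    """Minutes for ELB capacity to reach `multiplier` x its baseline,
    assuming it compounds by GROWTH once per INTERVAL_MIN window."""
    intervals = math.ceil(math.log(multiplier) / math.log(GROWTH))
    return intervals * INTERVAL_MIN

print(minutes_to_scale(20))  # a 20x spike -> 40
```

Under those assumptions, a 20x spike takes around 40 minutes of organic ELB scaling to absorb. A flash sale can hit its peak far faster than that, which is where pre-warming comes in.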
But what if your site traffic grows even faster than this?
You’ll need to “pre-warm” your Elastic Load Balancer. First, work out when you expect the spike and roughly how large it will be. Take that information to the AWS support team, and they can manually provision your Elastic Load Balancer with the appropriate capacity ahead of time.
If you work with an AWS managed services provider like Tenzing, it’s even easier. We will help you gather the necessary information and work directly with the AWS support team to get you set up.
Once everything is in place, your Magento store will be able to handle the traffic spike without a problem and your customer experience will be protected.
If you’re interested in learning more about pre-warming your Elastic Load Balancer, check out AWS’ ELB Best Practices. To find out more about how Tenzing can help you manage Magento on AWS, visit our site or contact us.
Thanks to James Pate, our Deployment Coordinator and Change Manager, for contributing to this post.