I have been using WordPress to run several web sites that I built and manage. It is a great open-source content management system that has grown significantly in capabilities over the years. However, it was originally developed to run on a single server. In other words, the database, data store, processing, HTML, security, logic – everything runs on a single server. Need more speed? Run it on a bigger server. That has been the solution for years.
But servers are expensive. When I originally built my WordPress sites I ran them on my own physical server at my office. This was not very scalable, so eventually I migrated them to a shared hosting server. This was better and easier to maintain than my own physical server and provided much better reliability and backup, but eventually it became too slow. So I paid my hosting provider for my own virtual private server. This was better, but after a couple of years my sites outgrew this option too, and performance slowed.
Faced with the option of paying even MORE each month for my own dedicated physical server at my hosting provider, I started looking into other options. My nephew, who works for AWS, introduced me to Amazon Web Services, and I started reading up on how it operates. I was very impressed, but I still could not understand how I was going to get more performance for WordPress through AWS.
I then found a white paper from AWS on how to re-architect WordPress for today’s new, scalable processing architecture based on “micro-services”. AHA! After reading this paper a light bulb went off in my head. “This makes a lot of sense!”
AWS offers several tools to speed up WordPress – and most other web platforms. You can just throw hardware at it – the expensive approach. Or you can cache oft-used files in Amazon’s CloudFront caching service, a quick and easy way to get more performance.
But the best long-term solution is to re-architect my WordPress sites to take advantage of the scalability and “elasticity” of the technology AWS offers. This requires more work, but it delivers nearly unlimited scalability at a fraction of the cost of “throw more hardware at it”.
The AWS white paper offers a detailed explanation of this approach, but in simplified form, here is my plan for migrating my web sites to AWS for increased performance and expandability.
Amazon Aurora Database
Step 1 is to move my databases to Amazon Aurora. This is a MySQL-compatible database architected for scalability and elasticity. Right now my database runs on the same server as – and competes for resources with – the rest of my web site. WordPress makes it really easy to use an external database.

With AWS I have created my own “virtual private cloud” (VPC). It is basically my “data center in the cloud”. My virtual servers, firewalls, load balancers, and my databases all exist in this VPC, accessible only to whom I authorize. But rather than dedicating one computer – or compute instance – to my SQL database, I just connect to Amazon’s Aurora database in my VPC. Amazon handles scalability, redundancy, and even replication. I can have one “write” instance of my database and up to fifteen “read” replicas. And since my web sites are not write-intensive, I can easily expand database performance by creating read replicas of my master database in multiple Amazon “availability zones”, spreading queries across copies of the data. I don’t think I will need this scalability right now, but it is really easy to implement once my data is in Amazon Aurora. And since Aurora is MySQL-compatible, I can easily move my data out later if I change hosting providers.
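Switching WordPress over is just a matter of pointing wp-config.php at the Aurora endpoint instead of the local server. A minimal sketch – the endpoint, database name, and credentials below are placeholders, not real values:

```php
<?php
// wp-config.php (excerpt) – point WordPress at the external Aurora cluster
// instead of a local MySQL server. All values below are placeholders.
define( 'DB_NAME',     'wordpress' );   // database name created in Aurora
define( 'DB_USER',     'wp_user' );     // database user
define( 'DB_PASSWORD', 'change-me' );   // database password
// The cluster endpoint handles writes; Aurora also provides a separate
// reader endpoint that spreads SELECT traffic across the read replicas.
define( 'DB_HOST', 'my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com' );
```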
Shared Storage of Images, Plugins, and Themes
The next way to scale my web sites is to create shared storage for my static content such as images, WordPress plugins, and themes. This is done easily using Amazon Elastic File System (EFS). It is sort of like having my own NFS file server in my data center, except that Amazon manages the whole thing. I don’t have to worry about adding drives, monitoring them, replacing them when they fail, or buying a bigger server when the file store runs out of space. Instead I just create an EFS volume with an IP address and DNS name, then point my “compute servers” (or compute instances, as AWS likes to call them) at this shared storage. In WordPress, this means mapping my /wp-content/ folder – the place where WordPress stores images, themes, and plugins – to this shared file system, as sketched below. So now I can have one, two, three, ten, or twenty compute instances processing inbound web requests, and they all work off a common database (Aurora) and a common image and file store (EFS).
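In practice the EFS volume is simply mounted over the wp-content path at the operating-system level, so WordPress itself often needs no changes. If the volume ends up mounted somewhere else, wp-config.php can remap the content folder – a minimal sketch, with a placeholder mount path and domain of my own choosing:

```php
<?php
// wp-config.php (excerpt) – only needed when the shared EFS volume is
// mounted somewhere other than the default wp-content location.
// The mount path and URL below are placeholders.
define( 'WP_CONTENT_DIR', '/mnt/efs/wp-content' );                 // EFS mount point
define( 'WP_CONTENT_URL', 'https://www.example.com/wp-content' );  // public URL
```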
Load Balancing
Of course, to spread the load across all these compute instances, I will need some way to balance and allocate the inbound web requests. That is where Elastic Load Balancing comes in. It receives the inbound requests for web pages and routes each one to a healthy compute instance, spreading the work evenly. My plan is to start with only two compute instances – which is double what I have now – and then scale as needed.
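One practical detail worth planning for: if the load balancer terminates HTTPS, each compute instance only sees plain HTTP behind it, and WordPress can get stuck in redirect loops. The usual fix is a small guard in wp-config.php – a sketch that assumes the balancer forwards the standard X-Forwarded-Proto header, which Elastic Load Balancing does by default:

```php
<?php
// wp-config.php (excerpt) – trust the load balancer's forwarded-protocol
// header so WordPress knows the original request arrived over HTTPS.
if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] )
     && 'https' === $_SERVER['HTTP_X_FORWARDED_PROTO'] ) {
    $_SERVER['HTTPS'] = 'on';
}
```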
One really cool thing about this is that I can scale my instances up and down automatically as needed, only using – and more importantly paying for – the amount of compute power I need, when I need it. This will be a bit later down the line for me, but it is pretty cool that AWS lets me increase power automatically as my load grows, and shrink capacity when load is low. Pretty darn cool.
Caching
I can also use Amazon CloudFront to cache my images, pages, and other static content closer to the end-users who are querying my web site. I don’t think this is going to be crucial just yet, since most of our clients are local. If we were getting queries from overseas, this would be a fantastic option, and it is easy to implement once I determine I need it.
AWS Systems Manager Parameter Store
A last piece of the puzzle was figuring out how to provide each compute instance with the parameters it needs to run properly. WordPress uses a wp-config.php file to tell it how to log into the database, where the files are located, and so on. I could store this on an EFS-mounted volume, but I think a better way is going to be AWS Systems Manager Parameter Store, AWS’s key/value service for exactly this kind of configuration. I can create keys such as user name, password, and database name, and then programmatically pull that information in at boot time using simple PHP commands. So I can create a custom wp-config.php file that is identical on every compute instance, yet queries AWS for the values it needs rather than storing them locally. And if I need to update that information, I just update it in Parameter Store.
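Here is a minimal sketch of that idea using the AWS SDK for PHP; the parameter names and region are placeholders I chose for illustration, not anything prescribed by AWS:

```php
<?php
// wp-config.php (excerpt) – fetch database settings from AWS Systems Manager
// Parameter Store instead of hard-coding them. Assumes the AWS SDK for PHP
// is installed via Composer and the instance role allows ssm:GetParameter.
// Parameter names and region are placeholders.
require __DIR__ . '/vendor/autoload.php';

use Aws\Ssm\SsmClient;

$ssm = new SsmClient([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// Read one parameter value, decrypting SecureString parameters.
function wp_param( SsmClient $ssm, string $name ): string {
    $result = $ssm->getParameter([
        'Name'           => $name,
        'WithDecryption' => true,
    ]);
    return $result['Parameter']['Value'];
}

define( 'DB_NAME',     wp_param( $ssm, '/wordpress/db_name' ) );
define( 'DB_USER',     wp_param( $ssm, '/wordpress/db_user' ) );
define( 'DB_PASSWORD', wp_param( $ssm, '/wordpress/db_password' ) );
define( 'DB_HOST',     wp_param( $ssm, '/wordpress/db_host' ) );
```

In a real deployment I would cache these values locally (for example in APCu) so that every page view does not make four API calls; the sketch leaves that out for clarity.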
Summary
There are some additional AWS services I can implement to further speed up processing – such as caching my database query results. But the above approach is what I plan to implement first, and then scale as needed. I am excited that with a bit of re-architecting of my WordPress site I can build it so that it scales in performance as I need it, and I only pay for the additional performance when and if I need it. Pretty cool beans if you ask me!