Every technology company starts with a great idea, and the design decisions you make in the early stages of an application can have a long-term impact—they can make or break both the product and the company. At GoGrid, we’ve helped a lot of customers architect applications for the cloud, and along the way we’ve learned a thing or two about the decisions you need to make. Here are 3 key questions to help you get started.
1. Traditional data center or cloud infrastructure (IaaS)?
One of the first and most important decisions is whether to build in a traditional data center or architect in the cloud with an infrastructure-as-a-service (IaaS) provider. A traditional data center gives you absolute control over the hardware and software, but it also makes you responsible for buying, housing, and maintaining that hardware. Those costs add up quickly, and moving to the cloud lets you avoid them entirely. The GoGrid, Amazon Web Services, and Microsoft clouds are all maintained by professionals, allowing you to focus on your application rather than the hardware. An IaaS provider also gives you the flexibility to scale resources up and down as needed, and in most cases, scaling an application horizontally is preferable to scaling it vertically. That flexibility matters most when your application reaches global proportions and you need specialized features like global load balancing to minimize application latency, or support for regional application variations (think multiple languages for global applications).
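The core idea behind global load balancing is simple: route each request to the region that can answer it fastest. Here’s a minimal sketch of that selection logic; the region names and latency figures are illustrative assumptions, not any provider’s real API.

```python
# Hypothetical measured round-trip latencies (ms) from a given client.
REGIONS = {
    "us-west": 24.0,
    "us-east": 71.5,
    "eu-west": 132.0,
}

def nearest_region(latencies_ms):
    """Return the region with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

print(nearest_region(REGIONS))  # -> us-west
```

A real global load balancer layers health checks and geo-DNS on top of this, but the routing decision reduces to the same comparison.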
2. Where does multi-tenancy reside?
In most cases, you’ll also need to decide where multi-tenancy resides. In a traditional data center, you might take a shortcut and put each customer on a separate machine. That would be a mistake for a few reasons. First, applications no longer run on a single box that’s scaled up, so isolating users to individual machines no longer makes much sense. Worse, that approach creates a management nightmare: you’d be monitoring thousands of machines as your user base grows. It also locks you into a particular vendor or service provider, which you probably don’t want. So where should multi-tenancy reside? The answer is easy: in the application layer, above the virtual machine or server layer. By architecting multi-tenancy into the application layer, you’re free from lock-in and able to scale resources as needed, avoiding costly over-provisioning. Customers can also scale beyond the resource constraints of a single server. Equally important, this approach lets you architect failover scenarios that ensure high availability and consistency even if the underlying platform has an issue.
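In practice, application-layer multi-tenancy usually means every record carries a tenant identifier and the data-access layer scopes every query to the calling tenant, so tenants share tables and servers rather than machines. A minimal sketch (the schema and table names are hypothetical):

```python
import sqlite3

# In-memory database standing in for a shared, multi-tenant data store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("acme", "widget"), ("acme", "gadget"), ("globex", "sprocket")])

def orders_for(tenant_id):
    """Every query is filtered by tenant: isolation lives in the app layer,
    not in dedicated per-customer machines."""
    rows = db.execute("SELECT item FROM orders WHERE tenant_id = ?", (tenant_id,))
    return [item for (item,) in rows]

print(orders_for("acme"))  # -> ['widget', 'gadget']
```

Because no tenant is pinned to a particular server, the same code runs unchanged whether you’re serving ten customers from one instance or ten thousand from a fleet.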
Now that you understand why multi-tenancy belongs in the application layer, you still need to choose between cloud and dedicated servers. This decision is critical, and in most cases, the flexible, elastic benefits of virtual machines far outweigh any performance gain from dedicated hardware. There are numerous articles about the benefits (TheResearchpedia lays out a few of them, for example), but the 3 key points I’d like to make are these:
- IaaS reduces TCO, allowing you to put more IT muscle into your application versus managing the infrastructure.
- IaaS enables application platform infrastructure to scale horizontally as needed. The “just-in-time” nature of virtual compute, network, and storage resources lets application architects do things that simply aren’t possible with traditional hardware solutions—at least not without paying a lot more than you need to pay.
- IaaS allows for a flexible, global offering without the need to build a global data center footprint.
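The “just-in-time” scaling in the second point boils down to a control loop: watch load, add a server when it’s high, remove one when it’s low. A minimal sketch of such a policy (the thresholds and minimum fleet size are illustrative assumptions, not any provider’s real autoscaling rules):

```python
def desired_servers(current, avg_cpu,
                    scale_up_at=0.75, scale_down_at=0.25, minimum=2):
    """Decide the next fleet size from average CPU utilization.

    Add a server under load, release one when idle, and never drop
    below the minimum needed for redundancy.
    """
    if avg_cpu > scale_up_at:
        return current + 1
    if avg_cpu < scale_down_at and current > minimum:
        return current - 1
    return current

print(desired_servers(3, avg_cpu=0.90))  # -> 4 (scale out under load)
print(desired_servers(3, avg_cpu=0.10))  # -> 2 (scale in when idle)
```

With traditional hardware, `current + 1` is a purchase order and a rack visit; with IaaS, it’s an API call, which is why this style of elasticity is economical only in the cloud.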
3. Do I need to design for failover?
The short answer is “Yes!” because you can’t afford not to. Something will eventually fail, and you need to design around that eventuality by integrating failover scenarios from the start. Without cloud infrastructure, building that redundancy could get very costly; with the cloud, it’s easier than ever to do on a budget.
What is a failure? I think of it as anything that causes a service interruption. In the cloud, service interruptions can extend beyond a single piece of hardware or data center to an entire region or geographic zone. Mother Nature or other circumstances can sometimes make a whole group of resources unreachable (remember Hurricane Sandy). Designing for failover means that customers running a distributed application in multiple geographic zones can fail over from one zone to another if service is interrupted for any reason. With today’s public cloud infrastructure, you can design for high availability (HA) from the start by leveraging load balancing and monitoring solutions to ensure that when a problem occurs, your customers aren’t impacted. Pair the dynamic nature of cloud infrastructure with clustered database architectures and caching services such as a CDN and Memcache, and you can deliver 99.99% uptime guarantees for your applications, just like we do for our cloud.
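Stripped to its essentials, zone failover is a health check plus an ordered fallback list: probe the preferred zone, and route traffic to the next one when it stops responding. A minimal sketch of that routing decision; the zone names and the health-probe callback are hypothetical stand-ins for a real monitoring and load-balancing service.

```python
ZONES = ["us-east", "us-west"]  # ordered by preference

def pick_zone(is_healthy, zones=ZONES):
    """Return the first zone whose health probe passes; fall back down
    the preference list when the primary is unreachable."""
    for zone in zones:
        if is_healthy(zone):
            return zone
    raise RuntimeError("no healthy zone available")

# Simulate a Hurricane-Sandy-style outage taking out the primary zone:
down = {"us-east"}
print(pick_zone(lambda zone: zone not in down))  # -> us-west
```

In production this decision typically lives in a global load balancer or DNS failover service rather than application code, but the logic it executes is the same.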
So now you have the answers to 3 of the most common questions we receive regarding architecting for HA in the cloud. For more information about designing for failover, integrating caching services, or architecting application platforms for graceful failover, check out the white paper: “Three Critical Steps to Engineering Highly Available Applications in the Cloud.”
Kole Hicks - July 22, 2014