
Archive for the ‘Disaster Recovery’ Category

 

Architecting for High Availability in the Cloud

Tuesday, July 22nd, 2014 by

An introduction to multi-cloud distributed application architecture

In this blog, we’ll explore how to architect a highly available (HA) distributed application in the cloud. For those new to the concept, high availability refers to the availability of the application cluster as a whole: the ability to fail over or scale out horizontally to meet demand is what keeps the application available. Examples of applications that benefit from HA architectures are database applications, file-sharing networks, social applications, health-monitoring applications, and eCommerce websites. So, where do you start? The easiest way to understand the concepts is simply to walk through the 3 steps of a web application setup in the cloud.

Step 1: Setting up a distributed, fault-tolerant web application architecture

In general, the application architecture can be pretty simple: perhaps just a load-balanced web front end running on multiple servers and maybe a NoSQL database like Cassandra. When you’re developing, you can get away with a single server, but once you move into production you’ll want to snapshot your web front end and spread the application across multiple servers. This approach lets you balance traffic and scale out the web front end as needed. In GoGrid, you can do this for free using our Dynamic Load Balancers. Point and click to provision the servers as needed, and then point the load balancer(s) to those servers. The process is simple, so setting up a load-balanced web front end should only take a few minutes. Any data captured or used by the servers will of course be stored in the Cassandra cluster, which is already designed to be HA.

[Diagram: load-balanced web front end]
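The Dynamic Load Balancer itself is point-and-click, but the underlying idea it implements (rotating requests across healthy front-end servers and skipping failed ones) can be sketched in a few lines of Python. The server names and health states below are hypothetical:

```python
from itertools import cycle

# Hypothetical server pool; in GoGrid you'd point the Dynamic Load
# Balancer at your provisioned front-end servers instead.
SERVERS = ["web-1", "web-2", "web-3"]

def round_robin(servers, healthy):
    """Yield the next healthy server in rotation, skipping failed ones."""
    pool = cycle(servers)
    while True:
        for _ in range(len(servers)):
            server = next(pool)
            if healthy.get(server, False):
                yield server
                break
        else:
            raise RuntimeError("no healthy servers available")

# web-2 is marked unhealthy, so traffic alternates between web-1 and web-3.
health = {"web-1": True, "web-2": False, "web-3": True}
lb = round_robin(SERVERS, health)
first_four = [next(lb) for _ in range(4)]
# first_four == ["web-1", "web-3", "web-1", "web-3"]
```

Because the failed server is simply skipped on each pass, adding a snapshot-cloned server back into the pool restores full capacity without any client-visible change.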

Deploying the Cassandra cluster

In GoGrid, you can use our 1-Button Deploy™ technology to set up the Cassandra cluster in about 10 minutes. This will provision the cluster for your database. Cassandra is built to be HA, so if one server fails, the load is distributed across the cluster and your application isn’t impacted. Below is a sample Cassandra cluster. A minimal deployment has 3 nodes to ensure HA, and the cluster is connected via the private VLAN. It’s a good idea to firewall the database servers and eliminate connectivity to the public VLAN. With our production 1-Button Deploy™ solution, the cluster is configured to include a firewall on-demand (for free). In another blog post I’ll discuss how to secure the entire environment: setting up firewalls around your database and web application as well as working with IDS and IPS monitoring tools and DDoS mitigation services. For the moment, however, your database and web application clusters would look something like this:

[Diagram: sample 3-node Cassandra cluster and web application tier on the private VLAN]
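Cassandra’s HA comes from replication: with a replication factor of 3 on a 3-node cluster, every row lives on every node, so losing one node still leaves two replicas. Here is a minimal sketch of token-ring replica placement; the node names and tokens are illustrative, and the hashing is simplified from what Cassandra actually does (Cassandra uses a much larger token space and Murmur3 by default):

```python
import hashlib
from bisect import bisect_right

# Illustrative 3-node ring with simplified tokens.
NODES = {"cass-1": 0, "cass-2": 100, "cass-3": 200}  # node -> ring token

def replicas(key, nodes, rf=3):
    """Return the rf nodes responsible for key, walking the ring clockwise."""
    ring = sorted(nodes.items(), key=lambda kv: kv[1])  # (node, token) pairs
    token = int(hashlib.md5(key.encode()).hexdigest(), 16) % 300
    tokens = [t for _, t in ring]
    start = bisect_right(tokens, token) % len(ring)
    return [ring[(start + i) % len(ring)][0] for i in range(rf)]

# With rf equal to the cluster size, every key is stored on all 3 nodes,
# so any single node can fail without losing data.
owners = replicas("user:42", NODES)
```

This is why 3 nodes is the sensible minimum: with fewer nodes than the replication factor, a single failure would drop you below the redundancy the application expects.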


Public Cloud Appealing to Those Needing Disaster Recovery

Friday, May 9th, 2014 by

These days, businesses are aggregating an incredible amount of data from a lot of different silos. Whether they’re using the information to create enhanced marketing campaigns, conduct research for product development, or look for a competitive edge in the market, these companies are taking whatever steps are necessary to protect that data. Between data breaches and natural occurrences like severe weather that can cause companies to lose their data, many are moving their disaster recovery initiatives to cloud servers.

A broken disk.

A practical solution
One of the most popular deployment options, public cloud models offer companies the opportunity to back up their data in encrypted, secure environments that can be accessed whenever it’s convenient. However, businesses are looking to take this capability to the next level. Redmond Channel Partner referenced a study sponsored by Microsoft titled “Cloud Backup and Disaster Recovery Meets Next-Generation Database Demands,” which was conducted between December 2013 and February 2014 by Forrester Consulting.

The research firm polled 209 organizations based in Asia, Europe, and North America, with 62 percent of survey participants consisting of large-scale enterprise IT managers. Many of the businesses reported having mission-critical databases larger than 10 terabytes. Respondents claimed that some of the top reasons for using public cloud computing models for backups included saving money on storage (61 percent) and reducing administration expenses (50 percent).

Forrester noted that a fair number of enterprises often omit encrypting their database backups due to the complexity involved and the possibility of data corruption. A number of participants also acknowledged they neglect to conduct tests regarding their disaster recovery capabilities.

The available opportunities
Despite these drawbacks, Forrester’s study showed that cloud-based backup and disaster recovery (DR) models have matured over the past 4 years. In addition, there’s the option of using a hybrid approach that involves combining on-premise DR solutions with public cloud storage. For example, an enterprise could keep all its data in in-house databases and orchestrate a system that would either duplicate or transfer all data into a cloud storage environment in the event of a problem.


Be Prepared with a Solid Cloud Infrastructure

Thursday, April 10th, 2014 by

The more Big Data enterprises continue to amass, the more potential risk is involved. It would be one matter if the data were simply raw material without any clearly defined meaning; however, data analytics tools, combined with the expertise of tech-savvy employees, allow businesses to harvest profit-driving, actionable digital information.


Compared to on-premise data centers, cloud computing offers multiple disaster recovery models.

Whether the risk comes from a cybercriminal who gains access to a database or a storm that cuts power, it’s essential for enterprises to have a solid disaster recovery plan in place. Because on-premise data centers are prone to outages during catastrophic natural events, cloud servers provide a more stable option for companies requiring constant access to their data. Numerous deployment models exist for these systems, most of them shaped by how users interact with them.

How the cloud can promote disaster recovery 
According to InformationWeek’s 2014 State of Enterprise Storage Survey, only 41 percent of respondents stated they have a disaster recovery (DR) and business continuity protocol and regularly test it. Although this finding reveals a lack of preparedness among the remaining 59 percent, the study showed that business leaders were beginning to see the big picture and place their confidence in cloud applications.

The source noted that cloud infrastructure and Software-as-a-Service (SaaS) automation software let organizations deploy optimal DR without the hassle associated with a conventional plan. Traditionally, companies backed up their data on physical disks and shipped them to storage facilities. This method is no longer workable because many enterprises are constantly amassing and refining new data points. For example, Netflix collects an incredible amount of specific digital information on its subscribers through its rating system and then uses it to recommend new viewing options.

The news source also acknowledged that the issue isn’t just about recovering data lost during the outage, but about being able to run the programs that process and interact with that information. In fact, due to the complexity of these infrastructures, many cloud hosts offer DR-as-a-Service.


Connect from Anywhere to the Cloud

Thursday, August 29th, 2013 by


The cloud is an important part of many companies’ IT strategies. However, many companies have already made a large investment in their own data center infrastructure. How can they take advantage of all the cloud has to offer without abandoning that investment? The answer is Cloud Bridge – private, dedicated access to the GoGrid cloud from anywhere.

Connecting to the Cloud

Cloud Bridge is your access point into the GoGrid cloud. It supports Layer 2 connections from cross-connects within a partner data center or with carrier connections from just about anywhere. Cloud Bridge is designed to be simple – just select the port speed you prefer: 100 Mbps, 1 Gbps, or 10 Gbps (US-East-1 only). There’s also no long-term commitment required to use Cloud Bridge – pay only for what you use and cancel anytime. Traffic across Cloud Bridge is unmetered, so you only pay for access to the port. You also have the option of selecting a redundant setup: Purchase two ports in a redundant configuration and you’ll get an aggregate link. Not only will your traffic have physical redundancy, but you’ll also get all the speed available to both ports (for example, 2 Gbps of bandwidth with redundant 1-Gbps ports selected). You can access Cloud Bridge from equipment that you have in GoGrid’s Co-Location Service, a partner data center (like Equinix via a cross-connect), or from your data center using one of your carriers or with one of our partner resellers.

Why Cloud Bridge

Customers that want to use Cloud Bridge are typically looking to solve a few common use cases.

Geographic Load Balancing and Disaster Recovery Best Practices for Global Websites

Wednesday, August 21st, 2013 by


If you’re running a global website, you’ll want to reduce latency for customers around the world. GoGrid offers the global infrastructure and robust network to support this goal, and with Geographic Load Balancing, GoGrid can also improve your website’s performance worldwide. Here are recommended best practices for building a reliable, high-performing global website.

Deploying the Correct Infrastructure Setup

Global websites still require local infrastructure to be truly effective in reducing latency. GoGrid has data centers around the world where you can deploy infrastructure to better serve your customers. Deploy infrastructure to the Western United States (in our US-West-1 data center), the Eastern United States (in US-East-1), and Europe (in EU-West-1). Although your specific configuration is unique to your setup, you’ll most likely have database and web servers in each of these data centers.
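Geographic load balancing builds on this layout by steering each client to the nearest deployment. Here is a minimal sketch of that routing decision; the region-to-data-center map is hypothetical, and a real geo load balancer makes this choice at the DNS layer rather than in application code:

```python
# Illustrative mapping from deployment region to GoGrid data center.
DATA_CENTERS = {"us-west": "US-West-1", "us-east": "US-East-1", "eu": "EU-West-1"}

# Rough, hypothetical mapping from client region to nearest deployment.
NEAREST = {
    "california": "us-west", "oregon": "us-west",
    "virginia": "us-east", "new-york": "us-east",
    "germany": "eu", "france": "eu",
}

def route(client_region, default="us-east"):
    """Pick the data center closest to the client, falling back to a default."""
    return DATA_CENTERS[NEAREST.get(client_region, default)]
```

The fallback matters for disaster recovery as well: if one data center goes dark, the same lookup can be repointed so its regions resolve to a surviving deployment.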

In addition, you’ll want to keep your servers in sync. One option between US-West-1 and US-East-1 is to use Cloud Link, a dedicated, private line between our data centers. This connectivity makes syncing your servers secure and easy. Once you have your back end in place, you’ll want to configure your front end.

Geographic Load Balancing
