Archive for the ‘IaaS’ Category

 

Comparing Cloud Infrastructure Options for Running NoSQL Workloads

Friday, April 11th, 2014

A walk through in-memory, general compute, and mass storage options for Cassandra, MongoDB, Riak, and HBase workloads

I recently had the pleasure of attending Cassandra Tech Day in San Jose, a developer-focused event where people were learning about various options for deploying Cassandra clusters. As it turns out, there was a lot of buzz surrounding the new in-memory option for Cassandra and the use cases for it. This interest got me thinking about how to map the options customers have for running Big Data across clouds.

For a specific workload, NoSQL customers may want to have the following:

1. Access to mass storage servers for files and objects (not to be confused with block storage): on-demand access to terabytes of raw spinning-disk volumes for running a large storage array (think a storage hub for Hadoop/HBase, Cassandra, or MongoDB).

2. Access to high-RAM options for running in-memory workloads with the fastest possible response times: the kind you’d need when running the in-memory version of Cassandra, or when running Riak or Redis entirely in memory.

3. Access to high-performance SSDs to run balanced workloads. Think about what happens after a batch operation: if you’re relating information back to a product schema, you may want to push that data into something like PostgreSQL, SQL Server, or even MySQL and have access to block storage.

4. Access to general-purpose instances for dev and test, or for workloads that don’t have specific performance SLAs. This flexibility is particularly important when you’re trialing and evaluating a variety of applications. GoGrid’s customers, for example, leverage our 1-Button Deploy™ technology to quickly spin up dev clusters of common NoSQL solutions, from MongoDB to Cassandra, Riak, and HBase.
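As a rough illustration of how you might map workloads to the four options above, here’s a small decision sketch. The profile names, specs, and selection logic are hypothetical and illustrative only, not an actual provider catalog:

```python
# Hypothetical mapping of NoSQL workload profiles to cloud instance classes.
# Names and descriptions are illustrative, not a real provider's catalog.
WORKLOAD_PROFILES = {
    "mass_storage": "raw spinning-disk volumes (Hadoop/HBase, Cassandra, MongoDB storage hub)",
    "high_ram": "memory-optimized instances (in-memory Cassandra, Riak, Redis)",
    "ssd_balanced": "SSD-backed instances with block storage (PostgreSQL/MySQL after batch jobs)",
    "general_purpose": "standard instances (dev/test, no strict performance SLA)",
}

def pick_profile(in_memory: bool, needs_raw_disk: bool, has_sla: bool) -> str:
    """Choose a workload profile from a few coarse requirements."""
    if in_memory:
        return "high_ram"
    if needs_raw_disk:
        return "mass_storage"
    return "ssd_balanced" if has_sla else "general_purpose"
```

For example, an in-memory Cassandra cluster would land on the high-RAM profile, while a trial deployment without an SLA would land on general-purpose instances.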


Be Prepared with a Solid Cloud Infrastructure

Thursday, April 10th, 2014

The more Big Data enterprises amass, the more potential risk is involved. It would be one matter if it were simply raw material without any clearly defined meaning; however, data analytics tools, combined with the expertise of tech-savvy employees, allow businesses to harvest profit-driving, actionable digital information.


Compared to on-premise data centers, cloud computing offers multiple disaster recovery models.

Whether the risk is from a cyber-criminal who gains access to a database or a storm that cuts power, it’s essential for enterprises to have a solid disaster recovery plan in place. Because on-premise data centers are prone to outages in the event of a catastrophic natural event, cloud servers provide a more stable option for companies requiring constant access to their data. Numerous deployment models exist for these systems, and most of them are constructed based on how users interact with them.

How the cloud can promote disaster recovery 
According to a report conducted by InformationWeek, only 41 percent of respondents to the magazine’s 2014 State of Enterprise Storage Survey stated they have a disaster recovery (DR) and business continuity protocol and regularly test it. Although this finding suggests a lack of preparedness among the remaining 59 percent, the study showed that business leaders were beginning to see the big picture and place their confidence in cloud applications.

The source noted that cloud infrastructure and Software-as-a-Service (SaaS) automation software let organizations deploy optimal DR without the hassle associated with a conventional plan. Traditionally, companies backed up their data on physical disks and shipped them to storage facilities. This method is no longer workable because many enterprises are constantly amassing and refining new data points. For example, Netflix collects an incredible amount of specific digital information on its subscribers through its rating system and then uses it to recommend new viewing options.

The news source also acknowledged that the issue isn’t just about recovering data lost during the outage, but about being able to run the programs that process and interact with that information. In fact, due to the complexity of these infrastructures, many cloud hosts offer DR-as-a-Service.


Moving Apps to the Cloud Results in New Agility

Wednesday, October 16th, 2013

As the need for flexible, agile, and efficient infrastructure tops business priorities, company executives are looking to the cloud for solutions. In the past, the majority of IT architecture and mission-critical applications were maintained within in-house data centers. That isn’t the situation today, however: roughly 69 percent of organizations plan to migrate crucial systems to the cloud by the end of 2014.

Moving apps to the cloud results in new agility

This was among the key findings in a new Virtustream survey, which also revealed that senior-level decision-makers are accepting this new cloud trend. In fact, the majority of executives now understand that implementing the cloud and migrating crucial infrastructure resources to the hosted environment delivers numerous benefits, and more than half of respondents said the cloud enables them to strengthen business agility.

Although speed and functionality were cited as the most common advantages that come with the use of cloud environments, 42 percent of decision-makers also said the solutions give them a competitive advantage, while another 40 percent stated that the technology allows employees to be more productive.

“The end of 2014 will be a pivotal moment for the enterprise cloud,” Virtustream executive Simon Aspinall said. “ERP and other mission-critical applications have mainly been deployed conventionally–the cuckoos in cloud land. The next 18 months will see these critical applications pushed out of their in-house data center nests and migrated to the cloud.”

Although some still harbor a few lingering doubts about migrating to the cloud, most decision-makers are setting those concerns aside and taking the plunge anyway. In fact, nearly 90 percent of decision-makers said they were aware of why they should migrate applications to the cloud.


Connect from Anywhere to the Cloud

Thursday, August 29th, 2013


The cloud is an important part of many companies’ IT strategies. However, there are many companies that have already made a large investment in infrastructure in their data centers. How can they take advantage of all the cloud has to offer without abandoning their investment? The answer is Cloud Bridge – private, dedicated access to the GoGrid cloud from anywhere.

Connecting to the Cloud

Cloud Bridge is your access point into the GoGrid cloud. It supports Layer 2 connections from cross-connects within a partner data center or via carrier connections from just about anywhere. Cloud Bridge is designed to be simple: just select the port speed you prefer, either 100 Mbps, 1 Gbps, or 10 Gbps (10 Gbps is available only in US-East-1). There’s also no long-term commitment required: pay only for what you use and cancel anytime. Traffic across Cloud Bridge is unmetered, so you pay only for access to the port.

You also have the option of a redundant setup: purchase two ports in a redundant configuration and you’ll get an aggregate link. Not only will your traffic have physical redundancy, but you’ll also get the combined speed of both ports (for example, 2 Gbps of bandwidth with redundant 1-Gbps ports selected). You can access Cloud Bridge from equipment in GoGrid’s Co-Location Service, from a partner data center (like Equinix via a cross-connect), or from your own data center using one of your carriers or one of our partner resellers.
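The redundant-port arithmetic above can be sketched as a quick calculation (port speeds in Mbps; this is an illustration of the aggregate-link example in the text, not actual billing or provisioning logic):

```python
def aggregate_bandwidth_mbps(port_speed_mbps: int, redundant: bool) -> int:
    """Aggregate link speed: two ports in a redundant configuration
    combine into one logical link with the speed of both."""
    return port_speed_mbps * 2 if redundant else port_speed_mbps

# Two redundant 1-Gbps (1000 Mbps) ports yield 2 Gbps of usable bandwidth,
# matching the example in the paragraph above.
```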

Why Cloud Bridge

Customers that want to use Cloud Bridge are typically looking to solve a few common use cases.

How to Build Highly Available Applications with Cloud Infrastructure

Tuesday, July 30th, 2013

Every technology company starts with a great idea. And in the early stages of application design, the decisions you make can have a long-term impact. These design decisions are critical and can make or break both the product and the company. At GoGrid, we’ve helped a lot of customers architect applications for the cloud and along the way we’ve learned a thing or two about the decisions you need to make. Here are 3 key questions to help you get started.


1. Traditional data center or cloud infrastructure (IaaS)?

One of the first and most important decisions is whether to go with a traditional data center or architect in the cloud by leveraging an infrastructure-as-a-service (IaaS) provider. Although a traditional data center provides absolute control over the hardware and software, there’s a significant downside: the cost of maintaining that hardware. If you move to the cloud, you avoid those costs completely. The GoGrid, Amazon Web Services, and Microsoft clouds are all maintained by professionals, allowing you to focus on your application rather than the hardware. By going with an IaaS provider, you also gain application flexibility that lets you scale resources up and down as needed. And we can all agree that in most cases, scaling an application horizontally is preferable to scaling vertically. This option is especially important when your application reaches global proportions and you require specialized features like global load balancing to ensure minimal application latency, or even support for regional application variations (think multiple languages for global applications).

2. Where does multi-tenancy reside?

In most cases, you’ll also need to decide where multi-tenancy resides. If you were to architect in a traditional data center, you might take a shortcut and put each customer on a separate machine. However, doing so would be a mistake for a few reasons. First, applications no longer run on a single box that’s scaled up, which means isolating users to individual machines no longer makes much sense. What’s worse, that approach would create a management nightmare by requiring you to monitor thousands of machines as your user base grows. Plus, this type of architecture locks you into a particular vendor or service provider, and you probably don’t want that.

So where should multi-tenancy reside? The answer is easy: in the application layer, above the virtual machine or server layer. By architecting multi-tenancy into the application layer, you’re free from lock-in and able to scale resources as needed, avoiding costly over-provisioning. You’ve also allowed customers to scale beyond the resource constraints of a single server. Equally important, this approach lets you architect failover scenarios that ensure high availability and consistency even if the underlying platform has an issue.
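A minimal sketch of application-layer multi-tenancy, under the assumption (hypothetical names, toy in-memory storage) that every record carries a tenant identifier and every query is scoped by it, rather than isolating each customer on a separate machine:

```python
class TenantScopedStore:
    """Toy data store where tenant isolation lives in the application layer,
    not in per-customer machines."""

    def __init__(self):
        self._rows = []  # shared storage across all tenants

    def insert(self, tenant_id: str, record: dict) -> None:
        # Every row carries its tenant; customers share the same infrastructure.
        self._rows.append({"tenant_id": tenant_id, **record})

    def query(self, tenant_id: str) -> list:
        # Isolation is enforced here in the application layer, so tenants
        # can scale across many servers without vendor lock-in.
        return [r for r in self._rows if r["tenant_id"] == tenant_id]
```

Because the scoping lives in application code, the same pattern works unchanged whether the rows sit in one database or are sharded across a fleet of cloud servers.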
