Archive for July, 2013

 

Managing Your World (At Least Your Infrastructure) Just Got Easier: New Managed Services

Wednesday, July 31st, 2013

If I think back over my years in tech, the term “managed services” usually meant one of two extremes: At one end were pricey consultants that advised you how to do something extremely complex like business process reengineering (remember BPR?) and then ended up doing it for you—usually at an astounding cost. At the other end were providers of services like email that have since become commodities and are now typically free for individuals (Gmail anyone?) or bundled with other value-added services into an office productivity “suite” for businesses.

In both cases, the mere phrase “managed services” used to create fear within the IT team as they faced the prospect of either changing a proven, established process or figuring out how to integrate a new solution with existing systems and equipment. Luckily, managed services have come a long way since then. That’s why GoGrid’s new Managed Services offer 3 things customers have told us they need:

  1. Insight
  2. Intelligence
  3. Integration

Grab a peek under the covers (insight)

Both our new Managed Monitoring Service and our Managed Security Service provide something indispensable: the ability to know what’s happening in your environment in real time—and act on it, if necessary. As John Joyner noted in TechRepublic, “If you care about uptime and maintaining good performance of any server or application, you need monitoring, too.” That type of insight is clearly valuable to a business when it comes from an alert about a genuine security threat or a notification that a server is down. However, our customers tell us it’s equally useful to see when a change occurs, such as after deploying new code. Receiving comprehensive, focused data in context about their infrastructure is what elevates that information above mere “noise.”


(more…) «Managing Your World (At Least Your Infrastructure) Just Got Easier: New Managed Services»

How to Build Highly Available Applications with Cloud Infrastructure

Tuesday, July 30th, 2013

Every technology company starts with a great idea. And in the early stages of application design, the decisions you make can have a long-term impact. These design decisions are critical and can make or break both the product and the company. At GoGrid, we’ve helped a lot of customers architect applications for the cloud and along the way we’ve learned a thing or two about the decisions you need to make. Here are 3 key questions to help you get started.


1. Traditional data center or cloud infrastructure (IaaS)?

One of the first and most important decisions is whether to go with a traditional data center or architect in the cloud by leveraging an infrastructure-as-a-service (IaaS) provider. Although a traditional data center gives you absolute control over the hardware and software, maintaining that hardware is a significant downside: the costs add up quickly, and moving to the cloud lets you avoid them entirely. The GoGrid, Amazon Web Services, and Microsoft clouds are all maintained by professionals, allowing you to focus on your application rather than the hardware. Going with an IaaS provider also gives you the flexibility to scale resources up and down as needed. And we can all agree that in most cases, scaling an application horizontally is preferable to scaling vertically. That flexibility is especially important when your application reaches global proportions and you require specialized features like global load balancing to ensure minimal application latency, or even support for regional application variations (think multiple languages for global applications).
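
To make the horizontal-scaling idea concrete, here is a minimal, hypothetical sketch in Python: capacity grows by adding identical servers behind a simple round-robin balancer rather than by moving to a bigger machine. The pool and server names are illustrative only and not any particular provider’s API.

    # A toy round-robin pool: "scaling out" means adding another identical
    # server, not replacing the existing one with bigger hardware.
    import itertools

    class ServerPool:
        def __init__(self, servers):
            self._servers = list(servers)
            self._cycle = itertools.cycle(self._servers)

        def add_server(self, name):
            # Scale horizontally: provision one more identical instance.
            self._servers.append(name)
            self._cycle = itertools.cycle(self._servers)

        def route(self):
            # Each incoming request goes to the next server in the pool.
            return next(self._cycle)

    pool = ServerPool(["app-1", "app-2"])
    print([pool.route() for _ in range(4)])   # ['app-1', 'app-2', 'app-1', 'app-2']
    pool.add_server("app-3")                  # traffic spike? add capacity
    print([pool.route() for _ in range(6)])   # requests now spread across three servers

In a real deployment a cloud load balancer plays the role of ServerPool and the add_server step is an API call or autoscaling rule; the point is that the application tier stays stateless and identical across instances so capacity can grow one server at a time.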

2. Where does multi-tenancy reside?

In most cases, you’ll also need to decide where multi-tenancy resides. If you were architecting in a traditional data center, you might take a shortcut and put each customer on a separate machine. However, doing so would be a mistake for a few reasons. First, applications no longer run on a single box that’s scaled up, so isolating users to individual machines no longer makes much sense. Worse, that approach would create a management nightmare by requiring you to monitor thousands of machines as your user base grows. Plus, this type of architecture locks you into a particular vendor or service provider, and you probably don’t want that. So where should multi-tenancy reside? The answer is easy: in the application layer, above the virtual machine or server layer. By architecting multi-tenancy into the application layer, you’re free from lock-in and able to scale resources as needed, avoiding costly over-provisioning. You also let customers scale beyond the resource constraints of a single server. Equally important, this approach lets you architect failover scenarios that ensure high availability and consistency even if the underlying platform has an issue.
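
To illustrate what multi-tenancy in the application layer can look like, here is a minimal, hypothetical Python sketch: every record is stamped and filtered by a tenant ID, so any identical app server can serve any customer. The class, field, and tenant names are made up for illustration and not taken from any specific product.

    # A toy tenant-scoped data store: isolation is enforced in application
    # code, not by giving each customer a dedicated machine.
    from collections import defaultdict

    class TenantScopedStore:
        def __init__(self):
            self._records = defaultdict(list)   # tenant_id -> list of records

        def save_record(self, tenant_id, record):
            # Every write is stamped with the tenant it belongs to.
            self._records[tenant_id].append(record)

        def list_records(self, tenant_id):
            # Every read is filtered to a single tenant's data.
            return list(self._records[tenant_id])

    # Two customers share the same store (and the same pool of servers)
    # without ever seeing each other's data.
    store = TenantScopedStore()
    store.save_record("acme", {"invoice": 42})
    store.save_record("globex", {"invoice": 7})
    print(store.list_records("acme"))    # [{'invoice': 42}]
    print(store.list_records("globex"))  # [{'invoice': 7}]

Because the tenant boundary lives in code (and in the shared database) rather than in hardware, adding a customer doesn’t mean adding a machine, and a busy customer can be spread across as many servers as needed.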

(more…) «How to Build Highly Available Applications with Cloud Infrastructure»

The 2013 Hadoop Summit

Monday, July 29th, 2013

[Hadoop Summit logo]

I recently attended the Hadoop Summit in San Jose. This is one of two major conferences organized around Hadoop, the other being Hadoop World. Nearly all the companies with Hadoop distributions were present, along with several big users of Hadoop like Netflix, Twitter, and LinkedIn.

Crossing The Chasm

If you’re not deeply involved with Hadoop, attending these conferences a year apart can be shocking: the advancements made in just a year are amazing. The conference seemed notably larger this year, and I noticed more non-tech companies in the audience. I think it’s safe to say that Hadoop has crossed the chasm, at least for enterprise IT users.

Beyond the mix of attendees at the event, the other signal for me was the emergence of Hadoop 2.0. This second version of Hadoop focused on features that are important for users who want to run production-grade software for mission-critical systems: high availability finally arrived for the NameNode (in the open-source project itself, not the version Cloudera released for its distribution), a new version of Hive added more SQL-friendly features, and YARN now lets users run just about anything on top of the Hadoop Distributed File System (HDFS). These kinds of stability and availability features tend to show up when there is a critical mass of users who want to run the software in production.


Quite A YARN

(more…) «The 2013 Hadoop Summit»

Building private cloud infrastructure

Friday, July 5th, 2013

Although the proliferation of public cloud computing technologies has encouraged a large portion of the business world to migrate resources to off-site environments, many decision-makers believe managing their own assets can be more beneficial. For this reason, among others, enterprise executives often prefer a private cloud architecture that enables them to satisfy goals that cannot be met with the public cloud alone.

Building a private cloud infrastructure

Yet constructing a private cloud is not a simple one-and-done process. A recent InfoWorld report highlighted how building a private infrastructure is similar to building a data center, though it is distinct in several ways. For one, the management layer’s capabilities in a private cloud differ from those in a premises-based virtualization architecture.

InfoWorld noted that private clouds, for the most part, will offer some level of self-service, which is important for organizations that need to manage various solutions throughout their life cycles. Unlike conventional data centers, however, these management capabilities must be available to business units, not just the IT department, because it is often too time-consuming to have business teams consult with IT every time servers need to be commissioned or other routine tasks need to be performed.

By working with a trusted service provider, companies can be sure they implement private clouds with the appropriate management capabilities for the workforce as a whole.

Leveled security
In traditional IT environments, the IT department manages most security controls, so individual users rarely need to think about administrative safeguards. Because the private cloud enables individuals to deploy, manage, and decommission servers on their own, decision-makers need to ensure they can protect sensitive information and resources during these procedures, InfoWorld noted.

(more…) «Building private cloud infrastructure»