
Archive for the ‘IaaS’ Category

 

How To Enable & Manage the New, Free GoGrid Firewall Service

Wednesday, May 1st, 2013 by

Security and infrastructure don’t always go hand in hand. In fact, many non-adopters of cloud computing have cited the lack of good security as one of the primary reasons they are not wholeheartedly embracing the cloud and all its glory. In some ways, these naysayers are correct: You shouldn’t deploy a cloud or frankly any type of infrastructure without some type of security, whether it’s software-based controls or a hardware device. At GoGrid, this desire to overcome security concerns is exactly what compelled us to release our free (that’s right, FREE) Firewall Service.

When we developed our Firewall Service, we wanted to do more than simply offer a set of blocking rules or a hardware device. We wanted our solution to be centrally managed, easy to use and configure, fully featured, integrated across all our data centers, reliable, programmatically controlled, highly available, flexible, elastic, self-healing…whew! And did I mention, free? As we did for our new Dynamic Load Balancers, we embraced the concepts of software-defined networking (SDN) when architecting our Firewall Service.

Our research showed that for small environments, software-based firewalls (like IPtables or a Windows Firewall) worked just fine, provided the infrastructure didn’t need to scale. Similarly, hardware-based firewalls were great for enterprise-grade installations (but remember, if you get one hardware device, you typically need another one ready as a failover). We wanted to do it better. You can read more about the theory behind our cloud Firewall Service in this article.

As with my previous How To articles, there are 3 easy steps in the Firewall Service setup:

1. Create a Security Group
2. Define a Policy
3. Add a Connection

GoGrid’s Firewall Service is distributed and global. That means that once it’s configured, it automatically synchronizes across all our data centers. If you have multiple web servers in multiple GoGrid data centers, you simply define the Security Groups and Policies, connect the servers, and you’re done. Any future policy changes are automatically synchronized to the connected servers. Simple, right? Let’s see how to set up the Firewall Service. (more…)
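The three setup steps above can be pictured as plain data structures. This is a conceptual sketch only: the function and field names below are made up for illustration and are not the actual GoGrid portal objects or API calls.

```python
def make_policy(name, protocol, port, source, action="allow"):
    """Step 2: define a policy (a single firewall rule)."""
    return {"name": name, "protocol": protocol, "port": port,
            "source": source, "action": action}

def make_security_group(name, policies):
    """Step 1: create a security group that holds a set of policies."""
    return {"name": name, "policies": list(policies), "connections": []}

def connect_server(group, server_name):
    """Step 3: add a connection. In the real service, this change would
    then replicate automatically across all data centers."""
    group["connections"].append(server_name)
    return group

# A typical web tier: allow HTTP/HTTPS from anywhere, attach two servers.
web_group = make_security_group("web-tier", [
    make_policy("http", "tcp", 80, "0.0.0.0/0"),
    make_policy("https", "tcp", 443, "0.0.0.0/0"),
])
connect_server(web_group, "web-01.us-east")
connect_server(web_group, "web-02.us-west")
```

The point of the model: the group and its policies are defined once, and servers in any data center simply attach to the same group.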

How To Create an Auto-Scaling Web Application on GoGrid (Part 1 – Theory)

Tuesday, April 23rd, 2013 by

Creating an auto-scaling web application is an ideal use of cloud computing. Although manually scaling your infrastructure is easy in the GoGrid cloud, programmatically controlling your infrastructure to scale automatically is an even better example of the power of the cloud. This scenario–an application that can increase and decrease its server count, and therefore capacity, based on the load it’s experiencing at any given time–makes IT professionals, sysadmins, and application developers alike extremely happy. And it’s also something you can build using out-of-the-box tools in GoGrid.

We’ve divided this topic into two articles:

Part 1 (this article) – The Theory of Auto-Scaling:

  • Background: traditional vs. cloud hosting
  • Programmatically architecting a solution
  • The underlying Orchestration methodology

Part 2 – A Proof of Concept of Auto-Scaling:

  • Do-it-yourself Orchestration
  • Proof-of-concept examples
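The decision logic at the heart of such an auto-scaling application can be sketched in a few lines. The thresholds and the metric (average CPU percentage) here are illustrative assumptions; a real orchestrator would poll monitoring data on a timer and call the cloud API to add or delete server instances.

```python
SCALE_UP_AT = 75.0    # average CPU % above which we add a server
SCALE_DOWN_AT = 25.0  # average CPU % below which we remove one
MIN_SERVERS, MAX_SERVERS = 2, 10

def decide(avg_cpu, current_count):
    """Return the desired server count for one polling interval."""
    if avg_cpu > SCALE_UP_AT and current_count < MAX_SERVERS:
        return current_count + 1   # scale out (horizontal)
    if avg_cpu < SCALE_DOWN_AT and current_count > MIN_SERVERS:
        return current_count - 1   # scale in
    return current_count           # load is within band: hold steady
```

The floor and ceiling keep a runaway feedback loop from deleting the whole farm or spawning servers without bound.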

(more…)

Is Your High-Tech Company Ready For An SDN-Enabled Cloud?

Thursday, April 18th, 2013 by

When it comes to technology, there are many companies on the “bleeding edge” these days. Sometimes these companies achieve greatness by being visionary, producing products or services that others haven’t thought of, or investing heavily in R&D. But they all have one thing in common: They use the latest high-tech, innovative solutions to power their journeys.


When it comes to the underlying infrastructure powering a technology-oriented company, “cutting edge” means success. Sites and services need to perform, be reliable, be resilient, and have the flexibility to expand and contract based on the ebb and flow of day-to-day business. For me, that means cloud infrastructure is the best solution for companies looking to stay ahead of the curve.

Over the past few months, GoGrid has released a variety of services and features designed to give companies a leg up on the competition. It’s all centered on providing cloud infrastructure that’s flexible, yet forward-thinking. It’s much more than simply needing faster and bigger clouds—it’s about architecting our cloud solutions to provide customers with a highly available and distributed set of infrastructure components. And it’s architected according to software-defined networking (SDN) concepts.

SDN architecture isn’t focused on internetworked commodity hardware or new ways to provide networking services. It’s designed to distribute a variety of formerly hardware-based solutions across nodes, data centers, and clouds. When you think about “old school” infrastructure architecture, you probably think of physical devices. And if you think about one device, you really need to think about two, for redundancy and backup. If your hardware load balancer or firewall fails, you have to be sure you have a warm or hot standby available to immediately take its place. That requires time and money. And if you want to be cutting edge, you don’t want to be spending your precious time and money planning for the inevitable. You want to be innovating and iterating.

That’s where SDN is truly powerful and why many of the leading technology companies are adopting solutions that use it. With SDN, you can build in fault tolerance and redundancy. Take our recently released Dynamic Load Balancers as an example. Instead of relying on a single hardware device for routing traffic between available servers, our Dynamic Load Balancers are distributed and highly available across our Public Cloud. If one of the Dynamic Load Balancers fails, another instance, complete with configurations, is spawned immediately elsewhere thanks to our self-healing design. And these load-balancing services can be controlled programmatically via our API.
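The self-healing behavior described above can be modeled as a toy simulation: when an instance fails, a replacement spawns elsewhere carrying the same configuration. The class and field names are invented for this sketch and don't correspond to any real GoGrid API.

```python
import itertools

_node_ids = itertools.count(1)  # stand-in for "which node hosts it now"

class DistributedLB:
    def __init__(self, config):
        self.config = config             # VIP, backend pool, algorithm...
        self.instance_id = next(_node_ids)
        self.healthy = True

    def fail(self):
        self.healthy = False

    def heal(self):
        """Respawn on a new node; the configuration travels with it."""
        if not self.healthy:
            self.instance_id = next(_node_ids)
            self.healthy = True

lb = DistributedLB({"vip": "203.0.113.10", "algorithm": "round-robin"})
original = lb.instance_id
lb.fail()
lb.heal()
# After healing, only the hosting instance changed; the config survived.
```

Contrast this with a hardware device, where "healing" means a human racking a warm standby.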

This month we announced another service that operates in the same distributed manner, our Firewall Service. Although many companies choose to use Cisco ASAs as a security front end for their cloud and physical infrastructure environments (an offering we also provide), these are physical devices that require management. However, our SDN architecture lets us provide more resilient and creative solutions. Like our Dynamic Load Balancers, our Firewall Service is built around SDN concepts and distributed across nodes and our data centers. When you create a security group (that has policies assigned to it), it’s automatically replicated across all our data centers within seconds. If you have distributed infrastructure, you can simply assign a security group to any similarly configured Cloud Server, regardless of that server’s location. If you subsequently change a policy, it’s automatically synchronized to all servers across all data centers that are part of that security group. In other words, you configure once, assign the security group to the server(s), and then watch the SDN magic happen.

(more…)

The Top 3 Private Networking Use Cases for CloudLink

Tuesday, April 2nd, 2013 by

Public clouds are fantastic for a majority of infrastructure use cases. And interconnectivity between clouds enables myriad solutions to empower businesses to have multiple synchronized points of presence across the world. Companies can easily set up connections that traverse the public Internet as a means to transmit and potentially synchronize data between cloud data centers. But these connections need to be reliable and more often than not, private.

CloudLink private network between cloud data centers

With public network connections between clouds, users are at the mercy of hops and latency. For example, data may take one route with a particular number of hops, and a second later, may follow a completely different path and take a longer or shorter amount of time based on the connection.

In terms of securing the transport, some companies rely on point-to-point VPN connections using a hardware or software solution or some combination of the two. However, these solutions are also constrained by the connection and have limited speeds.

There are some scenarios or use cases that warrant using dedicated private networking to join geographically dispersed clouds. This is where GoGrid’s CloudLink service comes into play.

GoGrid’s CloudLink is a data center interconnect product—a redundant 10 Gbps pipe that is isolated to GoGrid traffic only. CloudLink enables private network traffic between different servers in GoGrid’s US data centers. As part of our “Complex Infrastructure Made Easy” mission, we designed this service to be basic yet powerful and still meet the needs of demanding organizations. Because this is a private network, much like the private network within GoGrid’s standard cloud infrastructure, there are no bandwidth costs. You simply decide on the connection speed (10 Mbps, 100 Mbps, or 1 Gbps), configure your connection, and pay for just the dedicated connection. (more…)
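Some back-of-envelope math helps when picking among the three tiers. The sketch below ignores protocol overhead and assumes the link is the bottleneck; the 100 GB figure is an arbitrary example payload.

```python
def transfer_seconds(gigabytes, mbps):
    """Idealized time to move `gigabytes` over a `mbps` link."""
    bits = gigabytes * 8 * 1000**3   # decimal GB to bits
    return bits / (mbps * 1000**2)   # Mbps to bits per second

for speed in (10, 100, 1000):        # the three CloudLink tiers, in Mbps
    hours = transfer_seconds(100, speed) / 3600
    print(f"{speed:>5} Mbps: 100 GB in about {hours:.1f} h")
```

Replicating 100 GB takes roughly a day at 10 Mbps but only minutes at 1 Gbps, which is why the right tier depends on how much data you synchronize and how often.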

What is Auto-Scaling, How Does it Work, & Why Should I Use it?

Monday, March 11th, 2013 by

When I think about the phrase “auto-scaling,” for some reason it conjures up the word “Transformers.” For those not familiar with the Transformers genre of cartoons, toys, games, and movies, it is essentially about cars that turn into robots, or vice versa, depending on how you look at it. When they need to fight or confront a challenge, Transformers will scale up from a vehicle (a car, truck, airplane, etc.) into a much larger robot. Then, when the challenge subsides, they scale back down to a vehicle.

Image: Transformers 4 movie teaser (source: teaser.trailer.com)

Scaling Explained

Scaling – in terms of infrastructure – is a similar concept, but applied to the horizontal or vertical scaling of servers. Horizontal scaling means adding (or removing) servers within an infrastructure environment. Vertical scaling involves adding resources to an existing server (like RAM).

Let’s look at an example. An author on a content creation website may write an article that attracts the attention of the social media community. What starts as a few views of the article per minute may, once shared widely on social media, turn into hundreds or thousands of requests per minute. When this spike in demand occurs, the server or servers handling the website’s content may experience extreme load, affecting their ability to respond in a timely manner. The results can range from long page-load times to the server actually crashing under the peak load. This scenario used to be known as the “Digg effect” or “Slashdot effect.”
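The capacity math behind that spike is simple. The per-server throughput number below is an assumption chosen for illustration, not a measured figure.

```python
import math

def servers_needed(requests_per_min, per_server_capacity):
    """Smallest server count that can absorb the given request rate."""
    return max(1, math.ceil(requests_per_min / per_server_capacity))

# Assume each web server comfortably handles 600 requests/minute.
baseline = servers_needed(120, 600)    # quiet day: one server suffices
spike = servers_needed(6000, 600)      # viral article: ten servers
```

Auto-scaling is the machinery that moves the fleet from `baseline` to `spike` and back without a human doing the arithmetic at 2 a.m.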

Although this type of success is great publicity for the author, it’s bad for the brand hosting the content. And, if users encounter slow or inaccessible websites, they’re less likely to return for other content at a later point, which can eventually result in a loss of revenue.

(more…)