Archive for the ‘Features’ Category

 

The Top 3 Private Networking Use Cases for CloudLink

Tuesday, April 2nd, 2013

Public clouds are fantastic for the majority of infrastructure use cases. And interconnectivity between clouds enables myriad solutions that let businesses maintain multiple synchronized points of presence across the world. Companies can easily set up connections that traverse the public Internet to transmit and potentially synchronize data between cloud data centers. But these connections need to be reliable and, more often than not, private.

CloudLink private network between cloud data centers

With public network connections between clouds, users are at the mercy of hops and latency. For example, data may take one route with a particular number of hops, and a second later, may follow a completely different path and take a longer or shorter amount of time based on the connection.
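To see why that variability matters, consider summarizing a series of round-trip-time measurements taken across the public Internet. A minimal sketch in Python (the RTT samples below are hypothetical, purely to illustrate the spread you might observe):

```python
import statistics

def rtt_jitter(samples_ms):
    """Summarize the variability in a series of round-trip-time samples (ms)."""
    return {
        "min": min(samples_ms),
        "max": max(samples_ms),
        "mean": round(statistics.mean(samples_ms), 1),
        "stdev": round(statistics.stdev(samples_ms), 1),
    }

# Hypothetical RTTs measured between two cloud data centers over the public Internet:
public_path = [42.0, 95.3, 51.7, 140.2, 48.9]
print(rtt_jitter(public_path))
```

The wide gap between the minimum and maximum is exactly the kind of unpredictability a dedicated private connection avoids.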

In terms of securing the transport, some companies rely on point-to-point VPN connections using a hardware or software solution, or some combination of the two. However, these solutions are still constrained by the underlying public connection and offer limited speeds.

There are some scenarios or use cases that warrant using dedicated private networking to join geographically dispersed clouds. This is where GoGrid’s CloudLink service comes into play.

GoGrid’s CloudLink is a data center interconnect product—a redundant 10 Gbps pipe that is isolated to GoGrid traffic only. CloudLink enables private network traffic between different servers in GoGrid’s US data centers. As part of our “Complex Infrastructure Made Easy” mission, we designed this service to be basic yet powerful and still meet the needs of demanding organizations. Because this is a private network, much like the private network within GoGrid’s standard cloud infrastructure, there are no bandwidth costs. You simply decide on the connection speed (10 Mbps, 100 Mbps, or 1 Gbps), configure your connection, and pay for just the dedicated connection. (more…) «The Top 3 Private Networking Use Cases for CloudLink»

Software Defined Networking on the Edge

Thursday, March 14th, 2013

One of the recent trends in technology is the movement toward software-defined networks (SDN). With SDN, networking is no longer tied to a specific proprietary device but rather integrated via software. GoGrid has adopted this software-defined networking architecture for its new product offerings, starting with Dynamic Load Balancers and now with our new Firewall Service.

SDN typically means that the control plane is separated from the forwarding plane and is centralized. This setup is easier to manage and enables a more distributed system. In addition, management of the network is typically programmatic with SDN. In GoGrid’s architecture, for example, management is centralized while the activities are distributed. This design allows for greater resiliency and self-healing capabilities, meaning there’s always a way to return a failed distributed node to its previously stable state. We also enable access to these services via our management console and a public RESTful API.
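As a rough illustration of that separation (the class and method names here are illustrative, not GoGrid's actual implementation), a centralized control plane can hold the desired configuration for distributed nodes, so a failed node can always be returned to its previously stable state:

```python
class ControlPlane:
    """Centralized control plane: holds the desired state for every node."""
    def __init__(self):
        self.desired = {}   # node_id -> last known-good configuration

    def set_config(self, node_id, config):
        self.desired[node_id] = dict(config)

class Node:
    """Distributed forwarding-plane node: applies whatever the controller pushes."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.config = {}
        self.healthy = True

    def apply(self, config):
        self.config = dict(config)

def heal(control_plane, node):
    """Self-healing: restore a failed node to its previously stable state."""
    node.apply(control_plane.desired[node.node_id])
    node.healthy = True

cp = ControlPlane()
n1 = Node("edge-1")
cp.set_config("edge-1", {"allow_ports": [80, 443]})
n1.apply(cp.desired["edge-1"])

n1.healthy = False          # simulate a hardware failure
n1.config = {}              # local state lost
heal(cp, n1)                # controller restores the last stable config
print(n1.healthy, n1.config)
```

Because the desired state lives centrally while the work happens on distributed nodes, management stays simple and recovery is just a config push.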

Although most people think of SDN as it applies to the core (switches and routers), GoGrid’s strategy has been to start at the edge and then work toward the core. Dynamic Load Balancers and the Firewall Service are considered to be on the network edge. However, other services closer to the core, such as Private Network Automation (PNA), have adopted this architecture as well. Details about the Dynamic Load Balancer are explained in this previous blog post.

Firewall Service

GoGrid is introducing a new Firewall Service designed to be self-healing and available to all customers in all our data centers. Customers can deploy this service through the management console or API. Having a Firewall Service available to all our customers is an important step in further securing infrastructure in the cloud. Although GoGrid has secured its data centers and has built-in security measures to protect our customers’ infrastructure, our customers want more granular control of port access for their individual servers. Our new Firewall Service is designed to meet and exceed those needs by making it easy to set up security wherever Cloud Servers are located.
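To illustrate the kind of granular port control described above (the rule format and function names here are hypothetical sketches, not the actual GoGrid service), a firewall can be modeled as an ordered rule list evaluated per connection, with a default-deny fallback:

```python
# Each rule: (protocol, port, action). First match wins; default is deny.
RULES = [
    ("tcp", 443, "allow"),   # HTTPS
    ("tcp", 80, "allow"),    # HTTP
    ("tcp", 22, "deny"),     # lock down SSH explicitly
]

def check(protocol, port, rules=RULES):
    """Return the action for an inbound connection; deny by default."""
    for r_proto, r_port, action in rules:
        if r_proto == protocol and r_port == port:
            return action
    return "deny"

print(check("tcp", 443))   # matches the first rule
print(check("udp", 53))    # no match -> default deny
```

Per-server rule sets like this are what give customers fine-grained control over exactly which ports each Cloud Server exposes.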

This service comes with several key features: (more…) «Software Defined Networking on the Edge»

How To Create a Distributed, Reliable, & Fault-Tolerant GoGrid Dynamic Load Balancer

Tuesday, February 26th, 2013

As Rupert Tagnipes outlined in his article “High Availability with Dynamic Load Balancers,” crafting a fault-tolerant, reliable website is critical to a company’s online success. There’s nothing worse than going to a website to complete a transaction only to have it either be slow to respond or have an interaction time out. By setting up a load balancer in front of transactional web or application servers, companies can ensure their web presence is resilient, responsive, and gets information to their customers reliably.


GoGrid launched with a free load-balancing service in 2008. This year, we introduced our next-generation cloud load-balancing service on GoGrid. Embracing the software-defined networking (SDN) mantra, we built our load-balancing service around the key characteristics of cloud computing: on-demand, usage-based, and distributed. I encourage you to read more about our Dynamic Load-Balancing service in Rupert’s article.

Although understanding why load balancing is critical to success is important, knowing how to create a new GoGrid Dynamic Load Balancer is equally important. This How-To article will guide you quickly and easily down that path.


As always, I like to boil the process down to 3 easy steps. In the case of the Dynamic Load Balancer creation process, these steps are:

(more…) «How To Create a Distributed, Reliable, & Fault-Tolerant GoGrid Dynamic Load Balancer»

How To Scale Your GoGrid Infrastructure

Wednesday, February 13th, 2013

Scalability is one of the biggest benefits of cloud computing. Compared to traditional physical servers, cloud servers offer dynamic elasticity that allows businesses to scale “up” or “out” based on load or demand. Scaling “out” means adding more servers to your infrastructure and scaling “up” means adding resources (like RAM) to an existing cloud server.

Adding more cloud servers to your GoGrid infrastructure is easy, as is creating a GoGrid Server Image (GSI). Just a quick refresher: you would use a GSI to deploy copies of a particular server configuration or setup—this is horizontal scalability: create a GoGrid cloud server, save an image of it, and deploy copies of that server.
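That scale-out workflow — save an image, then deploy copies — could be sketched like this (the `save_image` and `deploy_from_image` helpers are hypothetical stand-ins, not GoGrid's actual API):

```python
def save_image(server):
    """Capture a server's configuration as a reusable image (a GSI, conceptually)."""
    return dict(server)

def deploy_from_image(image, count):
    """Scale out: deploy `count` copies of the imaged configuration."""
    return [{**image, "name": f"{image['name']}-{i}"} for i in range(1, count + 1)]

web = {"name": "web", "ram_gb": 2, "os": "linux"}
gsi = save_image(web)
fleet = deploy_from_image(gsi, 3)
print([s["name"] for s in fleet])   # three copies of the same configuration
```

Every copy inherits the imaged configuration, which is the whole point of horizontal scaling from a GSI: configure once, deploy many.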


But let’s say that you want a particular server to have a little more power. One of the most effective “upgrades” you can make to any computer or server is to add more RAM. Running applications consume RAM (as does the underlying operating system), so if a server is memory-constrained, giving it more RAM will make it run noticeably more efficiently.

So, how do you add more RAM to an existing GoGrid Cloud Server? Just like the 3-step processes before (Create a GoGrid Cloud Server – Select. Configure. Deploy. & Create a GoGrid Server Image – Select. Save. Share.), this process is equally easy:

1. Select
2. Configure
3. Scale
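The three steps above, along with the hourly-plan restriction the service enforces, could be modeled as a simple sketch (this is purely illustrative — not GoGrid's actual API):

```python
def scale_ram(server, new_ram_gb):
    """Select a server, configure a new RAM size, and scale it.

    RAM scaling only applies to hourly Cloud Servers, so other plans are
    rejected with guidance to image-and-redeploy instead.
    """
    if server["plan"] != "hourly":
        raise ValueError("RAM scaling requires an hourly server; "
                         "create a GSI and deploy a new hourly server instead.")
    server["ram_gb"] = new_ram_gb
    return server

hourly = {"name": "app-1", "plan": "hourly", "ram_gb": 2}
print(scale_ram(hourly, 4)["ram_gb"])
```

The guard clause mirrors the real-world constraint: monthly, semi-annual, and annual servers can't be scaled in place and need the GSI route.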

Before we walk through this process, it’s important to remember that RAM scaling only works on “hourly” GoGrid Cloud Servers. If your server is on a monthly, semi-annual, or annual plan, you won’t be able to scale your server. In that case, you’ll want to create a GSI of an existing server and then deploy a new hourly server based on that GSI. If you do have an hourly cloud server, the process is easy. (more…) «How To Scale Your GoGrid Infrastructure»

High Availability with Dynamic Load Balancers

Monday, February 4th, 2013

Building out a highly available website means that it is fault-tolerant and reliable. A best practice is to put your web servers behind a load balancer not only to distribute load, but also to mitigate the risk of an end user accessing a failing web server. However, traditional load balancing funnels traffic into a single-tenant environment—a single point of failure. A better practice is to have a distributed load balancer that takes advantage of the features of the cloud and increases the fault-tolerance abilities on the load balancer. GoGrid’s Dynamic Load Balancer service is designed around a software-defined networking (SDN) architecture that turns the data center into one big load balancer.


GoGrid’s Dynamic Load Balancer offers many features, but one of its core features is high availability (HA). It delivers HA in two ways.

First, on the real server side, deploying multiple clones of your real servers is a standard load-balancing practice. That way, if one of your servers goes down, the load balancer will use the remaining servers in the pool to continue to serve up content. In addition, each GoGrid cloud server that you deploy as a web server (in the real server pool) is most likely on a different physical node. This setup provides additional protection in the case of hardware failure.
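That pool-level failover can be sketched as a round-robin balancer that simply skips servers marked down (a simplified model, not the Dynamic Load Balancer's actual algorithm):

```python
import itertools

def round_robin(pool):
    """Yield healthy real servers in rotation, skipping any that are down."""
    for server in itertools.cycle(pool):
        if server["up"]:
            yield server["name"]

pool = [
    {"name": "web-1", "up": True},
    {"name": "web-2", "up": False},   # failed node: traffic routes around it
    {"name": "web-3", "up": True},
]

rr = round_robin(pool)
print([next(rr) for _ in range(4)])   # web-2 never receives traffic
```

As long as at least one real server in the pool is healthy, content keeps being served — the failed node just drops out of the rotation.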

Second, on the Dynamic Load Balancer side, the load balancers are designed to be self-healing. In case of a hardware failure, Dynamic Load Balancing is designed to immediately recover to a functioning node. The Virtual IP address of the Dynamic Load Balancer (the VIP) is maintained as well as all the configurations, with all the changes happening on the back end. This approach ensures the Dynamic Load Balancer will continue to function with minimal interruption, preventing the Dynamic Load Balancer from being a single point of failure. Because the load balancer is the public-facing side of a web server, whenever it goes down the website goes down. Having a self-healing load balancer therefore makes the web application more resilient.
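The self-healing on the balancer side can be modeled as a VIP whose configuration survives the failure of the node serving it (again an illustrative sketch with hypothetical names; the address is from the documentation range):

```python
class BalancerCluster:
    """The VIP and its config survive the failure of the node serving them."""
    def __init__(self, vip, config, nodes):
        self.vip = vip
        self.config = config          # preserved across failover
        self.nodes = list(nodes)      # candidate load-balancer nodes
        self.active = self.nodes[0]

    def fail_active(self):
        """Simulate a hardware failure and recover to a functioning node."""
        failed = self.active
        standbys = [n for n in self.nodes if n != failed]
        if not standbys:
            raise RuntimeError("no standby node available")
        self.active = standbys[0]     # VIP and config move; back end only

cluster = BalancerCluster("203.0.113.10", {"port": 80}, ["lb-a", "lb-b"])
cluster.fail_active()
print(cluster.active, cluster.vip, cluster.config)
```

From the client's perspective nothing changes — same VIP, same configuration — which is what keeps the balancer itself from being a single point of failure.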

Users with websites or applications that need to always be available would benefit from including GoGrid’s Dynamic Load Balancing in their infrastructure. The load balancer is important for ensuring the public side of a service is always available; however, including easily scalable cloud servers, the ability to store images of those servers in persistent storage, and the option to replicate infrastructure between data centers with CloudLink are all important elements of a successful HA setup.

(more…) «High Availability with Dynamic Load Balancers»