Big Data Cloud Servers for Hadoop

January 13th, 2014 by Rupert Tagnipes

GoGrid just launched Raw Disk Cloud Servers, the perfect choice for your Hadoop data node. These purpose-built Cloud Servers run on a redundant 10-Gbps network fabric on the latest Intel Ivy Bridge processors. What sets these servers apart, however, is the massive amount of raw storage in JBOD (Just a Bunch of Disks) configuration. You can deploy up to 45 x 4 TB SAS disks on 1 Cloud Server.

These servers are designed to serve as Hadoop data nodes, which are typically deployed in a JBOD configuration. This setup maximizes available storage space on the server and also aids in performance. There are roughly 2 cores allocated per spindle, giving these servers additional MapReduce processing power. In addition, these disks aren’t a virtual allocation from a larger device. Each volume is actually a dedicated, physical 4 TB hard drive, so you get the full drive per volume with no initial write penalty.

Hadoop in the cloud

Most Hadoop distributions call for a name node supporting several data nodes. GoGrid offers a variety of SSD Cloud Servers that would be perfect for the Hadoop name node. Because they are on the same 10-Gbps high-performance fabric as the Raw Disk Cloud Servers, SSD servers provide low-latency private connectivity to your data nodes. I recommend using at least the X-Large SSD Cloud Server (16 GB RAM), although you may need a larger server, depending on the size of your Hadoop cluster. Because Hadoop stores metadata in memory, you’ll want more RAM if you have a lot of files to process. You can use any size Raw Disk Cloud Server, but you’ll want to deploy at least 3. Each Raw Disk Cloud Server has a different allocation of raw disks, as shown in the table below; the server in the illustration is the smallest size that has multiple disks per Cloud Server. Hadoop defaults to a replication factor of 3, so to protect your data from failure, you’ll want at least 3 data nodes to distribute data across. Although Hadoop attempts to replicate data to different racks, there’s no guarantee that your Cloud Servers will be on different racks.
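For reference, the replication factor is controlled by dfs.replication in hdfs-site.xml. It defaults to 3, so a snippet like the following (shown purely for illustration) is only needed if you want a different value:

  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>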

Note that the example below is for illustrative purposes only and is not representative of a typical Hadoop cluster; for example, most Cloudera and Hortonworks sizing guides start at 8 nodes. These configurations can differ greatly depending on whether you intend to use the cluster for development, production, or production with HBase added. This includes the RAM and disk sizes (less of both for development, most likely more for HBase). Plus, if you’re thinking of using these nodes for production, you should consider adding a second name node.

[Image: example Hadoop cluster configuration]

Assuming failure

It’s important to remember that Hadoop is specifically designed to handle infrastructure failure. If a single disk or even an entire node fails, Hadoop will continue to run, and most likely a copy of that data already exists on another data node. This ability is why JBOD is recommended for Hadoop data nodes; RAID is less important for availability because Hadoop already handles that. The added advantage is having all available space from the attached disks and their high-I/O, low-latency characteristics. One of the tenets of Hadoop is to assume failure and have the framework handle it. That said, the name node can be a point of vulnerability because it’s essential to the cluster and can be a single point of failure. Hadoop 2.0 resolves this situation by offering highly available name nodes out of the box, but earlier distributions don’t have this feature. If you don’t implement Hadoop 2.0, at least configure a secondary name node. Note that this secondary name node is NOT used for failover, but rather to store checkpoints. You won’t be able to use the secondary name node to replace the primary name node; with a copy of the latest image, however, you’ll at least be able to restart the primary name node with backup data.
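If you go the secondary name node route on Apache Hadoop 1.x, a minimal sketch of the usual setup, assuming a tarball install at $HADOOP_HOME and a hypothetical host named snn-host, looks like this:

  # conf/masters lists the host(s) that should run the secondary name node
  echo "snn-host" >> $HADOOP_HOME/conf/masters

  # start the checkpointing daemon on that host
  $HADOOP_HOME/bin/hadoop-daemon.sh start secondarynamenode

The secondary name node then periodically merges the edit log into a new file system image, which is the checkpoint you’d use to restart the primary.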

You can use our Raw Disk Cloud Servers for purposes other than Hadoop, but they should typically be deployed as part of a cluster. You should at least have an application that is able to handle replication and failover conditions. Because the disks are in a JBOD configuration, any data on a failed disk is most likely lost if you don’t have some type of replication or backup. You can use the MyGSI feature for Raw Disk Cloud Servers, but it only backs up the Cloud Server itself and NOT the data on the JBODs.

High-performance infrastructure

The Raw Disk Cloud Server is deployed on a redundant 10-Gbps high-performance fabric, which allows both private and public traffic to communicate at up to 10 Gbps and takes advantage of redundant network hardware. Raw Disk Cloud Servers keep the OS disks separate from the data disks. Only the data disks are configured as JBODs, so any failures on the data disks have no impact on the OS. The JBODs are also direct-attached, so there is no difference between the Raw Disk Cloud Servers and similar Dedicated Servers with JBOD. The storage is not apportioned; rather, each volume is a dedicated, physical SAS disk. For the X-Large Raw Disk Cloud Server, for example, there are 3 physical 4-TB disks attached to the server. Raw Disk Cloud Servers currently support only Linux. The disks will be attached, but you need to lay down a file system and mount the drive. The 8X-Large and 16X-Large Servers are not available via GoGrid’s management console; contact Sales if you’re interested in these options. The number of disks per Cloud Server is fixed according to the following allocation:

Raw Disk Cloud Server   RAM      Cores   Storage
Large                   8 GB     4       1 x 4 TB
X-Large                 16 GB    8       3 x 4 TB
2X-Large                32 GB    16      6 x 4 TB
4X-Large                64 GB    24      12 x 4 TB
8X-Large*               128 GB   32      24 x 4 TB
16X-Large*              240 GB   40      45 x 4 TB

* Contact Sales if you’re interested in these options. You can see the full version of this table on the GoGrid Raw Disk web page.

Deploying a Raw Disk Cloud Server

You can deploy a Raw Disk Cloud Server from GoGrid’s management console or through an API call. From the management console, use the Add button and select the “Cloud Server” option. Make sure that you’re in the US-West-1 data center because that’s the only location that currently supports Raw Disk. You’ll be presented with an image selector – select any Linux 64-bit OS. You’re then presented with some options for your Cloud Server.

There is a drop-down called “Server Flavor” with the following options: “All,” “Standard,” “SSD,” and “Raw.” This is a filter for the “Server Size” drop-down. If you select “Raw,” then you’ll only see Raw Disk Cloud Server options under “Server Size.” Select the Cloud Server size you’re interested in and hit the Next button to select your subscription term and deploy your Cloud Server.
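If you’d rather script the deployment, the same thing can be done through GoGrid’s API with the grid.server.add call. Here’s a hypothetical sketch; the server name, image, IP, and size values are placeholders, so check the GoGrid API documentation for the exact Raw Disk size identifiers and for how to compute the sig parameter:

  # illustrative only -- all values below are placeholders
  curl "https://api.gogrid.com/api/grid/server/add?api_key=YOUR_KEY&sig=YOUR_SIG&name=hadoop-data1&image=YOUR_LINUX_IMAGE&ip=YOUR_PUBLIC_IP&server.ram=16GB"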

[Image: adding a Raw Disk Cloud Server in the management console]

Configuring your JBOD disks

The Raw Disk Cloud Servers use a separate disk for the OS files (including root and swap). Because the OS is not on the JBODs, the raw disks are used only to store data. To find your volumes, run fdisk -l. You should see them attached as devices, and they will appear as 4-TB devices because all the attached raw disks are that size. In most cases, the first volume will be called “/dev/xvdfa” and each additional volume will be an iteration of that.

Disk /dev/xvdfa: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvdfa doesn't contain a valid partition table

You’ll see an entry similar to this for all disks attached to your Cloud Server.

You have the option of creating a partition, but doing so isn’t required. If you want a partition larger than 2 TB, you’ll need to use GNU parted with a GPT partition table because MBR partitions top out at 2 TB. Otherwise, you can just format the disk directly. If you’re using these Cloud Servers for Hadoop, ext3 has been extensively tested (it’s been publicly tested on Yahoo’s cluster), but ext4 should also work (and should have better performance with large files).

  mkfs.ext4 /dev/xvdfa
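If you do choose the partition route instead, here’s a minimal sketch using parted to create a GPT label and a single partition spanning the disk (assuming the new partition then shows up as /dev/xvdfa1):

  # create a GPT partition table and one partition covering the whole disk
  parted -s /dev/xvdfa mklabel gpt
  parted -s /dev/xvdfa mkpart primary 0% 100%

  # format the partition rather than the bare device
  mkfs.ext4 /dev/xvdfa1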

Mounting the drive

You’ll need to create a new location for the new drive on the file system. For example, you can create a directory called “mydisk1” and enter mkdir /mydisk1 at the prompt. Once you’ve created the directory, you can then mount your disk:

mount /dev/xvdfa /mydisk1

You should now be able to read and write files in your mydisk1 directory. If you run df -h, you’ll see your drive and the mydisk1 mount point.
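For a quick sanity check that the volume is actually writable, you can push a small test file through it (the file name and size here are arbitrary):

  # write a 100-MB test file, read it back, then clean up
  dd if=/dev/zero of=/mydisk1/testfile bs=1M count=100
  dd if=/mydisk1/testfile of=/dev/null bs=1M
  rm /mydisk1/testfile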

Making the drive permanent

The steps above are core to getting the new device up and running. If you want the drive to mount automatically following reboots, however, you’ll need to add a line to your “/etc/fstab” file.

 /dev/xvdfa /mydisk1 ext4 defaults,nobootwait,noatime 0 0

This is a slight change from the typical “fstab” entry: nobootwait prevents Linux from stalling the boot if the volume isn’t available. The trailing “0 0” disables dump backups (if activated on your Cloud Server) and the boot-time file system check; if you leave both of these options turned on, they can stall the Cloud Server at boot while a 4-TB volume is checked. noatime prevents reads from turning into unnecessary writes, which helps improve performance (this setting is optional but typically recommended for Hadoop).
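Before rebooting, you can confirm the new entry parses correctly by unmounting the volume and letting mount re-read “/etc/fstab”:

  # remount everything listed in /etc/fstab and verify the mount point
  umount /mydisk1
  mount -a
  df -h /mydisk1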

Reboot and verify that you still see the drive and mount point. The easiest way to do so is to run df -h. The output will look like this:

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda2       36G  1.3G   33G   4% /
udev            7.8G   12K  7.8G   1% /dev
tmpfs           3.2G  224K  3.2G   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            7.9G     0  7.9G   0% /run/shm
/dev/xvda1      184M   42M  133M  25% /boot
/dev/xvdfa      3.6T  196M  3.4T   1% /mydisk1

Start storing stuff!

Now that you’ve mounted your drive, you can start using it as a data node for your Hadoop cluster (see the configuration snippet below). You can deploy any distribution of Hadoop that you prefer, or you can wait until we release our 1-Button Deploy of HBase. You can also use these nodes as a large disk array, but you’ll want software that can manage replication, or you can configure software RAID on the server. Either way, you’ll want multiple servers to protect against failure. GoGrid is committed to releasing infrastructure that is designed to support Big Data applications, and you can expect to see more applications and infrastructure options coming soon!
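If you are pointing a Hadoop data node at these mounts, the relevant setting is dfs.data.dir in hdfs-site.xml (renamed dfs.datanode.data.dir in Hadoop 2.x). Here’s a sketch assuming three raw disks mounted at /mydisk1 through /mydisk3 (the extra mount points are hypothetical):

  <property>
    <name>dfs.data.dir</name>
    <value>/mydisk1/hdfs/data,/mydisk2/hdfs/data,/mydisk3/hdfs/data</value>
  </property>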


Rupert Tagnipes

Director, Product Management at GoGrid
Rupert Tagnipes is Director of Product Management at GoGrid who is responsible for managing and expanding the company’s multiple product lines. His focus is on leveraging his technical background and industry knowledge to drive product innovation and increase adoption of the cloud. He has extensive software product experience at technology companies in Silicon Valley solving data analytics and cloud infrastructure problems for customers across multiple industries.

