Archive for the ‘Open Data Services’ Category

 

Infographic: 2014 – The Year of Open Source?

Tuesday, April 8th, 2014 by

If you’re a software developer, you’ve probably already used open-source code in some of your projects. Until recently, however, people who aren’t software developers probably thought “open source” referred to a new type of bottled water. But all that’s beginning to change. Now you can find open-source versions of everything from Shakespeare to geospatial tools. In fact, the first laptop built almost entirely on open source hardware just hit the market. In the article announcing the new device, Wired noted that, “Open source hardware is beginning to find its own place in the world, not only among hobbyists but inside big companies such as Facebook.”


Why now?

Open-source technology has moved from experiment to mainstream partly because the concept itself has matured. Companies that once zealously guarded their proprietary software or hardware may now build some or all of it on open-source code and even give back to the relevant communities. Plus, repositories like GitHub, Bitbucket, and SourceForge make open-source code easy to access.

In its annual “Future of Open Source Survey,” North Bridge Venture Partners summarized 3 reasons support for open source is broadening:

1. Quality: Thanks to strong community support, the quality of open-source offerings has improved dramatically. They now compete with proprietary and commercial equivalents on features, and they can usually be deployed more quickly. Goodbye, vendor lock-in.

(more…) «Infographic: 2014 – The Year of Open Source?»

Deploying Cassandra with the Push of a Button on GoGrid

Thursday, April 3rd, 2014 by

Let’s say you’ve already done your due diligence and decided you want to run a NoSQL database. The only problem is that you’ve now got to figure out how to deploy the cluster in an environment that lets you scale within a single data center and across multiple data centers. To save money, many people at this point trial Cassandra on cheap hardware with limited RAM, across clusters that are simply inadequate for the job.

That’s a mistake, but luckily, there’s a better way. At GoGrid, we’ve made it possible to deploy a production-ready 5-node Cassandra cluster on robust, high-performance machines with the click of a button. Check out the specs of the orchestrated deployment we’re providing using our 1-Button Deploy™ technology:

  • SSD nodes: 16 GB RAM, 16 cores, and 640 GB storage per node
  • 10-Gbps redundant, high-performance network
  • 40-Gbps private network connectivity to additional Block Storage volumes (as needed)


Once you’ve deployed the first cluster, you can add more nodes as you need them via simple point-and-click. Consider for a moment what you can do with this technology: run a user/session store for your application, run a distributed priority job queue, manage sensor data, or tackle any number of other use cases with just a few clicks of the mouse. And you can do it all in 3 easy steps:
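To make the sensor-data use case concrete, here is a minimal sketch of how such data might be modeled for Cassandra. The keyspace, table, and daily-bucketing scheme are illustrative assumptions, not part of the 1-Button Deploy™ configuration:

```python
from datetime import datetime, timezone

# Hypothetical CQL schema for the sensor-data use case; the keyspace
# and table names are illustrative. Bucketing readings by day keeps
# each partition bounded in size as data accumulates.
SENSOR_TABLE_CQL = """
CREATE TABLE IF NOT EXISTS telemetry.readings (
    sensor_id text,
    day       text,
    ts        timestamp,
    value     double,
    PRIMARY KEY ((sensor_id, day), ts)
) WITH CLUSTERING ORDER BY (ts DESC);
"""

def partition_key(sensor_id: str, ts: datetime) -> tuple:
    """Compute the (sensor_id, day) composite partition key for a reading."""
    return (sensor_id, ts.strftime("%Y-%m-%d"))

print(partition_key("s-42", datetime(2014, 4, 3, tzinfo=timezone.utc)))
# → ('s-42', '2014-04-03')
```

Because the partition key spreads readings across nodes while the clustering key keeps each sensor's recent readings together, a schema like this scales out naturally as nodes are added to the cluster.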

Step 1: 1-Button Deploy™

(more…) «Deploying Cassandra with the Push of a Button on GoGrid»

How to Easily Deploy MongoDB in the Cloud

Monday, February 3rd, 2014 by

GoGrid has just released its 1-Button Deploy™ of MongoDB, available to all customers in the US-West-1 data center. This technology makes it easy to deploy either a development or production MongoDB replica set on GoGrid’s high-performance infrastructure. GoGrid’s 1-Button Deploy™ technology combines the capabilities of one of the leading NoSQL databases with our expertise in building high-performance Cloud Servers.

MongoDB is a scalable, high-performance, open source, structured storage system. MongoDB provides JSON-style document-oriented storage with full index support, sharding, sophisticated replication, and compatibility with the MapReduce paradigm. MongoDB focuses on flexibility, power, speed, and ease of use. GoGrid’s 1-Button Deploy™ of MongoDB takes advantage of our SSD Cloud Servers while making it easy to deploy a fully configured replica set.
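To illustrate the document-oriented model mentioned above, here is a hedged sketch of the kind of JSON-style record MongoDB stores natively; all field names and values are hypothetical. Related data is nested inside a single document rather than normalized across relational tables:

```python
import json

# A hypothetical JSON-style document of the kind MongoDB stores.
# Nested objects and arrays live in one record, so a single read
# retrieves the whole order with no joins.
order = {
    "_id": "order-1001",
    "customer": {"name": "Acme Corp", "region": "us-west"},
    "items": [
        {"sku": "ssd-16gb", "qty": 2},
        {"sku": "block-storage", "qty": 1},
    ],
    "total": 349.00,
}

print(json.dumps(order, indent=2))
```

Indexes can be declared on any of these fields, including nested ones (e.g., `customer.region`), which is what gives the document model its flexibility without sacrificing query performance.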

Why GoGrid Cloud Servers?

SSD Cloud Servers have several high-performance characteristics. They all come with attached SSD storage and large available RAM for the I/O-intensive workloads common to MongoDB. MongoDB will attempt to keep its working set in memory, so the ability to deploy servers with large available RAM is important. Plus, whenever MongoDB has to write to disk, SSDs provide a more graceful transition from memory to disk. SSD Cloud Servers use a redundant 10-Gbps public and private network to ensure you have the maximum bandwidth to transfer your data. You can use GoGrid’s 1-Button Deploy™ to provision either a 3-server development replica set or a 5-server production replica set with Firewall Service enabled.

Development Environments

The smallest recommended size for a development replica set is 3 servers. Although it’s possible to run MongoDB on a single server, you won’t be able to test failover or how a replica set behaves in production. You’ll most likely have a small working set so you won’t need as much RAM, but will still benefit from SSD storage and a fast network.
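Once a replica set like this is up, clients connect by listing every member in the connection string so the driver can discover the primary and fail over automatically. A minimal sketch, where the hostnames and replica-set name are assumptions for illustration:

```python
def replica_set_uri(hosts, replica_set, db="test"):
    """Build a MongoDB connection string that lists every replica-set
    member; the driver uses the seed list to find the current primary
    and to fail over if it goes down."""
    return "mongodb://%s/%s?replicaSet=%s" % (",".join(hosts), db, replica_set)

# A 3-server development replica set, as described above
# (hostnames and replica-set name are hypothetical):
dev_hosts = ["mongo1:27017", "mongo2:27017", "mongo3:27017"]
print(replica_set_uri(dev_hosts, "rs0"))
# → mongodb://mongo1:27017,mongo2:27017,mongo3:27017/test?replicaSet=rs0
```

Listing all three members matters even in development: it is what lets you actually exercise failover behavior, which a single-server setup cannot test.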

(more…) «How to Easily Deploy MongoDB in the Cloud»

Big Data = Big Confusion? Hint: The Key is Open Data Services

Wednesday, November 6th, 2013 by

When folks refer to “Big Data” these days, what is everyone really talking about? For several years now, Big Data has been THE buzzword used in conjunction with just about every technology issue imaginable. The reality, however, is that Big Data isn’t an abstract concept. Whether you like it or not, you’re already inundated with Big Data. How you source it, what insights you derive from it, and how quickly you act on it will play a major role in determining the course—and success—of your company. Let’s talk specifics…

[Figure: the 3 Vs of Big Data (volume, variety, and velocity)]

Handling the increased volume, variety, and velocity (the “3 Vs”) of data requires a fundamental shift in the makeup of the platform used to capture, store, and analyze the data. A platform capable of handling and capitalizing on Big Data successfully requires a mix of relational databases for structured data, NoSQL databases for unstructured data, caching solutions, and Hadoop-style MapReduce tools.
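To make the MapReduce piece of that mix concrete, here is a toy, in-memory illustration of the map/shuffle/reduce pattern that Hadoop-style tools apply across a cluster; the word-count task is the classic teaching example, not anything specific to the platforms discussed here:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map step: emit a (key, value) pair for every word in a document."""
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum each group's values."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big insight", "open data"]
mapped = chain.from_iterable(map_phase(d) for d in docs)
print(reduce_phase(mapped))
# → {'big': 2, 'data': 2, 'insight': 1, 'open': 1}
```

The point of the pattern is that the map and reduce steps are independent per key, so a framework can run them in parallel across many machines and re-run any piece that fails.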

As the need for new technologies to handle the “3 Vs” of Big Data has grown, open-source solutions have become the catalysts for innovation, generating a steady stream of new products that tackle Big Data challenges. Thanks to the skyrocketing pace of innovation in specialized databases and applications, businesses can now choose from a variety of proprietary and open-source solutions, depending on the database type and their specific requirements.

Given the wide variety of new and complex solutions, however, it’s no surprise that a recent survey of IT professionals showed that more than 55% of Big Data projects fail to achieve their goals. The most significant challenge cited was a lack of understanding of the range of technologies on the market and of the ability to pilot them. That challenge systematically pushes companies toward a limited set of proprietary platforms, often reducing the choice to a single technology. But seeking one cure-all technology is no longer a realistic strategy: no single technology, a database included, can solve every problem, especially when it comes to Big Data. Even if one solution could serve multiple needs, successful companies are always trialing new solutions in the quest to perpetually innovate and thereby achieve (or maintain) a competitive edge.

Open Data Services and Big Data go hand-in-hand

(more…) «Big Data = Big Confusion? Hint: The Key is Open Data Services»