Posts Tagged ‘Hybrid Hosting’

 

Press Release & Case Study: Martini Media Delivers Prized Consumer to Advertisers Using GoGrid’s Big Data Solution

Wednesday, May 9th, 2012

Hitting the wires in the cloud this morning was our announcement of Martini Media’s customer success story. When we work with our customers, we discover a lot of innovation at work, and throughout the process we help craft the best possible cloud solution. Martini Media’s unique digital platform, which advertisers use to reach affluent consumers, is a fantastic example of how Big Data and cloud computing can be used to drive business success.


In case you missed the Press Release, it is available here as well as below. But I encourage you, especially if you are looking for a Big Data solution, to download the Martini Media case study and then talk with one of our Cloud Solutions Architects. Through the use of our Big Data solution, hosted within the GoGrid cloud, Martini Media has been able to:

  • Support 100 percent annual growth
  • Realize the performance benefits of Big Data and the cost advantages of cloud computing
  • Serve targeted ads in as little as 150 milliseconds
  • Reduce latency and increase throughput speed

[Image: Martini Media case study funnel]

And if you need a primer on Big Data, where it came from and where it can take you, I highly recommend these two articles by GoGrid’s Rupert Tagnipes: (more…)

The Big Data Revolution – Part 2 – Enter the Cloud

Wednesday, March 21st, 2012

In Part 1 of this Big Data series, I provided a background on the origins of Big Data.

But What is Big Data?


The problem with the term “Big Data” is that it’s used in a lot of different ways. One definition is that Big Data is any data set that is too large for on-hand data management tools. According to Martin Wattenberg, a scientist at IBM, “The real yardstick … is how it [Big Data] compares with a natural human limit, like the sum total of all the words that you’ll hear in your lifetime.” Collecting that data is a solvable problem; making sense of it, particularly in real time, is the challenge technology tries to solve. This type of technology is often grouped under the label “NoSQL” and includes distributed databases that are a departure from relational databases like Oracle and MySQL. These systems are specifically designed to parallelize compute, distribute data, and provide fault tolerance across a large cluster of servers. Some examples of NoSQL projects and software are Hadoop, Cassandra, MongoDB, Riak, and Membase.
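
To make “parallelize compute, distribute data” concrete, here is a minimal sketch in plain Python (not tied to any of the projects above) of the map/reduce pattern that Hadoop popularized: the input is split into chunks, each chunk is processed by a separate worker, and the partial results are merged into one answer. The sample lines and the two-worker split are made up for illustration.

    from collections import Counter
    from multiprocessing import Pool

    def map_chunk(lines):
        """Map step: count words in one chunk of the input."""
        counts = Counter()
        for line in lines:
            counts.update(line.lower().split())
        return counts

    def reduce_counts(partials):
        """Reduce step: merge the per-chunk counts into a single result."""
        total = Counter()
        for partial in partials:
            total.update(partial)
        return total

    if __name__ == "__main__":
        # Toy "distributed" data set: on a real cluster each chunk would live
        # on a different node; here they are just slices of a list.
        lines = ["big data in the cloud", "cloud data is big", "data data data"]
        chunks = [lines[i::2] for i in range(2)]  # split across 2 workers

        with Pool(processes=2) as pool:
            partials = pool.map(map_chunk, chunks)  # the "map" runs in parallel

        print(reduce_counts(partials).most_common(3))

Real systems add the hard parts this sketch skips: moving the computation to where the data lives, replicating chunks so a failed node doesn’t lose data, and rebalancing work across the cluster.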

The techniques vary, but there is a definite distinction between SQL relational databases and their NoSQL brethren. Most notably, NoSQL systems share the following characteristics:

  • Do not use SQL as their primary query language
  • May not require fixed table schemas
  • May not give full ACID guarantees (Atomicity, Consistency, Isolation, Durability)
  • Scale horizontally

Because full ACID guarantees are absent, NoSQL is used when performance and real-time results are more important than strict consistency. For example, if a company wants to update its website in real time based on an analysis of a particular user’s interactions with the site, it will most likely turn to NoSQL to solve this use case.
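
As a rough sketch of that use case, the snippet below records interaction events in MongoDB (one of the NoSQL options listed above) via pymongo and pulls back a user’s recent activity. The connection string, database, collection, and field names are illustrative assumptions, not a prescribed schema.

    from datetime import datetime, timedelta
    from pymongo import DESCENDING, MongoClient

    # Illustrative connection string; adjust for your environment.
    client = MongoClient("mongodb://localhost:27017")
    events = client["analytics"]["user_events"]  # hypothetical database/collection

    # Record an interaction as a schema-free document: no table migration is
    # needed when a new attribute (e.g. "campaign") starts appearing.
    events.insert_one({
        "user_id": 42,
        "action": "clicked_ad",
        "campaign": "spring_launch",
        "ts": datetime.utcnow(),
    })

    # Pull the user's activity from the last 10 minutes to personalize the page.
    recent = events.find(
        {"user_id": 42, "ts": {"$gte": datetime.utcnow() - timedelta(minutes=10)}}
    ).sort("ts", DESCENDING).limit(20)

    for event in recent:
        print(event["action"], event["ts"])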

However, this does not mean that relational databases are going away. In fact, it is likely that in larger implementations, NoSQL and SQL will function together. Just as NoSQL was designed to solve particular use cases, relational databases solve theirs. Relational databases excel at organizing structured data and are the standard for serving up ad-hoc analytics and business intelligence reporting. In fact, Apache Hadoop even has a separate project called Sqoop that is designed to link Hadoop with structured data stores. Most likely, those who implement NoSQL will maintain their relational databases for legacy systems and for reporting on the data in their NoSQL clusters.
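
Here is a minimal sketch of that division of labor, reusing the hypothetical pymongo event collection from above and using a local SQLite database to stand in for the relational reporting store. (Sqoop itself moves data between Hadoop and relational databases; this only illustrates the same roll-up-then-report pattern in miniature.)

    import sqlite3
    from pymongo import MongoClient

    # NoSQL side: raw, schema-free interaction events (hypothetical collection).
    events = MongoClient("mongodb://localhost:27017")["analytics"]["user_events"]

    # Relational side: a fixed-schema summary table for ad-hoc BI reporting.
    db = sqlite3.connect("reporting.db")
    db.execute("""CREATE TABLE IF NOT EXISTS daily_actions (
                      day TEXT, action TEXT, events INTEGER,
                      PRIMARY KEY (day, action))""")

    # Roll the raw events up by day and action inside MongoDB...
    pipeline = [{
        "$group": {
            "_id": {"day": {"$dateToString": {"format": "%Y-%m-%d", "date": "$ts"}},
                    "action": "$action"},
            "events": {"$sum": 1},
        }
    }]

    # ...then load the aggregates into the relational table for reporting tools.
    rows = [(doc["_id"]["day"], doc["_id"]["action"], doc["events"])
            for doc in events.aggregate(pipeline)]
    db.executemany("INSERT OR REPLACE INTO daily_actions VALUES (?, ?, ?)", rows)
    db.commit()

The detail stays in the NoSQL cluster where it can be served quickly; the summaries land in a relational schema where standard SQL and BI tooling can query them.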

(more…)

The Big Data Revolution – Part 1 – The Origins

Tuesday, March 20th, 2012


For many years, companies collected data from various sources, and that data often found its way into relational databases like Oracle and MySQL. However, the rise of the internet, Web 2.0, and more recently social media brought an enormous increase not only in the amount of data created, but also in the types of data. No longer was data relegated to types that fit easily into standard data fields; it now came in the form of photos, geographic information, chats, Twitter feeds, and emails. The age of Big Data is upon us.

A study by IDC titled “The Digital Universe Decade” projects a 45-fold increase in annual data by 2020. In 2010, the amount of digital information created was 1.2 zettabytes, and 1 zettabyte equals 1 trillion gigabytes. To put that in perspective, 1.2 zettabytes is the equivalent of a full-length episode of “24” running continuously for 125 million years, according to IDC. That’s a lot of data. More importantly, this data has to go somewhere, and the report projects that by 2020, more than a third of all digital information created annually will either live in or pass through the cloud. With all this data being created, the challenge will be to collect it, store it, and analyze what it all means.

Business intelligence (BI) systems have always had to deal with large data sets. Typically the strategy was to pull in “atomic”-level data at the lowest level of granularity, then aggregate the information into a consumable format for end users. In fact, having a lot of data was preferable because you could also “drill down” from the aggregation layer to the more detailed information as needed.
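
A small sketch of that pattern in plain Python, with made-up transaction records: the atomic rows are kept at the lowest grain, an aggregate view is what end users normally see, and a drill-down is just a filter back onto the detail.

    from collections import defaultdict

    # Hypothetical "atomic"-level rows at the lowest grain: one per transaction.
    atomic = [
        {"region": "West", "product": "A", "revenue": 120.0},
        {"region": "West", "product": "B", "revenue": 75.0},
        {"region": "East", "product": "A", "revenue": 200.0},
    ]

    # Aggregation layer: roll revenue up to the region level for end users.
    by_region = defaultdict(float)
    for row in atomic:
        by_region[row["region"]] += row["revenue"]
    print(dict(by_region))  # {'West': 195.0, 'East': 200.0}

    # Drill-down: because the atomic rows were kept, "West" can be expanded
    # back into its product-level detail on demand.
    print([row for row in atomic if row["region"] == "West"])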

Large Data Sets and Sampling

Coming from a data background, I find that dealing with large data sets is both a blessing and a curse. One product I managed analyzed carrier share of wireless numbers. According to CTIA, there were 322.9 million wireless subscribers in 2011, and the number was growing. While that doesn’t seem like a lot of data at first, if each wireless number is a unique identifier, any number of activities can be associated with each number. The amount of information generated from each number could therefore be extensive, especially because the key element was seeing changes over time. For example, after 2003, mobile subscribers in the United States were able to port their numbers from one carrier to another. This is of great importance to market research, since a shift from one carrier to another indicates churn and also affects the market share of carriers in that Metropolitan Statistical Area (MSA).

Given that it would take a significant amount of resources to poll every household in the United States, market researchers often employ a technique called sampling: a panel that represents the population is used to stand in for the activity of the overall population you want to measure. This is a sound scientific technique if done correctly, but it’s not without its perils. For example, it’s often possible to get a +/- 1% error at 95% confidence for a large population, but what happens once you start drilling down into more specific demographics and geographies? The risk is not only having enough sample (you can’t have one subscriber represent the activity of a large group, for example) but also ensuring that the sample is representative (is the subscriber you are measuring representative of the population you want to measure?). Sampling errors of this kind are a classic problem of working with panelists. It’s fairly difficult to be completely certain that your sample is representative unless you’ve already measured the entire population (using it as a baseline), but if you’ve already done that, why bother sampling?
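
To see why drilling down erodes that +/- 1% figure, here is a rough sketch of the standard margin-of-error calculation for a sampled proportion (1.96 standard errors for 95% confidence, worst case at p = 0.5); the panel sizes are made up for illustration.

    from math import sqrt

    def margin_of_error(n, p=0.5, z=1.96):
        """Approximate 95% margin of error for a share estimated from n panelists."""
        return z * sqrt(p * (1 - p) / n)

    # A national panel can look very precise...
    print(f"n=10,000 panelists: +/- {margin_of_error(10_000):.1%}")  # about 1.0%

    # ...but slicing down to one carrier in one MSA shrinks n and widens the error.
    print(f"n=400 panelists:    +/- {margin_of_error(400):.1%}")     # about 4.9%
    print(f"n=50 panelists:     +/- {margin_of_error(50):.1%}")      # about 13.9%

And a wider interval says nothing about whether the panel is representative in the first place; that bias does not shrink as the sample grows.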

(more…)

Thanks to All Who Attended Cloud Connect 2012

Friday, March 9th, 2012

From February 13-16, 2012, in Santa Clara, CA, GoGrid sponsored Cloud Connect 2012, an expo devoted to educating professionals seeking to learn more about the benefits of Cloud Computing. We have been a long-time sponsor of this show, and each year it seems to get better, not only in the caliber of the content being presented, but also in the level of cloud computing expertise that attendees bring.

[Photo: GoGrid booth at Cloud Connect 2012]

Attending many of these conferences as a sponsor, exhibitor, and interested party, I have seen a great evolution not only in knowledge and education but also in the cloud services being presented by companies at the show. A few years ago, it was all about “what is cloud” and how to define it. The past few years have allowed us to fine-tune that definition and move beyond it to rolling up our sleeves and implementing cloud solutions. I’m definitely encouraged by the progress companies are making with their cloud innovations and by the individuals looking to capitalize on this influx of knowledge.


Talking with customers and prospects who are looking for or implementing cloud infrastructure solutions gives insight into what is working in the cloud and what people are really looking for. For example, a few years ago we introduced the concept of Hybrid Hosting: the ability to mix and match virtual and physical servers within the same architecture, all managed through a single pane of glass, so to speak. In fact, many of our recent Case Studies show that hybrid environments are the reason these companies turned to GoGrid for their cloud solution.

GoGrid Customer Presentation – Microgroove

(more…)

GoGrid at Cloud Connect 2012: Personalizing the Cloud

Monday, February 13th, 2012

GoGrid is one of the Platinum Sponsors of this week’s Cloud Connect 2012 conference and Expo at the Santa Clara Convention Center. The event promises to be a memorable one for cloud newcomers as well as those of us trying to keep up with the blazing pace of cloud innovation.


This year, we’re particularly excited to be focusing on GoGrid’s hybrid infrastructure solution, which we think combines the best of both the physical and virtual worlds. We believe that your company is unique, and your infrastructure should be, too. Stop by booth 709 to find out what your unique “cloud fingerprint” looks like. Chances are it’s a flavor of our hybrid solution.


Presentations

Maybe you’re wondering whether to keep your dedicated servers or move to the cloud. What if you could have it all? Join one of our solutions architects as he walks through real-life examples of how hybrid hosting can improve your business’s infrastructure: Tuesday, Feb. 14, 3:35 – 3:55pm in the Cloud Solutions Theater on the Expo Floor. Here’s the presentation description: “Different businesses have different infrastructure needs, and the choices of clouds, colocation, or dedicated servers can be daunting, if not confusing. So why choose just one when GoGrid’s hybrid architecture (a union of the best of virtual and physical) provides options for both flexibility and growth? Physical hardware provides guaranteed, dedicated, high performance coupled with an assurance of strict data control and security, while cloud architecture scales when your business demands it. Learn the secrets of hybrid hosting and how it can improve your business’s infrastructure in this 20-minute walk-through.”

(more…)