April 16th, 2014 by Team GoGrid - 515 views
Just as cloud computing has revolutionized how corporate IT departments interact with their networks, the way in which business is conducted across all markets has also changed significantly. Because the technology provides employees with a different way of performing tasks, the manner in which managers and executives make decisions has been radically influenced by an influx of data points.
A construction crew surveys an ongoing project.
When it comes to traditional business practices, everything has become a lot easier thanks to cloud computing. For most large enterprises, it’s not an arduous chore for employees to access a Word document from a tablet, edit the file, and share it with coworkers. As far as the industrial sector is concerned, reporting mechanical deficiencies or malfunctions can happen in near real time because many workers are now equipped with smartphones, some of them supplied by their employers.
Digital information changes everything
In an interview with InformationWeek, former Netflix Chief Cloud Architect Adrian Cockcroft noted that strong integration across all teams and departments is imperative for a company to ensure its survival. Cockcroft spent 7 years with the company developing the architecture needed to launch new ways of finding and showcasing films. In 2008, Netflix began moving off its on-premise databases to cloud servers. Afterward, the former CCA began noticing some fundamental changes throughout the organization.
Cockcroft told the news source that the increased speed and flexibility offered by the off-premise solution gave Netflix its competitive edge. During its fledgling years, the company couldn’t match its competitors’ size, so it had to develop and act on initiatives more quickly than other film distributors. Basically, the company had to make a concerted effort to eliminate inefficient communication between software designers and engineers.
“We put a high-trust, low-process environment in place with few hand-offs between teams,” said Cockcroft.
Read the rest of this entry » «What Cloud Computing Means for Industrial Infrastructure»
April 11th, 2014 by Kole Hicks - 1,349 views
A walk through in-memory, general compute, and mass storage options for Cassandra, MongoDB, Riak, and HBase workloads
I recently had the pleasure of attending Cassandra Tech Day in San Jose, a developer-focused event where people were learning about various options for deploying Cassandra clusters. As it turns out, there was a lot of buzz surrounding the new in-memory option for Cassandra and the use cases for it. This interest got me thinking about how to map the options customers have for running Big Data across clouds.
For a specific workload, NoSQL customers may want to have the following:
1. Access to mass storage servers for files and objects (not to be confused with block storage). Here we’re talking about on-demand access to terabytes of raw spinning-disk volumes for running a large storage array (think storage hub for Hadoop/HBase, Cassandra, or MongoDB).
2. Access to high-RAM options for running in-memory workloads with the fastest possible response times, such as when running the in-memory version of Cassandra or running Riak or Redis in-memory.
3. Access to high-performance SSDs to run balanced workloads. Think about what happens after you run a batch operation: if you’re relating information back to a product schema, you may want to push that data into something like PostgreSQL, SQL Server, or even MySQL and have access to block storage.
4. Access to general-purpose instances for dev and test or for workloads that don’t have specific performance SLAs. This ability is particularly important when you’re trialing and evaluating a variety of applications. GoGrid’s customers, for example, leverage our 1-Button Deploy™ technology to quickly spin up dev clusters of common NoSQL solutions, from MongoDB to Cassandra, Riak, and HBase (see the sketch after this list).
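To make option 4 concrete, here’s a minimal connect-and-create smoke test you might run against a freshly deployed Cassandra dev cluster. It’s a sketch, not GoGrid tooling: the seed-node address is a placeholder, and it assumes the open-source DataStax Python driver (pip install cassandra-driver).

```python
# A minimal sketch, not GoGrid tooling: assumes a dev cluster whose seed
# node is reachable at a placeholder address and the open-source DataStax
# Python driver (pip install cassandra-driver).
from cassandra.cluster import Cluster

cluster = Cluster(["10.1.1.10"])  # placeholder seed-node IP
session = cluster.connect()

# A keyspace sized for a single-node dev/test cluster.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("demo")
session.execute(
    "CREATE TABLE IF NOT EXISTS events (id uuid PRIMARY KEY, payload text)"
)
cluster.shutdown()
```

The same quick connect-and-write check works for MongoDB, Riak, or HBase dev clusters using their respective client drivers.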
Read the rest of this entry » «Comparing Cloud Infrastructure Options for Running NoSQL Workloads»
April 10th, 2014 by Team GoGrid - 792 views
The more Big Data enterprises continue to amass, the more potential risk is involved. It would be one matter if it were simply raw material without any clearly defined meaning; however, data analytics tools—combined with the professionalism of tech-savvy employees—allow businesses to harvest profit-driving, actionable digital information.
Compared to on-premise data centers, cloud computing offers multiple disaster recovery models.
Whether the risk is from a cyber-criminal who gains access to a database or a storm that cuts power, it’s essential for enterprises to have a solid disaster recovery plan in place. Because on-premise data centers are prone to outages in the event of a catastrophic natural event, cloud servers provide a more stable option for companies requiring constant access to their data. Numerous deployment models exist for these systems, and most of them are constructed based on how users interact with them.
How the cloud can promote disaster recovery
According to InformationWeek’s 2014 State of Enterprise Storage Survey, only 41 percent of respondents said they have a disaster recovery (DR) and business continuity protocol that they regularly test. Although this finding suggests a lack of preparedness among the remaining 59 percent, the study showed that business leaders are beginning to see the big picture and place their confidence in cloud applications.
The source noted that cloud infrastructure and Software-as-a-Service (SaaS) automation software let organizations deploy optimal DR without the hassle associated with a conventional plan. Traditionally, companies backed up their data on physical disks and shipped them to storage facilities. This method is no longer workable because many enterprises are constantly amassing and refining new data points. For example, Netflix collects an incredible amount of specific digital information on its subscribers through its rating system and then uses it to recommend new viewing options.
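What replaces the disk-shipping step is continuous, automated backup to cloud storage. Below is a minimal sketch of a nightly database dump shipped to an S3-compatible object store; the endpoint, bucket, and database names are illustrative placeholders (not GoGrid’s API), and it assumes the boto3 client and pg_dump are installed.

```python
# A minimal sketch: nightly database dump shipped to an S3-compatible
# object store. Endpoint, bucket, and database names are illustrative
# placeholders, not GoGrid's API; assumes boto3 and pg_dump are installed.
import subprocess
from datetime import datetime, timezone

import boto3

ENDPOINT = "https://objects.example-cloud.com"  # placeholder endpoint
BUCKET = "dr-backups"                           # placeholder bucket

def backup_database() -> None:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_path = f"/tmp/appdb-{stamp}.sql"
    # Dump the database to a local file first...
    subprocess.run(["pg_dump", "--file", dump_path, "appdb"], check=True)
    # ...then copy it off-site, replacing the shipped-disk step.
    s3 = boto3.client("s3", endpoint_url=ENDPOINT)
    s3.upload_file(dump_path, BUCKET, f"nightly/appdb-{stamp}.sql")

if __name__ == "__main__":
    backup_database()
```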
The news source also acknowledged that the issue isn’t just about recovering data lost during an outage, but about being able to run the programs that process and interact with that information. In fact, due to the complexity of these infrastructures, many cloud hosts offer DR-as-a-Service.
Read the rest of this entry » «Be Prepared with a Solid Cloud Infrastructure»
April 8th, 2014 by Mario Duarte - 827 views
A major vulnerability in the OpenSSL libraries was announced this morning. According to PCWorld, “The flaw, nicknamed ‘Heartbleed,’ is contained in several versions of OpenSSL, a cryptographic library that enables SSL (Secure Sockets Layer) or TLS (Transport Layer Security) encryption. Most websites use either SSL or TLS, which is indicated in browsers with a padlock symbol. The flaw, which was introduced in December 2011, has been fixed in OpenSSL 1.0.1g, which was released on Monday [April 7].”
We want to ensure all our customers are aware of this vulnerability so those impacted can take appropriate measures. The following description of Heartbleed is from http://heartbleed.com:
“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.”
GoGrid has already performed an extensive audit of our environment and has determined that none of our customer-supporting sites—including our management console, wiki, and secure signup—is exposed to this vulnerability.
If you are permitting SSL/TLS traffic to your servers, however, a firewall won’t block this attack. This is a serious vulnerability that can significantly expose your environment. GoGrid recommends you review National Vulnerability Database entry CVE-2014-0160 as soon as possible to determine whether the OpenSSL vulnerability applies to your organization and then take corrective action based on your specific security policies, if necessary.
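As a first quick check, the snippet below reports which OpenSSL library your Python interpreter links against and flags the affected range (1.0.1 through 1.0.1f, fixed in 1.0.1g). This is only a sketch for one host’s local library; it doesn’t test the services you actually expose, so still review CVE-2014-0160.

```python
# Minimal sketch: flag the Heartbleed-affected OpenSSL range on this host.
# Checks only the library Python links against, not your exposed services.
import ssl

print("Linked OpenSSL:", ssl.OPENSSL_VERSION)

major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO
# Patch letters map to numbers: 1.0.1f -> patch 6; the fixed 1.0.1g -> 7.
if (major, minor, fix) == (1, 0, 1) and patch < 7:
    print("Potentially vulnerable to Heartbleed (CVE-2014-0160); upgrade to 1.0.1g.")
else:
    print("This build is outside the affected 1.0.1 to 1.0.1f range.")
```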
April 8th, 2014 by Barbara Jurin - 975 views
If you’re a software developer, you’ve probably already used open-source code in some of your projects. Until recently, however, people who aren’t software developers probably thought “open source” referred to a new type of bottled water. But all that’s beginning to change. Now you can find open-source versions of everything from Shakespeare to geospatial tools. In fact, the first laptop built almost entirely on open-source hardware just hit the market. In the article announcing the new device, Wired noted that, “Open source hardware is beginning to find its own place in the world, not only among hobbyists but inside big companies such as Facebook.”
Open-source technology has moved from experiment to mainstream partly because the concept itself has matured. Companies that used to zealously guard their proprietary software or hardware may now build some or all of it on open-source code and even give back to the relevant communities. Plus, repositories like GitHub, Bitbucket, and SourceForge make access to open-source code easy.
In its annual “Future of Open Source Survey,” North Bridge Venture Partners summarized 3 reasons support for open source is broadening:
1. Quality: Thanks to strong community support, the quality of open-source offerings has improved dramatically. They now compete with proprietary or commercial equivalents on features and can usually be deployed more quickly. Goodbye, vendor “lock-in.”
Read the rest of this entry » «Infographic: 2014 – The Year of Open Source?»