When is “good enough” the right product decision?

April 23rd, 2014 · 133 views

“If you are not embarrassed by the first version of your product, you’ve launched too late.”
– Reid Hoffman, Co-founder, LinkedIn

Here’s the scenario: You started with a great idea, partnered with an excellent tech founder, and got $1M in funding so you could get the first release out the door. Part of your new-found funding went to hiring three engineers. As the weeks of product development pass, you review the usability, demo the product for prospects, and gather feedback on how to make it better. The engineers are working long hours to complete the first useful release for beta. Every usability review with the Advisory Board or prospects generates lots of feedback about what works and what doesn’t, and you’re making changes, often daily.

One night you wake up and wonder, “Will we be tweaking this product forever? Will it ever get out the door so we can close some sales?” It’s time to have the conversation about what is “good enough” to ship. That means it’s time to revisit the original set of product requirements—the ones you and your team agreed needed to be implemented to ship the product. Go back to work with the team and scrub the list down to the bare minimum that needs to be in the first version. Everyone will have opinions about what needs to be in the product when you ship. Justifications for including requirements may sound something like these:

“We won’t be able to reach one of our vertical market targets.”
“We’ll have a product that will only scale to 1M requests/timeframe and we need 10M.”
“Beta users hate the UI.”

During the scrub remember to ask, “What’s the cost of not implementing this functionality? Will we be able to add this functionality later without re-architecting the product?” Asking these questions lets you and the team make an informed business decision about minimum viable functionality. And at the end of your discussion, remember to reassure the team this sort of dialogue is healthy because it helps the company stay focused by prioritizing functionality into the releases on your road map—and ultimately drives your success.

A few years ago I had a great team that was working endless hours on a new workflow product. We started with requirements that were loosely defined and easily interpreted differently by each member of the team. Our usability expert seemed to re-interpret the same requirement each week, for example, but with the honest intent of making the product better. When it became clear we weren’t going to meet our functional complete date, I called the Engineers, PM, and QA together. As we scrubbed the requirements, we realized we were going to deliver 60% of what we originally thought was needed, but we still had a very useful product. We finalized our definition by building an in-scope/out-of-scope list as a team for the rest of the company. And although it was a difficult conversation for the team to have, we delivered the first version—and got first-mover advantage. So in the end, our 60%-ready first release actually turned out to be “good enough.”

How Public Organizations Should Treat Big Data

April 22nd, 2014 · 302 views

Though the “only human” argument certainly doesn’t apply to Big Data, enterprises and public organizations often expect too much out of the technology. Some executives are frustrated by results that don’t necessarily correlate with their predetermined business plans, and others consider one-time predictive conclusions to be final. The problem is, there’s no guarantee that analytical results will be “right.”

A government-themed action key

Public authorities interested in integrating Big Data into their cloud servers need to understand two things. First, digital information possesses no political agenda, lacks emotion, and perceives the world in a completely pragmatic manner. And second, data changes as time progresses. For example, just because a county in Maine experienced a particularly rainy spring doesn’t mean that farming soil will remain moist — future weather conditions may drastically alter the environment.

Benefiting from “incorrect” data
If a data analysis program harvests information from one source over the course of an hour and then attempts to develop conclusions, the system’s deductions will be correct to the extent that it accurately translated ones and zeroes into actionable intelligence. However, because the source from which the data was aggregated continues to produce new, variable data, that data may eventually contradict the original deduction.
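The point above can be illustrated with a toy sketch (the readings and threshold are hypothetical, not from any real system): a deduction drawn from the first hour of data holds only until the same source produces enough new data to contradict it.

```python
from statistics import mean

def conclusion(readings, threshold=50.0):
    """Return a simple deduction from the data seen so far."""
    return "above threshold" if mean(readings) > threshold else "below threshold"

# Data harvested during the first hour supports one deduction...
first_hour = [62, 58, 65, 60]
early_call = conclusion(first_hour)    # "above threshold" (mean 61.25)

# ...but the source keeps producing new, variable data that contradicts it.
full_day = first_hour + [30, 28, 35, 31, 29, 27]
later_call = conclusion(full_day)      # "below threshold" (mean 42.5)

print(early_call, later_call)
```

The "conclusion" never stopped being faithful to the data it saw; the data simply kept changing underneath it.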

Tim Harford, a contributor to the Financial Times, cited Google’s use of predictive analytics tools to chart how many people would be affected by influenza by applying algorithms to more than 50 million search terms. The problem was, four years after the project was underway, the company’s system was undermined by the Centers for Disease Control and Prevention’s recent aggregation of data, which showed that Google’s estimates of the spread of flu-like illnesses were overstated by a 2:1 ratio.

Taking the good with the bad
Although Harford held up Google’s failure as evidence that Big Data isn’t what software developers claim it to be, Forbes contributor Adam Ozimek noted that the study displayed one of the advantages of the technology: the ability to reject conclusions in light of consistently updated information. Furthermore, it’s important to note that Google only collected intelligence from one source, whereas the CDC was amassing data from numerous sources.


What Cloud Computing Means for Industrial Infrastructure

April 16th, 2014 · 777 views

Just as cloud computing has revolutionized how corporate IT departments interact with their networks, the way in which business is conducted across all markets has also changed significantly. Because the technology provides employees with a different way of performing tasks, the manner in which managers and executives make decisions has been radically influenced by an influx of data points.

A construction crew surveys an ongoing project.

When it comes to traditional business practices, everything has become a lot easier thanks to cloud computing. For most large enterprises, it’s not an arduous chore for employees to access a Word document from a tablet, edit the file, and share it with coworkers. As far as the industrial sector is concerned, reporting mechanical deficiencies or malfunctions can happen in near real time because many workers are now equipped with smartphones, some of them supplied by their employers.

Digital information changes everything 
In an interview with InformationWeek, former Netflix Chief Cloud Architect Adrian Cockcroft noted that strong integration across all teams and departments is imperative for a company to ensure its survival. Cockcroft spent seven years with the company developing the architecture needed to launch new ways of finding and showcasing films. In 2008, Netflix ceased operating through on-premises databases and moved to cloud servers. Afterward, Cockcroft began noticing some fundamental changes throughout the organization.

Cockcroft told the news source that the increased speed and flexibility offered by the off-premises solution gave Netflix its competitive edge. During its fledgling years, the company’s size couldn’t compare to that of its competitors, requiring it to develop and act on particular initiatives quicker than other film distributors. Basically, the company had to make a concerted effort to eliminate inefficient communication between software designers and engineers.

“We put a high-trust, low-process environment in place with few hand-offs between teams,” said Cockcroft.


Comparing Cloud Infrastructure Options for Running NoSQL Workloads

April 11th, 2014 · 1,624 views

A walk through in-memory, general compute, and mass storage options for Cassandra, MongoDB, Riak, and HBase workloads

I recently had the pleasure of attending Cassandra Tech Day in San Jose, a developer-focused event where people were learning about various options for deploying Cassandra clusters. As it turns out, there was a lot of buzz surrounding the new in-memory option for Cassandra and the use cases for it. This interest got me thinking about how to map the options customers have for running Big Data across clouds.

For a specific workload, NoSQL customers may want to have the following:

1. Access to mass storage servers for files and objects (not to be confused with block storage): on-demand access to terabytes of raw spinning-disk volumes for running a large storage array (think storage hub for Hadoop/HBase, Cassandra, or MongoDB).

2. Access to high-RAM options for running in-memory with the fastest possible response times: the kind of times you’d need when running the in-memory version of Cassandra, or when running Riak or Redis in-memory.

3. Access to high-performance SSDs to run balanced workloads. Think about what happens after you run a batch operation. If you’re relating information back to a product schema, you may want to push that data into something like PostgreSQL or even MySQL and have access to block storage.

4. Access to general-purpose instances for dev and test or for workloads that don’t have specific performance SLAs. This ability is particularly important when you’re trialing and evaluating a variety of applications. GoGrid’s customers, for example, leverage our 1-Button Deploy™ technology to quickly spin up dev clusters of common NoSQL solutions from MongoDB to Cassandra, Riak, and HBase.
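The four options above amount to a workload-to-profile mapping. A minimal sketch of that decision, with purely illustrative profile names (these are not GoGrid SKUs or any real provider's instance types):

```python
# Hypothetical mapping of NoSQL workload needs to cloud server profiles.
# Profile names and "fits" lists paraphrase the four options in the post.
PROFILES = {
    "mass_storage":    {"disk": "raw spinning volumes (TBs)",
                        "fits": ["Hadoop/HBase", "Cassandra", "MongoDB"]},
    "high_ram":        {"disk": "in-memory",
                        "fits": ["Cassandra in-memory", "Riak", "Redis"]},
    "ssd_balanced":    {"disk": "high-performance SSD",
                        "fits": ["PostgreSQL", "MySQL"]},
    "general_purpose": {"disk": "standard",
                        "fits": ["dev/test clusters"]},
}

def pick_profile(workload: str) -> str:
    """Return the first profile listing the workload; default to general purpose."""
    for name, spec in PROFILES.items():
        if any(workload in fit for fit in spec["fits"]):
            return name
    return "general_purpose"

print(pick_profile("Redis"))       # high_ram
print(pick_profile("PostgreSQL"))  # ssd_balanced
```

In practice the deciding inputs are the ones the list names: storage volume, latency SLA, and whether the cluster is production or dev/test.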


Be Prepared with a Solid Cloud Infrastructure

April 10th, 2014 · 914 views

The more Big Data enterprises continue to amass, the more potential risk is involved. It would be one matter if it were simply raw material without any clearly defined meaning; however, data analytics tools—combined with the professionalism of tech-savvy employees—allow businesses to harvest profit-driving, actionable digital information.

Compared to on-premise data centers, cloud computing offers multiple disaster recovery models.

Whether the risk is from a cyber-criminal who gains access to a database or a storm that cuts power, it’s essential for enterprises to have a solid disaster recovery plan in place. Because on-premises data centers are prone to outages during a catastrophic natural event, cloud servers provide a more stable option for companies requiring constant access to their data. Numerous deployment models exist for these systems, and most of them are constructed based on how users interact with them.

How the cloud can promote disaster recovery 
According to a report conducted by InformationWeek, only 41 percent of respondents to the magazine’s 2014 State of Enterprise Storage Survey stated they have a disaster recovery (DR) and business continuity protocol and regularly test it. Although this finding expresses a lack of preparedness by the remaining 59 percent, the study showed that business leaders were beginning to see the big picture and placing their confidence in cloud applications.

The source noted that cloud infrastructure and Software-as-a-Service (SaaS) automation software let organizations deploy optimal DR without the hassle associated with a conventional plan. Traditionally, companies backed up their data on physical disks and shipped them to storage facilities. This method is no longer workable because many enterprises are constantly amassing and refining new data points. For example, Netflix collects an incredible amount of specific digital information on its subscribers through its rating system and then uses it to recommend new viewing options.
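The reason shipped disks can't keep up is that the data keeps changing between shipments, while cloud DR can sync just the changes continuously. A minimal sketch of that incremental step, using a hypothetical ratings store (the data and function are illustrative, not Netflix's actual pipeline):

```python
def incremental_backup(store: dict, last_snapshot: dict) -> dict:
    """Return only the entries added or changed since the last snapshot --
    the small delta a cloud DR service ships continuously, instead of
    re-shipping a full disk image each time."""
    return {k: v for k, v in store.items() if last_snapshot.get(k) != v}

# Subscriber ratings keep changing, so each sync is small but frequent.
snapshot = {"user1": 4, "user2": 5}
live     = {"user1": 4, "user2": 3, "user3": 5}   # user2 re-rated, user3 is new

delta = incremental_backup(live, snapshot)
print(delta)   # {'user2': 3, 'user3': 5}
```

A full physical-disk shipment would copy all of `live` on every cycle; the delta approach moves only what changed, which is what makes continuous off-site protection feasible for constantly refreshed data.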

The news source also acknowledged that the issue isn’t just about recovering data lost during the outage, but about being able to run the programs that process and interact with that information. In fact, due to the complexity of these infrastructures, many cloud hosts offer DR-as-a-Service.
