
This is a guest post by technology executive Bruno Aziza.

“Big data” is everywhere. From social media startups to New York’s Central Park, everyone seems to be deploying big data analytics these days.

Gartner, one of the biggest analyst firms in the world, backs up the trend with big numbers: a recent report shows that $28 billion was spent on big data technologies this year, and over $230 billion will be spent through 2016. That $230 billion is almost as much as the GDP of Portugal.


However, deploying big data technology takes big bucks. Most companies don’t have the IT budget for it, and can’t afford to hire a data scientist or a data services team.

If the trend is to reach companies of all sizes, a few problems will need to be ironed out first.

Big data is too expensive!

You might have heard all about the exploits of the biggest players: Facebook stores about 100 terabytes of data about its users, and NASA streams approximately 24 terabytes every day (full disclosure: NASA is one of my company’s customers). These numbers are indeed impressive.

How much does it cost to work with so much data? Even with Amazon Redshift’s aggressive pricing, NASA would have to pay more than $1 million in storage costs alone for just 45 days of data. That number is consistent with a New Vantage Partners survey, which puts the average big data project at between $1 million and $10 million.
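To get a feel for where a figure like that comes from, here is a minimal back-of-the-envelope sketch in Python. The 24-terabytes-per-day number is NASA’s from above; the roughly $1,000-per-terabyte-per-year price is an assumption based on the reserved-instance rate Amazon advertised for Redshift at launch, so read the output as an order-of-magnitude illustration rather than an actual quote.

```python
# Back-of-the-envelope: what 45 days of NASA-scale ingest costs to keep in a
# cloud data warehouse. The per-terabyte price is an assumption (roughly the
# "under $1,000/TB/year" figure Amazon advertised for Redshift's 3-year
# reserved tier), not an actual bill.

DAILY_INGEST_TB = 24            # NASA's reported daily stream (from the article)
DAYS = 45                       # the 45-day window cited above
PRICE_PER_TB_PER_YEAR = 1000.0  # assumed warehouse price, USD

accumulated_tb = DAILY_INGEST_TB * DAYS              # 24 TB/day * 45 days = 1,080 TB
annual_storage_cost = accumulated_tb * PRICE_PER_TB_PER_YEAR

print(f"Data accumulated: {accumulated_tb:,} TB")
print(f"Annual warehouse cost at ${PRICE_PER_TB_PER_YEAR:,.0f}/TB/year: ${annual_storage_cost:,.0f}")
# -> roughly $1.1M per year just to hold 45 days' worth of data,
#    before any compute, replication, or staffing costs.
```

Swap in your own daily volume and the per-terabyte rate your vendor actually quotes you, and the point still holds: storage alone eats a mid-sized company’s entire IT budget long before anyone gets to the analytics.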

Most CIOs, according to a recent survey, responded that their budget won’t cover the cost of a big data deployment. The cost of storing and processing this data is simply too high. We need to approach big data differently and design solutions that allow smaller companies to take advantage of this opportunity.

Big data does not necessarily have to be big 

This brings us to the second reason why the big data market is flawed. Today, a big deal is made of the vendors who partner with the largest technology firms to work on petabyte-scale data. Yet even SAP’s own research shows that 95 percent of companies use only between 0.5 and 40 terabytes of data.

The amount of data that Facebook and NASA are crunching remains the exception, not the norm. The truth is, you don’t have to be a large company to leverage your data. If you look at the range of companies in the U.S., you’ll find more than 50,000 that have between 20 and 500 employees, and most of them, I’d argue, are trying to solve data problems at scale.

The biggest market for big data is not just the Fortune 50; it is the Fortune 500,000. Why, then, do we focus so much on the exceptional few, when the majority of companies that need help are not in the Fortune 50 and do not have petabyte-scale problems?

Sometimes, I wonder what would happen if we changed the definition of big data. What if, instead of focusing on the proverbial 3 V’s (velocity, volume, and variety), we tried something like this: “Big data is a subjective state that describes the situation a company finds itself in when its infrastructure can’t keep pace with its data needs.”

This definition might not be as glamorous as others, but it sure would be closer to the reality most companies are trying to grapple with today.

Bruno Aziza is Vice President of Marketing at big data analytics company SiSense.

Big data image via Bruce Rolff // Shutterstock
