Amazon Web Services, currently the undisputed heavyweight champion of cloud infrastructure providers, is looking at adding big-data services to its already wide spectrum of capabilities, with new offerings aimed at simplifying and lowering the cost of crunching the torrents of information pouring in hard and fast.

Andy Jassy, senior vice president of Amazon’s cloud business, alluded to the plans in an interview AllThingsD published today:

And I think you’ll see ius [sic] adding capabilities for companies with large data sets that want to do compute and processing, and then make that data useful. That’s the whole big data thing that everyone is talking about. I think you’ll see us add services there that make it easier and less expensive for customers to do that.

Between the lines of that passage lies the idea that maybe the existing big-data services on Amazon’s cloud — such as Elastic MapReduce for Hadoop, DynamoDB for hosted NoSQL databases, and the Redshift data-warehouse service — aren’t enough.
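For a sense of what those existing services look like from a developer’s seat, here is a minimal, illustrative sketch using the boto3 Python SDK (a later generation of AWS tooling than what was current when this piece ran). The cluster names, table names, instance types, and credentials are placeholders, not anything Amazon has announced.

```python
# Illustrative only: driving the AWS big-data services named above with the
# boto3 Python SDK. All names, sizes, and credentials are placeholders.
import boto3

# Spin up a small Elastic MapReduce (Hadoop) cluster.
emr = boto3.client("emr", region_name="us-east-1")
emr.run_job_flow(
    Name="example-hadoop-cluster",          # hypothetical cluster name
    ReleaseLabel="emr-5.36.0",              # assumed EMR release label
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)

# Create a DynamoDB table for hosted NoSQL storage.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")
dynamodb.create_table(
    TableName="example-events",             # hypothetical table name
    KeySchema=[{"AttributeName": "event_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "event_id", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Launch a two-node Redshift data warehouse.
redshift = boto3.client("redshift", region_name="us-east-1")
redshift.create_cluster(
    ClusterIdentifier="example-warehouse",  # hypothetical identifier
    NodeType="dc2.large",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",        # placeholder credential
    NumberOfNodes=2,
)
```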

Let’s break it down. In the NoSQL area, MongoDB, for one, has serious mindshare and a large user base — surely you recall that $150 million funding round from last month.

Amazon cloud competitors have been advancing their own Hadoop offerings, with news coming last week from Rackspace, Windows Azure, IBM’s SoftLayer, Verizon Terremark, and CenturyLink’s Savvis.

Meanwhile, data-warehouse territory got rocked two weeks ago when Teradata announced it was bringing its widely used data warehouse into the cloud as a service. That matters because Teradata has loads of big customers that might be enticed by the idea of analyzing data in the cloud while sticking with a trusted vendor.

What might Amazon want to roll out? One worthy idea would be something along the lines of Joyent’s Manta service, which runs compute jobs directly on the storage where the data lives. Then again, maybe Amazon will try its hand at a “bare-metal” offering that does not virtualize servers. That configuration has been popular on SoftLayer’s infrastructure, which is increasingly important now that SoftLayer is part of IBM.


Of course, some companies might be hesitant to store valuable data on external infrastructure. Revelations about the National Security Agency’s snooping this year could divert revenue that otherwise would have gone to cloud companies. But hey, we’ve seen that the CIA is willing to use the cloud, particularly Amazon’s. Perhaps sentiment on cloud security this year will turn out to be a wash.

What could change, though, is the adoption of big-data services among startups and larger organizations alike, if Amazon follows through on Jassy’s latest statements about making big data in the cloud easier and cheaper. And given Amazon’s scale and track record of popularizing cloud tools, it’s hard to doubt the company can pull it off.
