
Game changer: Goodbye disk, hello superfast in-memory databases

Image Credit: Trifonenko Ivan. Orsk/Shutterstock

When it comes to enterprise computing, to paraphrase the always-eloquent Meghan Trainor, it’s all about that database. Whether you’re a Netflix serving up the latest videos to millions of consumers, a national retailer trying to figure out what’s in stock, or a bank or hospital trying to analyze trends among all the big data you’ve collected, there’s a database management system that makes it possible.

But as the volume and velocity of data grow exponentially, speedy access to databases has become problematic, so much so that hard disks, and even the faster solid-state drives (SSDs), can't keep up. That's where in-memory computing comes in.


In-memory computing has been maturing quickly over the last three years. It had been used mostly in small-scale databases but is now ready to hit the big time due to changes in the technology and the economics of handling big data. In-memory computing will have a huge impact on how most data is accessed, resulting in quicker transactions (via in-memory transactional databases) and better analysis (via in-memory analytical databases).

Simply put, in-memory computing is about holding boatloads of data, perhaps all of your data, in faster but more expensive DRAM rather than on disk. Some people contort the meaning of in-memory computing to include SSDs. When I talk about in-memory computing, I mean DRAM.
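To make the idea concrete, here is a minimal sketch of an in-memory key-value store in which the entire working set lives in a dictionary (i.e., in DRAM) and disk is used only for archival snapshots. The names (`MemStore`, `archive`) are illustrative, not any particular product's API:

```python
# Minimal sketch of an in-memory store: the hot path never touches disk;
# disk is relegated to archival snapshots.
import json


class MemStore:
    """Holds the full working set in a Python dict, i.e., in DRAM."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value      # write lands in memory

    def get(self, key):
        return self._data.get(key)   # read is served from memory, no disk I/O

    def archive(self, path):
        """Disk as the new 'tape': dump a snapshot for durability."""
        with open(path, "w") as f:
            json.dump(self._data, f)


store = MemStore()
store.put("user:42", {"name": "Ada", "balance": 100})
print(store.get("user:42")["name"])  # -> Ada
```

Real in-memory databases such as Redis follow a similar split: reads and writes are served from RAM, with periodic snapshots or append-only logs on disk for durability.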


It’s a significant architectural shift – and one that’s right for the times. With millions of clients (some of them devices) making simultaneous requests to servers, apps are drowning in requests, data, and concurrency; yet they must deliver responsiveness and performance.

That performance comes at a cost, but it’s a cost that’s dropped quickly of late. In 1990, a terabyte of state-of-the-art DRAM cost $1,000,000; five years ago, it was about $10,000; soon it’ll be $1,000.

Also, the in-memory databases of today have become better and broader than they were just a few years ago, and they’re able to address more general-purpose use cases.

Software architects have seen the light and are designing applications that can leverage in-memory databases rather than relying on stodgy disk-based transactions.

Reading and writing from memory is more than a thousand times faster than reading and writing from disk. (In-memory performance gains vary by application, data volume, data complexity, and concurrent-user load, so, depending on the app, the advantage may be "only" a still-eye-popping 100x.)
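You can see the gap with a rough micro-benchmark: repeatedly reading a small record through the filesystem versus fetching the same record from a dict. This is a sketch, not a rigorous benchmark; absolute numbers vary wildly by hardware, and the OS page cache flatters the disk path, so real spinning-disk gaps are far larger:

```python
# Rough micro-benchmark of the memory-vs-disk gap. Not rigorous: the OS
# page cache serves repeated file reads, so this understates disk latency.
import os
import tempfile
import time

N = 10_000
payload = b"x" * 256

# Disk path: open and read a small file on every access.
path = os.path.join(tempfile.mkdtemp(), "record.bin")
with open(path, "wb") as f:
    f.write(payload)

t0 = time.perf_counter()
for _ in range(N):
    with open(path, "rb") as f:
        f.read()
disk_s = time.perf_counter() - t0

# Memory path: the same record held in a dict.
table = {"record": payload}
t0 = time.perf_counter()
for _ in range(N):
    _ = table["record"]
mem_s = time.perf_counter() - t0

print(f"disk: {disk_s:.4f}s  memory: {mem_s:.4f}s  "
      f"ratio: {disk_s / mem_s:.0f}x")
```

Even with the page cache absorbing the physical I/O, the in-memory path typically wins by orders of magnitude.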

This all translates to significant boosts in performance for the same investment in dollars and manpower. You may no longer need to hire, for example, a $150K-a-year optimization engineer for a traditional database, since for the same money you can hold and serve all the data you need in fast DRAM, a thousand times faster.


Both analytics and transactional databases stand to benefit from in-memory.

But in-memory computing is particularly sweet for transactional databases because most transactional databases are only a few terabytes (analytics databases tend to be larger because they analyze big data). That's why I predict that in the next five years all transactional databases will be in-memory (DRAM). DRAM stands to become the new disk, while disk becomes the new "tape," or archival technology.

(Full Disclosure: I’ve invested in several in-memory companies: Redis, Hazelcast, and DataStax.)

Salil Deshpande is Managing Director at Bain Capital Ventures and has invested $150 million over nine years into 32 companies, mostly open source and software infrastructure, and mostly early, such as Redis Labs (the Redis NoSQL database), Hazelcast (in-memory computing platform for Java), DataStax (the Cassandra NoSQL database), Iron.io (lambda architectures), Typesafe (the Scala language; Akka and Play frameworks for Java), ZeroTurnaround (tools for Java developers), MuleSoft (integration), Buddy Media (social media marketing platform), SpringSource (Java app servers), Dynatrace (application performance management), Dropcam (wifi webcams with cloud DVR), Aria Systems (ERP for recurring revenue businesses), Vaxart (oral vaccines), Junglee Games (real-money gaming in India), and Lending Club (P2P lending). Salil previously worked for Sun Microsystems on CORBA, taught summer courses at Stanford University, and started three companies.


VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Learn More