Everyone is talking about the “cloud,” but is there anything new here? How is the “cloud” different from “internet” or “web?”
There is something new — and it’s a big deal — but it’s not what everyone usually talks about.
A few years ago, someone storing photos on Picasa would say, “I am using this website to store my photos” or “I put my photos on the Internet.” Now the same person would say, “I store my photos in the cloud.” What’s the difference? In most cases, none. In consumer language, the word “cloud” has replaced “Internet” or “web” — it is just more trendy!
And yet, something real is happening.
In the business-to-business world, the word “cloud” is just as pervasive as it is in the consumer world. The National Institute of Standards and Technology (NIST) has even attempted to define it, although the definition is incredibly complex:
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
This is actually accurate. But what is revolutionary and new here? Based on this description, the cloud is an evolution, not a revolution. It is the continuation of a series of changes initiated in the nineties with the advent of the commercial Internet, but its roots go back even further.
The first network-accessible “cloud” computing resources were already coming online some 50 years ago, when American Airlines launched SABRE in the early sixties.
In the eighties, bulletin board systems (BBSes) and Minitel were full of applications that were used through a network. The leading applications were yellow pages (search), travel reservations, order input systems and online dating.
In the late nineties, online services all converted to the Internet to benefit from lower costs and a larger audience. Shortly after, a new concept appeared: “SaaS,” or Software-as-a-Service. Some innovative software vendors realized that the Internet had become reliable enough for corporations to depend on it. We all know the success of Salesforce.com, which essentially continued and enhanced the order input systems of the BBS era.
A decade later, bandwidth has become so ubiquitous that it is in the air, and it is now common to access the Internet using smartphones. Most users have multiple devices from which they want to access their service, to the point that location has become nearly irrelevant.
With decades of improvement in available bandwidth and reliability, it is increasingly possible to rely on systems that are not on the same premises as the end user of the system. An innovative vendor, Amazon.com, saw that not only software but the computing infrastructure itself could be offered as a service. And thus the cloud was born. But fundamentally, there is not much difference between provisioning an instance on Amazon Web Services (AWS) and using a distant computer on a BBS 30 years ago.
And yet a revolution is happening; it is just happening elsewhere. It is the consumerization of IT.
How consumerization has driven infrastructure
For decades, innovation in information technology was driven by enterprise, government, and military needs. With the cloud wave, the locus of innovation has shifted: it is now the consumer who is the driving force in IT.
Serving consumers at the scale of the Internet is mind-boggling. The service has to be on, all the time, for everyone, despite the variety of situations, devices, time zones, and character sets. When you address such a large population, there are no “safe” maintenance windows. You can be almost certain that someone is using your service at any odd hour of the day or night, even on Christmas Eve! Not only do you need to deliver 24×7, ubiquitous, highly reliable computing, but you need to do it cheaply, so cheaply that you can sell it for pennies or make money from advertising.
Large websites such as Amazon, Google and Facebook have faced these challenges since the mid-2000s, and they independently arrived at the same solution: spreading workloads over many generic servers in a completely distributed architecture, where components can fail, be upgraded, or be replaced individually without material impact on the whole system, thus reducing manual operations to a fraction of those required with traditional IT systems. This approach is now the foundation of the cloud wave.
Amazon founder and chief executive Jeff Bezos documented through many interviews the process by which Amazon became the leader of public cloud services. Here is an excerpt of an interview with Bezos in Wired:
Approximately nine years ago we were wasting a lot of time internally because, to do their jobs, our applications engineers had to have daily detailed conversations with our networking infrastructure engineers. Instead of having this fine-grained coordination about every detail, we wanted the data-center guys to give the apps guys a set of dependable tools, a reliable infrastructure that they could build products on top of. The problem was obvious. We didn’t have that infrastructure. So we started building it for our own internal use. Then we realized, “Whoa, everybody who wants to build web-scale applications is going to need this.”
Google and Facebook had to build similar technology; they just took different business routes to get there. Google kept everything in-house to power its own applications, simply publishing a few white papers on what it had created (notably MapReduce, which is the foundation of Hadoop). For its part, Facebook contributed much of its infrastructure work to the open-source community (including projects like Cassandra and OpenCompute.org).
This kind of infrastructure has revolutionized software development. Instead of having to treat infrastructure as a hindrance (because it is too slow or too expensive), developers can now program the infrastructure to deliver exactly the amount of computing power, network resources or storage capacity that is required.
A new kind of developer
I am too old to talk about this revolution in any level of detail. What I can see, however, is that the kids developing applications today use completely different languages and paradigms than what I was using twenty years ago. Their approach to development is focused entirely on what they want the application to do. They do not have to manage the hardware in any way.
Typically these applications leverage a “web service” type of software architecture, making it extremely easy to link applications together. Look at how easy it is to have your Twitter feed show on Facebook and LinkedIn at the same time, or how you can easily log in to a site using your Facebook credentials. This is the result of a web service architecture.
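To make the idea concrete, here is a minimal sketch in Python of what that kind of glue can look like: one service is queried over HTTP and the result is reposted to another. This is only an illustration of the web-service pattern; the endpoints and tokens are hypothetical placeholders, not the actual Twitter, Facebook or LinkedIn APIs.

```python
# A minimal sketch of linking two applications through web services (HTTP + JSON).
# The endpoints and tokens below are hypothetical placeholders, not real APIs.
import requests

SOURCE_API = "https://api.example-microblog.com/v1/statuses/latest"  # placeholder
TARGET_API = "https://api.example-social.com/v1/posts"               # placeholder

def repost_latest_status(source_token: str, target_token: str) -> None:
    # Fetch the most recent status from the source service.
    resp = requests.get(SOURCE_API, headers={"Authorization": f"Bearer {source_token}"})
    resp.raise_for_status()
    status = resp.json()

    # Repost the same text to the target service.
    resp = requests.post(
        TARGET_API,
        headers={"Authorization": f"Bearer {target_token}"},
        json={"message": status["text"]},
    )
    resp.raise_for_status()

if __name__ == "__main__":
    repost_latest_status("SOURCE_TOKEN", "TARGET_TOKEN")
```

Because both sides speak plain HTTP and JSON, neither application needs to know anything about the other’s internals; that is what makes this kind of integration so cheap to build.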
Furthermore, since the infrastructure is programmable, these developers can treat the infrastructure itself as just another service. Now an application can request 1,000 servers, but only for the time it needs to compute a result, and then free those 1,000 servers up for some other task, all without any manual intervention. That’s the cloud!
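As a rough illustration of what “programming the infrastructure” can look like, the sketch below uses the AWS boto3 SDK to launch a small fleet of EC2 instances for the duration of a job and then release them. The region, AMI ID, instance type and count are placeholder assumptions, and a production version would add error handling and wait for the instances to become ready.

```python
# A rough sketch of programmable infrastructure using the AWS boto3 SDK.
# Region, AMI ID, instance type, and server count are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def run_batch_job(num_servers: int = 10) -> None:
    # Provision a fleet of generic servers only for the duration of the job.
    reservation = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=num_servers,
        MaxCount=num_servers,
    )
    instance_ids = [i["InstanceId"] for i in reservation["Instances"]]

    try:
        # ... distribute the workload across the instances here ...
        pass
    finally:
        # Free the servers as soon as the result is in, with no manual intervention.
        ec2.terminate_instances(InstanceIds=instance_ids)

if __name__ == "__main__":
    run_batch_job(num_servers=10)
```

The same few lines could just as easily request 1,000 servers as 10; the point is that capacity is acquired and released by the application itself, not by a ticket to an operations team.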
In fact, the NIST definition’s “minimal management effort or service provider interaction” is really an understatement. If an application becomes popular and requires more resources, it simply requests those resources from the Infrastructure-as-a-Service layer. This is what has made Amazon Web Services so popular with start-ups. Developers can now program the infrastructure and operations as part of their software development. That has led to a new term, “devops,” which makes explicit the merger of what used to be two completely different skill sets: software development and IT operations.
This new style of application development results in applications that are more fun, easier to use, more practical, and more reliably scalable, in both functionality and capacity. All of this is achieved with higher productivity in the development process.
This old man’s initial reaction is to dismiss the young kids’ approach. But when you see the success and stability of applications built this way, such as Facebook, SmugMug or Dropbox, to name a few, I am willing to bet the opposite: within ten years, this style of development will have become the standard in the enterprise.
On the hardware side, too, innovation is driven by consumer technology. The cost of silicon-based components is mostly capital cost: the cost of R&D plus the cost of building a fab. Consequently, the per-unit cost of silicon-based components is inversely proportional to the number of units sold. This is how solid-state storage, originally used in digital cameras, smartphones and USB keys, got down to a price point where it is now competitive with hard drives for certain business applications. Without the billions of units sold to the mass market, there would be no way for startups to use SSDs as a viable storage alternative today.
Employees seizing control
Finally, there is yet another way in which consumer technology is driving innovation. Until recently, employees had to make do with what was supplied by their IT department. They would sometimes complain that an application was slow, or a process not practical, but at the end of the day, they would use the tools they were given.
With so many applications and business processes now available through the web, this has changed. How many times have I tried to send a large attachment to someone, only to have it rejected by the corporate mailbox and be asked to resend it to the person’s private Gmail account? Tellingly, the more senior the person and the more confidential the document, the more likely this is to happen!
The tables are turning. Employees are savvy users of IT at home. Devices such as the iPad, Kindle, and Android smartphones, websites such as Facebook or Netflix, applications such as Xfinity, Skype or Evernote have become part of daily life. People can easily listen to the same music at home, in their car, or on vacation. With Xfinity, they can select a movie on their iPad and launch it on their home HDTV. They can share pictures and videos of their last party with all their friends, or with only some of them at the touch of a button.
And then they arrive at work, only to discover that it is impossible to validate a purchase order in their corporate enterprise resource planning (ERP) system from their smartphone. The gap between the amazing technologies they have at home and the lame ones they have at work is widening, and it is becoming intolerable for employees. Intolerable situations cause revolutions.
The consumerization of IT is the real revolution. It is the wind pushing the cloud. The real debate is not about public versus private cloud. The real challenge for corporate IT is to embrace this revolution, and to accept that an IT built from many simple web processes and many generic servers actually delivers better applications, more functionality, more agility and more reliability, at a fraction of the cost of big iron.
It is counterintuitive, but it is real. This is what the cloud is about.
Jérôme Lecat is the chief executive of Scality, a large-scale storage management startup. He is a serial entrepreneur and business angel with 15 years of internet start-up experience.