Archive for the ‘Cloud’ Category

Is it time to privatize the cloud?

Sunday, May 18th, 2014

Sure, the talking heads of cloud have long been preaching that public clouds will take over the world and become the new utility behemoths. The big (or early) ones – AWS, Azure, Google – have certainly seen rapid growth, and this growth will likely continue for a long time. But is the future all public cloud, or does the private cloud stand a chance?

To answer this question, several dimensions need to be considered. The first is privacy of information. Even if a company chooses a model of remote software delivery (think Office 365), regulation or plain company policy will often require that sensitive data be kept within the legal and physical control of the data owner – i.e., never made available to the cloud service provider. Recent news, and the legal framework under which many major cloud service providers operate, make this issue even more pressing: many companies will be legally bound to keep their data within their own infrastructure – especially those operating in sensitive areas such as communications, finance or government, which make up a sizable chunk of the potential cloud clientele. In short, even if you discard the cost of shoveling large amounts of data around, data privacy will be a serious obstacle to fully embracing public cloud solutions.

The second aspect has to do with scale and simple economics. When your installation is small, operational costs dominate and using a public cloud is the sensible choice. As your business scales, at some point internalizing your infrastructure simply becomes more cost efficient, as many AWS case studies will show you (think Netflix). Public cloud is a bit like Kickstarter: great for getting rolling, but once your business is ready for prime time, moving to your own infrastructure – like graduating from crowdfunding to a bank – is the more sensible choice.
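To make that break-even intuition concrete, here is a toy model. All the numbers below (per-VM prices, fixed overhead) are entirely hypothetical illustrations, not real cloud pricing:

```python
# Toy break-even model: public cloud vs. private cloud monthly cost.
# All figures are hypothetical, chosen only to illustrate the shape
# of the curves: public cloud is pure linear pay-per-use; a private
# cloud has a large fixed overhead but a lower marginal cost per VM.

def public_cloud_cost(vms, price_per_vm=70.0):
    """Public cloud: cost scales linearly with the number of VMs."""
    return vms * price_per_vm

def private_cloud_cost(vms, fixed_overhead=50_000.0, price_per_vm=25.0):
    """Private cloud: fixed cost (staff, facilities, amortized
    hardware) plus a lower marginal cost per VM."""
    return fixed_overhead + vms * price_per_vm

def break_even_vms(fixed_overhead=50_000.0, public_price=70.0,
                   private_price=25.0):
    """Number of VMs above which the private cloud becomes cheaper."""
    return fixed_overhead / (public_price - private_price)

if __name__ == "__main__":
    for vms in (100, 1_000, 2_000):
        pub, prv = public_cloud_cost(vms), private_cloud_cost(vms)
        cheaper = "public" if pub < prv else "private"
        print(f"{vms:>5} VMs: public ${pub:>9,.0f}  "
              f"private ${prv:>9,.0f}  -> {cheaper}")
    print(f"break-even at ~{break_even_vms():,.0f} VMs")
```

With these made-up parameters the crossover sits around a thousand VMs; the real crossover point will differ per business, but the fixed-overhead-versus-marginal-cost structure is the argument in a nutshell.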

The third aspect has to do with the segmentation of the cloud market along unique requirements. While initially a cloud service was just a uniform offering, as the market matures more stringent and differentiated requirements come to the fore. Whether it is real-time performance, availability, security or something else, segmentation is a fact of life – as illustrated by AWS’s differentiated offerings. With segmentation, however, come additional costs and smaller individual segments, where economies of scale have to take a back seat.

Putting it all together: while public cloud providers have done a decent job kickstarting the industry, it’s time to re-focus on private cloud deployments. Public cloud will continue to grow, but private cloud demand – volume-wise – will grow faster and will be a more differentiated market. How should the big public cloud companies respond?

To stay relevant, they need to expand their customer base to those who would never use their public cloud offering. How? The answer is simple: privatize their cloud stack and sell it to those planning to build private clouds. Couple it with DevOps services and they can replicate their success at a totally different level. Compatibility with their public cloud will be a huge asset.

Are they ready?

Should you pass on PaaS?

Wednesday, May 14th, 2014

It’s funny to watch all the predictions of how fast different parts of the cloud market will grow – and how some analysts “adjust” their forecasts from 30%+ CAGR to meager low single digits within just a few months, or go from market sizes of tens of billions to just a couple of billion. All these adjustments reveal one key thing: there’s a large degree of uncertainty about how fast the cloud market will grow (no one really questions the enormous growth potential) and where the focus will really be within a couple of years.

No other area reflects this better than Platform as a Service (PaaS). Once touted as the main source of growth, nowadays it’s more of a necessary add-on for successful IaaS providers. 451 Research recently released a note highlighting the movements in the market and how PaaS solutions are really used – that is, which got acquired by whom. So, should anyone care about PaaS? Or should we just pass on it?

It’s always good to remind ourselves what PaaS really is. Essentially, any self-respecting PaaS solution offers three types of services: application life-cycle management, application run-time environments and services (which, in most cases, can be extended with user-defined services). Life-cycle management typically includes staging, termination, scaling (auto-scaling) and policy-based SLA enforcement – just what you would expect from any serious cloud management solution. On the run-time environment side you will find support for Ruby, JavaScript, Java, Python and the like. As for services, security, database and messaging are present in most solutions.

Before answering the question, let me attempt to lay out a few ground truths and principles around PaaS:

1. Just as with programming languages, don’t expect a PaaS standard anytime soon. The jungle may be sparser or thicker, but a jungle it will remain, and portability will be a pain.
2. Services will be used on a “need to have” as well as a portability basis. An LDAP-based security service is a safe bet; an esoteric messaging service less so.
3. If your app works with one PaaS, there’s no guarantee it will work with another. Staging, policy enforcement and service usage will differ – it may start, but fail miserably down the road.
4. Make sure you use an application framework that is widely supported: Node.js, JVM-based frameworks, Ruby and Python are good bets; others, not so much.
5. PaaS will not replace a cloud management solution – it adds an extra layer to be managed.
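The portability pain in points 1–3 shows up in details as small as finding your database credentials. Cloud Foundry, for instance, injects bound services as a JSON blob in the VCAP_SERVICES environment variable, while many other stacks just hand you a plain environment variable such as DATABASE_URL. A minimal, hand-rolled sketch of a lookup that copes with both (the exact variable names and JSON shape here are simplified assumptions):

```python
import json
import os

def database_uri():
    """Find a database connection URI in a PaaS-portable way.

    Checks Cloud Foundry-style VCAP_SERVICES first (a JSON object
    mapping service labels to lists of bound instances, each with a
    'credentials' dict), then falls back to a plain DATABASE_URL
    environment variable.
    """
    vcap = os.environ.get("VCAP_SERVICES")
    if vcap:
        for instances in json.loads(vcap).values():
            for instance in instances:
                uri = instance.get("credentials", {}).get("uri")
                if uri:
                    return uri
    return os.environ.get("DATABASE_URL")
```

Every additional platform you target means another branch in code like this – which is exactly why point 1 predicts a portability jungle.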

So, should you just pass on PaaS and stay with IaaS and whatever application framework you use?

The answer is yes and no. Yes, you should pass on the dream of a “universal PaaS” that would let you run your application on “any” cloud and get the same services. This simply will not happen – cloud providers will make sure to offer you a PaaS layer, but one tailored to their particular cloud infrastructure. AWS has one, VMware has another for their private cloud (Cloud Foundry), and for Red Hat-based clouds there’s OpenShift. Moving applications between these and getting the same level of service is hard, if not impossible.

The answer is also no: it’s not a good idea to pass on PaaS altogether. It provides a platform that insulates the application from some complex aspects of the cloud (VM management, scale-out/in, fault management) and lets the programmer focus on the task at hand – if you are developing a new application. Given the tendency of cloud providers to offer a bundled PaaS solution, the real question is the same old one: which cloud is right for you? Public or private? VMware-based, Linux/KVM with OpenStack or CloudStack, or Azure-based? Red Hat or some other distro? Once you have answered those questions, you have implicitly made the choice of PaaS too – in fact, it may be one of the decision criteria.

Just don’t expect “seamless” portability and interoperability. Go for performance, low cost or ease of migrating your legacy – whichever matters most to you.

Why the OpenStack vs CloudStack debate is irrelevant

Friday, April 27th, 2012

There has been a lot of hoopla around the announcement that Citrix will place CloudStack under the Apache umbrella, creating effective competition for OpenStack. A lot of virtual ink has been spilled over which one is more compatible with Amazon and other such issues (though, I must admit, I enjoyed reading Randy Bias’ take). My point is quite simple: the debate is irrelevant, because what will really matter is what services the different stacks enable, not how good they are as IaaS managers (both are quite OK, by the way). The bulk of the value will not be in public IaaS clouds, but rather in customized SaaS (and sometimes PaaS) clouds geared towards the needs of specific customer groups.

Why?

Using a public IaaS is great when you want to quickly prototype something. Beyond that, if you want to run your production enterprise system on it, your operational cost savings will be close to none, as you will still have to maintain your full IT department and your own life-cycle management processes. Guess what: operational costs are actually higher than capital costs – so the real saving would come from offloading your complete IT department, not just converting your hardware CAPEX into OPEX. This is why public IaaS will see limited utilization – its real use case is to enable software-as-a-service offerings that can generate real benefits for cloud users.

Raising the game to that level transforms the debate over which is better into another question: which one will gather the critical mass of vendor and user support to make it successful in the most unforeseen settings? The company I work for made its choice based on this principle, not on compatibility with this or that interface – and I strongly believe it’s the primary metric that will matter for most companies. This is why Linux is successful while, say, BSD remains a tiny niche player.

Beyond that, a bit of competition is always good 😉

Why OpenStack?

Saturday, February 25th, 2012

As you probably saw, Ericsson has recently joined the OpenStack project. One may ask: why OpenStack – what’s the point?

The answer is not connected to Ericsson (and, of course, represents my personal view only), but it’s simple: it is really about openness. Openness is not about open source alone; it’s about freedom of choice: freedom to pick and mix your hardware according to your needs, freedom to design your network as you want, freedom to use the virtualization technology of your choice. The ICT industry has been extremely successful at creating a bewildering choice of compute, storage and network hardware, virtualization platforms, virtual network models and so on – to be fair, each with its benefits (and, more often than not, drawbacks). Trying to create the perfect, one-size-fits-all “standard” hardware and virtualization platform is doomed to fail and would just result in yet another set of offerings. Better to leave that area alone: if you are building a mission-critical cloud, you are better off with, say, a telecom blade system; if you are just after a vanilla IaaS platform, many vanilla IT blade vendors will be happy to serve you. The choice is – and should be – yours.

This is where OpenStack got it right: it aims to provide a software abstraction layer above the hardware and virtualization layers that can hide – and enable management of – this bewildering diversity. It’s a tacit recognition of the fact that the real value comes from the rest of your stack: how you manage automation, elasticity, scalability and resiliency; how you integrate with BSS systems and so on. Let the hardware vendors fight out the race to the bottom, wrap them into OpenStack and create real value.

When we first started using OpenStack, it quickly became strikingly clear how powerful the model is: we were able to add ground-breaking features such as WAN elasticity, distributed cloud support, SIM-based authentication and many more (about which I will talk at the World Telecommunication Congress) – while keeping the abstraction layer untouched. In addition, being an open source project, it felt like a wish-come-true process: whenever we identified the need for some feature, it soon turned out that someone was already working on it.

However, in order to live up to its promise, OpenStack has to keep following the same principles: support the broadest possible set of hardware and the broadest set of hypervisors, with equal quality. If the community starts relaxing this holistic approach, it will reduce the appeal of OpenStack, which might eventually suffer the same fate as many other open source projects: lose momentum, lose corporate backing and eventually fold.

So, let the era of cloud freedom roll on and I’m sure some cool stuff will be coming soon from the OpenStack-based community.

The Cloud in 2012

Wednesday, January 4th, 2012

I’ve just read Randy Bias’ post on cloudscaling.com – thought-provoking and interesting reading to be sure, but I’m not sure I agree with him.

Why?

Let’s put things into perspective. Amazon Web Services is clearly a fast-growing business and an admirable example of organic innovation. However, the total IT data-center market was about 300 BUSD in 2011, of which roughly 25% is related to cloud computing – meaning AWS has captured roughly 1.3% of the total cloud market. Is that significant? Hardly. Is Amazon making money on AWS? Hard to tell, as no details are disclosed – but margins are probably razor thin. Here’s another data point: the public IaaS market was about 2.3 BUSD in 2011, so Amazon captured about 40% of it, which makes them the market leader – but other providers, like AT&T, Verizon and so on, are not that far behind. Besides, there are strong proof points that Amazon is actually not that cheap and efficient: I’m aware of successful private cloud installations coming in at 40% lower cost per VM than Amazon’s equivalent offering (managed as a business, so it takes server replacement, OPEX costs and so on into account).
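For transparency, here is the arithmetic behind those percentages, using only the figures quoted above (all amounts in billions of USD):

```python
# Cross-checking the 2011 market figures quoted in the post (BUSD).
total_it_dc = 300.0   # total IT data-center market, 2011
cloud_share = 0.25    # portion related to cloud computing
cloud_market = total_it_dc * cloud_share       # -> 75 BUSD

aws_of_cloud = 0.013  # AWS as ~1.3% of the total cloud market
aws_revenue = cloud_market * aws_of_cloud      # -> ~1 BUSD implied

public_iaas = 2.3     # public IaaS market, 2011
aws_of_iaas = aws_revenue / public_iaas        # -> ~40%, market leader

print(f"cloud market: {cloud_market:.1f} BUSD")
print(f"implied AWS revenue: {aws_revenue:.2f} BUSD")
print(f"AWS share of public IaaS: {aws_of_iaas:.0%}")
```

The two percentages quoted in the text are consistent with each other: 1.3% of a 75 BUSD cloud market is roughly 1 BUSD, which in turn is roughly 40% of a 2.3 BUSD public IaaS market.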

There’s a piece of information hidden behind those figures: the private cloud market is a huge business – perhaps over 50 BUSD already last year. Ignoring this market is a mistake few companies can afford without being severely punished. The enterprise market is ripe for “cloudification”, and not just for virtualizing existing applications or for cost savings: there is significant demand from enterprises for native cloud applications that scale without limits, and unified management of all aspects of IT – including networks – was a key requirement even before clouds. Don’t get carried away by a 1 BUSD market when the real iceberg – 50 times bigger – lies under the sea.

So, what’s my prediction for 2012?

I think vanilla, remote IaaS providers like Amazon will see slowing growth, at least in comparison with premium-service public IaaS, private clouds, and PaaS and SaaS offerings. I think 2012 will be the year of the emergence of the personalized, localized and highly available (read: five nines) cloud, with radically improved network performance, targeting enterprise customers. We will see PaaS and SaaS clouds skyrocket: most private customers don’t want Amazon’s EC2 service – they want simple, convenient services such as Dropbox, iCloud and Google Docs. This is the future of cloud computing, not bare-bones IaaS that just moves your PCs to the cloud with unreliable performance and connectivity.

Let’s check it again in 362 days 😉

The reverse business model

Tuesday, December 27th, 2011

I’ve been championing for a while what I call the ‘reverse business model’, so I’m glad to see it applied more and more often – most recently in the deal struck between Google and Mozilla, securing significant funding for the browser developer in return for keeping Google as the default search engine.

In short, the reverse business model is about charging the service/product provider instead of the end consumer: rather than having the end user pay for a service, charge the original service provider for it, making sure the end user gets (the appearance of) a ‘free’ service – of course, in the end, the user pays up, but through a different channel. Examples of this model include charging web sites for premium connectivity to the end user (say hi to the non-neutral net) or charging companies that use social network sites such as Facebook – see my post about monetizing social networks. Google’s deal with Mozilla falls clearly into this category, making sure that Firefox’s users will use Google instead of Bing or Yahoo.

Will this model prevail for web-based services? I believe it’s the only way to monetize some of the most popular online services, such as social network sites; incidentally, it’s also one of the ways operators can monetize over-the-top delivery of web and cloud services.

But more about that in another post.

Is IaaS fading away?

Wednesday, September 28th, 2011
Two weeks ago we had a really great panel at the Swedish Cloud Day, with participation from OpenNebula, Google, Berkeley and Ericsson (myself). We discussed quite a few issues, but one has kept bugging me ever since: are we seeing a decline of Infrastructure as a Service (IaaS), while PaaS and SaaS become dominant?

To answer that question, one has to look at when IaaS is really needed, and by whom. One obvious use case is the virtualization of existing applications, such as enterprise IT: moving existing enterprise services over to a cloud provider will require IaaS, for the simple reason that any other approach would mean a redesign of the existing software. In this case, IaaS is really a tool to support legacy applications, used by enterprise administrators rather than IT service consumers; it turns a two-tier system (enterprise IT systems – system users) into a three-tier one: cloud provider – enterprise IT administrators – system users. The real users of IaaS are enterprise IT administrators; end users will still see SaaS as their primary model of interaction with enterprise IT.

A second use case is casual utilization – privately, as well as for, e.g., scientific or high-performance computations. In the end, this is again just a shift of legacy applications to the cloud: the only reason the user chooses IaaS is that the application is already given; (s)he only needs the scale-up service of the cloud. Starting from scratch, PaaS would be the better option.

Platform as a Service has some obvious benefits. It takes away the burden of managing instances, placement, networking, the OS stack and so on, and enables the cloud user to focus on the core issue: developing an application that can quickly scale up and down based on need. All the key services are given and guaranteed to scale and be available – something the IaaS model cannot provide. In many ways, the PaaS model gives back what the end of single-processor performance scaling took away: the notion of unlimited, reliable, on-demand computing power.

I think Google really has a point here, and in the future – for new applications – we will see increasing usage of PaaS offerings. The key issue, though, will be provider lock-in and tight coupling to platform APIs, something no software provider can adopt lightheartedly. I believe we will see the emergence of portable libraries that still lock you into one API – but at least a portable one 😉
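That last idea already had a real-world embodiment at the time: Apache Libcloud, a Python library that puts one interface in front of many providers’ compute APIs. The pattern itself is simple enough to sketch from scratch – note that the provider names and methods below are made up purely for illustration, not taken from any real library:

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """One portable API in front of many provider-specific ones."""

    @abstractmethod
    def create_node(self, name: str) -> str: ...

    @abstractmethod
    def list_nodes(self) -> list[str]: ...

class FooCloudDriver(CloudDriver):
    """Hypothetical provider A: in reality this would translate
    calls into Foo's proprietary REST API."""
    def __init__(self):
        self._nodes: list[str] = []

    def create_node(self, name: str) -> str:
        self._nodes.append(name)
        return name

    def list_nodes(self) -> list[str]:
        return list(self._nodes)

class BarCloudDriver(FooCloudDriver):
    """Hypothetical provider B: same interface, different backend."""

DRIVERS = {"foo": FooCloudDriver, "bar": BarCloudDriver}

def get_driver(provider: str) -> CloudDriver:
    """Factory: the only place provider-specific names leak through."""
    return DRIVERS[provider]()
```

Your application codes against CloudDriver only; switching providers is a one-line change in the call to get_driver. You are still locked in – but to one portable API instead of N proprietary ones.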

Whitepaper on Telecom Cloud Computing

Thursday, May 12th, 2011

The Scope Alliance (the alliance of network equipment providers committed to standardized base platforms) has just released its whitepaper on telecom cloud computing. I had the honor of being the editor of the document, produced jointly by Ericsson and several of our competitors. Next week the coming-out party for the paper will be held at the OpenSAF conference, where I will give a talk focusing on its content (see the agenda).