Archive for April, 2012

Why the OpenStack vs CloudStack debate is irrelevant

Friday, April 27th, 2012

There has been a lot of hoopla around the announcement that Citrix will place CloudStack under the Apache umbrella, creating effective competition for OpenStack. A lot of virtual ink was spilled on which one is more compatible with Amazon and other such issues (though, I must admit, I enjoyed reading Randy Bias’ view). My point is quite simple: the debate is irrelevant, because what will really matter is what services the different stacks enable, not how good they are as IaaS managers (both are quite OK, by the way). The bulk of the value will not be in public IaaS clouds, but rather in customized SaaS (and sometimes PaaS) clouds geared towards the needs of specific customer groups.


Using public IaaS is great when you want to quickly prototype something. Beyond that, if you want to run your production enterprise system on it, your operational cost savings will be close to none: you will still have to maintain your full IT department and your own life-cycle management processes. Guess what? Operational costs are actually higher than capital costs, so the real savings would come from offloading your complete IT department, not just converting your hardware CAPEX into OPEX. This is why public IaaS will see limited use: its real use case will be to enable software-as-a-service offerings that can generate real benefits for cloud users.

Raising the game to that level transforms the debate over which one is better into another question: which one will gather the critical mass of vendor and user support to make it successful in the most unforeseen settings? The company I work for made its choice based on this principle, not on compatibility with any particular interface, and I strongly believe this is the primary metric that will matter for most companies: it is why Linux is successful, while, e.g., BSD remains a tiny niche player.

Beyond that, a bit of competition is always good 😉

Is multi-core programming still hot?

Friday, April 27th, 2012

This is a question that has been bugging me for a while: just how critical and hot is multi-core programming? Or, better phrased: is it relevant to programmers at large, or is it just a niche issue for some domains?

What triggered me to write this post was a recent business development workshop I attended in Gothenburg, organized as part of the HiPEAC Computing Systems Week. The goal was to draw up the business canvas for research ideas in order to ease their move into the mainstream, and this is where I saw the question emerge again: given a technology that helps you parallelize your software, who will really be interested in buying it?

It has been argued for a while that end-user device applications and PC software won’t really need to bother beyond a few cores: multi-tasking will make sure the cores are used efficiently. Talking with web developers and those writing software for cloud platforms, the conclusion was the same: they haven’t seen the need to bother at all, because all of this happens at such a low level from their perspective that it’s irrelevant.

With all this out of scope, what is really left?

High-performance computing is surely in dire need of good programming models for many-core chips, given the race towards teraflops machines. But this is quite a niche domain, as are some others, such as OS development or gaming platforms. Honestly, I have to admit: beyond these, I haven’t seen much interest or buzz around multi-core programming, as if the whole hype had vanished and settled into a stable state. To be honest, it’s as it should be: given the level of sophistication and performance reached by single cores, all but the most demanding applications will find that just one core, or maybe a few used with functional partitioning, will be enough.
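For illustration, here is a minimal sketch of what functional partitioning can look like: instead of splitting one workload across many cores, each stage of a pipeline owns its own worker. The stage and pipeline helpers below are hypothetical, written with Python threads and queues as stand-ins for stages pinned to separate cores:

```python
import queue
import threading

def stage(fn, in_q, out_q):
    # A generic pipeline stage: apply fn to each item until the
    # None sentinel arrives, then pass the sentinel downstream.
    while (item := in_q.get()) is not None:
        out_q.put(fn(item))
    out_q.put(None)

def run_pipeline(items, fns):
    # Wire one queue between each pair of adjacent stages.
    qs = [queue.Queue() for _ in range(len(fns) + 1)]
    workers = [
        threading.Thread(target=stage, args=(fn, qs[i], qs[i + 1]))
        for i, fn in enumerate(fns)
    ]
    for w in workers:
        w.start()
    # Feed the input, then the sentinel.
    for item in items:
        qs[0].put(item)
    qs[0].put(None)
    # Drain the final queue until the sentinel comes out.
    results = []
    while (r := qs[-1].get()) is not None:
        results.append(r)
    for w in workers:
        w.join()
    return results
```

For example, `run_pipeline(["1", "2", "3"], [int, lambda x: x * x])` runs parsing and computation as separate stages and returns `[1, 4, 9]`; the point is that each function gets a dedicated worker, rather than the data being sliced across identical workers.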

What will this mean?

I think the research community will have to come out and make it clear that the whole area should be specialized, instead of shooting for a holistic solution that no one will need. Yes, we will have to address the needs of specific domains (high-performance computing, gaming, perhaps telecoms), but beyond that, further efforts will have little practical applicability. Other domains, such as power-efficient computing and extreme, cloud-based scalability, will be the ones that matter, and that is where research efforts should be focused.

With this, I think it’s time for me to leave multi-core programming behind, on this blog and in my professional life, and focus on those things that can make a broader difference.