Archive for June, 2010

When cloud computing fails to meet the hype

Thursday, June 24th, 2010

Recently the U.S. Department of Energy announced some of its results on using cloud computing for scientific computations. Unsurprisingly, it concludes that cloud computing fails to live up to its promise for applications that rely heavily on MPI (Message Passing Interface), while it performs well for serial computations (those that run entirely on a single machine). Once again, this underlines one of the fundamental challenges of cloud computing: communication bottlenecks.

In a presentation to the SCOPE Alliance (one that was very well received by the attendees) I emphasized exactly this point: to make cloud computing usable more widely, and for applications with more stringent quality-of-service requirements, one has to focus on a few issues, of which communications management and security clearly stand out. There’s a deeper issue that needs to be addressed, one that goes somewhat against the promise of uniform, unlimited computing: in order to optimize communication, the structure of the cloud needs to be exposed and locality managed. It’s a major challenge that has been identified by several research groups and industry analysts; I will quote just one of them, BitCurrent: “one dirty secret of cloud computing is that from a cost perspective, everything is pretty much free compared to the price of moving data around”.
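To see why communication dominates, consider a toy speedup model (the numbers are illustrative, chosen by me, not DOE measurements): a fixed serial fraction, a parallel part that shrinks with node count, and a communication term that grows with it. The communication term eventually swamps everything else, which is exactly what MPI-heavy jobs experience on clouds.

```python
def speedup(n_nodes, serial_frac=0.05, comm_cost_per_node=0.002):
    """Toy speedup model: a serial part, a parallel part that shrinks
    with node count, and a communication cost that grows with it.
    All coefficients are illustrative, not measured."""
    parallel_frac = 1.0 - serial_frac
    time = serial_frac + parallel_frac / n_nodes + comm_cost_per_node * n_nodes
    return 1.0 / time

# Speedup rises, peaks, then collapses as communication takes over.
for n in (1, 8, 64, 512):
    print(n, round(speedup(n), 1))
```

With these (made-up) coefficients the speedup peaks around 20 nodes and actually drops below 1 by 512 nodes: adding machines makes the job slower, unless locality is exposed and managed.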

How to make the elephant dance again

Tuesday, June 22nd, 2010

I must start this post with a disclaimer: what’s expressed here is my private opinion, may not reflect my employer’s official position, and is based solely on publicly available information and my own thinking.

That being said, I must admit I have been pondering this post for quite a while. Living in Finland and working in the telecom industry, it’s impossible not to notice the travails Nokia is going through. Its market share in smartphones is steadily going down; its revenues are declining (the latest warning was issued just days ago); its capitalization has fallen below that of most of its traditional or newfound, former or present rivals (the most recent company to surpass Nokia’s valuation is Ericsson). Most importantly, its image as an innovative, cool, high-tech company has been badly tarnished in the developed world (with the exception of Finland, perhaps).

To be fair, Nokia is still the one to beat and it performs very well in emerging markets, mainly thanks to its world-class supply chain and its ability to deliver cheap feature phones. But ignoring the clouds on the horizon would be a fatal mistake; smartphones seem set to dominate for years to come, and that’s a battle where Nokia is on the losing side.

Nokia’s situation is similar to the one in which RIM (the maker of BlackBerrys) finds itself. Its products are losing their shine, and being able to read email on your mobile phone is no longer a differentiating feature. Both companies are trying desperately to strike back: Nokia’s netbook (have you heard of it lately?), RIM’s tablet announcement, touch screens and so on are all desperate attempts to catch up with the new guys on the block (Apple and the Android camp).

So why is Nokia losing its battle and how could it bounce back?

There are several root causes. The first one, in my humble opinion, is the insistence on Symbian, a long-outdated operating system that few people develop software for. Had Nokia opened it up 10 years ago, the world would look different – but it insisted on a semi-closed, hard-to-develop-for system, trying to do everything (or most of it) itself. The strategy failed, and now it seems too late to change course. Coupled with a less than well planned ovi.com launch, Nokia lost precious time and allowed others to surpass it.

The second cause has to do with design. Nokia’s phones may be feature-packed, but bricks they are. Coupled with software designed for engineers rather than average people, they are about as appealing as a 60-year-old former top model who refuses to undergo a face-lift.

There are many more reasons, but I’ll stop here and rather focus on how this may be turned around.

So, what should Nokia do? Continue as today?

Definitely not. There’s another company with a lot of resources that is desperately trying to (re)gain ground in mobile devices. They have a great backend to build on, they have the resources, but somehow they keep slipping back in this domain. They too need a big lift to bounce back. Well, in my humble opinion, Nokia should partner with them.

So, which company am I talking about?

It’s Microsoft.

It has been steadily losing ground with the (outdated) Windows Mobile, while trying to apply the same model as in the PC business (provide the software, let others do the hardware). Windows Phone 7 seems to be a step in the right direction, but with the same model it will be terribly hard. On the other hand, Microsoft is great at cloud computing, and its control of desktop Windows is a huge, under-utilized asset. Imagine a well-designed, cheap-to-produce phone that runs Windows and integrates perfectly with the desktop, sporting thousands and thousands of apps – mostly the same ones as in the PC world. It would be a great offering that would be hard to match.

What is needed is someone to provide that phone with which the software can be tightly integrated – and that’s why partnering with Nokia would be a match made in heaven (or hell, if you are ‘one of those’). The company cultures are compatible and the team-up, with the right marketing, would create the ‘wow’ factor both companies badly need.

One could argue that this would mean Nokia giving up its ambition of becoming a services and software company. I disagree. The team-up – short of a merger of some kind – would offer a lot to both companies and would create an ecosystem from which both would benefit. Nokia could still continue its services efforts, but leveraging Microsoft’s Azure; it could still develop software, but on a platform that is likely to be far more user-centric than anything Nokia has ever built; crucially, the deal would provide access to the American market, one that Nokia has failed to penetrate.

I’m convinced that this battle will play out between Apple, the Android camp and Windows Phone. Apple is the one-man, fully integrated show; Android is the traditional software-hardware decoupling; a Nokia-Microsoft alliance would be the middle ground that brings the best of two worlds – IT and telecom – together.

A winning formula. But time is running out.

P.S.: A graph that tells it all, fresh from Bloomberg.

Meeting

Tuesday, June 22nd, 2010

They were born to meet.
But neither of them knew it. Least of all K.
K. spent his whole life among the burning desert sand and the hot rocks. For him this was the only way of existing, and he never longed for anything else. He hunted, but only because he had to hunt – he never thought through why. He felt that he had to wait, that something was going to happen; not consciously, just the way we know, in the morning after waking, that something is coming. Perhaps the alarm clock rings, a bird starts chirping, or a car sets off for somewhere. We don’t know it, but we feel that something has to happen. So it was with K. too. The source of his will to live was the waiting, and he had nothing else, only the awareness that he had to wait for something.
E. was born poor as well, but then he travelled a lot. He never concerned himself with whether it all had a purpose. It didn’t matter. He climbed ever higher on that invisible ladder, and that in itself was purpose enough. Unlike K., E. had a family, and that too was as it should be – though it never even occurred to E. to measure himself against K. In E.’s eyes, K. and his kind were mere crawling creatures, meaningless. In fact, E. had never even met K. or any of K.’s kind.
Until that certain day.
From K.’s point of view, this day was like many others. He woke shivering and hungry, and the hunter’s luck was not with him. He did not know why, but he moved closer to the Great Grey Path. Was he perhaps counting on some crumb to compensate him? K. did not think in such terms. He was hungry, but he did not know that the hunger was only a tool, carrying him closer to the Path.
For E., this day promised to be full of excitement. He was travelling to a new place and got into the car full of curiosity. He wanted to see Joshua trees, exactly the kind of trees under which K. eked out his life. Of course, he did not know that the Joshua trees had a purpose too, just like his passion for photography.
He stopped at the side of the road. He did not even switch off the engine, since he only wanted to take a picture or two. The heat was oppressive, and so was the silence.
It was K. who noticed E. first, and he knew what he had to do. Quietly, without a hiss, he approached, cautiously, as if his life depended on it. E. did not notice him – he heard only the rustle, then felt the sharp, cutting pain in his leg. He cried out and instinctively stamped on K.’s coiling, scaly body.
The last sound K. heard was the cracking of his own bones, but he knew that he had won, that he had fulfilled his mission. It was just a feeling, and it passed suddenly.
A few hours went by; they lay motionless side by side – the scaly K. and E., the human. It was afternoon by the time the first car stopped next to E.’s vehicle, but E. no longer saw the face leaning over him – in his thoughts he was back home, under the old apple tree.
But he felt that only now had he truly arrived.

Reflections from ISCA workshops

Tuesday, June 22nd, 2010

I wrote a while ago about the benefits of space-shared operating systems in comparison with today’s SMP-based operating systems. One of the issues I raised relates to the intrusive nature of the OS in the sense that it usually destroys the cache content – either when it is explicitly invoked or when it decides to reschedule applications.

Yesterday, at WIOSCA (Workshop on Interaction between Operating Systems and Computer Architecture) there was an interesting paper that addressed the first issue. The proposal is basically to move the execution of some (long-running) OS services to another core – pending certain conditions – a technique the researchers called ‘OS offloading’. The idea is really interesting and comes amazingly close to the idea of space-shared operating systems (minus the scheduling issues, which remain unresolved). The results – unsurprisingly – showed a good performance improvement, and it was good to see that current OSes can be tweaked relatively easily to become less intrusive.
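The gist of OS offloading can be sketched in a few lines. This is my own illustrative model, not the paper’s implementation: a dedicated “OS core” (here a Python thread) services long-running requests shipped over a queue, so the application core’s cache is left undisturbed; the threshold and names are made up.

```python
import queue
import threading

requests = queue.Queue()      # stand-in for inter-core messaging
OFFLOAD_THRESHOLD = 1000      # only ship long-running services (arbitrary unit)

def os_core():
    """Dedicated core running OS services, keeping the app core's cache warm."""
    while True:
        name, cost, reply = requests.get()
        if name is None:      # sentinel: shut down
            break
        reply.put(f"{name} done on OS core (cost {cost})")

def syscall(name, cost):
    """Run short services locally; offload long ones to the OS core."""
    if cost < OFFLOAD_THRESHOLD:
        return f"{name} done locally (cost {cost})"
    reply = queue.Queue()
    requests.put((name, cost, reply))
    return reply.get()        # wait for the OS core's answer

worker = threading.Thread(target=os_core)
worker.start()
print(syscall("getpid", 10))            # cheap: stays on the app core
print(syscall("read_big_file", 50000))  # expensive: offloaded
requests.put((None, 0, None))
worker.join()
```

The conditional offload mirrors the “pending certain conditions” caveat: shipping a request across cores has its own cost, so only services long enough to amortize it are worth moving.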

Another interesting paper – this time at the PESPMA (Parallel Execution of Sequential Programs) workshop – finally showed a realistic use case for transactional memory: the guys (actually, a gal 😉 ) at the Barcelona Supercomputing Center used transactional memory techniques to perform double execution of critical software and detect possible radiation-related faults, essential in e.g. aviation systems. I’ve been a long-time critic of transactional memory, but this research caught me by surprise: there may, eventually, be some use for this idea (albeit from an unexpected corner – something that’s not so uncommon for new technologies).

Stay tuned, I will return to some interesting topics up for discussion at ISCA and associated workshops.

The era of low power servers

Tuesday, June 15th, 2010

Yes, it has finally arrived: SeaMicro, a startup in stealth mode until yesterday, announced its low-power, low-cost server rack in a press release posted on its site. It’s based on Intel Atom chips, which makes it ISA-compatible with current software – yet it still claims a 4x reduction in power consumption and a 4x reduction in footprint, while delivering the same performance as current server configurations.

I’ve been advocating and predicting the advancement of low-power chips ever since my revelation at last year’s SOSP. With the iPad, they entered the domain of netbooks/laptops; with today’s announcement, the server domain seems to be the next target. To be sure, there are still mountains to climb before such technology becomes mainstream, and for some workloads that will never happen (just take a look at the blog of James Hamilton, one of AWS’s VPs) – but the previously “impossible challenge” has just dipped its toe into the sea of possibilities.

SeaMicro is not the only company working in this domain. There are several other companies focusing on ARM’s Cortex-A9 core for building server chips, which promises a further 4-5x improvement in power usage at about the same device density. These systems will be built, sooner or later, by current or emerging companies – but the problems lie elsewhere: what kind of workloads can you put on these systems? How do you manage data centers with tens of thousands of cores – an order of magnitude more than today?

It’s interesting to note the parallel between low-power servers, many-core chips and cloud computing. All three share the basic issue of large scale, sporting large numbers of cores co-operating on a specific problem. This scale is larger by at least one order of magnitude than today’s, and it magnifies many of the problems: the resource-sharing bottleneck, partitioning and so on. We really need programming models that can exploit massive parallelism without a lot of overhead – and that’s the challenge of the day we really need to focus on.
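One family of programming models that does scale with low overhead is the communication-free, data-parallel map: partition the data, let each core work on its slice with no shared state and no locks, and combine the partial results at the end. A minimal sketch (names and the 8-way split are my choices for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def process(chunk):
    """Independent work per chunk: no shared state, no locks, so the pattern
    scales with core count (modulo Python's GIL for CPU-bound work)."""
    return sum(x * x for x in chunk)

data = list(range(1000))
chunks = [data[i::8] for i in range(8)]  # 8 independent partitions

with ThreadPoolExecutor(max_workers=8) as pool:
    partials = list(pool.map(process, chunks))

print(sum(partials))  # same answer as the serial sum of squares
```

The reduction at the end is the only synchronization point; everything else is embarrassingly parallel, which is exactly the property that survives the jump from tens to tens of thousands of cores.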

Üni’s first virtual exhibition

Sunday, June 13th, 2010

At the spring celebration I photographed the pictures by Üni on display at the kindergarten – for the purpose of this first virtual exhibition. Well, here it is 🙂
The first two are undated:

Poppies 🙂

A bush?

From here on, the works follow in chronological order:

According to the artist: Butterflies

From the blue period

Composition

Bush, second edition

Finally, here is the masterpiece. According to the kindergarten teachers she made it entirely on her own, and the task really was to paint a flower.

Flower

Summer is here, long live sport :-)

Wednesday, June 9th, 2010

Summer has arrived up north as well, and Üni has decided it’s time to start riding her bicycle on her own. We promised her that she will now get a ‘real’ bike too – she can hardly wait 🙂

She’s getting skilful in other areas as well, for example:

If Google and Apple were countries

Tuesday, June 8th, 2010

Yesterday, while browsing through the news of the new iPhone and its comparison with the latest Android-based devices, an idea started bugging me: there’s an interesting parallel between how Google and Apple do business and how societies in different parts of the world are organized.

In many ways, Google is the technology-company version of what is usually called the ‘American way’: a gung-ho approach, openness, diversity, do-it-yourself and so on. Android is an open source OS; companies building phones based on Android are in cut-throat competition; there are dozens and dozens of products – all produced under the benevolent, not-so-controlling eyes of the ‘government’ (read: Google).

Enter Apple. iWhatever (minus the Macs) + iTunes is a walled garden: you pay a premium and you get a controlled, in many ways limited, but safe and uniform treatment. The stuff is nice, well organized and it works. Sense the analogy? To me, Apple is similar to many European countries with their social market economies, high taxes and government-provided (in many cases, from a monopoly position) services. Everyone has a basic safety net, but in many respects the choices (in social security, pensions and so on) are limited.

I’m not arguing for or against either of these models here – both have their benefits and drawbacks, and it’s notoriously difficult to strike a balance. They are two different ways to look at business (and society), and only time will tell which – or a third one? or both? – will prevail.

The Many-core Programming Book

Monday, June 7th, 2010

As I hinted in a previous post, I’m working – together with two co-authors – on a book on programming many-core chips. It was commissioned by Springer, one of the world’s best known publishers of scientific work. The manuscript is due to be delivered by the end of Q3 2010 (it remains to be seen how good we are at keeping deadlines 😉 ). As the subject implies, we focus on the programming of massively multi-core processors, with tens to hundreds of cores.

No matter what happens, it is a great learning journey. When you have to write down stuff that many people will read and comment on, you really have to be 100% sure of what you are doing, what you include and what you leave out. I spent a considerable amount of time on the chapters dealing with state-of-the-art multi-core operating systems as well as future many-core operating systems, and the whole process of gathering, sorting and distilling facts brought me to a deeper understanding of where the issues with OSes really are: why OSes will become layered, why OSes and hypervisors are bound to overlap and eventually merge, how memory management and processor scheduling are becoming almost equivalent, and what the fundamental principles are that will allow operating systems to scale up – hopefully indefinitely. Unfortunately, I can’t share parts of the book here, but I will return to some of these issues in an edited form.

Another interesting subject that got cleared up for me is the relationship between Amdahl’s law and Gustafson’s law; I urge everyone to read this paper, which explains clearly how the two are actually expressions of the same fundamental law. Researching for the book also allowed me to discover Gunther’s law (or rather, conjecture – see this blog post for an analysis), which seems to be backed up by experimental data, but lacks a well-founded theory. It includes Amdahl’s and Gustafson’s laws as special cases, but it also covers the phenomenon of retrograde scaling.
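The difference between the two laws is easy to see numerically: Amdahl fixes the problem size and asks how much faster it runs, while Gustafson fixes the run time and asks how much more work gets done. A quick sketch (5% serial fraction and 64 cores are just example figures):

```python
def amdahl(serial_frac, n):
    """Fixed problem size: speedup from running the parallel part on n cores."""
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n)

def gustafson(serial_frac, n):
    """Fixed run time: the problem grows with n, so the parallel part scales."""
    return serial_frac + (1.0 - serial_frac) * n

# Same 5% serial fraction, 64 cores: two answers to two different questions.
print(round(amdahl(0.05, 64), 1))    # -> 15.4 (capped near 1/0.05 = 20)
print(round(gustafson(0.05, 64), 1)) # near-linear scaled speedup, ~60.9
```

Plug one formula into the other and the same serial fraction appears in both: they describe the same machine under two different workload assumptions, which is the sense in which they express one fundamental law.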

I hope I made you interested in this book – so stay tuned 😉

The changing shape of the web

Wednesday, June 2nd, 2010

Recently, two British newspapers announced their plans to start charging for access to their online content. According to their estimates, over 95% of their current online readers are essentially economically useless or worse, in the sense that they generate no ad revenue while pumping up the traffic towards the sites. Their calculations forecast that they will lose over 90% of their visitors, but that revenues from paying customers should surpass current advertisement-based revenues.

Whether they will succeed or not remains to be seen. However, their choice to start charging for online content underlines three fundamental issues: information and quality analysis did not become free just because of the internet; purely advertisement-based models may not always work; and it’s getting harder and harder (read: costlier) to cater for an avalanche of visitors in the absence of a reliable revenue stream.

I’ll skip the first item for now and focus on the other two. Obviously, advertisement-based models have worked out nicely for some companies – like Google – but they have a fundamental, subtle characteristic that makes deployment in many contexts tricky. When you visit Google, you are likely searching for something, so Google can offer you sponsored answers that you are likely to choose; if you visit a page to flip through information, the site owner may only know your location – so it’s like searching for a needle in a haystack: the probability of hitting the visitor’s pain point is significantly lower. It’s all about knowing your customer: Google, through the search you are entering, knows much more about you than, say, the web server of a newspaper. In many ways, Apple does the same thing with its new advertising platform as Google: through the walled garden of iTunes and the App Store, they know their customers pretty well and hence can deliver much more targeted ads – increasing the probability of a hit. It remains to be seen, but I believe it will be a successful model.

So, what’s the alternative for the rest? Obviously, to survive, they’ll have to find new revenue streams – they need to charge for content. Apple’s model, again, has shown how: set the price right, and they will come. Since I bought my iPad, I have become the happy subscriber of a service that, for 29.99 USD/month, legally delivers every morning, digitally, the full latest edition of about 10 daily newspapers. Had I subscribed to all of these separately, I would easily have exceeded 100 or even 200 USD, and the delivery would likely have been less reliable (the newspapers are published in the UK, US, Hungary and Romania – I challenge the postal service to deliver all of those every morning at 7am for less). The point is: charge for your content, but at a level that makes it worth it; use the benefits of digital, over-the-internet delivery to cut your operational expenses and make up for the lower income-per-user figure (having more paying users also helps).

Which brings me to the third issue – the cost of running servers and the related communication infrastructure. It’s an unchallenged truth that traffic over the internet is exploding, which is good news for equipment vendors but bad for service and content providers. Granted, most of this traffic is made up of largely illegal file-sharing, but the rest of it will likely move towards a model of paid content (at a reasonable pricing level), coupled with a targeted, ad-supported free service (the Google model). This makes for a more sustainable, fairer and economically viable model, without sacrificing the fundamentals of fair and easy access to information.

As for illegal traffic – there are already signs that ISPs will move to block access for the worst offenders. While I have my doubts about it, I think they have little choice – after all, stealing used to land you in prison with constrained rights, didn’t it? 😉

Üni’s first year in kindergarten

Tuesday, June 1st, 2010

The first kindergarten year is slowly drawing to a close; as it turned out at the spring celebration, the teachers had been photographing the little ones throughout the year. Here are a few pictures with Üni in them.

Birthday princess

Three elves, two situations 🙂

Oh how nice, the snow is falling

It’s easier when encouraging each other 🙂

We have always known that Üni loves painting; here is the master, in the heat of creation:

Perhaps it will be a tree?

I’m good, aren’t I?

She was creating here too – they were preparing for the Christmas celebration.

She had already practiced this one at home

And this is what the whole team looks like:

The team, with the ballerina on the left

What do they want here now?

Masquerade ball

Finally, three portraits:

Watch, I’ll show you

Outside

Inside