Archive for the ‘Programming’ Category

Is multi-core programming still hot?

Friday, April 27th, 2012

This is a question that has been bugging me for a while: just how critical and hot is multi-core programming? Or, better phrased: is it relevant for programmers at large, or is it just a niche issue for some domains?

What triggered me to write this post is a recent business development workshop I attended in Gothenburg, organized as part of the HiPEAC Computing Systems Week. The goal was to draw up the business canvas for research ideas in order to facilitate moving them into the mainstream – and this is where I saw the question emerge again: given a technology that helps you parallelize your software, who will really be interested in buying it?

It has been argued for a while that end user device applications and PC software won’t really need to bother beyond a few cores: multi-tasking will make sure that cores are used efficiently. Talking with web developers and those writing software for cloud platforms, the conclusion was the same: they haven’t seen the need to bother about any of this; from their perspective, it all happens at such a low level that it’s irrelevant.

With all this out of scope, what is really left?

High performance computing is surely in dire need of good programming models for many-core chips, in light of the search for the teraflops machine. But this is quite a niche domain, as are some others, such as OS development or gaming platforms. Honestly, I have to admit: beyond these, I haven’t seen much interest or buzz around multi-core programming, a bit as if the whole hype had vanished and settled into a stable state. And frankly, it’s as it should be: given the level of sophistication and performance reached by single cores, all but the most demanding applications will find that just one core – or maybe a few, using functional partitioning – is enough.

What will this mean?

I think the research community will have to come out and make it clear that the whole area should be treated as specialized, instead of shooting for a holistic solution that no one will need. Yes, we will have to address the needs of specific domains – high performance computing, gaming, perhaps telecoms – but beyond that, further efforts will have little practical applicability. Other areas – power-efficient computing and extreme, cloud-based scalability – will be the ones that matter, and it’s where research efforts should be focused.

With this, I think it’s time for me to leave multi-core programming behind – on this blog and in my professional life – and focus on those things that can make a broader difference.

The reverse business model

Tuesday, December 27th, 2011

I’ve been championing for a while what I call the ‘reverse business model’, so I’m glad to see it applied more and more often – most recently in a deal struck between Google and Mozilla, securing significant funding for the browser developer in return for making (or rather keeping) Google the default search engine.

In short, the reverse business model is about charging the service/product provider instead of the end consumer: rather than having an end user pay for a service, charge the original service provider for it, this way making sure the end user gets (the appearance of) a ‘free’ service (of course, in the end, he will pay up – but through a different channel). Examples of this model include charging web sites for premium connectivity to the end user (say hi to the non-neutral net) or charging companies using social network sites such as Facebook – see my post about monetizing social networks. Google’s deal with Mozilla falls clearly into this category, making sure that Firefox’s users will use Google instead of Bing or Yahoo.

Will this model prevail for web based services? I believe it’s the only way to monetize some of the most popular on-line services, such as social network sites; incidentally, it’s also one of the ways e.g. operators can monetize over-the-top delivery of web and cloud services.

But more about that in another post.

Old story, told again (with a twist)

Wednesday, November 23rd, 2011

Reading Steve Jobs’ biography, I was reminded of the old claim that Microsoft copied (or stole, as Jobs put it) the graphical user interface, making it the backbone of their operating system’s success (while, for Apple, it was just one feature that helped make the product desirable). I find Bill Gates’s summary of the events – told to Jobs – quite funny: “Steve, it’s like me breaking into Xerox PARC to steal their TV set, just to find out that you already stole it” 🙂

Reading the details of that story, largely irrelevant today, it struck me how similar it is to a more recent one, this time involving Apple and Google. Even the most hard-core Android fanboys have to admit that its touch interface is a shameless copy of iOS’s interface (poor Apple, the one subjected to a double robbery 😉 ). But there’s a twist: this time around, Google also copied the philosophy: they use it just to promote their other, real product: search.

Nevertheless, to me this is history repeating itself with a little twist.

Just how small the mobile room really is

Wednesday, November 23rd, 2011

No doubt, we are witnessing fast-paced changes in the mobile landscape. The emergence of Android; the downfall of Nokia; the acquisition of Motorola by Google; Microsoft’s fightback; the departure of Ericsson – all are events that would have taken years to play out in the good old days of the last century – yet all this happened within little more than a year.

On closer inspection, however, the mobile industry seems to have room for just a few players. For all its meteoric rise, Android only generates healthy profit for two of its fans – Samsung and HTC – while the others are barely making ends meet, if at all (it also generates revenue for Google through search and for Microsoft through patent licensing, but that’s a different story). Things might get even murkier if Motorola gets preferential treatment.

Or take tablets: after almost two years Apple still rules, with the others folding their offerings quicker than you can read their names (with the recent exception of Samsung). Microsoft is throwing money around, but perhaps with the exception of Nokia, no one is making much out of it. It’s likely that Nokia will enter the tablet field with an ARM- and Windows 8-based tablet with a good fighting chance, but the rest will find it a hard act to follow.

So, where will this lead us? It’s only a matter of time, in my humble opinion, before RIM goes out of business (perhaps bought by Nokia to smooth its entry to the US market) and there will be just three ecosystems left: iOS, possibly merged with OS X, dominating the tablet market and holding a healthy market share in mobiles; Android, as a copycat with a significantly larger market share; and Windows, catering for business users and gaining a healthy share of the tablet market. But all these ecosystems will be dominated by just a few players: Apple (iOS), Samsung or HTC (Android, with one of them likely entering a path of slow decline) and Nokia (Windows).

The rest? As Mark Knopfler sings in one of his songs: “those who don’t like the danger soon find something different to try”.

Monetizing social networks

Wednesday, August 3rd, 2011

IEEE Spectrum recently published an interesting article about monetizing social networks (titled The revolution will not be monetized). It makes for good reading, with a provocative conclusion: advertising will not be able to generate sufficient revenue to keep the ball rolling in the long term. So, where will money come from?

I’ve already written once about monetizing web content and services, where I came to the same conclusion: advertising only makes sense if you know exactly what your target is planning to do; otherwise it’s little more than an annoyance that will scare customers away. But is that true for Facebook as well?

Whenever I log into Facebook, I see the status updates from my friends, including “friends” such as airline companies, hotels, my local mobile provider etc. What I’m being served is actually advertising from companies I’m likely to use, based on a piece of information I gave away about myself as soon as I ‘Like’-d them. I confess: I rarely click on any banner, but I DID click on some of the links posted by my merchant ‘friends’. Does anyone monetize that? Of course: the companies themselves, but no one else.

There’s another case that went largely unnoticed. During last year’s volcanic ash crisis, most airline companies’ websites were down or only served up an ugly-looking text file with laconic updates. Meanwhile, those clever enough to have a Facebook account kept their customers up to date by the minute, leveraging Facebook’s massive computing power: effectively, these airline companies turned Facebook into a free Infrastructure as a Service (or Platform as a Service, if you consider status updates a service) provider, winning high marks from their customers. Communication was also more personal, with the companies’ Facebook operators replying to individual comments as well. In fact, afterwards, instead of calling their help-line, I contacted my favorite airline through Facebook – and they solved my issue quickly.

So here’s my suggestion for Facebook & co: start monetizing your company customers. You are giving away a huge money-making opportunity by letting them advertise themselves, interact with their customers and use your infrastructure essentially for free. To some extent, this is what LinkedIn is doing already: charging company members for recruiting services and job postings. Extending the model beyond that is the natural next step for me, and it monetizes the greatest asset social networks have: a large number of users who themselves chose – and disclosed – what they are interested in, coupled with a large number of companies willing and eager to reach out to those hundreds of millions of potential customers.

It’s a bit like 21st-century TV: you tune in to what interests you, advertisers push out their content, and Facebook (the TV program) aggregates it for you. Welcome to the new world of interactive media.

Evolution as machine learning

Friday, June 10th, 2011

This week I had the chance to attend the Turing award lecture of the 2010 winner, Leslie Valiant. His research focuses on computational complexity, which is of marginal interest to me, so I had no great expectations – nevertheless, this being the computer science equivalent of a Nobel prize acceptance speech, I felt compelled to attend.

How wrong I was: instead of a lecture on his research, he delivered one of those speeches that really make you think and provide plenty of food for thought. First – after making clear that he believes strongly in the theory of evolution – he laid out the basic problem with it: we don’t have a way to prove that evolution was indeed possible, given the huge potential variety. Is it possible to reach the current living ecosystem without ‘external guidance’? Is it at all possible that 4.5 billion years are sufficient?

His thesis was that evolution can be modeled as a machine learning process. The ‘machine’ is the living ecosystem itself; the training samples are the variations in the DNA; the learning signal is derived from survival (positive) or its absence (negative). In this context, one of the most intriguing questions is what the machine is trained for – or, simply put, what is the meaning of the living world and that of evolution? Valiant’s answer: a living ecosystem that always reacts in an optimal way to the surrounding world. A pretty interesting idea, I must admit.
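To make the mapping concrete, here is a toy sketch of the idea – my own illustration, not Valiant’s formalism, with every detail (the bit-string genome, the fixed target environment, the greedy survival rule) being a deliberate simplification. The ‘genome’ is a bit string, random mutation supplies the training samples, and survival against the environment plays the role of the learning signal:

#include <bitset>
#include <cstddef>
#include <iostream>
#include <random>

constexpr std::size_t GENOME_BITS = 64;
using Genome = std::bitset<GENOME_BITS>;

// "Fitness" stands in for how well the organism matches its environment;
// here the environment is simply a fixed target genome.
std::size_t fitness(const Genome& g, const Genome& environment) {
    return GENOME_BITS - (g ^ environment).count();  // number of matching bits
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<std::size_t> locus(0, GENOME_BITS - 1);

    Genome environment(0xDEADBEEFCAFEF00D);  // an arbitrary "optimal" genome
    Genome organism;                         // start far from optimal
    std::size_t generations = 0;

    while (fitness(organism, environment) < GENOME_BITS) {
        Genome variant = organism;
        variant.flip(locus(rng));            // random DNA variation
        // Selection: the variant survives only if it does at least as well
        // against the environment as its parent did.
        if (fitness(variant, environment) >= fitness(organism, environment))
            organism = variant;
        ++generations;
    }
    std::cout << "converged in " << generations << " generations\n";
}

Even in this trivial setting the learner converges without any ‘external guidance’ – which is exactly the kind of claim Valiant wants to make rigorous for realistic genome sizes and time budgets.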

Unfortunately, Valiant was unable to provide the actual “formula of life” modeled as a machine learning process and, consequently, no proof that evolution can happen autonomously, without that ‘external guidance’ (God?). However, he made a very compelling argument that this kind of model can actually be useful, and the whole line of thought was intriguingly fresh, visionary and challenging.

As a Turing award winner’s speech should be.

Programming languages and the Christian faith

Friday, May 20th, 2011

Yesterday I had the honor of chairing the panel at the Finnish Multi-core Day, with distinguished panelists from world-leading companies (Intel, ARM, Nokia) as well as Swedish and Finnish universities. At some point someone raised the issue of programming languages and whether we will ever see convergence – at which point I had a revelation 😉 . Isn’t this really the same thing as with the Christian faith?!

In the good old days, there was only one religion (language): the Catholic one (Fortran), with just a few heretics (Lisp) on the side. Then reformation came, sparked by Luther (the C language), and Pandora’s box was thrown wide open: Calvinists, Unitarians, Presbyterians, Baptists etc (C++, Java, Erlang, Scala, Prolog etc) all emerged and claimed to be the ‘real’ one, only to capture just a minority of believers (programmers). Is there a way to unify everything back? Not in religion, I believe, and not in programming languages. Does it matter? Not really; I feel that if the need arises, I can pray (program) in any church (using any language’s infrastructure). In fact, James Reinders‘ (Intel) answer to the question was: learn as many languages as you can – certainly wise advice, applicable outside programming as well.

Another nugget of deep wisdom came later in the day, when Erik Hagersten used a nice metaphor in his talk: those who create a new language are like those who pee in their pants; they think it’s hot, but no one else can feel it ;-). Was he inspired by Anssi Vanjoki’s opinion about those going the Android way? I don’t know, but it is certainly worth pausing and thinking about how we want to develop new languages, be they domain-specific or general-purpose ones – I still believe there’s value in there, but we must always keep the ecosystem in mind.

Why Microsoft will not buy Nokia

Tuesday, May 17th, 2011

There’s been some buzz lately around an alleged take-over of Nokia’s smartphone business by Microsoft (launched by a well-known Nokia whistle-blower blogger and spread by all major technology news sites). Well, I don’t think this will happen any time soon, for quite a number of reasons.

First, Microsoft already has all it needs from Nokia: the largest smartphone manufacturer will use Windows Phone, which gives the Redmond-based company a strong position. Spending more money (a lot of money – at least 40 billion USD) would be really hard to justify, as it would take a long time and involve huge risks before it paid off.

Second, Microsoft never went into building laptops itself, and for good reason: by doing so, it would have risked alienating the HW companies licensing the Windows operating system, despite its dominant market position. For Windows Phone, a small player in a big market, buying Nokia and turning Microsoft into a phone manufacturer would almost certainly mean that no other phone company – the likes of Samsung, HTC, LG and others – would license it anymore. Why risk this market, when you can eat your cake (partner with Nokia) and keep it too (keep existing relationships intact)?

Third, Google’s experiences with the Nexus phone – or even the accelerated decline of Symbian once Nokia took it over – are warning signs for anyone planning to be a phone platform provider while also selling products based on that platform. This argument rests on the same underlying gentlemen’s rule as the second one: you can’t be a partner while competing in exactly the same domain.

Of course, by taking a huge gamble, Microsoft may still go down this path – but becoming a new, bloated Apple at the cost of several tens of billions of dollars just seems too big a risk to take, even for Steve Ballmer. The stakes are simply not worth it.

Whitepaper on Telecom Cloud Computing

Thursday, May 12th, 2011

Scope Alliance (the alliance of network equipment providers committed to providing standardized base platforms) has just released its whitepaper on telecom cloud computing. I had the honor of being the editor of the document, produced jointly by Ericsson and several of our competitors. Next week the paper’s coming-out party will be held at the OpenSAF conference, where I will give a talk focusing on its content (see the agenda).

Chromebook, anyone?

Thursday, May 12th, 2011

So, Google would obviously not agree with the conclusions of my post from a couple of days back – yesterday they launched ChromeOS, along with a couple of laptops, built in co-operation with partners, to run it on. “Nothing but the Web” – said one of their slogans during the Google I/O event.

Once again – is this the future?

Let’s take a look at what ChromeOS really is. It’s a slim operating system featuring a local file system, with an integrated high-end, feature-rich browser (remember Microsoft’s troubles over integrating Internet Explorer with Windows? – no one seems to notice the similar pattern here 😉 ). When working on-line, it will be little more than a fast browser; the interesting part is what happens when work is to be done offline: in order to allow users to continue working, it has to provide a mechanism for caching applications and files locally. What will this mean? Well, apps (in the form of HTML5 web pages) will have to be downloaded (something HTML5 supports); files (text, spreadsheets, presentations, videos etc) will have to be downloaded; so, in the end, you will have an OS that supports one single native code application (the browser), several applications that have to be written in a managed set of languages (HTML5 + JavaScript + Flash + your favorite web scripting language here – with source code easily available for anyone to steal) and a file system that is a dumbed-down copy of Dropbox or Box.net. Putting it all together, you get something that resembles a Java Virtual Machine (the browser) and a simple file sharing mechanism (the local file system that gets synched with the web whenever online).

Well, I don’t believe that’s worth it. The biggest issue, in my opinion, is that people WILL forget to cache critical files locally before going offline, leading to a frustrating experience (imagine booting up your machine on a plane within 8 seconds, only to realize you forgot to cache that all-important document you’ve been working on for hours). I’m a big fan of Dropbox, and it works because it has the reverse logic of ChromeOS: it stores my files locally first, and then makes them available on-line as well. In ChromeOS, your files are stored remotely – and may be cached locally. Besides generating more traffic, this is the Achilles’ heel of making things work flawlessly.

Then there’s performance. I’m a programmer – there’s no way Google will convince me that an HTML5 app will be as fast as native code; that is, in the end, simply against the laws of physics (executing more instructions of the same kind takes longer). Sure, things can be fast enough, but according to some reports from the ChromeOS launch event, Angry Birds was not as fast as it is on an iPhone. So, what’s the point?

Is ChromeOS then dead from the start?

Time will tell. But I simply can’t believe that any responsible enterprise or private person will have a ChromeOS laptop as their only device, the same way the iPad failed to knock out the laptop/netbook and become the only device a person needs. Some might find a ChromeOS-powered laptop a good solution for doing a few things fast (web browsing, answering some emails, quickly checking out some documents), but it will fall into the same category as the iPad – a second device, yet with the wrong form factor (no ChromeOS tablet is planned). Where I think Google should put its efforts is the Chrome browser for Mac/Windows and ultimately iOS, Android and WinPhone: give people the freedom to use web apps in a fast, self-upgrading environment, while keeping the possibility of storing their stuff locally, having it synchronized over the internet, and keeping the personal device as the primary tool for work and fun. Networks still have a long way to go to reach universal coverage with decent data access everywhere – and until that dream materializes, personal devices, with their ever-increasing capabilities, will rule.

P.S.: I’ve just recalled that I wrote a post on ChromeOS about two years ago. I still find its core relevant today, but I was amused to see how some other predictions proved wrong (quote: “Apple still has a long way to go until it can really take on Microsoft or Nokia for that matter”). It’s indeed difficult to make predictions about the future 😉

Is the future BrowserOS?

Thursday, May 5th, 2011

It’s been a while since my last post – it has been an exciting period both at work and for my PhD (more on that later), with some really cutting-edge stuff I had the chance to work with or learn about. But now I’m back 🙂 – not because work got more boring, but because I started to really miss blogging.

What triggered this post is an over-the-dinner debate I had with some colleagues the other day in Silicon Valley. One of the theories floated was that in a few years all we will need is a computer with a browser that supports HTML5; everything else will live in the cloud, occasionally cached locally for those – rare – moments when connectivity is unavailable.

Is this our future? Is the future called BrowserOS?

Surely, cloud computing and personal smart devices are all the rage right now. I strongly believe that on the enterprise and business side, cloud computing will play a pivotal role as soon as the current concerns – network performance, security and availability – are reliably addressed. The benefits of cloud computing are well known, and they are real, primarily in terms of cost control and cost reduction.

On the individual side the picture is less clear. Online collaboration and storage tools – such as Google Apps, Dropbox and similar – have a clear use case and are handy in many situations. However, there’s another powerful trend that is likely to counterbalance this shift towards clouds for individual usage: personal devices (formerly called phones) are becoming so powerful and will have so much storage that essentially all a person needs can be handily stored on a phone or tablet. Couple this with emerging solutions that allow anyone to set up a low-cost personal private cloud accessible anywhere, and you have a combination that dramatically limits the appeal of general providers of data storage and replication – simply because people still want control over their data, preferring to store it on a device they always have with them, along with a personal solution for sharing it with others.

What about services and applications? The picture is again foggy; as Apple’s model has shown, locally installed apps have proved hugely popular. On the other hand, Google Apps and on-line games are widely used by many and often serve as the main examples of browser-based software service delivery. Where will we end up? Will apps survive, or will we be happy users of software as a service?

I think there are two factors that will influence the outcome: first, processors will be so powerful and connectivity (still) so costly and slow that keeping applications local will be more productive in many cases. Second, intellectual property protection concerns will limit the usage of open HTML and favor compiled or encrypted software solutions.

Thus a complete shift to BrowserOS looks improbable. Whenever good connectivity is available, browser-based solutions will be used; but, for a long time, people will be reluctant to rely on browser-based solutions alone, and providers will be reluctant to make all their code available offline without proper protection. We’ll see a mixture of the two approaches, with one or the other emphasized based on personal preferences or usage context, all relying on ever more powerful personal devices.

The book is out (for pre-order)

Wednesday, February 16th, 2011

This is just a short note that you can now pre-order the book on many-core programming, e.g. from Amazon. It’s scheduled to be available in July, but I’m told it may come out earlier.

Springer’s page has more information on the book.

How the new conductor wants to make the elephant dance again

Saturday, January 29th, 2011

Back in June 2010, long before Nokia brought in Elop as the new CEO, I predicted that Nokia’s salvation might come from a strategic alliance with Microsoft. With the announced strategic message due to come out on the 11th of February, speculation is rife across the internet about what the new strategy might look like.

So, here’s mine 😉

The message will be simple and clear and will be summarized in three points:

1. The alternative to Apple and to the ‘pee in the pants’ ((c) former Nokia VP) strategy of the Android camp has to be a two-legged approach: Symbian and Windows Phone, through a deep, strategic alliance with Microsoft

2. MeeGo is phased out – it’s hard to build a new ecosystem from scratch (and phasing it out is easy to do: there’s no legacy to support and no sales are jeopardized)

3. Symbian for cost-aware phones, Windows Phone for high-end ones

In practice this will mean the death of Symbian^3 as well, but over time: declaring it now would kill a large part of Nokia’s sales during 2011-2012, until the WinPhone devices come out (who would buy a phone based on a dead OS?); whatever is in the production pipeline already will be the last Nokia phones based on Symbian. What will survive, however, is S40 for the low-end phones, which are still the bread and butter of Nokia’s revenue. For the high end, it will be all Windows.

There’s a subtle connection here to Microsoft’s announced plans to make Windows 8 available on ARM processors as well: this, combined with a likely unification with Windows Phone, will enable Nokia to build both ARM- and Intel-based phones and tablets in the future, while leveraging the same software stack. It will be a clear differentiator compared with Apple or the Android camp, and it allows addressing different segments (e.g. business and consumer) with different offerings. Windows Phone also represents the last chance to enter the US market which, given the combined financial strengths of Microsoft and Nokia and the former’s good understanding of that market, is within reach.

(once again, this is a purely speculative post, based solely on intuition and interpretation of publicly available material; a pure work of fiction)

P.S. It turns out, this is my 100th post in exactly 2 years and 4 months. That’s about 1 post every 9 days. D.S.

Of Wintel and WARM

Wednesday, January 19th, 2011

(this blog post is, of course, pure speculation – no insider information whatsoever)

Such announcements are few and far between: Microsoft announced at CES 2011 that the next version of Windows will be available on ARM-based chips as well, in a move that’s the biggest shift in the OS policy of the Redmond giant ever since those early days of DOS.

The announcement raises several questions and opens the door to quite some speculation. It was in the making for some time, but now that it’s public, it provides serious food for thought. What will this mean for Intel? Is it a scale-up step for ARM or a scale-down step for Microsoft? How does this relate to Windows Phone 7?

Interestingly, two other giants face similar challenges: Apple with iOS and MacOS, Google with Android and Chrome. Does this fragmentation make sense? What could the “grand plan” be?

I think Apple has provided a glimpse of what to expect. Besides the design, their “next generation Macs” focus primarily on battery life, as does the iPad. Adding an iOS-like layer to MacOS is another clue; honestly, I believe what kept Apple in the Intel camp, despite having their own chip design unit, was HW compatibility with Microsoft. Now that is about to change – so why on earth should they keep two types of chips and two software stacks? If they can deliver ALL they deliver today on just one platform, with 4-5x better power efficiency, is there really any serious counter-argument left?

I believe there isn’t. So here’s my prediction for 2012.

In January 2012 Apple will launch the iPad 3 with the A6 processor based on ARM’s Cortex-A15 core, probably in a dual or quad core configuration, together with touch-based iOS 5, bringing the best of MacOS to the tablet (for example, real multi-tasking). It will be followed by the iPhone 6 on the same platform in June and the “ultimate Mac” in the fall: the same unified iOS 5, but with a mouse-based, non-touch interface. It will feature the same A6 chip design but with 4, perhaps 8 cores, delivering 2x the performance and 4x better battery life (up to 24 hours, and 6 months stand-by). It will be able to run Windows and Linux just as before, and for some time to come the Intel version will be supported.

What about Microsoft? It will follow more or less the same pattern: Windows 8 will unify the core of Windows and Windows Phone 7, but with two interfaces: one for touch devices, based on the Windows Phone 7 interface (to be used on tablets and phones), and the mainstream interface for “big” Windows. Unlike Apple, however, Microsoft will continue to support Intel as the primary platform – PCs and servers are still too big to quit. In addition, Intel will likely get its act together and deliver a competitive embedded chipset that can run Microsoft’s new OS.

What about Google then? They won’t have much of a choice but to follow the same path. In addition, they will have to get the fragmentation of Android under control – openness has proven to be a liability so far.

Will this actually happen? Time will tell, but it’s fun to make predictions, isn’t it?

What you can learn by writing a book

Wednesday, January 19th, 2011

At some point in time I thought I would actually never be able to put it in writing, but here it is: I managed to finish the book on many-core programming I’ve been working on (with two co-authors) for the past year and a half – complete with references, figures and index. It’s now submitted to the publisher, so stay tuned for a release in June.

It was a one-of-a-kind experience. I learned a lot on the technical side but, even more importantly, a lot about the “dos and don’ts” of writing a book. Working on a book is a solitary experience and – especially towards the end – a race against the clock and the page count. You just sit in front of your screen and type and type and type, with the sword of public criticism hanging over your head: every mistake you make will be criticized, perhaps ridiculed. You try to get the text into shape, only to realize, at the end, that the most time-consuming and dull work is actually getting your references, keywords and figures in order – let alone proof-reading and fixing the raw text you produced.

Would I do it again? Probably yes, but certainly not right now. Will I read it, once it’s out in the wild?

Don’t know. Let me know if it’s worth reading 😉

Future of Software Engineering Research

Friday, December 17th, 2010

In early November I attended the Future of Software Engineering Research (FoSER) workshop, co-located with FSE (Foundations of Software Engineering), one of the flagship conferences in the area (together with a colleague, I also had a paper at the workshop, arguing for language-oriented software engineering). One of the goals of the workshop was (quite openly) to create an agenda against which NSF can allocate research funds – but for a person from Europe it also provided a fascinating glimpse into the workings of the community.

One thing I found striking was the very low industrial participation: company representatives were few and far between, and none of the major SW companies were visible. When asking around for the reasons, the answers were perplexing: this is a research community, doing serious science (sic!); the community is driven by a few hard-minded theorists; companies just use what we provide, but we shall go on with our research agenda. Based on my experience with other communities – like computer architecture or operating systems – I found this attitude (whatever the main reasons really were) stunning. In my view, software engineering research should be the most palpable and practical of all the branches of computer science; the fact that it isn’t came as a great surprise.

Now, about the workshop itself: five main themes were identified and worked on in smaller work-groups:

  • Helping people produce and use software intensive systems: the discussions ranged from domain specific languages to end user programming and to the management of security issues (such as within Facebook)
  • Designing the complex systems of the future: the domains singled out were health care, large-scale ground transportation, air traffic control and battle-space systems; interestingly, speculative pre-execution was highlighted as a promising approach
  • Making software intensive systems dependable: ideas such as interactive and differential program analysis, requirement languages for the “masses” and automated programming were raised. I have deep doubts about the latter two, but the first one sounded quite promising.
  • Software evolution: this revolved around improving the decision process and bringing economics tightly into the technology picture
  • Advancing the discipline of SW engineering and research methodology: this was a mixed bunch. They talked about embracing social science methods (that was interesting), but I would strongly question the embedding of formal methods into daily software development. Those methods, while intellectually appealing, have proven hard to scale and unusable on anything where they would actually make sense.

Well, we’ll see what comes out of this effort – but the process and the interaction were well worth the time spent there.

Layering the parallel computing model

Thursday, December 9th, 2010

In order to execute a program on a parallel computer, one obviously has to decompose it into computations that can run in parallel on the different processor cores available in that computer – this much is readily agreed upon in the computing community. When it comes, however, to how to do this decomposition, what HW primitives to use and who (human or SW/HW) should do it, opinions start to diverge quickly.

One debate pits the shared memory model (synchronization based on access to the same storage) against share-nothing (synchronization based on messages); another pits threading (or lightweight-process) based decomposition against various task models (using shared memory or not). In this post, I’ll focus primarily on the second debate.

The strength of task based models is that they decouple the amount of parallelism in the application from the available amount of parallel processing capacity: the application can fire off as many parallel tasks as it possibly can and then rely on the underlying system to execute them as efficiently as possible. Threading on the application level, on the other hand, requires design-time partitioning into reasonably long-lived chunks of functionality that can then be scheduled by the lower level (middleware or OS layer); this partitioning – save for a few specific cases, such as telecom servers – usually depends on the amount of parallel processing capacity available in the target system and hence does not decouple application parallelism from parallel processing capacity (putting too many compute-intensive threads on one core may actually decrease overall performance).

Herein lies the potential for combining the two models: on a many-core processor, the threading model can be used to provide an abstract model of the underlying HW, while the task model exposes the amount of parallelism in the application. The modeling of the HW can be extremely simple: just allocate one thread (or lightweight process) to each processor core or HW thread, modeling workers that wait to execute tasks, where new tasks are spawned by previously executed tasks running on these same workers. This model suits perfectly a many-core system with a large pool of simple cores; it can also accommodate heterogeneous systems comprising high-capability cores by simply modeling each of those as multiple threads.

The interface between these two layers – HW modeled as threads and applications modeled as graphs of tasks – is obviously the scheduling policy: how are tasks managed (queuing)? How are constraints (e.g. timing) taken into account? How is load balancing – or power management – handled? I feel this is the core of future research in the area of many-core programming, and the solutions will likely be primarily domain (and, to a lesser extent, platform) specific. The sketch below shows the skeleton of the two-layer model.
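Here is a minimal sketch of the idea – my own illustration, with all names hypothetical: one worker thread per hardware core models the HW layer, the application just submits tasks, and a plain FIFO queue stands in for the scheduling policy discussed above.

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class TaskPool {
public:
    TaskPool() {
        // Layer 1: model the HW as one worker thread per core.
        unsigned cores = std::thread::hardware_concurrency();
        if (cores == 0) cores = 4;  // fallback if the core count is unknown
        for (unsigned i = 0; i < cores; ++i)
            workers_.emplace_back([this] { run(); });
    }

    ~TaskPool() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }

    // Layer 2: the application fires off as many tasks as it likes,
    // independently of how many cores actually exist.
    void submit(std::function<void()> task) {
        { std::lock_guard<std::mutex> lock(m_); tasks_.push(std::move(task)); }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();  // a running task may itself submit further tasks
        }
    }

    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};

The single submit/queue pair is exactly where a domain-specific scheduling policy – priorities, timing constraints, load balancing, power management – would plug in.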

Obviously, this model does not address the root problem: where is the parallelism coming from? I believe the answer depends on the nature of the application and the scale of parallelism available in the platform. As a first step, natural parallelism in the application problem can be exploited (think of generating the Fibonacci numbers); when that path is exhausted, I believe there’s only one possibility left: exploit speculative run-ahead pre-execution, potentially augmented with extended semantic information provided by the programmer (see my related post). Over and over I see researchers reaching the point where they raise this question, but somehow stop short of this – rather obvious – answer.
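Since the idea keeps coming up, here is an equally hedged toy illustration of speculative run-ahead pre-execution – again my own sketch under simplifying assumptions, not a production technique: start a dependent computation early on a predicted input, then commit or redo once the real input is known.

#include <future>
#include <iostream>

int expensive_stage(int input) {    // the work we would like to overlap
    return input * input;           // placeholder computation
}

int main() {
    int predicted = 7;              // e.g. the last value we observed
    // Run ahead speculatively while the producer is still working.
    std::future<int> speculative =
        std::async(std::launch::async, expensive_stage, predicted);

    int actual = 7;                 // the producer finally delivers
    int result = (actual == predicted)
                     ? speculative.get()        // prediction held: commit
                     : expensive_stage(actual); // mispredicted: redo
    // On a misprediction the abandoned speculative work simply completes
    // and is discarded - the price paid for running ahead.
    std::cout << result << '\n';
}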

Turning it around: save for a revolutionary development in how we build single-core chips, is there a real alternative?

Being stuck and declaring defeat is not an option.

Open source as the philosophers’ stone

Friday, December 3rd, 2010

According to Wikipedia, the philosophers’ stone is “…said to be capable of turning base metals (…) into gold; it was also sometimes believed to be an elixir of life, useful for rejuvenation and possibly for achieving immortality”. It was the grand prize of the alchemists throughout the centuries.

What does it have to do with parallel software? Recently, while attending a supplier-organized event, open source was repeatedly mentioned as the best way to make sure that a certain technology gets widely accepted. The main argument went like this: companies come and go, and they may change their priorities, deciding to axe certain products or technologies; but once the code is out there, people will maintain it and use it. Hence, open source guarantees ‘eternal life’ for a software product.

I find this idea quite interesting. Let’s admit it: big open source projects long ago ceased to be the playground of enthusiastic amateurs and became instead the main vehicle for co-operation between companies, while still tapping into the wisdom of willing individuals; the aspect of increased penetration, however – while obvious now – was perhaps not so self-evident. But it’s there: when Ericsson decided to down-prioritize Erlang, making it open source actually meant a new start and a successful career ever since (including inside Ericsson). Another example is the language Clipper (I used to program in Clipper back in the mid ’90s): it was discontinued by its mother company, yet it’s still alive through a number of open-source projects, maintained by companies who still use it for successful products (in short: it’s a compilable C/C++-like version of dBase with pretty good scripting support).

So is open source the philosophers’ stone that can guarantee eternal life for worthy software? It looks like it, and it’s an aspect large companies have to consider when deciding how to handle their software offerings.

A tale of the textile and software industry – part II

Friday, December 3rd, 2010

My earlier post on the similarities between the textile and software industry triggered some feedback. Some agreed, some didn’t; one comment, from Mike Williams himself argued that we (Westerners) should be able to push our cost so low that the cost of outsourcing would be much higher.

I think this line of thinking misses one important point, and that’s what I want to elaborate on. During the past few decades, globalization has led to the emergence of two parallel power structures that sometimes interact, sometimes are at odds, but clearly influence each other.

The first power structure is the traditional nation-state setup of our world. Traditionally, relationships between states defined the power structures, guided wars and determined if and where peace could prevail, as well as which companies got the upper hand, who got access to resources etc. This structure is very much alive, but a parallel set of global actors is emerging, gradually bringing forward their interests and starting to flex their powers.

Obviously, I mean global corporations. Many of these long ago ceased to be entities rooted in nation states – they act globally and have a global structure and a clear set of goals, vision and strategy. The two power structures are obviously strongly interconnected and dependent on each other (ultimately, the power of a state relies on the taxes it collects from economic activities; many companies are state owned etc) but are increasingly at odds with each other. States want to keep jobs, companies ultimately want to minimize costs; states want to increase tax revenues, companies need to generate profits – and so on. Intel’s CEO said some years ago that his company could easily operate fully outside the USA; Nokia’s push to reform Finland’s tax system was widely discussed and debated; many UK-based companies chose to relocate their headquarters for tax reasons.

My point is really that shifting jobs around no longer equals outsourcing. A large global company can decide to shift its R&D to wherever it’s most cost efficient (as long as certain requirements – related to e.g. IPR protection, legal stability and, at least for good global citizens, human rights – are fulfilled). This may be bad from a country’s point of view, but it makes perfect sense for shareholders; no matter how efficient you become, it will eventually be replicated elsewhere. That’s no way to fight the inevitable – instead, focus on high-value-added areas where the operational costs are negligible and where your country will be the primary choice for other reasons (such as quality of living, infrastructure etc).

If you can’t beat them, differentiate yourself.

Thinking ahead

Thursday, November 18th, 2010

During 2010, Ericsson launched a blog geared towards new ideas and different perspectives. One of the main goals of ‘Thinking ahead‘ (the name of the blog) is to foster open debate around the future of communications and how it will impact our lives. I think this is a good idea, especially since bloggers from both inside and outside the company are invited to share their vision.

Now I’m one of those bloggers. My first post – about computing swarms – just went live and more will likely follow. Check it out and feel free to comment, either there, here or in both places 😉