Why Microsoft will not buy Nokia

May 17th, 2011

There’s been some buzz lately around an alleged takeover of Nokia’s smartphone business by Microsoft (launched by a well known Nokia whistle-blower blogger and spread by all major technology news sites). Well, I don’t think this will happen any time soon, for quite a number of reasons.

First, Microsoft already has all it needs from Nokia: the largest smartphone manufacturer will use Windows Phone, which gives the Redmond-based company a strong position. Spending more money (a lot of money – at least $40 billion) would be really hard to justify, as it would take a long time and involve huge risks before it paid off.

Second, Microsoft never went into building laptops itself, and for good reason: by doing so, it would have risked alienating the other HW companies licensing the Windows operating system, despite its dominant market position. For Windows Phone, a small player in a big market, buying Nokia and turning Microsoft into a phone manufacturer would almost certainly mean that no other phone company – the likes of Samsung, HTC, LG and others – would license it anymore. Why risk this market, when you can eat your cake (partner with Nokia) and keep it too (keep existing relationships intact)?

Third, Google’s experience with the Nexus phone – or even the accelerated decline of Symbian once Nokia took it over – is a warning sign for anyone planning to be a phone platform provider while also selling products based on that platform. This argument rests on the same underlying gentlemen’s rule as the second one: you can’t be a partner while competing in exactly the same domain.

Of course, by taking a huge gamble, Microsoft may still go down this path – but becoming a new, bloated Apple at the cost of several tens of billions of dollars just seems too big a risk to take, even for Steve Ballmer. The stakes are simply not worth it.

Whitepaper on Telecom Cloud Computing

May 12th, 2011

Scope Alliance (the alliance of network equipment providers committed to providing standardized base platforms) has just released its whitepaper on telecom cloud computing. I had the honor of being the editor of the document, produced jointly by Ericsson and several of our competitors. Next week the paper’s coming-out party will be held at the OpenSAF conference, where I will give a talk focusing on its content (see the agenda).

Chromebook, anyone?

May 12th, 2011

So, Google would obviously not agree with the conclusions of my post from a couple of days back – yesterday they launched ChromeOS, together with a couple of laptops to run it on, built in cooperation with partners. “Nothing but the Web” – went one of their slogans during the Google I/O event.

Once again – is this the future?

Let’s take a look at what ChromeOS really is. It’s a slim operating system featuring a local file system, with an integrated high-end, feature-rich browser (remember Microsoft’s troubles over integrating Internet Explorer with Windows? – no one seems to notice the similar pattern here 😉 ). When working on-line, it will be little more than a fast browser; the interesting part happens when work is to be done offline: in order to allow users to continue working, it has to provide a mechanism for caching applications and files locally. What will this mean? Well, apps (in the form of HTML5 web pages) will have to be downloaded (something HTML5 supports); files (text, spreadsheets, presentations, videos etc.) will have to be downloaded; so, in the end, you have an OS that supports one single native code application (the browser), several applications that have to be written in a managed set of languages (HTML5 + JavaScript + Flash + your favorite web scripting language here – with source code easily available for anyone to steal) and a file system that is a dumbed-down copy of Dropbox or Box.net. Putting it all together, you get something that resembles a Java Virtual Machine (the browser) plus a simple file sharing mechanism (the local file system that gets synced with the web whenever online).

Well, I don’t believe that’s worth it. The biggest issue will be – in my opinion – that people WILL forget to cache critical files locally before they go offline, leading to a frustrating experience (imagine booting up your machine on a plane within 8 seconds just to realize you forgot to cache that all-important document you’ve been working on for hours). I’m a big fan of Dropbox and it works because it has the reverse logic to ChromeOS: it stores my files locally first, and then makes them available on-line as well. In ChromeOS, your files are stored remotely – and may be cached locally. Besides generating more traffic, this is the Achilles’ heel of making things work flawlessly.
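
To make the contrast concrete, here’s a minimal Python sketch of the two sync philosophies; the class names are hypothetical and the stores are plain dictionaries standing in for real storage – purely an illustration, not how either product is actually built:

```python
# Sketch of the two sync models; 'disk' and 'cloud' are plain dicts standing
# in for real storage, and ConnectionError stands in for being offline.

class LocalFirstStore:
    """Dropbox-style: every write lands locally first, upload is best-effort."""
    def __init__(self, disk, cloud):
        self.disk, self.cloud = disk, cloud

    def save(self, name, data):
        self.disk[name] = data          # always succeeds, even offline
        try:
            self.cloud[name] = data     # synced whenever we happen to be online
        except ConnectionError:
            pass                        # retried later; the file is already safe

    def open(self, name):
        return self.disk[name]          # offline reads always work


class RemoteFirstStore:
    """ChromeOS-style: the cloud is authoritative, the disk is only a cache."""
    def __init__(self, disk, cloud):
        self.disk, self.cloud = disk, cloud

    def save(self, name, data):
        self.cloud[name] = data         # fails outright when offline
        self.disk[name] = data          # cached only if we remembered to pin it

    def open(self, name):
        try:
            return self.cloud[name]
        except ConnectionError:
            # KeyError here is exactly the "forgot to cache that all-important
            # document" failure mode described above
            return self.disk[name]
```

The asymmetry sits in save(): the local-first store can never lose an edit to connectivity, while the remote-first one can.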

Then, performance. I’m a programmer – there’s no way Google will convince me that an HTML5 app will be as fast as native code; that is, in the end, simply against the laws of physics (more instructions of the same type take longer to execute). Sure, things can be fast enough, but, according to some reports from the ChromeOS launch event, Angry Birds was not as fast as it is on an iPhone. So, what’s the point?

Is then ChromeOS a dead thing from the start?

Time will tell. But I simply can’t believe that any responsible enterprise or private person will have a ChromeOS laptop as their only device, the same way the iPad failed to knock out the laptop/netbook and become the only device a person needs. Some might find a ChromeOS-powered laptop a good solution for doing a few things fast (web browsing, answering some emails, quickly checking out some documents), but it will fall into the same category as the iPad – a second device, and one with the wrong form factor at that (no ChromeOS tablet is planned). Where I think Google should put their efforts is the Chrome browser for Mac/Windows and ultimately iOS, Android and WinPhone: give people the freedom to use web apps in a fast, self-upgrading environment, while keeping the possibility of storing their stuff locally and having it synchronized over the internet – with the personal device remaining the primary tool for work and fun. Network coverage still has a long way to go before decent data access is available everywhere – and until that dream materializes, personal devices, with their ever-increasing capabilities, will rule.

P.S.: I’ve just recalled that I had a post on ChromeOS about two years ago. I still find its core relevant today, but I was amused to see how some other predictions proved wrong (quote: “Apple still has a long way to go until it can really take on Microsoft or Nokia for that matter”). It’s indeed difficult to make predictions about the future 😉

Playing the history detective, part III: the big picture

May 8th, 2011

So I managed to trace my family back to Tordátfalva, apparently. There’s no surviving church record from the 17th century in Tordátfalva, but the link to Tordátfalva was nevertheless real enough. Combing through the “lustrák” once again, there they were: Vajda János in 1602 and 1604; Vajda Péter in 1627, 1635, 1648 and 1655; Vajda Mihály, Péter’s young son, in 1635. All of them were regular Székelys, not nobles, and there was another twist: for long periods of time they disappear, only to show up again; there’s no Vajda mentioned in 1614, nor after 1655 (such as in 1674 or 1679) – still, they move from Tordátfalva to Bölön in the early 1680s. I have only one possible explanation: they were likely regular soldiers at the princes’ court, with long absences – families included – from home. Quite possibly this service led to the acquisition of the noble title sometime between 1655 and 1683 (and may, just may, be at the origin of the name Vajda: the official title of the governor of Transylvania under the Hungarian kingdom was ‘vajda’, and people serving them often inherited the title as their family name). So, I have my work cut out for me to figure out when and how that – the acquisition of the name and noble title – really happened 😉 . Going further back in time is quite unlikely, given the location of the original home base of the family and the noble status acquired quite late in the process – but I will certainly keep trying; you never know which royal or princely decree might mention them in one context or another.

In summary, it was a fascinating journey. I now have 15 generations of Vajda documented, spanning the whole history of the independent Transylvanian state, the Austro-Hungarian empire, as well as that of the Unitarian faith: Vajda János, mentioned in 1602 as a grown-up soldier, is likely the first who was baptized in the Unitarian faith, formed only during the 1570s – and at the root of the first ever declaration of religious freedom in the world. My forefathers served as soldiers under all the great princes of Transylvania – the Báthory rulers (who, when Báthory István was elected Polish king, ruled from Tartu, Estonia, all the way to Transylvania), the Rákóczi princes – and, of course, they were there in the Thirty Years’ War, during Transylvania’s golden age under Bethlen Gábor.

I don’t care much about titles or similar things. However, it feels good to dig out the context in which my forefathers were part of something I’m truly proud of – Transylvania and what it contributed to the world: the first declaration of religious freedom ever and the Unitarian faith, maintained throughout 15 generations.

P.S.: If I’m not mistaken, this is my first post about family in English. There’s a first time for everything, apparently.

Playing the history detective, part II: historic forensics

May 8th, 2011

Many of the ‘lustrák’ were published in a series of books titled ‘Székely historical documents’. Several of these are available digitally, so I went on digging through them – with little success. I figured out that there was no Vajda in Bölön before 1650, so they likely moved there between 1650 and 1712. Checking the Vajda and Vayda families in other places, there were quite a few of them scattered around several villages, with no obvious connections. The most promising lead was the Vayda family in Közép-Ajta, very close to Bölön. Were they my forefathers? Could I link them to my known and confirmed forefathers?

I couldn’t. I was speculating.

The project was stuck for a while until I managed to get hold of the last two books in the aforementioned series: I had to order these as physical copies, as no digitized copy was available. After a few weeks of eagerly waiting for the delivery, the books finally arrived – with “lustra” from 1683 and some previously unavailable years.

And the missing link was there, right in front of my eyes. It was one of those rare Eureka! moments when something previously intractable suddenly becomes obvious: in a lucky coincidence, in the lustra from 1683, in the Bölön list, not only were two relevant names listed under the ‘lófő’ (small nobility) section, but there was unexpected extra information:

“Vajda János, Udvarhelyszékről, Tordátfalváról költözött ide”

(Vajda János, moved here from Tordátfalva, Udvarhelyszék)

“Vajda István, Udvarhelyszékről, Tordátfalváról költözött ide”

(Vajda István, moved here from Tordátfalva, Udvarhelyszék)

Even more interestingly, István was inserted into the hand-written list later, as if he had shown up right while the conscripts were being written down.

I was stunned. It was all the more significant as the “lustra” of 1680 still had no Vajda in Bölön – so there it was: my family moved to Bölön sometime between 1680 and 1683 (likely the latter, judging from István’s status), from Tordátfalva, which, then as now, is just a small Unitarian village tucked away in a valley (it had 21 houses in the 17th century and has about 100 inhabitants today; more details about the village, in Hungarian, can be found here).

But, is there any trace of Vajda in Tordátfalva? That was the next step in the forensic work.

Playing the history detective, part I: how one thing leads to the next

May 8th, 2011

It is just funny how one thing sometimes leads to another, seemingly unrelated thing. About two months ago there was a burglary at our house in Transylvania; nothing valuable was stolen (at least not from us – our tenant was less lucky), but the perpetrators (caught in the meantime) left such a mess that I had to go home to fix things.

And that’s where it all began.

In the process of sorting through papers I never knew existed, I found the birth certificates of my grandparents’ grandparents, going back all the way to the first part of the 19th century. It was a tipping point: suddenly, I felt an urge to find out more about my family’s history, especially about the paternal line. I did have some info available: I knew they hailed from Bölön, once the largest settlement in Szeklerland; I also knew they had acquired a noble title at some point; and now I had the lineage going back six generations. But that was about it – the rest was a big unknown.

So I set out to find out more.

As context, it’s useful to spend a few words on Szeklers (or Székelys, in Hungarian). This particular group of Hungarians living in the mountainous region of southeast Transylvania – some claim that they were actually descendants of Attila’s Huns – played a very specific role in the history of Hungary and later Transylvania. They were always organized based on egalitarian principles, unlike the rest of the country; they kept their personal liberty and were exempt from paying tax – in return for serving as soldiers whenever required. This way, the kings of Hungary and later the princes of Transylvania had at their disposal a regular army that was well trained and easy to mobilize on short notice.

Due to this specific setup, the kings and princes made sure that the male population was regularly counted and listed by name in a so-called “lustra” (conscription list), which contained valuable information on the population of each village and is a great source for genealogical research. Church records of births, marriages and burials are of course more detailed, but unfortunately only stretch back to the early 18th century.

Googling around quickly revealed that there was a noble Vajda family in Bölön in 1712, but not yet in 1614. I also came across the Transylvanian Genealogical Society; its president, János Kocs, provided valuable insights based on the Unitarian Church records in Bölön and eventually provided me with the complete record – in digital form – stretching back to 1735. There it was: my paternal lineage, clearly traceable all the way back to one of three possible forefathers whose deaths are recorded after 1739.

In addition, the church records also made it clear that in the early part of the 18th century there were very few Vajdas in Bölön, which made it likely that they had moved there only shortly before – but from where, and when? And, so to speak, ‘was there life before Bölön’?

Is the future BrowserOS?

May 5th, 2011

It’s been a while since my last post – it has been an exciting period both at work and for my PhD (more on that later), with some really cutting-edge stuff I had the chance to work with or learn about. But now I’m back :-) – not because work got more boring, but because I started to really miss blogging.

What triggered this post is an over-the-dinner debate I had with some colleagues the other day in Silicon Valley. One of the theories floated was that in a few years all we will need is a computer with a browser that supports HTML5; everything else will live in the cloud, occasionally cached locally for those – rare – moments when connectivity is unavailable.

Is this our future? Is the future called BrowserOS?

Surely, cloud computing and personal smart devices are all the rage right now. I strongly believe that on the enterprise and business side, cloud computing will play a pivotal role as soon as the current concerns – network performance, security and availability – are reliably addressed. The benefits of cloud computing are well known and they are real, primarily in terms of cost control and cost reduction.

On the individual side the picture is less clear. Online collaboration and storage tools – such as Google Apps, Dropbox and similar – have a clear use case and are handy in many situations. However, there’s another powerful trend that is likely to counterbalance this shift towards clouds for individual usage: personal devices (formerly called phones) are becoming so powerful and will have so much storage that essentially everything a person needs can be handily stored on a phone or tablet. Couple this with emerging solutions that allow anyone to set up a low cost personal private cloud, accessible anywhere, and you have a solution that dramatically limits the appeal of general providers of data storage and replication – simply because people still want control over their data, preferring to store it on a device they always have with them, along with a personal solution to share it with others.

What about services and applications? The picture is again foggy; as Apple’s model has shown, locally installed apps have proved hugely popular. On the other hand, Google apps and on-line games are widely used by many and often serve as the main examples of browser-based software service delivery. Where will we end up? Will apps survive, or will we be happy users of software as a service?

I think there are two factors that will influence the outcome: first, processors will be so powerful and connectivity (still) so costly and slow that keeping applications local will be more productive in many cases. Second, intellectual property protection concerns will limit the usage of open HTML and favor compiled or encrypted software solutions.

Thus a complete shift to a BrowserOS looks improbable – whenever good connectivity is available, browser-based solutions will be used, but, for a long time, people will be reluctant to rely on browser-based solutions alone and providers will be reluctant to make all their code available offline without proper protection. We’ll see a mixture of the two approaches, with one or the other emphasized based on personal preferences or usage context, all relying on ever-more powerful personal devices.

The book is out (for pre-order)

February 16th, 2011

This is just a short note that now you can order the book on many-core programming, e.g. from Amazon. It’s scheduled to be available in July, but I’m told it may come earlier.

Springer’s page has more information on the book.

How the new conductor wants to make the elephant dance again

January 29th, 2011

Back in June 2010, long before Nokia brought in Elop as the new CEO, I predicted that Nokia’s salvation might come from a strategic alliance with Microsoft. With the announced strategic message due to come out on the 11th of February, speculation is rife across the internet about what the new strategy might look like.

So, here’s mine 😉

The message will be simple and clear and will be summarized in three points:

1. The alternative to Apple and the ‘pee in the pants’ ((c) former Nokia VP) strategy of the Android camp has to come from a two-legged strategy: Symbian and Windows Phone, through a deep, strategic alliance with Microsoft

2. MeeGo is phased out – it’s hard to build a new ecosystem from scratch (and it’s easy to do: there’s no legacy to support and no sales are jeopardized)

3. Symbian for cost-aware phones, Windows Phone for high-end phones

In practice this will mean the death of Symbian^3 as well, but over time: declaring it now would kill a large part of Nokia’s sales during 2011-2012, until the WinPhone models come out (who would buy a phone based on a dead OS?); what’s in the production pipeline already will be the last Nokia phones based on Symbian. What will survive, however, is S40 for the low-end phones, which are still the bread and butter of Nokia’s revenue. For the high end, it will be all Windows.

There’s a subtle connection here to Microsoft’s announced plans to make Windows 8 available on ARM processors as well: this, combined with a likely unification with Windows Phone, will enable Nokia to build both ARM and Intel-based phones and tablets in the future, while leveraging the same software stack. It will be a clear differentiator compared with Apple or the Android camp, and it allows addressing different segments (e.g. business and consumer) with different offerings. Windows Phone also represents the last chance to enter the US market which, given the combined financial strength of Microsoft and Nokia and a good understanding of the market by the former, is within reach.

(once again, this is a purely speculative post, based solely on intuition and interpretation of publicly available material; a pure work of fiction)

P.S. It turns out, this is my 100th post in exactly 2 years and 4 months. That’s about 1 post every 9 days. D.S.

Of Wintel and WARM

January 19th, 2011

(this blog post is, of course, pure speculation – no insider information whatsoever)

Such announcements are few and far between: Microsoft announced at CES 2011 that the next version of Windows will be available on ARM-based chips as well, in a move that’s the biggest shift in the OS policy of the Redmond giant ever since those early days of DOS.

The announcement raises several questions and opens the door to quite some speculation. It had been in the making for some time, but now that it’s public, it provides serious food for thought. What will this mean for Intel? Is it a scale-up step for ARM or a scale-down step for Microsoft? How does this relate to Windows Phone 7?

Interestingly, two other giants face similar challenges: Apple with iOS and MacOS, Google with Android and Chrome. Does this fragmentation make sense? What could the “grand plan” be?

I think Apple has provided a glimpse of what to expect. Besides the design, their “next generation Macs” focus primarily on battery life, as does the iPad. Adding an iOS-like layer to MacOS is another clue; honestly, I believe what kept Apple in the Intel camp, despite having their own chip design unit, was HW compatibility with Microsoft. Now that is about to change – so why on earth should they keep two types of chips and two software stacks? If they can do ALL they deliver today on just one platform, with 4-5x better power efficiency, is there really any serious counter-argument left?

I believe there isn’t. So here’s my prediction for 2012.

In January 2012, Apple will launch iPad 3 with the A6 processor based on ARM’s Cortex-A15 core, probably in a dual or quad core configuration, together with a touch-based iOS 5, bringing the best of MacOS to the tablet (for example, real multi-tasking). It will be followed by iPhone 6 on the same platform in June and the “ultimate Mac” in the fall: the same unified iOS 5, but with a mouse-based, non-touch interface. It will feature the same A6 chip design, but with 4, perhaps 8 cores, delivering 2x the performance and 4x better battery life (up to 24 hours and 6 months stand-by). It will be able to run Windows and Linux just as before, and for some time to come the Intel version will be supported.

What about Microsoft? It will follow more or less the same pattern: Windows 8 will unify the core of Windows and Windows Phone 7, but with two interfaces – one for touch devices based on the Windows Phone 7 interface (to be used on tablets and phones) and the mainstream interface for “big” Windows. Unlike Apple, however, Microsoft will continue to support Intel as the primary platform – PCs and servers are still too big to quit. In addition, Intel will likely get their act together and deliver a competitive embedded chipset that can run Microsoft’s new OS.

What about Google, then? They won’t have much choice but to follow the same path. In addition, they will have to get the fragmentation of Android under control – openness has proven to be a liability so far.

Will this actually happen? Time will tell, but it’s fun to make predictions, isn’t it?

What you can learn by writing a book

January 19th, 2011

At some point I thought I would actually never be able to put it in writing, but here it is: I managed to finish the book on many-core programming I’ve been working on (with two co-authors) for the past year and a half – complete with references, figures and index. It’s now submitted to the publisher, so stay tuned for a release in June.

It was a one of a kind experience. I learned a lot on the technical side but, even more importantly, a lot about the “dos and don’ts” of writing a book. Working on a book is a solitary experience and – especially towards the end – a race against the clock and the page count. You just sit in front of your screen and type and type and type, with the sword of public criticism hanging over your head: every mistake you make will be criticized, perhaps ridiculed. You try to get the text in shape, only to realize, at the end, that the most time-consuming and dull work is actually getting your references, keywords and figures in shape – let alone proof-reading and fixing the raw text you produced.

Would I do it again? Probably yes, but certainly not right now. Will I read it, once it’s out in the wild?

Don’t know. Let me know if it’s worth reading 😉

Séta (A Walk)

December 18th, 2010

I arrive from under ancient trees
– as if coming straight from the past –
(behind me the chestnuts have already burst into green)
at the Stadium – a behemoth, a dear memory
where I once (long ago) trembled for victory
and the Szamos –
crossing your ice was an adventure
when winter came
your faithful fishermen still stand on the bridge
but in the distance – that is already Hét Vezér square
– or, as they call it today: July 14 –
I see us kicking the ball
and keeping up the square’s racket –
on the left, my one-time school
(I deliver my message in silence)
I slow down – in its yard
a young father coaches his little son
“throw from the shoulder, the basket is far”
a girl with flaming hair comes my way – elegant –
(perhaps she even smiled at me)
from behind the corner the old house laughs
and with good reason: my head
is covered by a snowfall of petals
– I have arrived, home –
I hear the cheerful barking
(a schoolboy hurries past me
– or am I only imagining him?)
now where did you put your key, you fool?!

Morrison

December 18th, 2010

Dawn arrives like a murderer
It kills the flames of the glowing night
Wait, don’t smother
The thundering sounds
No! Don’t put out the smoldering desires
Give the last drink its time
Give the final song one minute

The lights have died, they have abandoned
Your grey face
I see the tired wrinkles upon it
In bitter fog I search
Staggering, dazed
For your murdered dream

Future of Software Engineering Research

December 17th, 2010

In early November I attended the Future of Software Engineering Research (FoSER) workshop, co-located with FSE (Foundations of Software Engineering), one of the flagship conferences in the area (a colleague and I also had a paper at the workshop, arguing for language-oriented software engineering). One of the goals of the workshop was (quite openly) to create an agenda for which the NSF can allocate research funds – but for a person from Europe it also provided a fascinating glimpse into the workings of the community.

One thing I found striking was the very low industrial participation: company representatives were few and far between, and none of the major SW companies were visible. When I asked around for the reasons, the answers were perplexing: this is a research community, doing serious science (sic!); the community is driven by a few hard-minded theorists; companies just use what we provide, but we shall go on with our research agenda. Based on my experience with other communities – like computer architecture or operating systems – I found this attitude (whatever the main reasons really were) stunning. In my view, software engineering research should be the most palpable and practical of all the branches of computer science; the fact that it isn’t came as a great surprise.

Now, about the workshop: five main themes were identified and worked upon in smaller work groups. These were the five areas:

  • Helping people produce and use software intensive systems: the discussions ranged from domain-specific languages to end-user programming and the management of security issues (such as within Facebook)
  • Designing the complex systems of the future: the ones highlighted were related to health care, large-scale ground transportation, air traffic control and battle space; interestingly, they highlighted speculative pre-execution as a promising approach
  • Making software intensive systems dependable: ideas such as interactive and differential program analysis, requirement languages for the “masses” and automated programming were raised. I have deep doubts about the latter two, but the first one sounded quite promising.
  • Software evolution: this revolved around improving the decision process and bringing economics tightly into the technology picture
  • Advancing the discipline of SW engineering and research methodology: this was a mixed bunch. They talked about embracing social science methods (that was interesting), but I would strongly question the embedding of formal methods into daily software development. Those methods, while intellectually appealing, have proven hard to scale and unusable on anything where they might actually make sense.

Well, we’ll see what comes out of this effort – but the process and the interaction were well worth the time spent there.

Karácsonyi készülődés (Christmas preparations)

December 11th, 2010

The bakers have baked Christmas gingerbread – here’s the proof.

The dough in the making

Cutting out the shapes

At work

The little sous-chef took photos too

Üni és az űrutazás (Üni and space travel)

December 11th, 2010

(an interlude in two acts)

Act one
Cast: Üni, András, a neck-covering “astronaut” cap
Scene and time: at home, in the morning, getting ready for kindergarten

Üni: Daddy, what kind of cap is this?
András: An astronaut cap
Üni: What’s an astronaut?
András: An astronaut flies very high, much higher than an airplane; it’s very cold up there, that’s why he has a cap like this
Üni: I see. I like flying too!

Act two
Cast: Üni, András, the Moon and the Evening Star
Scene and time: in front of the kindergarten, in the afternoon, heading home

Üni: Daddy, look, the Moon!
András: Indeed, and next to it the Evening Star

(they get into the car; Üni keeps twisting around to watch the Moon)

Üni: Daddy, the Moon is following us!
András: It’s not following us, it’s just so far away that it can be seen from everywhere
Üni: Let’s get closer and take a look!
András: We can’t, Manó, it’s so high up that only astronauts can go there
Üni: I have an astronaut cap!

(silence, then curtain)

Layering the parallel computing model

December 9th, 2010

In order to execute a program on a parallel computer, one obviously has to decompose it into computations that can run in parallel on the different processor cores available in that computer – this much is readily agreed upon in the computing community. When it comes, however, to how to do this decomposition, which HW primitives to use and who (human or SW/HW) should do it – opinions start to diverge quickly.

One debate pits the shared memory model (synchronization based on access to the same storage) against share-nothing (synchronization based on messages); another pits threading (or lightweight process) based decomposition against various task models (using shared memory or not). In this post, I’ll focus primarily on this second debate.

The strength of task based models is that they decouple the amount of parallelism in the application from the available amount of parallel processing capacity: the application can just fire off as many parallel tasks as it possibly can and then rely on the underlying system to execute them as efficiently as possible. On the other hand, threading at the application level requires design time partitioning into reasonably long lived chunks of functionality that can then be scheduled by the lower level (middleware or OS layer); this partitioning – save for a few specific cases, such as telecom servers – usually depends on the amount of parallel processing capacity available in the target system and hence does not decouple application parallelism from parallel processing capacity (putting too many compute intensive threads on one core may actually decrease overall performance).

Herein lies the potential for combining the two models: on a many-core processor, the threading model can be used to provide an abstract model of the underlying HW, while the task model exposes the amount of parallelism in the application. The modeling of the HW can be extremely simple: just allocate one thread (or lightweight process) to each processor core or HW thread, to model workers waiting to execute tasks – tasks that are delivered by previous tasks executed by some of these worker threads. This model perfectly suits a many-core system with a large pool of simple cores; it can also incorporate heterogeneous systems comprising high capability cores, by simply modeling those as multiple threads.
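
As an illustration – a minimal sketch in Python, my own toy construction rather than any real runtime – the layering can be as small as this: a fixed set of worker threads models the cores, while the application simply fires off tasks (which may themselves enqueue further tasks). Note that in CPython the GIL serializes these threads; the point here is the model, not real speedup – a native runtime would map the workers onto actual cores.

```python
# Minimal sketch: worker threads model the HW, tasks model the application.
import os
import queue
import threading

task_queue = queue.Queue()  # the interface between the two layers

def worker():
    """A long-lived thread modeling one core: waits for and executes tasks."""
    while True:
        task = task_queue.get()
        if task is None:          # sentinel: no more work for this worker
            task_queue.task_done()
            break
        func, args = task
        func(*args)               # a task may put() further tasks itself
        task_queue.task_done()

# One worker per core models the parallel processing capacity...
workers = [threading.Thread(target=worker) for _ in range(os.cpu_count() or 1)]
for w in workers:
    w.start()

# ...while the application fires off as many tasks as it possibly can.
def compute(n):
    print(f"task {n} done")

for n in range(16):
    task_queue.put((compute, n))

task_queue.join()                 # wait until every task has been executed
for _ in workers:                 # then shut the workers down
    task_queue.put(None)
for w in workers:
    w.join()
```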

The interface between these two layers – HW modeled as threads and applications modeled as graphs of tasks – is obviously the scheduling policy: How are tasks managed (queuing)? How are constraints (e.g. timing) taken into account? How is load balancing – or power management – handled? I feel this is the core of future research in the area of many-core programming, and the solutions will likely be primarily domain (and, to a lesser extent, platform) specific.
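
To make that interface tangible, here is a hedged sketch of what a pluggable scheduling policy might look like; all names are hypothetical and the FIFO variant is only the simplest baseline – constraint handling and load balancing would live behind the same two calls:

```python
# Hypothetical interface between the task layer and the thread layer.
# Sketch only: real policies would handle timing constraints, affinity,
# power management and work stealing behind these same entry points.
from abc import ABC, abstractmethod
from typing import Any, Callable, List, Optional

Task = Callable[[], Any]

class SchedulingPolicy(ABC):
    @abstractmethod
    def enqueue(self, task: Task, deadline: Optional[float] = None) -> None:
        """Accept a task from the application, with an optional timing constraint."""

    @abstractmethod
    def next_task(self, worker_id: int) -> Optional[Task]:
        """Pick the next task for a given worker; load balancing lives here."""

class FifoPolicy(SchedulingPolicy):
    """Simplest baseline: one global FIFO queue, constraints ignored."""
    def __init__(self) -> None:
        self._pending: List[Task] = []

    def enqueue(self, task: Task, deadline: Optional[float] = None) -> None:
        self._pending.append(task)

    def next_task(self, worker_id: int) -> Optional[Task]:
        return self._pending.pop(0) if self._pending else None
```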

Obviously, this model does not address the root problem: where is the parallelism coming from? I believe the answer depends on the nature of the application and the scale of parallelism available in the platform. As a first step, the natural parallelism in the application problem can be exploited (think of generating the Fibonacci numbers); when that path is exhausted, I believe there’s only one possibility left: exploit speculative run-ahead pre-execution, potentially augmented with extended semantic information provided by the programmer (see my related post). Over and over I see researchers reaching the point where they raise this question, but somehow stop short of this – rather obvious – answer.
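
As a toy illustration of that first step: the two recursive branches of a Fibonacci computation are independent, so they can be handed to the task layer as separate tasks. A sketch only – it splits just once at the top, where a real task runtime would recurse with a sequential cut-off:

```python
# Natural parallelism in the problem itself: fib(n-1) and fib(n-2) are
# independent tasks. Sketch only; a real runtime would recurse with a cut-off.
from concurrent.futures import ProcessPoolExecutor

def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def parallel_fib(n: int) -> int:
    if n < 2:
        return n
    with ProcessPoolExecutor(max_workers=2) as pool:
        left = pool.submit(fib, n - 1)    # independent subproblem
        right = pool.submit(fib, n - 2)   # independent subproblem
        return left.result() + right.result()

if __name__ == "__main__":
    print(parallel_fib(30))  # 832040
```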

Turning it around: save for a revolutionary development in how we build single-core chips, is there a real alternative?

Being stuck and declaring defeat is not an option.

Open source as the philosophers’ stone

December 3rd, 2010

According to Wikipedia, the philosophers’ stone is “said to be capable of turning base metals (…) into gold; it was also sometimes believed to be an elixir of life, useful for rejuvenation and possibly for achieving immortality”. It was the grand prize of alchemists throughout the centuries.

What does it have to do with parallel software? Recently, while attending a supplier-organized event, open source was repeatedly mentioned as the best way to make sure that a certain technology gets widely accepted. The main argument went like this: companies come and go and may change their priorities, deciding to axe certain products or technologies; but once the software is out there, people will maintain it and use it. Hence, open source guarantees ‘eternal life’ for a software product.

I find this idea quite interesting. Let’s admit it: big open source projects long ago ceased to be the playground of enthusiastic amateurs and have instead become the main vehicle for co-operation between companies, while still tapping into the wisdom of willing individuals; the aspect of increased penetration, however – while now obvious – was perhaps not so self-evident. But it’s there: when Ericsson decided to down-prioritize Erlang, making it open source actually meant a new start and a successful career ever since (including inside Ericsson). Another example is the language Clipper (I used to program in Clipper back in the mid 90s): it was discontinued by its mother company, yet it’s still alive through a number of open-source projects, maintained by companies that still use it for successful products (in short: it’s a compilable C/C++-like version of dBase with pretty good scripting support).

So is open source the philosophers’ stone that can guarantee eternal life for worthy software? It looks like it – and it’s an aspect large companies have to consider when deciding how to handle their software offerings.

A tale of the textile and software industry – part II

December 3rd, 2010

My earlier post on the similarities between the textile and software industry triggered some feedback. Some agreed, some didn’t; one comment, from Mike Williams himself argued that we (Westerners) should be able to push our cost so low that the cost of outsourcing would be much higher.

I think this line of thinking misses one important point, and that’s what I want to elaborate on. During the past few decades, globalization has led to the emergence of two parallel power structures that sometimes interact, sometimes are at odds, but clearly influence each other.

The first power structure is the traditional nation state setup of our world. Traditionally, relationships between states defined the power structures, guided wars and determined if and where peace could prevail, as well as which companies got the upper hand, who got access to resources and so on. This structure is very much alive, but a parallel set of global actors is emerging, gradually bringing forward their interests and starting to flex their powers.

Obviously, I mean global corporations. Many of these have long ago ceased to be entities rooted in nation states – they act globally and have a global structure and a clear set of goals, vision and strategy. The two power structures are obviously strongly interconnected and dependent on each other (ultimately, the power of a state relies on the taxes it collects from economic activities; many companies are state owned etc.) but are increasingly at odds with each other. States want to keep jobs, companies ultimately want to minimize costs; states want to increase tax revenues, companies need to generate profits – and so on. Intel’s CEO said some years ago that his company could easily operate fully outside of the USA; Nokia’s push to reform Finland’s tax system was widely discussed and debated; many UK based companies chose to relocate their headquarters for tax reasons.

My point is really that shifting jobs around is no longer the same as outsourcing. A large global company can decide to shift its R&D to wherever it is most cost efficient (as long as certain requirements – related to e.g. IPR protection, legal stability and, at least for good global citizens, human rights – are fulfilled). This may be bad from a country’s point of view, but makes perfect sense for shareholders; no matter how efficient you become, it will eventually be replicated elsewhere. Fighting the inevitable is not the way – instead, focus on high value added areas where the operational costs are negligible and where your country will be the primary choice for other reasons (such as quality of living, infrastructure etc.).

If you can’t beat them, differentiate yourself.

Thinking ahead

November 18th, 2010

During 2010, Ericsson launched a blog geared towards new ideas and different perspectives. One of the main goals of ‘Thinking ahead‘ (the name of the blog) is to foster open debate about the future of communications and how it will impact our lives. I think this is a good idea, especially as bloggers from both inside and outside the company are invited to share their visions.

Now I’m one of those bloggers. My first post – about computing swarms – has just gone live, and more will likely follow. Check it out and feel free to comment, either there, here, or in both places 😉