Archive for December, 2010

A Walk

Saturday, December 18th, 2010

I arrive from under ancient trees
– as if I were coming from the past –
(behind me the chestnuts have already burst into green)
to the Stadium – a behemoth, a fond memory
where I once cheered for victory (long ago)
and the Szamos –
crossing your ice
was an adventure when winter came
your faithful fishermen still stand on the bridge
but in the distance – that is already Hét Vezér square
– or as they call it today: July 14 –
I see us kicking the ball
and supplying the square's din –
to the left, my former school
(I pass on my message in silence)
I slow down – in its yard
a young father is coaching his little son
"throw from the shoulder, the basket is far"
a girl with flaming hair walks toward me – elegant –
(perhaps she even smiled at me)
from behind the corner the old house laughs
and with good reason: my head
is covered by a snowfall of petals
– I have arrived, home –
I hear the cheerful barking
(a young schoolboy hurries past me
– or am I only imagining him?)
now where did you put your key, you fool?!

Morrison

Saturday, December 18th, 2010

Dawn arrives like a murderer
Killing the flames of the glowing night
Wait, do not smother
The thundering sounds
No! Do not put out the smoldering desires
Give the last drink its time
Give the final song a minute

The lights have died, they have abandoned
Your grey face
I see the tired wrinkles upon it
In a bitter fog I search,
Staggering, dazed,
For your murdered dream

Future of Software Engineering Research

Friday, December 17th, 2010

In early November I attended the Future of Software Engineering Research (FoSER) workshop, co-located with FSE (Foundations of Software Engineering), one of the flagship conferences in the area (a colleague and I also had a paper at the workshop, arguing for language-oriented software engineering). One of the goals of the workshop was (quite openly) to create an agenda for which the NSF can allocate research funds; for someone coming from Europe, however, it provided a fascinating glimpse into the workings of the community.

One thing I found striking was the very low industrial participation: company representatives were few and far between, and none of the major SW companies were visible. When I asked around for the reasons, the answers were perplexing: this is a research community, doing serious science (sic!); the community is driven by a few hard-minded theorists; companies just use what we provide, but we shall go on with our research agenda. Based on my experience with other communities – such as computer architecture or operating systems – I found this attitude (whatever the real reasons were) stunning. In my view, software engineering research should be the most palpable and practical of all the branches of computer science; the fact that it isn't came as a great surprise.

Now, about the workshop itself: five main themes were identified and worked on in smaller work groups. These were:

  • Helping people produce and use software-intensive systems: the discussions ranged from domain-specific languages to end-user programming and the management of security issues (such as within Facebook)
  • Designing the complex systems of the future: the systems highlighted were related to health care, large-scale ground transportation, air traffic control and the battle space; interestingly, speculative pre-execution was singled out as a promising approach
  • Making software-intensive systems dependable: ideas such as interactive and differential program analysis, requirement languages for the “masses” and automated programming were raised. I have deep doubts about the latter two, but the first one sounded quite promising.
  • Software evolution: this revolved around improving the decision process and bringing economics tightly into the technology picture
  • Advancing the discipline of SW engineering and research methodology: this was a mixed bunch. Embracing social science methods was an interesting idea, but I would strongly question embedding formal methods into daily software development. Those methods, while intellectually appealing, have proven hard to scale and unusable on anything where they might actually make sense.

Well, we’ll see what comes out of this effort – but the process and the interaction were well worth the time spent there.

Getting ready for Christmas

Saturday, December 11th, 2010

The bakers have baked Christmas gingerbread – here is the proof.

The dough in the making

Cutting out the dough

At work

The little kitchen helper took photos too

Üni and space travel

Saturday, December 11th, 2010

(an interlude in two parts)

Part one
Characters: Üni, András, an "astronaut" cap that covers the neck too
Place and time: at home, in the morning, getting ready for kindergarten

Üni: Daddy, what kind of cap is this?
András: An astronaut cap
Üni: What is an astronaut?
András: An astronaut flies very high up, much higher than an airplane; it's very cold up there, and that's why he has a cap like this
Üni: I see. I like flying too!

Part two
Characters: Üni, András, the moon and the Evening Star
Place and time: in front of the kindergarten, in the afternoon, on the way home

Üni: Daddy, look, the moon!
András: Indeed, and next to it the Evening Star

(they get into the car; Üni, twisting around, keeps an eye on the moon)

Üni: Daddy, the moon is following us!
András: It's not following us, it's just so far away that it can be seen from everywhere
Üni: Let's go closer and have a look!
András: We can't, Manó, it's so high up that only astronauts can go there
Üni: I have an astronaut cap!

(silence, then curtain)

Layering the parallel computing model

Thursday, December 9th, 2010

In order to execute a program on a parallel computer, one obviously has to decompose it into computations that can run in parallel on the different processor cores available in that computer – this much is readily agreed upon in the computing community. When it comes, however, to how to do this decomposition, what HW primitives to use and who (human or SW/HW) should do it, opinions start to diverge quickly.

One debate pits the shared memory model (synchronization based on access to the same storage) against share-nothing (synchronization based on messages); another pits threading (or lightweight-process) based decomposition against various task models (with or without shared memory). In this post, I'll focus primarily on the second debate.

The strength of task-based models is that they decouple the amount of parallelism in the application from the available parallel processing capacity: the application can fire off as many parallel tasks as it possibly can and then rely on the underlying system to execute them as efficiently as possible. Threading at the application level, on the other hand, requires design-time partitioning into reasonably long-lived chunks of functionality that can then be scheduled by the lower layer (middleware or OS); this partitioning – save for a few specific cases, such as telecom servers – usually depends on the amount of parallel processing capacity available in the target system and hence does not decouple application parallelism from parallel processing capacity (putting too many compute-intensive threads on one core may actually decrease overall performance).
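
To make the decoupling concrete, here is a minimal C++11 sketch (std::async merely stands in for a real task runtime; the function names and the chunk size are made up for illustration): the application expresses all the parallelism the problem offers, without knowing or caring how many cores the target machine has.

    // Minimal sketch: express all available parallelism as tasks and let the
    // runtime decide how many actually execute concurrently.
    #include <algorithm>
    #include <cstddef>
    #include <future>
    #include <numeric>
    #include <vector>

    long long sum_chunk(const std::vector<int>& data, std::size_t begin, std::size_t end) {
        return std::accumulate(data.begin() + begin, data.begin() + end, 0LL);
    }

    long long parallel_sum(const std::vector<int>& data, std::size_t chunk = 4096) {
        std::vector<std::future<long long>> tasks;
        for (std::size_t i = 0; i < data.size(); i += chunk) {
            std::size_t end = std::min(data.size(), i + chunk);
            // Fire off as many tasks as the problem allows; the count is
            // independent of how many cores the machine happens to have.
            tasks.push_back(std::async(sum_chunk, std::cref(data), i, end));
        }
        long long total = 0;
        for (auto& t : tasks) total += t.get();
        return total;
    }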

Herein lies the potential for combining the two models: on a many-core processor, the threading model can be used to provide an abstract model of the underlying HW, while the task model exposes the amount of parallelism in the application. The model of the HW can be extremely simple: allocate one thread (or lightweight process) to each processor core or HW thread, modeling workers that wait to execute tasks – tasks that are themselves delivered by previous tasks running on some of these worker threads. This model suits a many-core system with a large pool of simple cores perfectly; it can also accommodate heterogeneous systems that include high-capability cores, simply by modeling those as multiple threads.
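
A sketch of what this thread layer could look like in practice (C++11; all names are made up for illustration, this is not any particular framework): one worker thread per core or HW thread, a shared task queue, and tasks that may themselves submit further tasks.

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    // Thread layer: a fixed set of workers, one per core / HW thread, that do
    // nothing but execute tasks handed to them. (Illustrative names only.)
    class WorkerPool {
    public:
        WorkerPool() {
            unsigned cores = std::thread::hardware_concurrency();
            if (cores == 0) cores = 1;                       // fall back if unknown
            for (unsigned i = 0; i < cores; ++i)
                workers_.emplace_back([this] { run(); });
        }

        ~WorkerPool() {
            {
                std::lock_guard<std::mutex> lock(m_);
                done_ = true;
            }
            cv_.notify_all();
            for (auto& w : workers_) w.join();
        }

        // Task layer: tasks are plain callables; a running task may submit
        // further tasks, so the task graph unfolds at run time.
        void submit(std::function<void()> task) {
            {
                std::lock_guard<std::mutex> lock(m_);
                tasks_.push(std::move(task));
            }
            cv_.notify_one();
        }

    private:
        void run() {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                    if (done_ && tasks_.empty()) return;     // drain, then exit
                    task = std::move(tasks_.front());
                    tasks_.pop();
                }
                task();                                      // run the application task
            }
        }

        std::vector<std::thread> workers_;
        std::queue<std::function<void()>> tasks_;
        std::mutex m_;
        std::condition_variable cv_;
        bool done_ = false;
    };

A task graph of the kind described above then amounts to nothing more than submit() calls made from inside other tasks; the application never creates a thread itself.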

The interface between these two layers – HW modeled as threads, applications modeled as graphs of tasks – is obviously the scheduling policy: how are tasks managed (queuing)? how are constraints (e.g. timing) taken into account? how is load balancing – or power management – handled? I feel this is the core of future research in the area of many-core programming, and the solutions will likely be primarily domain (and, to a lesser extent, platform) specific.
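
Staying with the sketch above, the scheduling policy could be made the explicit, pluggable contract between the task layer and the thread layer. The interface and the deadline field below are assumptions for illustration, not an existing API; an earliest-deadline-first policy is shown as one possible answer to the "constraints" question.

    #include <functional>
    #include <queue>
    #include <utility>
    #include <vector>

    // A task carries its work plus whatever constraints the domain needs;
    // a deadline is used here purely as an example.
    struct Task {
        std::function<void()> work;
        int deadline_ms;
    };

    // The pluggable contract between the task layer and the thread layer:
    // the WorkerPool above would hold one of these instead of a plain FIFO queue.
    class SchedulingPolicy {
    public:
        virtual ~SchedulingPolicy() {}
        virtual void enqueue(Task t) = 0;      // how are tasks managed (queuing)?
        virtual bool dequeue(Task& out) = 0;   // which task should a free worker run next?
    };

    // Earliest-deadline-first: one possible way to take timing constraints into account.
    class EdfPolicy : public SchedulingPolicy {
        struct Later {
            bool operator()(const Task& a, const Task& b) const {
                return a.deadline_ms > b.deadline_ms;        // smallest deadline on top
            }
        };
        std::priority_queue<Task, std::vector<Task>, Later> queue_;

    public:
        void enqueue(Task t) override { queue_.push(std::move(t)); }
        bool dequeue(Task& out) override {
            if (queue_.empty()) return false;
            out = queue_.top();                              // copy, then remove
            queue_.pop();
            return true;
        }
    };

Swapping in a different policy (plain FIFO, per-worker queues with work stealing, power-aware batching) would touch neither the worker threads nor the application tasks – which is exactly the separation argued for here.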

Obviously, this model does not address the root problem: where is the parallelism coming from? I believe the answer depends on the nature of the application and the scale of parallelism available in the platform. As a first step, the natural parallelism of the application problem can be exploited (think of generating the Fibonacci numbers); when that path is exhausted, I believe there's only one possibility left: exploit speculative run-ahead pre-execution, potentially augmented with extended semantic information provided by the programmer (see my related post). Over and over I see researchers reaching the point where they raise this question, but somehow stopping short of this – rather obvious – answer.
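
As a toy illustration of what speculative run-ahead pre-execution means at the code level (the functions are stand-ins, and a production runtime would need cancellable tasks rather than std::async): the probably-needed computation is started while the condition that decides whether it is needed is still being evaluated.

    #include <future>
    #include <numeric>
    #include <vector>

    // Stand-ins for application code (assumptions for the sketch): a long-running
    // pure computation and a condition that is itself costly to evaluate.
    long long expensive_computation(int n) {
        std::vector<long long> v(1000000, n);
        return std::accumulate(v.begin(), v.end(), 0LL);
    }
    bool result_is_needed(int n) { return n % 2 == 0; }

    long long compute(int n, long long fallback) {
        // Start the probably-needed work speculatively on another core.
        std::future<long long> speculative =
            std::async(std::launch::async, expensive_computation, n);
        if (result_is_needed(n))
            return speculative.get();    // speculation paid off: part of the work is already done
        // Mis-speculation: the result is simply discarded. (With std::async the
        // future's destructor still joins the task, which is why a real runtime
        // would want cancellable speculative tasks instead of this toy version.)
        return fallback;
    }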

Turning it around: save for a revolutionary development in how we build single-core chips, is there a real alternative?

Being stuck and declaring defeat is not an option.

Open source as the philosophers’ stone

Friday, December 3rd, 2010

According to Wikipedia, the philosophers’ stone is “… said to be capable of turning base metals (…) into gold; it was also sometimes believed to be an elixir of life, useful for rejuvenation and possibly for achieving immortality”. It was the grand prize of the alchemists throughout the centuries.

What does this have to do with parallel software? Recently, at a supplier-organized event I attended, open source was repeatedly mentioned as the best way to make sure that a technology gets widely accepted. The main argument went like this: companies come and go, and they may change their priorities, deciding to axe certain products or technologies; but once out there, people will maintain and use the software. Hence, open source guarantees ‘eternal life’ for a software product.

I find this idea quite interesting. Let’s admit it: big open source projects long ago ceased to be the playground of enthusiastic amateurs and have become the main vehicle for co-operation between companies, while still tapping into the wisdom of willing individuals; the aspect of increased penetration, however – while obvious now – was perhaps not so self-evident. But it’s there: when Ericsson decided to down-prioritize Erlang, making it open source actually meant a new start and a successful career ever since (including inside Ericsson). Another example is the language Clipper (I used to program in Clipper back in the mid-90s): it was discontinued by its mother company, but it’s still alive through a number of open-source projects, maintained by companies who still use it for successful products (in short, it’s a compilable, C/C++-like version of dBase with pretty good scripting support).

So is open source the philosophers’ stone that can guarantee eternal life for worthy software? It looks like it, and it’s an aspect large companies have to consider when deciding how to handle their software offerings.

A tale of the textile and software industry – part II

Friday, December 3rd, 2010

My earlier post on the similarities between the textile and software industry triggered some feedback. Some agreed, some didn’t; one comment, from Mike Williams himself, argued that we (Westerners) should be able to push our costs so low that the cost of outsourcing would be much higher.

I think this line of thinking misses one important point, and that’s what I want to elaborate on. During the past few decades, globalization has led to the emergence of two parallel power structures that sometimes interact, sometimes are at odds, but clearly influence each other.

The first power structure is the traditional nation-state setup of our world. Traditionally, relationships between states defined the power structures, guided wars and determined if and where peace could prevail, as well as which companies got the upper hand, who got access to resources and so on. This structure is very much alive, but a parallel set of global actors is emerging, gradually bringing forward its interests and starting to flex its powers.

Obviously, I mean global corporations. Many of these have long ago ceased to be entities rooted in nation states – they act globally and have a global structure and a clear set of goals, vision and strategy. The two power structures are obviously strongly interconnected and dependent on each other (ultimately, the power of a state relies on the taxes it collects from economic activities; many companies are state-owned; and so on), but they are increasingly at odds with each other. States want to keep jobs, companies ultimately want to minimize costs; states want to increase tax revenues, companies need to generate profits – and so on. Intel’s CEO said some years ago that his company could easily operate fully outside of the USA; Nokia’s push to reform Finland’s tax system was widely discussed and debated; many UK-based companies chose to relocate their headquarters for tax reasons.

My point is really that shifting jobs around is no longer the same thing as outsourcing. A large global company can decide to shift its R&D to wherever it is most cost-efficient (as long as certain requirements – related to e.g. IPR protection, legal stability and human rights, at least for good global citizens – are fulfilled). This may be bad from a country’s point of view, but it makes perfect sense for shareholders; no matter how efficient you become, it will eventually be replicated elsewhere. That is not the way to fight the inevitable – instead, focus on high value-added areas where operational costs are negligible and where your country will be the primary choice for other reasons (such as quality of living, infrastructure, etc.).

If you can’t beat them, differentiate yourself.