Is it time to privatize the cloud?

May 18th, 2014

Sure, the talking heads of cloud have been preaching for a long time that public clouds will take over the world and become the new utility behemoths. The big (or early) ones – AWS, Azure, Google – have certainly seen rapid growth, and this growth will likely continue for a long time. But is the future all public cloud, or does the private cloud have any chance to exist?

To answer this question, several dimensions need to be considered. To start with, there is the question of privacy of information. Even if a company chooses a model of remote delivery of software (think Office 365), regulation or simply company policy will often require that sensitive data be kept within the legal and physical control of the data owner (i.e., never made available to the cloud service provider). Recent news and the legal framework under which many major cloud service providers operate make this issue even more pressing: many companies will be legally bound to keep their data within their own infrastructure – especially if they operate in sensitive areas such as communications, finance or government, which make up a sizable chunk of the potential cloud clientele. In short, even if you discard the cost of shoveling a lot of data around, data privacy will be a serious catalyst for not embracing fully public cloud solutions.

The second aspect has to do with scale and simple economics. When your installation is small, operational costs dominate and using a public cloud is the more sensible choice. As your business scales, at some point internalizing your infrastructure simply becomes more cost efficient, as many AWS case studies will show you (think Netflix). Public cloud is a bit like Kickstarter: great for getting rolling, but once your business is ready for prime time, using a bank is the more sensible choice.
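The crossover point can be illustrated with a toy cost model. All the prices and figures below are made up purely for illustration – real cloud pricing and operational costs vary widely:

```python
# Toy break-even sketch for public vs. private cloud costs.
# All numbers are hypothetical, for illustration only.

def public_cloud_cost(vms, price_per_vm_month=70):
    """Pay-as-you-go: cost scales linearly with usage, no fixed cost."""
    return vms * price_per_vm_month

def private_cloud_cost(vms, fixed_ops_month=20000, price_per_vm_month=25):
    """Owned infrastructure: high fixed cost (staff, facilities),
    but a much lower marginal cost per VM."""
    return fixed_ops_month + vms * price_per_vm_month

# Find the scale at which internalizing becomes cheaper.
breakeven = next(v for v in range(1, 10_000)
                 if private_cloud_cost(v) < public_cloud_cost(v))
print(breakeven)  # 445 VMs with these made-up numbers
```

Below the break-even point the fixed cost of running your own operation dominates; above it, the lower marginal cost per VM wins – which is exactly the dynamic the paragraph above describes.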

The third aspect has to do with the segmentation of the cloud market in terms of unique requirements. While initially a cloud service was just a uniform offering, as the market matures, more stringent and differentiated requirements come to the fore. Whether it is real-time performance, availability, security or something else, segmentation is a fact of life – as illustrated by AWS' differentiated portfolio. However, with segmentation come additional costs and smaller individual segments, where economies of scale have to take a back seat.

Putting it all together: while public cloud providers have done a decent job kickstarting the industry, it's time to re-focus on private cloud deployments. Public cloud will continue to grow, but private cloud demand – volume wise – will grow faster and will be a more differentiated market. How should the big public cloud companies respond?

To stay relevant, they need to expand their customer base to those who would never use their public cloud offering. How? The answer is simple: privatize their cloud stack and sell it to those planning to build private clouds. Couple it with DevOps services and they can replicate their success at a totally different level. Compatibility with their public cloud will be a huge asset.

Are they ready?

Should you pass on PaaS?

May 14th, 2014

It’s funny to watch all the predictions of how fast different parts of the cloud market will grow – but also how some analysts “adjust” their forecasts from 30%+ CAGR to meager low single digits within just a few months, or go from market sizes of tens of billions to just a couple of billion. All these adjustments reveal one key thing: there is a large degree of uncertainty about how fast the cloud market will grow (no one really questions the enormous growth potential) and where the focus will really be within a couple of years.

No other area reflects this better than Platform as a Service (PaaS). Once touted as the main source of growth, nowadays it’s more a necessary add-on for successful IaaS providers. 451 Research recently released a research note highlighting the movements in the market and how PaaS solutions are really used – that is, which one got acquired by whom. So, should anyone care about PaaS? Or should we just pass on it?

It’s always good to remind ourselves what PaaS really is. Essentially, any self-respecting PaaS solution offers three types of services: application life cycle management, application runtime environments, and services (which, in most cases, can be extended with other, user-defined services). Typically, life cycle management includes staging, termination, scaling (auto-scaling) and policy-based SLA enforcement – just what you would expect from any serious cloud management solution. On the runtime environment side you will find support for Ruby, JavaScript, Java, Python and the like. As for services, security, database and messaging are present in most solutions.
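To make the three categories concrete, here is a minimal, purely hypothetical sketch in Python – the class, method and service names are my own invention for illustration, not any real PaaS API:

```python
# A toy, hypothetical model of the three PaaS service categories
# described above -- illustrative only, not any real PaaS API.

class ToyPaaS:
    RUNTIMES = {"ruby", "javascript", "java", "python"}   # runtime environments
    SERVICES = {"security", "database", "messaging"}      # built-in services

    def __init__(self):
        # app name -> life cycle state
        self.apps = {}

    # 1. Application life cycle management
    def stage(self, app, runtime):
        if runtime not in self.RUNTIMES:
            raise ValueError(f"unsupported runtime: {runtime}")
        self.apps[app] = {"instances": 1, "services": set()}

    def scale(self, app, instances):
        # an auto-scaling or SLA-enforcement policy would call this
        self.apps[app]["instances"] = instances

    def terminate(self, app):
        del self.apps[app]

    # 3. Services -- in a real PaaS, extensible with user-defined ones
    def bind_service(self, app, service):
        self.apps[app]["services"].add(service)

paas = ToyPaaS()
paas.stage("shop", runtime="python")
paas.bind_service("shop", "database")
paas.scale("shop", 3)
print(paas.apps["shop"])  # {'instances': 3, 'services': {'database'}}
```

Note how much of the portability problem discussed below lives in exactly these calls: two PaaS products may expose the same *categories*, yet differ in staging semantics, scaling policies and available services.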

Before answering the question, let me make an attempt to lay out a few ground truths and principles around PaaS:

1. Just as for programming languages, don’t expect a PaaS standard anytime soon. The jungle may be sparser or thicker, but a jungle it will remain, and portability will be a pain
2. Services will be used on a “need to have” as well as a portability basis. An LDAP-based security service is a safe bet; an esoteric messaging service, say, much less so
3. If your app works with one PaaS, there is no guarantee it will work with another. Staging, policy enforcement and service usage will differ – it may start, but fail miserably down the road
4. Be sure to use an application framework that is widely supported: node.js, JVM-based frameworks, Ruby and Python are good bets. Others, not so much
5. PaaS will not replace a cloud management solution; it adds an extra layer to be managed

So, should you just pass on PaaS and stay with IaaS and whatever application framework you use?

The answer is yes and no. Yes, you should pass on the dream of a “universal PaaS” that would allow you to run your application on “any” cloud and get the same services. This will simply not happen – cloud providers will make sure to provide you with a PaaS layer, but one tailored to their particular cloud infrastructure. AWS has one, VMware (for their private cloud) has another (Cloud Foundry), and for Red Hat based clouds there is OpenShift. Moving applications between these and getting the same level of service is hard, if not impossible.

The answer is also no: it’s not a good idea to pass on PaaS entirely. It does provide a platform that insulates the application from some complex aspects of the cloud (VM management, scale-out/in, fault management) and lets the programmer focus on the task at hand – if you are developing a new application. Given the tendency of cloud providers to offer a bundled PaaS solution, the real question to answer is the same old one: which cloud is right for you? Public or private? VMware based, Linux/KVM/OpenStack or CloudStack based, or Azure based? Red Hat or some other distro? Once you have answered those questions, you have implicitly made the choice of PaaS too – in fact, it may be one of the decision criteria.

Just don’t expect “seamless” portability and interoperability. Go for performance, low cost or ease of migrating your legacy – whichever is more important for you.

Back in business

May 12th, 2014

Wow, it has been over two years since my last post (it was about OpenStack vs CloudStack) – and a lot has happened during this time. But now it’s finally time to get rolling again, both on the blog and on Twitter.

A lot has happened in the meantime. Professionally, we are making cloud – telco cloud – really happen, spiced up with a bit of SDN too (hence my change of work title and focus). Personally, our family has grown by one new member, Örs András, who passed the solid age of one year two months ago.

I guess this is enough to get this blog active again :-) Watch out for a new post on PaaS and cloud computing coming up soon.

Why the OpenStack vs CloudStack debate is irrelevant

April 27th, 2012

There has been a lot of hoopla around the announcement that Citrix will place CloudStack under the Apache umbrella, creating effective competition to OpenStack. A lot of virtual ink was spilled on which one is more compatible with Amazon and other such issues (though, I must admit, I enjoyed reading Randy Bias’ view). My point is quite simple: the debate is irrelevant, because what will really matter is what services the different stacks enable, not how good they are as IaaS managers (both are quite OK, by the way). The bulk of the value will not be in public IaaS clouds – but rather in customized SaaS (and sometimes PaaS) clouds geared towards the needs of specific customer groups.

Why?

Using public IaaS is great when you want to quickly prototype something. Beyond that, if you want to run your production enterprise system on it, your operational cost savings will be close to none, as you will have to maintain your full IT department and your own life-cycle management processes. Guess what? Operational costs are actually higher than capital costs – so the real saving would come from offloading your complete IT department, not just from converting your CAPEX on hardware into OPEX. This is why public IaaS will see limited utilization: its real use-case is to enable software-as-a-service offerings that can generate real benefits for cloud users.

Raising the game to that level, the debate over which is better is essentially transformed into another question: which one will gather the critical mass of vendor and user support to make it successful in the most unforeseen settings? The company I work for made a choice based on this principle, not on compatibility with this or that interface – and I strongly believe it’s the primary metric that will matter for most companies: this is why Linux is successful, while e.g. BSD is just a tiny niche player.

Beyond that, a bit of competition is always good ;-)

Is multi-core programming still hot?

April 27th, 2012

This is an idea that has been bugging me for a while: just how critical and hot is multi-core programming? Or, better phrased: is it relevant for programmers at large, or is it just a niche issue for some domains?

What triggered me to write this post was a recent business development workshop I attended in Gothenburg, organized as part of the HiPEAC Computing Systems Week. The goal was to draw up the business canvas for research ideas in order to facilitate moving them into the mainstream – and this is where I saw the question emerge again: given a technology that helps you parallelize your software, who will really be interested in buying it?

It has been argued for a while that end-user device applications and PC software won’t really need to bother beyond a few cores: multi-tasking will make sure that cores are used efficiently. Talking with web developers and those writing software for cloud platforms, the conclusion was the same: they haven’t seen the need to bother either – all this is happening at such a low level from their perspective that it’s irrelevant.

With all this out of the scope, what is really left?

High performance computing is surely in dire need of good programming models for many-core chips, in light of the race towards the teraflops machine. But this is quite a niche domain, as are some others, such as OS development or gaming platforms. Honestly, I have to admit: beyond these, I haven’t seen much interest or buzz about multi-core programming, a bit as if the whole hype had vanished and settled into a stable state. To be honest, that’s how it should be: given the level of sophistication and performance reached by single cores, all but the most demanding applications will find that just one core – or maybe a few, using functional partitioning – will be enough.

What will this mean?

I think the research community will have to come out and make it clear that the whole area should be specialized, instead of shooting for a holistic solution that no one will need. Yes, we will have to address the needs of specific domains – high performance computing, gaming, perhaps telecoms – but beyond that, further efforts will have little practical applicability. Other domains – power-efficient computing and extreme, cloud-based scalability – are the ones that will matter, and that is where research efforts should be focused.

With this, I think it’s time for me to leave multi-core programming behind – on this blog as well as in my professional life – and focus on those things that can make a broader difference.

Why OpenStack?

February 25th, 2012

As you probably saw, Ericsson has recently joined the OpenStack project. One may ask: why OpenStack, what’s the point of it?

The answer is not connected to Ericsson (and, of course, represents my personal view only), but it’s simple: it is really about openness. Openness is not about open source alone; it’s about freedom of choice: freedom to pick and mix your hardware according to your needs; freedom to design your network as you want; freedom to use the virtualization technology of your choice. The ICT industry was extremely successful at creating a bewildering choice of compute, storage and network hardware, virtualization platforms, virtual network models and so on – to be fair, each with its benefits (and, more often than not, drawbacks). Trying to create the perfect, one-size-fits-all “standard” hardware and virtualization platform is doomed to fail and would just result in yet another set of offerings. Better to leave that area alone: if you are building a mission-critical cloud, you are better off with e.g. a telecom blade system; if you are just after a vanilla IaaS platform, many vanilla IT blade vendors will be happy to serve you. The choice is – and should be – yours.

This is where OpenStack got it right: it aims to provide a software abstraction layer above the hardware and virtualization layers that can hide, and enable the management of, this bewildering diversity. It’s a tacit recognition of the fact that real value comes from the rest of your stack: how you manage automation, elasticity, scalability and resiliency; how you integrate with BSS systems and so on. Let the hardware and hypervisor vendors fight out the race to the bottom, wrap them in OpenStack and create real value.

When we first started using OpenStack, it quickly became strikingly clear how powerful the model is: we were able to add ground-breaking features such as WAN elasticity, distributed cloud support, SIM-based authentication and many more (about which I will talk at the World Telecommunication Congress) – while keeping the abstraction layer untouched. In addition, being an open source project, it was like a wish-come-true process: whenever we identified the need for some feature, it soon turned out that someone was actually already working on it.

However, in order to live up to its promise, OpenStack has to keep following the same principles: support the broadest possible set of hardware and the broadest set of hypervisors, with equal quality. If the community starts relaxing this holistic approach, it will reduce the appeal of OpenStack, which might eventually suffer the same fate as many other open source projects: lose momentum, lose corporate backing and eventually fold.

So, let the era of cloud freedom roll on – I’m sure some cool stuff will be coming soon from the community around OpenStack.

The Cloud in 2012

January 4th, 2012

I’ve just read Randy Bias’ post on cloudscaling.com – thought-provoking and interesting reading to be sure, but I’m not sure I agree with him.

Why?

Let’s put things into perspective. Amazon Web Services is clearly a fast-growing business and an admirable example of organic innovation. However, look at the total IT data center market: it was about 300 BUSD in 2011, of which 25% was related to cloud computing – meaning AWS captured roughly 1.3% of the total cloud market. Is that significant? Hardly. Is Amazon making money on AWS? Hard to tell, as no details are disclosed – but margins are probably razor thin. Here’s another piece of information: the public IaaS market was about 2.3 BUSD in 2011, so Amazon captured about 40% of it, which makes them the market leader – but other providers, like AT&T, Verizon and so on, are not that far behind. Besides, there are strong proof points that Amazon is actually not that cheap and efficient – I’m aware of successful private cloud installations coming in at 40% lower cost per VM than Amazon’s equivalent offering (and they are managed as a business, so server replacement, OPEX costs etc. are taken into account).
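The quoted figures can be cross-checked with a few lines of arithmetic (all values in BUSD, taken directly from the estimates above; the result lands in the same ballpark as the roughly 1.3% share quoted):

```python
# Cross-checking the 2011 market figures quoted above (all in BUSD).
total_it_dc = 300.0           # total IT data center market
cloud_share = 0.25            # fraction related to cloud computing
cloud_market = total_it_dc * cloud_share        # 75 BUSD

public_iaas = 2.3             # public IaaS market
aws_iaas_share = 0.40         # AWS' share of public IaaS
aws_revenue = public_iaas * aws_iaas_share      # ~0.9 BUSD implied

aws_of_cloud = aws_revenue / cloud_market
print(f"{aws_of_cloud:.1%}")  # 1.2%
```

In other words, the ~40% of a 2.3 BUSD public IaaS market and the ~1.3% of the total cloud market are two views of the same implied ~1 BUSD AWS revenue estimate.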

There’s a piece of information hidden behind those figures: the private cloud market is a huge business – perhaps over 50 BUSD already last year. Ignoring this market is a mistake few companies can afford without being severely punished. The enterprise market is ripe for “cloudification”, and not just for virtualizing existing applications or for cost savings: there is significant demand from enterprises for native cloud applications that scale indefinitely, and unified management of all aspects of IT – including networks – has been a key requirement since before clouds. Don’t get carried away by a 1 BUSD market when the real iceberg – 50 times bigger – lies under the sea.

So, what’s my prediction for 2012?

I think vanilla, remote IaaS providers like Amazon will see slowing growth, at least in comparison with premium-service public IaaS, private clouds, and PaaS and SaaS offerings. I think 2012 will be the year of the emergence of the personalized, localized and highly available (read: five nines) cloud, with radically improved network performance, targeting enterprise customers. We will see PaaS and SaaS clouds skyrocket: most private customers don’t want Amazon’s EC2 service – they want simple, convenient services such as Dropbox, iCloud, Google Docs etc. This is the future of cloud computing, not bare-bones IaaS that just moves your PCs to the cloud with unreliable performance and connectivity.

Let’s check it again in 362 days ;-)

The reverse business model

December 27th, 2011

I’ve been championing for a while what I call the ‘reverse business model’, so I’m glad to see it applied more and more often – most recently in the deal struck between Google and Mozilla, securing significant funding for the browser developer in return for making (keeping) Google the default search engine.

In short, the reverse business model is about charging the service/product provider instead of the end consumer: rather than having the end user pay for a service, you charge the original service provider for it, this way making sure the end user gets (the appearance of) a ‘free’ service (of course, in the end he will pay up – but through a different channel). Examples of this model include charging web sites for premium connectivity to the end user (say hi to the non-neutral net) or charging companies using social network sites such as Facebook – see my post about monetizing social networks. Google’s deal with Mozilla falls clearly into this category, making sure that Firefox’s users will use Google instead of Bing or Yahoo.

Will this model prevail for web-based services? I believe it’s the only way to monetize some of the most popular online services, such as social network sites; incidentally, it’s also one of the ways operators, for example, can monetize over-the-top delivery of web and cloud services.

But more about that in another post.

Chronicle of a Székely family (Part II)

December 15th, 2011

The move to Bölön – setting aside the changes of the 20th century – is one of the most important moments in the history of the Vajda family, and once again it almost coincides with an important moment in the history of Transylvania. In 1690, with the death of Mihály Apafi, the era of the independent Principality of Transylvania came to an end: the following century was about the slow, not always peaceful integration into the Habsburg empire, the organization of the Székely border guard, and the loss of traditional privileges.

In 1683 only the first signs of this could be felt: the power of the aging Apafi, returning from the failed Turkish siege of Vienna, began a slow decline – together with that of the Ottoman empire – and the state of the country was far from rosy. Assessing this was the purpose of the census in which Vajda István and János of Tordátfalva first appear as lófős (the Székely horse-soldier rank) in Bölön (in 1680 they do not yet appear in the muster roll); interestingly, János was added later, as a horseman serving Kálnaki Ferenc in Miklósvár. Since their origin is also specially noted, all this suggests that István and János arrived in Bölön after the Vienna campaign, perhaps receiving the lófő title precisely in recognition of merits earned during the campaign.

Before moving on, a few final words about the Vajdas’ past in Tordátfalva: local oral tradition no longer knows anything about the Vajdas who once left the village, and their memory is not preserved in place names either. The church registers of the 16th-17th centuries – if they ever existed – have been destroyed; nor can any data about them be found in the royal books preserving the surviving documents of the Transylvanian princes. Any further information could only come from the still-unprocessed, well-preserved archives of Udvarhelyszék kept in Kolozsvár, from hitherto unknown muster rolls, or from various local family legacies. The Udvarhely archive is especially interesting, as it preserves court records going back to the early 16th century – if the Tordátfalva Vajdas had any lawsuits, a record of them can most likely be found there.

Very little data survives about the now-Bölön Vajdas from the fifty years following the move, but fortunately these records – and local oral tradition – provide enough clues. The first record is from 1713, when on January 6th the entire military-age population of Háromszék was sworn to the loyalty of Charles III, under the supervision of chief royal judge Apor Péter (the list survives from his estate). The list is interesting for us for several reasons: it organizes those taking the oath by social class; every male older than 12 appears on it; and whoever could write signed the oath himself. In Bölön a single Vajda appears, among the nobles: Vajda István, who does not sign the oath himself.

What does this reveal about the Vajdas of Bölön?

Whatever happened in the three decades after the 1683 census, in 1713 there is a single, by now noble, Vajda family in Bölön, whose head either cannot write (not even his own name) or refuses to do so out of defiance. At the same time he has no sons older than 12, since no other Vajda appears on the list. The question rightly arises: is this István a descendant of the István and János mentioned thirty years earlier? Although this cannot be proven unambiguously, several pieces of circumstantial evidence support it. First, the overwhelming majority of the families recorded as horsemen in 1683 are present as nobles at the 1713 oath, suggesting that by 1713 most of the former lófős had acquired noble status – probably a consequence of acquiring land and of having their former privilege-confirming titles reconfirmed under the new circumstances (numerous records show that the Székelys tried to keep their former privileges by having the princes’ charters reconfirmed). Second, since he has no son older than 12, in 1713 István can hardly be older than 40, but as we shall soon see, he cannot be younger than 20 either – so he was born sometime between 1673 and 1693; later data suggest an age of around 25 (i.e., a birth around 1688). All this most likely links him to the István and János recorded in 1683; and since he is the only Vajda in 1713, this further supports the idea that in 1683 István and János may have been father and son. Too many coincidences would be needed for both the István and the János known from 1683 to die without heirs, and for another privileged, Unitarian Vajda family to move to Bölön from elsewhere in their place.

The other question to answer is whether István really is the ancestor of our family. The village tradition of Bölön answers this unambiguously: although by the early 20th century a significant part of the village bore the name Vajda, four “clans” were clearly distinguished: the “kutyabőrös” (holders of a parchment patent of nobility, and wealthy besides), the “magyarka”, the “putzken” (Gypsy), and the “machinist” Vajdas. Our family was the kutyabőrös one; my uncle, the sinologist Vajda Gyula, still saw the patent of nobility with his own eyes before World War II. Sadly it has since been lost, and with it an important piece of data – to whom, by whom and when it was issued.

Returning to the 18th century, the next record of the Bölön Vajdas is from 1721, when the serf Vajda János appears in the tax census as keeping a servant. Considering the kutyabőrös tradition, and the fact that nobles do not appear in that census, I believe this Vajda is not a member of our family (or at most represents a side branch). This is the last traceable record before we reach the solid ground of the church registers, from which point on the full lineage can be followed.

The registers of the Unitarian church of Bölön survive intact and in good condition from 1739 to 1908 in the Sepsiszentgyörgy archives (the registers of the years after that are still kept by the local parish office). The old registers record births, marriages (“copulálás”) and burials – unfortunately, at first the age of the deceased was not recorded at burials, just as only the father of a child being baptized was entered, which in many cases made untangling the threads difficult.

Analysis of the data from the first period yielded the following about the Vajdas: at the beginning of the 18th century very few Vajdas lived in Bölön; the Vajda families traceable back to the early 18th century all descend from two ancestors, Vajda Pál (the ancestor of the kutyabőrös branch) and Vajda József, though no data exists on any kinship between the two; and finally, the István recorded in 1713 had died before 1739, as had the János recorded in 1721.

The register records five Vajdas who were born, married and had children before 1739 but died after 1739: Gyurka (d. 1743, possibly a minor), Miklós (d. 1755), Bálint (d. 1756), Ferenc (d. 1763) and Mihály (d. 1765). Mihály’s case is interesting: he is noted as being very old, 82 years of age – suggesting that he too was an “immigrant”, probably born elsewhere and moving to Bölön after 1713 (perhaps through marriage), and thus belonging to some other, non-kutyabőrös branch of the Vajdas.

Returning to the kutyabőrös Vajdas: Pál married first in 1742, then again in 1754, and died in 1779 (unfortunately his age was not recorded). His birth can nevertheless be placed around 1715-1717, so he was most likely the son of the István recorded in 1713. From him on, the lineage can be followed clearly:

Vajda György, son of Pál (1747-1812, married Bálint Erzsók in 1773)
Vajda József, son of György (1773-1827, married Szomor Zsuzsa in 1799)
Vajda György, son of József (1803-1868, married Keresztes Rozália in 1826)
Vajda Sándor, son of György (1832-?, married Pál Judit in 1856)
Vajda Sándor, son of Sándor (1857-1942, married Kozma Julianna, born 1862, in 1884)
Vajda András, son of Sándor (1904-1964): my grandfather, the first Vajda to move away from Bölön, first to Brassó and then to Kolozsvár. My father was already born in Brassó; I consider myself a native of Kolozsvár.

The history of the kutyabőrös Vajdas of Bölön as a Székely soldier family ends here: its last member to serve as a soldier was my grandfather. The picture remains incomplete, since we know very little of the lives and deeds of Pál and his descendants, even though they witnessed, and perhaps took part in, such important events as the 1848 revolution and the First World War. Fortunately, the available source material is much richer – so the possibility is there for a more detailed, more colorful family portrait to take shape.

Chronicle of a Székely family (Part I)

December 14th, 2011

This post is about my family – about what I have managed to find out about my ancestors over the past half year. It is an expanded, extended version of the post already written in English.

The documentable history of the “kutyabőrös” Vajda family of Bölön really begins in 1602, in a remote village of Udvarhelyszék, Tordátfalva. Tordátfalva was then, as it is today, a settlement of 15-25 families tucked into a valley far from the main roads, its inhabitants living from animal husbandry and apple growing. According to tradition it got its name from Tordát, the legendary warrior fleeing the castle of Firtos, who settled here – if there is any kernel of truth in this, the village’s history goes back to before Saint Stephen. It perhaps first appears in records in the papal tithe register of 1333, as a place with a church, under the name Villa Thorta – though many identify this rather with nearby Tarcsafalva. Its first indisputable mention is from 1546, when a certain Franciscus Sartor Thoratfalinus appears in the matricula of the Brassó gymnasium; the 1567 regestrum records it with 21 gates, as a village ecclesiastically belonging to Székelyszentmiklós, although it had had its own church since the 15th century. It kept this size, with smaller or larger changes, in the following decades.

Owing to its size, we know little of Tordátfalva’s medieval history. The archives of Udvarhelyszék provided interesting data: from these we know that it already had a schoolmaster in 1642 (November 25, 1642: István Deák, alias Bandy), and in 1672 Enok Ferentz is mentioned as the local schoolmaster, suggesting that the village’s children learned at least the basics of reading and writing as early as the 17th century. These two schoolmasters may well have taught some members of the Vajda family too. Further interesting details come from the inventory prepared during the episcopal visitation of 1789: according to this, the church underwent significant rebuilding during the 17th century and owned several old silver objects, hinting at the economic strength of the village.

From this village comes Vajda János, free Székely foot soldier, who is there among the Székelys sworn to the emperor’s loyalty by General Basta at Székelykeresztúr on August 11, 1602, together with 26 of his fellow villagers; two years later we see him again as a musketeer serving in the company of Demeter Ferencz. The censuses surviving from the 16th century give indirect information about his family background: of Tordátfalva’s 20-21 families, 10-11 were serfs, 5-6 were nobles or lófős, and the rest were free, foot-soldier Székelys. Interestingly, from the 16th century either the list of lófő families or that of serf families survives – and Vajdas appear on neither. All this, together with János’s classification in 1602 and 1604, makes it likely that the Tordátfalva Vajda family, if it already lived in Tordátfalva at the time, was one of the free foot-soldier families.

Our János’s life coincides with the period in which the Principality of Transylvania and the Unitarian religion took shape: considering the data on his descendant’s life, János may have been born between 1570 and 1575, precisely in the period when, following the Peace of Speyer, the institutions of the Transylvanian principality were formed and, through the rapid conversion sweeping the Székely Land, the Unitarian religion – thanks in part to the Diet of Torda’s 1557 edict on religious freedom – became one of the dominant religions in Transylvania. Tordátfalva was demonstrably a Unitarian village by the end of the 16th century, so we may rightly assume that János was already baptized in the new faith. As a young man he lived through Transylvania’s first golden age under Prince István Báthory, also king of Poland; he probably soldiered in the time of the wavering Zsigmond Báthory and, together with his fellow Székelys, sided with Michael the Brave, voivode of Wallachia, in 1600-1601. He lived through the Fifteen Years’ War and the atrocities of Basta’s army – he may even have lost his life in one of its battles, or perhaps during the Bocskai uprising, which secured his family’s rights (Bocskai restored the traditional Székely privileges). What is certain is that by 1614 he no longer appears in the military censuses ordered by Gábor Bethlen – but his family still lived in Tordátfalva: in 1627 Vajda Péter appears in the muster rolls as a veteran foot soldier.

Comparing the scarce available data, it is thus likely that János died before 1614; his son Péter was still a minor at the time, perhaps 10-15 years old, which is why he does not appear in the detailed register of 1614. By 1627 Péter would thus have been a young man of about 25-28, who had probably already taken part in Gábor Bethlen’s second anti-Habsburg campaign of 1624, which is why he is already recorded as a veteran. We meet him again in the register of November 18, 1635, ordered by György Rákóczi, still as a foot soldier – but by then it is noted that he has two sons: Mihály, who already counts as a young man, and another minor boy whose name is not recorded. On this basis Péter probably married around 1620, at 20-23 years of age; his son Mihály was probably born around 1621-1622, and his second son perhaps around 1630. We have two more pieces of data about Péter: on June 12, 1648 he takes an oath of allegiance, still as a soldier of György Rákóczi I; on June 30, 1655 he appears as a village councilor (jurati) on the list of those absent from György Rákóczi II’s campaign in Wallachia, with the note that he was ‘sick (sindet) at the time of mobilization’ – but even this entry was later crossed out. It may well be that Péter died around that time, which would not be surprising, since at nearly 60 he already counted as long-lived. His life coincided with Transylvania’s golden age, marked by the reigns of Gábor Bethlen and György Rákóczi I.

What is surprising, however, is that Péter’s sons, Mihály and his younger brother, appear nowhere in the registers – even though by 1648 Mihály may well have been a married man (unless he had died earlier or moved elsewhere), and even his younger brother would have come of age. In fact, after 1655 we have no data at all on the Vajdas in Tordátfalva, neither in the register of 1674 nor in that of 1679, until 1683, when János and István Vajda, by then holding lófő rank, move from Tordátfalva to Bölön.

What could be the reason for this?

The data offer three clues: Péter’s direct male line did not die out, since the Vajdas mentioned in 1683 are unambiguously from Tordátfalva and we know of no other Vajdas there; the family remained tied to Tordátfalva in some way after Péter’s death, since it is from there that they move in 1683; and the acquisition of the lófő title suggests that a member of the family distinguished himself. Putting all this together, the most likely conclusion to me is that Mihály or his younger brother, and later their male descendants, served at the princely court, perhaps as members of the guard under Prince Apafi, while the family kept its base in Tordátfalva. This theory explains both their absence from the musters and the move to Bölön – they may have received the lófő title in exchange for their services, perhaps together with an estate. They may also have come by the estate through marriage, which again could have prompted the move to Bölön, a place that – compared with Tordátfalva – counted as central at the time.

Considering the probable birth dates, two scenarios are possible. In the first, the István Vajda mentioned in 1683 is the son of Mihály or of his younger brother, born between 1650 and 1655. The other lófő Vajda of Tordátfalva mentioned in 1683, János, recorded as a servant of Farkas Kálnaki, may be István’s younger brother – in any case they must be relatives, since both came from Tordátfalva. In the second scenario, István is in fact Mihály’s younger brother, Péter’s son, and János is his son, who entered the service of the local nobleman. In that case István would have been over 50 and János in his twenties when they moved to Bölön. I consider the second alternative the more likely, but in the absence of further data the question cannot be settled with certainty.

(to be continued)

Üni’s star photo :-)

November 23rd, 2011


Old story, told again (with a twist)

November 23rd, 2011

Reading Steve Jobs’ biography, I was reminded of the old claim that Microsoft copied (or stole, as Jobs put it) the graphical user interface, making it the backbone of their operating system’s success (while, for Apple, it was just one feature that helped make the product desirable). I find Bill Gates’s summary of the events – told to Jobs – quite funny: “Steve, it’s like me breaking into Xerox PARC to steal their TV set, just to find out that you already stole it” :-)

Reading the details of that story, largely irrelevant today, it struck me how similar it is to a more recent one, this time involving Apple and Google. Even the most hard-core Android fanboys have to admit that its touch interface is a shameless copy of iOS’s (poor Apple, the one subject to a double robbery ;-) ). But there’s a twist: this time around, Google also copied the philosophy: they use it just to promote their other, real product: search.

Nevertheless, to me this is history repeating itself with a little twist.

Just how small the mobile room really is

November 23rd, 2011

No doubt, we are witnessing fast-paced changes in the mobile landscape. The emergence of Android; the downfall of Nokia; the acquisition of Motorola by Google; Microsoft’s fightback; the departure of Ericsson – all are events that would have taken years to play out in the good old days of the last century – yet all this happened within little more than a year.

On closer inspection, however, the mobile industry seems to have room for just a few players. For all its meteoric rise, Android only generates healthy profit for two of its fans – Samsung and HTC – while the others are barely making ends meet, if at all (it also generates revenue for Google through search and for Microsoft through patent licensing, but that’s a different story). Things might get even murkier if Motorola gets preferential treatment.

Or take tablets: after almost two years Apple still rules, with the others folding their offerings quicker than you can read their names (with the recent exception of Samsung). Microsoft is throwing money around, but perhaps with the exception of Nokia, no one is making much out of it. Nokia will likely enter the tablet field with an ARM- and Windows 8-based tablet and a good fighting chance, but the rest will find it a hard act to follow.

So, where will this lead us? In my humble opinion, it’s only a matter of time before RIM goes out of business (perhaps bought by Nokia to smooth its entry into the US market), leaving just three ecosystems: iOS, possibly merged with OS X, dominating the tablet market with a healthy market share in mobiles; Android, as a copycat with a significantly larger market share; and Windows, catering to business users and gaining a healthy share of the tablet market. But all these ecosystems will be dominated by just a few players: Apple (iOS), Samsung or HTC (Android, with one of them likely entering a path of slow decline) and Nokia (Windows).

The rest? As Mark Knopfler puts it in one of his songs: “those who don’t like the danger soon find something different to try”.

Is IaaS fading away?

September 28th, 2011
Two weeks ago we had a really great panel at the Swedish Cloud Day, with participation from OpenNebula, Google, Berkeley and Ericsson (myself). We discussed quite a few issues, but one has kept bugging me ever since: are we seeing a decline of Infrastructure as a Service (IaaS), while PaaS and SaaS become dominant?
To answer that question, one has to look at when IaaS is really needed, and by whom. One obvious use case is virtualization of existing applications, such as enterprise IT: moving existing enterprise services over to a cloud provider will require IaaS, for the simple reason that any other approach would mean a redesign of the existing software. In this case, IaaS is really a tool to support legacy applications, used by enterprise administrators rather than IT service consumers; it turns a two-tier system (enterprise IT systems – system users) into a three-tier one: cloud provider – enterprise IT administrators – system users. The real users of IaaS are the enterprise IT administrators; end users will still see SaaS as their primary model of interaction with enterprise IT.
A second use case is casual utilization – both private and, for example, scientific or high-performance computations. In the end, this is again just a shift of legacy applications to the cloud: the only reason the user chooses IaaS is that the application is already given; (s)he only needs the scale-up service of the cloud. Starting from scratch, PaaS would be the better option.
Platform as a Service has some obvious benefits. It takes away the burden of managing instances, placement, networking, the OS stack etc., and enables the cloud user to focus on the core issue: developing an application that can quickly scale up and down based on need. All the key services are given and guaranteed to scale and be available, something the IaaS model cannot provide. In many ways, the PaaS model gives back what the end of processor performance scaling took away: the notion of unlimited, reliable, on-demand computing power.
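To make the division of labor concrete, here is a minimal sketch of a PaaS-style deployable unit, using plain WSGI from the Python standard library. The names and details are my own illustration – 2011-era platforms such as Google App Engine exposed a broadly similar contract, but this is not any particular provider’s API.

```python
# The developer writes only a stateless request handler; instance management,
# placement, networking and scaling are the platform's job, per the model
# described above.

def application(environ, start_response):
    """The entire deployable unit in a PaaS-style model: one stateless handler.

    The platform can run any number of copies of this function behind a load
    balancer; because it keeps no local state, scaling it up and down is
    entirely the provider's concern, not the developer's.
    """
    path = environ.get("PATH_INFO", "/")
    body = ("Hello from a platform-managed instance, path=" + path).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercising the handler directly, much as a platform's harness might:
statuses = []
def _start_response(status, headers):
    statuses.append(status)

body = b"".join(application({"PATH_INFO": "/hello"}, _start_response))
```

The point of the sketch is what is missing: there is no instance, OS or network configuration anywhere in the developer’s code – which is exactly the burden the PaaS model removes, and also exactly where the platform-API lock-in creeps in.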
I think Google really has a point here, and in the future – for new applications – we will see increasing usage of PaaS offerings. The key issue, though, will be provider lock-in and tight coupling to platform APIs, something no software provider can adopt light-heartedly. I believe we will see the emergence of portability libraries that still lock you into one API, but at least a portable one ;-)

Monetizing social networks

August 3rd, 2011

IEEE Spectrum recently published an interesting article about monetizing social networks (titled The revolution will not be monetized). It makes for good reading, with a provocative conclusion: advertising will not be able to generate sufficient revenue to keep the ball rolling in the long term. So, where will the money come from?

I’ve written once already about monetizing web content and services, where I came to the same conclusion: advertising only makes sense if you know exactly what your target is planning to do; otherwise it’s little more than an annoyance that will scare customers away. But is that true for Facebook as well?

Whenever I log into Facebook, I see the status updates from my friends, including “friends” such as airlines, hotels, my local mobile provider etc. What I’m being served is actually advertising from companies I’m likely to use – a piece of information I gave away about myself as soon as I ‘Like’-d them. I confess: I rarely click on any banner, but I DID click on some of the links posted by my merchant ‘friends’. Does anyone monetize that? Of course: the companies themselves, but no one else.

There’s another case that went largely unnoticed. During last year’s volcanic ash crisis, most airlines’ websites were down or only served up an ugly-looking text file with laconic updates. Meanwhile, those clever enough to have a Facebook account kept their customers up to date by the minute, leveraging Facebook’s massive computing power: effectively, these airlines turned Facebook into a free Infrastructure as a Service (or Platform as a Service, if you consider status updates a service) provider, winning high marks from their customers. Communication was also more personal, with the companies’ Facebook operators replying to individual comments as well. In fact, afterwards, instead of calling their help line, I contacted my favorite airline through Facebook – and they solved my issue quickly.

So here’s my suggestion for Facebook & co: start monetizing your company customers. You are giving away a huge opportunity to make money by letting them advertise themselves, interact with their customers and use your infrastructure essentially for free. To some extent, this is what LinkedIn is already doing: charging company members for recruiting services and job postings. Extending the model beyond that is the natural next step, and it monetizes the greatest asset social networks have: a large number of users who themselves chose – and disclosed – what they are interested in, coupled with a large number of companies willing and eager to reach out to those hundreds of millions of potential customers.

It’s a bit like 21st-century TV: you tune in to what interests you, advertisers push out their content, and Facebook (the TV program) aggregates it for you. Welcome to the new world of interactive media.

Evolution as machine learning

June 10th, 2011

This week I had the chance to attend the Turing Award lecture of the 2010 winner, Leslie Valiant. His research focuses on computational complexity, which is of marginal interest to me, so I had no great expectations – nevertheless, this being the computer-science equivalent of a Nobel Prize acceptance speech, I felt compelled to attend.

How wrong I was: instead of a lecture on his research, he delivered one of those speeches that really make you think and provide plenty of food for thought. First – after making clear that he believes strongly in the theory of evolution – he laid out the basic problem with evolution: we have no way to prove that evolution was indeed possible, given the huge potential variety. Is it possible to reach the current living ecosystem without ‘external guidance’? Are 4.5 billion years at all sufficient?

His thesis was that evolution can be modeled as a machine learning process. The ‘machine’ is the living ecosystem itself; the training samples are the variations in the DNA; the learning signal is derived from survival (positive) or extinction (negative). In this context, one of the most intriguing questions is what the machine is trained for – or, simply put, what is the meaning of the living world and of evolution? Valiant’s answer: a living ecosystem that always reacts in an optimal way to the surrounding world. A pretty interesting idea, I must admit.
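To make the analogy concrete, here is a toy sketch of that learning loop. This is my own caricature, not Valiant’s formal ‘evolvability’ framework: genomes play the role of hypotheses, mutations are the training samples, and survival of the fitter half is the learning signal.

```python
import random

TARGET = [1] * 20          # stand-in for an 'optimal response' to the environment
POP_SIZE, MUTATION_RATE = 30, 0.05

def fitness(genome):
    """Survival score: how well the genome 'responds' to the environment."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """A training sample: the genome with random DNA variations."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(generations=200, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for _ in range(generations):
        # selection: only the fitter half 'survives' (the positive label) ...
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        # ... and reproduces with variation (the next batch of samples)
        population = survivors + [mutate(g) for g in survivors]
    return max(population, key=fitness)

best = evolve()
```

Even this crude loop reliably converges toward the target response – which is precisely the kind of claim Valiant wants to make rigorous for real biology: can such a process, within realistic time and population bounds, learn responses of the complexity we observe?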

Unfortunately, Valiant was unable to provide the actual “formula of life” modeled as a machine learning process and, consequently, no proof that evolution can happen autonomously, without that ‘external guidance’ (God?). However, he made a very compelling argument that this kind of model can actually be useful, and the whole line of thought was just intriguingly fresh, visionary and challenging.

As a Turing award winner’s speech should be.

Embracing twitter

May 20th, 2011

Finally, I saw the light: I’ll start tweeting – so follow me here.

Let’s see how this works out.

Programming languages and the Christian faith

May 20th, 2011

Yesterday I had the honor of chairing the panel at the Finnish Multi-core Day, with distinguished panelists from world-leading companies (Intel, ARM, Nokia) as well as Swedish and Finnish universities. At some point someone raised the issue of programming languages and whether we will ever see convergence – at which point I had a revelation ;-) . Isn’t this really the same thing as with the Christian faith?!

In the good old days, there was only one religion (language): the Catholic one (Fortran), with just a few heretics (Lisp) on the side. Then reformation came, sparked by Luther (the C language), and Pandora’s box was thrown wide open: Calvinists, Unitarians, Presbyterians, Baptists etc. (C++, Java, Erlang, Scala, Prolog etc.) all emerged and claimed to be the ‘real’ one, only to capture a minority of believers (programmers). Is there a way to unify everything back? Not in religion, I believe, and not in programming languages. Does it matter? Not really: I feel that, if need be, I can pray (program) in any church (using any language’s infrastructure). In fact, James Reinders‘ (Intel) answer to the question was: learn as many languages as you can – certainly wise advice, applicable outside programming as well.

Another nugget of deep wisdom came later in the day, when Erik Hagersten used a nice metaphor in his talk: those who create a new language are like those who pee in their pants; they think it’s hot, but no one else can feel it ;-). Was he inspired by Anssi Vanjoki’s opinion of those going the Android way? I don’t know, but it is certainly worth pausing to think about how we want to develop new languages, be they domain-specific or general-purpose – I still believe there’s value there, but we must always keep the ecosystem in mind.

Why Microsoft will not buy Nokia

May 17th, 2011

There’s been some buzz lately around an alleged takeover of Nokia’s smartphone business by Microsoft (launched by a well-known Nokia whistle-blower blogger and spread by all the major technology news sites). Well, I don’t think this will happen any time soon, for quite a number of reasons.

First, Microsoft already has all it needs from Nokia: the largest smartphone manufacturer will use Windows Phone, which gives the Redmond-based company a strong position. Spending more money (a lot of money – at least 40 BUSD) would be really hard to justify, as it would take a long time and involve huge risks before paying off.

Second, Microsoft never went into building laptops itself, and for good reason: by doing so, it would have risked alienating the other HW companies licensing the Windows operating system, despite its dominant market position. For Windows Phone, a small player in a big market, buying Nokia and turning Microsoft into a phone manufacturer would almost certainly mean that no other phone company – the likes of Samsung, HTC and LG – would license it anymore. Why risk this market, when you can eat your cake (partner with Nokia) and keep it too (keep existing relationships intact)?

Third, Google’s experience with the Nexus phone – or even the accelerated decline of Symbian once Nokia took it over – is a warning sign for anyone planning to be a phone platform provider while selling products based on that same platform. This argument rests on the same underlying gentlemen’s rule as the second one: you can’t be a partner while competing in the exact same domain.

Of course, by taking a huge gamble, Microsoft may still go down this path – but becoming a new, bloated Apple at the cost of several tens of billions of dollars just seems too big a risk to take, even for Steve Ballmer. The stakes are just not worth it.

Whitepaper on Telecom Cloud Computing

May 12th, 2011

Scope Alliance (the alliance of network equipment providers committed to providing standardized base platforms) has just released its whitepaper on telecom cloud computing. I had the honor of being the editor of the document, produced jointly by Ericsson and several of our competitors. Next week, the coming-out party for the paper will be held at the OpenSAF conference, where I will give a talk focusing on its content (see the agenda).