Archive for January, 2010

Who will really benefit from iPad

Thursday, January 28th, 2010

Now it’s official and the web is boiling over with news, blogs, comments and what not on Steve Jobs’ latest invention: the iPad. Essentially, it’s a slim netbook with a nice design and 3G connectivity, and a user interface no less intuitive than any of the previous Apple gadgets.

But, behind the scenes, there’s one company that will benefit and another that should take notice. Both hold a quasi-monopoly in their own field, both have set their eyes on markets the other dominates, and there has been a lot of fuss around the Next Big Battle of the Giants. With the iPad, I believe one of them has won the first battle – and this will make the war even more interesting to watch.

The one company that stands to benefit from the iPad is ARM, in its bid to win the hearts and minds of laptop/netbook users so tied to the x86 platform (and, on a different front, to carve out a niche in the server market). Though Apple did not say it directly – stating only that the iPad’s chipset is its own design – the fact that all iPhone apps will run on the iPad means it must have an ARM core at its heart. This is a huge win for ARM, which has tried for some time – through its partners – to get into the netbook/laptop market, but never really succeeded, largely due to the lack of a port of ‘big Windows’ to ARM cores. So Apple is obviously the next best choice, given that Linux never really gained a following beyond the nerd community.

Why should the other company – Intel – take notice? ARM is winning the power battle outright, easily supporting tens of hours of use (limited mostly by the screen, not the chip itself). On this front Intel faces an uphill battle, and if the iPad becomes a success with a significant share of the laptop/netbook market, that will directly eat into Intel’s market share.

And, who knows, some people in Redmond may be watching…

Note: As always, the views expressed on this blog are mine and mine alone, with no connection whatsoever to my employer’s official views.

Üni and abstract painting

Thursday, January 28th, 2010

One of Üni’s favourite drawing subjects is the hand. She has tried several times to trace around our hands, and her own, mostly without much success. Today we tried shading around my hand instead, but that didn’t go much better either.

But the idea must have fired Üni’s imagination, because she then produced the drawing below entirely on her own and handed it to me, saying it was ‘The hand’:

This is how Üni drew the hand

I must admit, I didn’t really get it 🙂. But Üni kindly explained it, pointing to the fingers one by one. I was stunned, because she explained it exactly as I’ve marked on the picture below:

... this is how she explained where the hand is

Obvious, isn’t it? 😉

A busy new year

Thursday, January 28th, 2010

2010 started off with a lot of travelling for me – in fact, between the 7th of January and today, I’ve been home for only about 90 hours while covering four countries on two continents. This included India, where I attended HPCA/PPoPP (High Performance Computer Architecture and Principles and Practice of Parallel Programming). So let me kick off this year by sharing a few reflections from these two events.

The overall impression I left with is that research is stuck. Very few ideas passed my threshold of being fresh or innovative: most of the papers were about shaving 10-15% off this or that metric, with no real take on the major issues: exploding power envelopes, making parallel software easier to develop, and so on.

The more interesting parts were the keynote by professor Arvind of MIT (who, besides openly promoting his startup company, emphasized the potential of domain-specific high-level languages) and the panel on how to build extreme-scale machines (extreme meaning both petaflop-scale data-centers and teraflop-scale mobile devices). The general consensus centered on two major ideas:

  • We will see more accelerators and more domain-specific hardware – it’s our only realistic bet right now for continuing to scale performance without needing a nuclear plant for each datacenter
  • Domain-specific, high-level languages (or libraries) are the single most promising track left to explore for parallel software (see the sketch after this list)
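
To make the second point concrete, here is a minimal sketch of my own – not something shown at the panel – of what a high-level parallel library buys you: a data-parallel map using Python’s standard multiprocessing module, where the programmer states what to compute and the library handles worker processes, work splitting and synchronization. The simulate function is a hypothetical stand-in for any expensive, independent computation.

    # The 'high-level parallel library' idea in miniature: no explicit
    # threads or locks, just a parallel map over independent inputs.
    from multiprocessing import Pool

    def simulate(param: float) -> float:
        """Hypothetical stand-in for an expensive, independent computation."""
        return sum(param / (i * i) for i in range(1, 1_000_000))

    if __name__ == "__main__":
        params = [0.5, 1.0, 1.5, 2.0]
        with Pool() as pool:                      # one worker per core by default
            results = pool.map(simulate, params)  # parallelism hidden behind map()
        print(results)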

While the second idea is clearly in line with my thinking, the first one left me slightly disappointed. Is this really the best we can do? Isn’t there anything better left? Luckily, it turns out I’m not alone in my doubts – in a ‘get you up to date’ chat with professor Anant Agarwal of MIT and Tilera, he was of the opinion that the key is really to keep cores simple – then we do have a path towards thousands of cores, at least down to the 15nm node, without any major power bottlenecks. I certainly hope he’s right – otherwise software-reliant companies are in for a major investment to adjust to the brave new world of HW accelerators.
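
To see why simple cores are such an attractive path, here is a toy back-of-envelope model of my own – not Agarwal’s actual argument. It assumes Pollack’s rule (single-core performance grows only as the square root of the core’s area/complexity, while power grows roughly linearly with it) and a perfectly parallel workload. Under those assumptions, spending a fixed die budget on many simple cores always beats one big, complex core:

    # Toy model: split a fixed area/power budget across N cores and
    # compare aggregate throughput, assuming Pollack's rule and an
    # embarrassingly parallel workload (the optimistic case).
    import math

    BUDGET = 256  # total area/power units on the die (hypothetical)

    def throughput(cores: int) -> float:
        area_per_core = BUDGET / cores
        perf_per_core = math.sqrt(area_per_core)  # Pollack's rule
        return cores * perf_per_core

    for n in (1, 4, 64, 256):
        print(f"{n:4d} cores -> relative throughput {throughput(n):6.1f}")
        # prints 16.0, 32.0, 128.0 and 256.0 respectively

Of course, the moment the workload is not perfectly parallel, Amdahl’s law eats into these numbers – which is exactly why the software side, the second panel idea above, matters so much.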

Well, that’s about it – make sure to check out my other post on the implications of iPad 😉 .