Archive for June, 2009

Thunderstruck in Amsterdam

Thursday, June 25th, 2009

Alternating current/direct current (more precisely: AC/DC) shook Amsterdam's Ajax stadium exactly two days ago 🙂 . Since tickets in Helsinki had long since sold out, I made a trip to see the old boys in action.

What can I say: old rockers never grow old. They put on a two-hour show exactly as prescribed in the great book of rock. There were new songs too, but we also got Thunderstruck, Shook Me All Night Long, TNT (spiced with explosions), Whole Lotta Rosie (with a giant busty inflatable swaying to the rhythm), Highway to Hell and so on. The only thing one could hold against them were the pauses between songs – but when, at 60, you try to do what you did at 20, that is perhaps understandable 😉 . The entrance was spectacular as well: the stage practically "exploded", and suddenly there they were behind the smoke as the well-known AC/DC guitar sound thundered up. As a taste, here is a recording of Thunderstruck (shot on a phone, so the sound quality is not the best).

Of course Angus's striptease number to the rhythm of "The Jack" could not be missed either; the only change was that his shorts no longer carried a Dutch flag but, in all simplicity, just the 'AC/DC' logo painted on. Here is the video:

The biggest show, however, was timed to 'Let there be rock': Angus Young soloed for about 20 minutes in the middle of the audience, on top of a lift. The spot of light in the picture is the guitar wizard himself 😉 .

Angus Young, the guitar devil bathed in light

Finally, just this much: following the Rolling Stones' example, AC/DC has also "found" its emblem. While the rolling stones adopted Mick Jagger's tongue, AC/DC elevated Angus's and Brian Johnson's schoolboy cap to emblem status, complete with two fine red devil horns (visible above the stage in the videos).

Moore’s law, or what does fabless chip design have in common with cloud computing?

Monday, June 22nd, 2009

I kicked off this year with a post on Moore’s law and its sustainability from an economic point of view. It turns out that I’m not the only one thinking along these lines: first came IBM fellow Carl Andersson, who proclaimed the end of Moore’s law in April at an international symposium, on economic grounds (only a few vendors – essentially two groups – can afford the cost of developing new process nodes). Then, more recently, Gene Frantz from Texas Instruments wrote more or less the same in a guest blog on DSP Design Line, an online publication. Customers don’t really give a damn about the process technology used – they want a solution that meets their needs (cost, power, performance, etc.) before their competition can go public with a new product. To paraphrase my company’s old slogan – the rest is just technology.

To top it all off, iSuppli, a technology research company, published a report on the semiconductor industry (summarized by the same DSP Design Line here) setting the year of death for Moore’s law at 2014. The reason, again, is economic, not technological. Since it costs more and more to develop new process nodes, chip manufacturers will need more time to recoup the investment – hence they will stay with the same process much longer. Their forecast shows a more or less steady flow of revenue from the 65nm and 90nm process nodes well into the next decade.

So, what will this mean?

First, don’t expect a spectacular change in power characteristics, but do expect more complex chip designs, probably with more, simpler cores – after all, the chip vendors will have to show improving performance. That will put even more pressure on software to go ‘infinitely parallel’, as well as on solutions dealing with memory access, peripherals, etc. It may also herald the end of Intel’s dominance – even they will not be able to afford the latest process node: not because they lack the money, but because such an investment would erode their margins and, implicitly, their profits. I think we will see the same effect in fabless chip design as is emerging in cloud computing: there will be platform providers (of process technology and data centres, respectively) and there will be solution providers, who build customized solutions delivered on those platforms (chips, or software & services). You will be measured by how well you can meet your customers’ needs on existing platforms, not by how fast you can deliver new platforms. It will be a major shift, but one from which chip customers can only benefit, as the range of offerings will increase significantly.

UPMARC summer school

Monday, June 22nd, 2009

Last week I attended – also as a speaker – the summer school on multi-core organized by the UPMARC research centre at the University of Uppsala (you may find more details on the summer school on their webpage). I was talking about some of the research we’ve been doing at Ericsson, mainly in the area of scheduling and power management on massive multi-core chips.

There were some really interesting presentations that I enjoyed a lot. Kunle Olukotun from Stanford – leader of the Pervasive Parallelism Lab – talked about their research direction with regard to programming multi-core chips. Their approach is to tackle the productivity, portability and scalability issues through domain-specific, high-level languages embedded in Scala (an object-oriented/functional hybrid language that compiles to Java bytecode). The point he was making was that general parallelization is impossible, hence you need to go specific and tackle domain by domain; it’s better to design software at the intentional, high level with implicit parallel semantics and rely on the compiler to adapt it to whatever platform comes your way – and this is doable if you are working in a specific domain. Our experience with the domain-specific language for DSPs proves exactly this point.
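To make the idea concrete, here is a toy sketch of what "intentional" programming with implicit parallel semantics means. This is not the Scala-embedded DSL tooling from Olukotun's group – just a minimal Python illustration with invented names: the program states *what* to compute, and the choice of backend (serial loop or thread pool) is an interchangeable execution detail rather than part of the program's meaning.

```python
from concurrent.futures import ThreadPoolExecutor

class ParallelMap:
    """A tiny 'intentional' operator (hypothetical name): the user states
    what to compute (apply f to every element); the backend decides how."""

    def __init__(self, f):
        self.f = f

    def run_serial(self, xs):
        # Reference backend: a plain sequential loop.
        return [self.f(x) for x in xs]

    def run_parallel(self, xs, workers=4):
        # Alternative backend: the same semantics, executed on a thread pool.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(self.f, xs))

square = ParallelMap(lambda x: x * x)
data = list(range(8))

# Both backends yield the same result; picking one is a deployment
# decision, not a change to the program.
assert square.run_serial(data) == square.run_parallel(data)
print(square.run_serial(data))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The point of the sketch is that `run_serial` and `run_parallel` are semantically identical; in a real embedded DSL the compiler or runtime would make that choice per target platform, invisibly to the programmer.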

By the way, talking about programming at the intentional level, the company of Charles Simonyi (the famous former Microsoft employee, inventor of Hungarian notation and two-time space tourist) has gone live with demos of their tool (see more on their webpage), which promises easy definition of domain-specific languages tied to a central repository from which real code can be generated – in whatever form best fits the actual domain. Taken together, there seems to be renewed momentum in industry and academia for promoting domain-specific languages as a way to raise the abstraction level while tackling the issues around compiling for ever-changing hardware targets.

Back to the UPMARC summer school: there were a couple of other interesting talks, on memory coherence, on temporal data and instruction streaming to improve memory performance, and on component-based design. Honestly, the quality of the talks varied widely, but it was a constructive and inspirational event, with lots of interaction and debate.

I just hope to see more of this in the future.