Re-thinking Moore’s law

Well, almost one month of 2009 has passed, and perhaps it's time to start writing again. As far as I can tell, not much has happened in the area of parallel computing and many-core chips that I could write about – well, at least not on the technical side. However, two trends (or events) have made me think a bit more about Moore's law, or at least its consequence: the exponential increase in processing power we have seen over the years.

The first is really a trend – a push for more power efficiency while keeping computing power at the same level (or better). No need to spend more keystrokes on this – it's obvious everywhere, in every area of computing, from netbooks to data centers.

The second is the economic crisis we are living through, which has ushered in a new era of cost awareness. Granted, in some industries cents have long been more important than MIPS or Dhrystones, but for a sizable chunk of the computing industry the 'cost obsession' is only now becoming fashionable. It matters less how much computing power you can deliver – it matters more at what cost you can deliver the computing power needed today. Moore's law has in fact led to a situation where there is plenty of computing power per square centimeter – so much, in some areas, that a few companies managed to kill their own business by shipping high-powered chips: while the need for computing grew only moderately, high-performance chips drove volumes to such low levels that it is simply not economically viable to keep producing them.

So where will this take us? Clearly, there's no end in sight for Moore's law in terms of transistors per square centimeter, at least not technology-wise. However, I claim that there is a shift in focus across the industry that can be summarized simply as lower cost for the same performance, rather than more performance for the same cost, as it has been until now. There are signs of this even in software, from the most unexpected quarter – Windows 7 is the first Windows ever to require fewer resources than its predecessor.

If this proves to be true, the implications are profound. Firstly, probably even Intel will postpone the massive multi-billion-dollar investment required to shift to a new process technology and will instead seek to capitalize on investments already made. This will open up a chance for smaller manufacturers to get access to the latest process technology and will level the playing field on the HW side.

HW is, however, just one side of the cost – customers will demand more on the software side as well. Most probably the next battle will be fought over how well chip vendors can meet expectations for software tools (compilers, debuggers, simulators and the like). Efficiency will be the battle cry of the day, and when everything else is comparable, software tooling will be the deciding factor.

Will this happen? Clearly, unless you are in the business of building the next supercomputer, lower cost will sound like music to your ears. Will chip vendors take note? Looking at Intel's dismal results – not to mention smaller vendors' – they probably will. Whichever way it goes, it will be fun to watch 🙂

Happy Blogging Year to all the – known or unknown – readers of this blog!

Update: Here’s an article analyzing Intel’s fab plans. It sounds very much like an economics-driven slowdown in the development of new chip technologies.
