Archive for August, 2010

Net neutrality: it’s about scarce resources, stupid!

Monday, August 23rd, 2010

(this post reflects my ideas and my ideas alone – they may or may not coincide with those of my employer)

Sure enough, the net is full of arguments around net neutrality, in the wake of the leaked Google-Verizon talks and the subsequently released official proposal that would treat mobile broadband (and some other future services) differently from traditional fixed internet access. Some have gone on a crusade on behalf of net neutrality; some defend the Google/Verizon proposal; but, for sure, everyone has an opinion. So do I 😉.

The bottom line is that mobile access is different. While you can just lay more and more cables at relatively low cost, the frequency band usable for wireless access is limited by the laws of physics. Only a limited number of bits can be transported over the air in a given location – thus, radio resources differ from fixed access (cables) in that they are a genuinely scarce resource. Sure enough, you can increase throughput to some extent by increasing the density of cells (base stations), but this requires capital investment and generates operational costs. Even then, physics will eventually kick in.
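To put a rough number on that ceiling: the Shannon-Hartley theorem caps the error-free bit rate of a radio channel by its bandwidth and signal-to-noise ratio. The little sketch below uses made-up but plausible figures (a 20 MHz carrier, 20 dB SNR) purely to illustrate the order of magnitude; it is not any particular operator’s spectrum.

    # Rough illustration of the over-the-air capacity ceiling (Shannon-Hartley theorem).
    # The bandwidth and SNR figures are assumptions chosen only for illustration.
    import math

    def shannon_capacity_bps(bandwidth_hz, snr_linear):
        """Upper bound on the error-free bit rate of a single radio channel."""
        return bandwidth_hz * math.log2(1 + snr_linear)

    bandwidth_hz = 20e6              # assume a 20 MHz carrier
    snr_db = 20.0                    # assume a fairly good 20 dB signal-to-noise ratio
    snr_linear = 10 ** (snr_db / 10)

    capacity = shannon_capacity_bps(bandwidth_hz, snr_linear)
    print("Cell capacity ceiling: %.0f Mbit/s" % (capacity / 1e6))            # ~133 Mbit/s
    print("With 100 active users: %.2f Mbit/s each" % (capacity / 100 / 1e6)) # ~1.33 Mbit/s

And that ceiling is shared by everyone in the cell – the only real lever is more cells, at exactly the capital and operational cost just mentioned.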

Thus, someone has to pay for bandwidth over the air – it’s basic economics when it comes to access to a scarce resource: the balance of supply and demand drives the price. In this case, there are four parties that may pay the bill: the mobile operator, content providers, the content consumer (end-user) and the government. The operator has little incentive to invest beyond what it takes to gain more (paying) customers – and the market is quickly reaching 100%+ penetration in developed countries (and some developing ones as well). That leaves the other three: content consumers, content providers and the government.

In their momentary lapse of reason, mobile operators decided to offer flat-rate mobile broadband to their customers – something they surely regret by now. The hard truth is that this will be difficult to change – and even charging for traffic that exceeds a certain limit will not cover the cost of providing good quality of service (read: dense networks): most users will stay below the limit, while the few heavy users are still ‘good enough’ at consuming the scarce resource exactly when it is needed most. The result: a low-quality network and customer complaints – just look at the reputation of AT&T in the US as a consequence of the popularity of the iPhone.

The government may be part of the solution: by enshrining the right to net access at a certain level (say, X Mbit/s) and subsidizing network buildout that can guarantee such a level of service. Such an approach would guarantee equal basic access while not hindering innovation and further improvements: on top of requiring a minimum level of service, it should leave content providers, consumers and mobile operators free to negotiate commercial agreements on who gets access – and for what extra fee – to the additional bandwidth that may be built out – and sold – by the operator. It’s a bit like social security versus getting rich: everyone is entitled to a minimum standard of living, but beyond that he or she is free to get rich, according to his or her capabilities.
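To make the ‘guaranteed floor plus negotiated extras’ model a bit more tangible, here is a minimal sketch of how a cell scheduler could split its capacity. The floor value, the subscriber names and the premium weights are all invented for illustration; a real scheduler is of course far more involved.

    # Minimal sketch of a "guaranteed floor + negotiated premium" capacity split.
    # All numbers and subscriber names are illustrative assumptions.
    GUARANTEED_FLOOR_MBPS = 2.0  # the regulated minimum ("X Mbit/s"), assumed here

    def allocate(cell_capacity_mbps, subscribers):
        """subscribers maps a name to its negotiated premium weight (0 = basic only)."""
        n = len(subscribers)
        floor_total = n * GUARANTEED_FLOOR_MBPS
        if floor_total >= cell_capacity_mbps:
            # Not even the regulated floor fits: this is where subsidized buildout comes in.
            return {name: cell_capacity_mbps / n for name in subscribers}
        surplus = cell_capacity_mbps - floor_total
        weight_sum = sum(subscribers.values()) or 1
        return {name: GUARANTEED_FLOOR_MBPS + surplus * w / weight_sum
                for name, w in subscribers.items()}

    # Everyone gets the floor; the surplus goes to whoever negotiated (and paid) for it.
    print(allocate(20.0, {"basic user": 0, "video service": 3, "premium user": 1}))
    # {'basic user': 2.0, 'video service': 12.5, 'premium user': 5.5}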

Sure enough, such a framework has a number of tricky issues (such as how to decide how much it really costs to build the minimum level of service and how much of the operator’s investment goes beyond that). But it’s my strong belief that such a public-private partnership – regulatory guarantees and requirements for a certain level of service, coupled with industry-driven differentiation of access to a scarce resource – is the way forward. No, not everyone will be a bit-millionaire – but everyone will have enough bits to feel OK, and those with greater demand will get more.

Just as in our society at large. It’s the worst solution – except for all the other possible solutions.

The world of computing, AD 2020

Monday, August 2nd, 2010

Recently, IEEE Spectrum published an article by Professor David Patterson of Berkeley, titled The Trouble with Multicore. In it, he analyzes the state of the art of parallel computing research and the approaches we have tried and failed with (dedicated languages, ‘properly designed’ hardware, automatic parallelization); finally, he offers three visions for 2020: performance increases grind to a halt; multicore processors succeed in just a few areas; or full-blown success, where any application can benefit from a large number of cores. Patterson admits that he considers the last scenario unlikely.

I have great respect for David Patterson’s achievements (the RISC architecture, the RAID concept etc.) and I feel privileged to have met him a couple of years back. However, I feel that his line of reasoning, along with much of the community’s, is somewhat too narrowly confined within the limits of traditional wisdom: the focus is still on functional or data-level parallelization, where, I must agree, it’s likely that we will fail to find an overall solution. Still, there’s another direction worth exploring that I strongly believe can eventually lead us to a breakthrough that will keep the IT industry on the same track we have grown accustomed to over the past 30 years.

So, let me give my modest prediction for year 2020.

As Moore’s law will likely hold for some time to come (at least down to 5nm, due by about 2020, leading industry experts claim), it’s fair to assume that we will be able to cram more and more transistors (and memory) into one chip. True, instead of more powerful cores, we will design systems with more, but simpler, cores – chips with lots (gigabytes) of on-chip memory and thousands to tens of thousands of cores surely look feasible in 10 years’ time. In fact, we will see a sea of computing resources crammed together on a single piece of silicon – a sea where a drop (one core) has very little significance on its own. Such chips are certainly possible within today’s power budget (see also my related post on low-power servers).
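A quick back-of-the-envelope calculation shows why ‘thousands of cores’ is not a stretch. Every figure below is my own assumption (a 2010-era 32 nm quad-core as the baseline, a simple core taking roughly a tenth of the area of a big one), not a roadmap number.

    # Back-of-the-envelope core-count estimate; all figures are assumptions.
    node_2010_nm = 32         # assume a 2010-era process node
    node_2020_nm = 5          # the node the experts expect by about 2020
    cores_2010 = 4            # a typical 2010 quad-core die
    simple_core_factor = 10   # assume a simple core takes ~1/10 the area of a big one

    density_gain = (node_2010_nm / node_2020_nm) ** 2  # same die area, smaller features
    cores_2020 = int(cores_2010 * density_gain * simple_core_factor)

    print("Transistor density gain: ~{:.0f}x".format(density_gain))  # ~41x
    print("Simple cores per die:    ~{}".format(cores_2020))         # ~1600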

So how can software benefit?

I think we need to give up on our ideas of automatic parallelization, as well as the pervasive quest for scalable algorithms, at least for the so-called embarrassingly sequential applications. Instead – as I briefly described earlier – let’s focus on strengthening the semantic knowledge the hardware has about what (not how) the software wants to achieve, and use this information to intelligently perform run-ahead calculations on behalf of a sequential program. Granted, no number of cores would suffice to completely explore the execution space of a reasonably sized piece of software, but if we can limit the scope of exploration by telling the hardware what the potential branches are, we can make good use of the sea of computational resources to make an application run faster – by simply pre-executing all the likely branches (narrowed down by the extra semantic information made available to the hardware). In this approach, the more cores you have, the better the chances of success and thus the speedup – hence, the sea of computing resources will make perfect sense.
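Here is a toy software analogue of the idea – a sketch of my own, not an existing hardware mechanism: the sequential program declares up front which branches it might take next, spare workers pre-execute all of them, and the branch that is eventually chosen finds its result already computed. The function names and workloads are made up for illustration.

    # Toy sketch of run-ahead execution guided by semantic hints: the hints are
    # simply the candidate branch functions the program declares in advance.
    from concurrent.futures import ProcessPoolExecutor

    def branch_sort(data):
        return sorted(data)               # stand-in for one expensive branch

    def branch_sum_squares(data):
        return sum(x * x for x in data)   # stand-in for another expensive branch

    def run_with_hints(data, hinted_branches, choose):
        with ProcessPoolExecutor() as pool:
            # Pre-execute every hinted branch on the 'sea' of spare cores.
            futures = {name: pool.submit(fn, data)
                       for name, fn in hinted_branches.items()}
            taken = choose(data)            # the program decides late which branch it needs...
            return futures[taken].result()  # ...but its result is already (being) computed

    if __name__ == "__main__":
        data = list(range(200_000, 0, -1))
        hints = {"sort": branch_sort, "sum_squares": branch_sum_squares}
        print(run_with_hints(data, hints, choose=lambda d: "sum_squares"))

The branch that is never taken is, of course, wasted work – which is exactly the objection addressed next.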

One could consider such an approach a waste of energy: many of the cores will perform work that is discarded, and thus waste energy. This argument is flawed, for the same reason that natural selection is not a waste from the point of view of life’s evolution: every failed branch actually helps the eventual winner succeed; thus, you may think of it as part of one bigger entity that works successfully on the problem. If you get a certain speed (or throughput) for one unit of energy, but perhaps double the performance for eight times the energy – is that worth it? For many applications, where continuous performance improvement is a must, the answer is a clear yes.
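In plain numbers, using the figures from the paragraph above:

    # The trade-off in plain numbers (figures taken from the paragraph above).
    baseline_energy, baseline_speed = 1.0, 1.0        # one unit of energy, 1x performance
    speculative_energy, speculative_speed = 8.0, 2.0  # eight times the energy, 2x performance

    print("Energy per unit of work, baseline:   ", baseline_energy / baseline_speed)        # 1.0
    print("Energy per unit of work, speculative:", speculative_energy / speculative_speed)  # 4.0
    # Four times the energy per result, but every result arrives twice as fast.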

So, here’s my vision for 2020: chips with a sea of computing entities and matching memory; software that can provide orders of magnitude more semantic information to the hardware than today; and run-ahead exploration of the execution space by that sea of computing entities, using the semantic information provided by the software.

As a closing remark, I can only agree with David Patterson: “No matter how the ball bounces, it’s going to be fun to watch, at least for the fans. The next decade is going to be interesting.” Couldn’t disagree with that 😉.