A brief history of writing software

Recently there was a long Ericsson-internal mail discussion thread about statements made at a panel of Microsoft engineers. There were many interesting posts from colleagues near and far, but here's my contribution – of course, in line with the beliefs I've expressed earlier on this blog 😉.

“There have always been two dimensions to software development: one is the platform (HW and OS) dimension and the other is the problem domain dimension. For most of the past 50 years, software development focused almost exclusively on the first one, creating languages that allowed programs to be described in a way that was easily translated to von Neumann-style (sequential) machines. The problem domain, whatever it was, had to be squeezed into this paradigm.

During the 80s it became painfully clear that this approach was a big bottleneck, so we started tinkering with the high-level assembler languages we had (C, Fortran, Pascal, Ada etc.) and invented things that moved towards more genericity and a somewhat higher level of abstraction (the object-oriented paradigm), while shifting some of the obvious tasks to the tool/run-time system domain (such as memory management). The underlying assumption, though, remained the same: we must follow the platform – any attempt to raise the level of abstraction in general led to inefficiency when deployed on HW. Some argued that ‘well, this is the price you pay’.

Still, while this fixed some of the issues around actually writing code, the rift between domain knowledge and platform knowledge remained wide open – so we invented UML, agile, etc. These, again, masked some of the discrepancies but didn’t fix the real problem – we took steps forward, but still not enough. We always hit the same wall: a generic, high level of abstraction meant inefficient code, and we still had the communication issue between domain and platform experts. Graphics merely aided understanding – shortening the time it took to get up to speed – but didn’t deliver the order-of-magnitude boost we all hoped for.

The key insight of the past few years was that while generic, high-abstraction-level, von Neumann-focused design leads to inefficient deployed code and does not really bridge the gap between domain expertise and programming expertise, there may be a middle way that reconciles these tensions: what you cannot solve in generic terms becomes much simpler if you narrow the domain down so that the range of choices is greatly reduced. Hence the proposal goes like this: focus on a specific, limited, well-defined domain (for example, DSP programming or communication stack development), create a language/modeling environment/infrastructure that allows domain experts to express what the software shall do (not how), and then focus on making the transformation from this limited, well-defined domain to von Neumann architectures automatic. The key is being restricted: efficient automatic transformations become possible, and the result is an ecosystem where you design at the domain level (what, not how – accessible even to domain experts) and then generate efficient target code through specific, limited, targeted transformations (created by platform experts).

Cheers, Andras”
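To make the proposal concrete, here is a minimal sketch of the idea – my illustration here, not something from the original thread. It models a toy DSP-like domain: the domain expert declares what a sample pipeline does, as plain data, and a small "platform" transformation compiles that declaration into a specialized straight-line function. The pipeline mini-language, the stage names, and compile_pipeline are all invented for this example.

```python
# Hypothetical sketch of the "narrow domain" idea: a tiny declarative
# language for gain/offset/clip sample pipelines. The domain expert
# states WHAT the pipeline does; a trivial code generator (standing in
# for the platform expert's transformations) turns the declaration into
# a specialized function with no per-sample interpretation overhead.
from typing import Callable, List, Tuple

# --- Domain level: declare WHAT the signal chain does ------------------
PIPELINE: List[Tuple[str, float]] = [
    ("gain", 2.0),     # multiply each sample by 2.0
    ("offset", -1.0),  # then subtract 1.0
    ("clip", 1.0),     # then clamp to [-1.0, 1.0]
]

# --- Platform level: transform the declaration into executable code ----
def compile_pipeline(stages: List[Tuple[str, float]]) -> Callable[[float], float]:
    """Fold the declared stages into one straight-line expression and
    compile it, so the generated function is specialized to this exact
    pipeline rather than interpreting the stage list at run time."""
    expr = "x"
    for op, arg in stages:
        if op == "gain":
            expr = f"({expr}) * {arg}"
        elif op == "offset":
            expr = f"({expr}) + {arg}"
        elif op == "clip":
            expr = f"max(-{arg}, min({arg}, {expr}))"
        else:
            raise ValueError(f"unknown stage: {op}")
    src = f"def process(x):\n    return {expr}\n"
    namespace: dict = {}
    exec(src, namespace)  # compile the generated source
    return namespace["process"]

process = compile_pipeline(PIPELINE)
print(process(0.3))  # (0.3 * 2.0) - 1.0, about -0.4: within the clip range
print(process(2.0))  # (2.0 * 2.0) - 1.0 = 3.0: clipped to 1.0
```

The split mirrors the one in the mail: the declaration stays readable to a domain expert, while the (here deliberately trivial) code generator is where a platform expert would invest effort to make the output efficient for the real target.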
