Stardate 20020728.1257 (Captain's log): In the late 1970s, if memory serves me correctly, I bought my first computer. It was a Radio Shack Color Computer (the "Coco"), which ran a 6809 processor at a clock rate of about 900 kilohertz. (It was a quadrature clock, so the effective cycle rate of the processor was four times that.) It had 64K of RAM, and I mostly ran OS9 on it, using three floppy drives. I used it, with a television for a display, to visit various BBS systems and communicate. Ah, the bad old days.
A few years later, I bought my first Amiga, which had a 7 (?) MHz 68000 and 512K of RAM; I also bought a hard drive for it and some other peripherals.
In about three weeks, if HP actually does what they say, I'll be getting my new workstation. Once it's fully tricked out, it's going to have two 2.4 GHz P4 Xeons, 1G of RAM, about 150G of disk, and a lot of other stuff. My Coco communicated through a 1200 baud modem; my new computer is connected to the Internet through my cable modem with a downlink of 2 megabits per second.
All three of these systems cost about the same in constant dollars, within a factor of two. That's the power of Moore's Law, which is variously stated, but which I tend to think of this way: the value of computer hardware you can buy for constant money doubles every 18 months.
In some areas, the rate of change has actually been faster; the cost of HD storage has been dropping much faster than that for years.
Conservatively, my new workstation will have about 500 times the compute power, 2000 times the RAM, and about twenty thousand times the disk space of my Amiga. So why doesn't it seem any faster?
Well, actually, it will indeed be far faster. But it isn't going to seem 500 times as powerful, and that's because software is less efficient now than before.
Some of that perception is nostalgia, of course. The programs I ran on my Amiga in 1990 were a lot less powerful and a lot less sophisticated than the ones I run now, and my standards for impatience have changed. But some of that is real; modern programs do take more processor power to do what seems like the same thing.
My 500-times-faster workstation will actually seem like it's about 20 times as fast as my Amiga. The rest of that boost will be consumed by increased software overhead.
To many people, this is a crime. Back in the good old days of efficient software, They Were Men Who Did The Developing and they took pride in their craft and created small, efficient, clean programs which ran fast even on slow computers. The modern stuff seems bloated beyond belief, and it must be because modern programmers are incompetent wimps who no longer care.
Actually, though, that has nothing to do with it. What's really going on is that the programmers are making quite deliberate tradeoffs. The kind of tight, small, efficient computer code we all seem nostalgic for is also slow to write and debug, fragile in the face of revision, and very prone to bugs. If modern software packages were implemented that way, they'd take much too long, cost much too much, and be even more buggy than the stuff we're currently using. Sometimes some critical sections of code are still implemented that way, but only when nothing else will do.
Of all the product arenas I described earlier, embedded software (my own field) still has the closest roots to that kind of thing, because the processors we're using now aren't all that much faster than the ones in use back then. My last project used an ARM running at about 20 MHz, with 2M of ROM, and that just isn't all that much more powerful than my Amiga 1000 way back when.
But even we try to avoid that kind of thing if we can. It turns out that there are usually a bunch of different ways to solve any given problem: some of them run fast, and others can be written fast. Given a choice, we'll nearly always choose the latter. We make a conscious decision to trade execution efficiency for programmer efficiency, because if we didn't, we'd never finish.
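A contrived sketch of the kind of choice I mean (the function names here are mine, invented for illustration): suppose you need to check whether a value appears in an array of integers. The linear scan is the version you write fast; the binary search is the version that runs fast, and it's also the one the bugs hide in.

    #include <cstddef>

    // Fast to write: a linear scan. Obviously correct, works on
    // unsorted data, and easy to change later, but it examines every
    // element on every lookup.
    bool contains_linear(const int *a, std::size_t n, int key)
    {
        for (std::size_t i = 0; i < n; i++)
            if (a[i] == key)
                return true;
        return false;
    }

    // Fast to run: binary search. It requires the array to be kept
    // sorted, and the index arithmetic is a classic home for
    // off-by-one errors; computing the midpoint as lo + (hi - lo) / 2
    // rather than (lo + hi) / 2 also dodges an overflow bug that
    // lurked in published versions of this routine for years.
    bool contains_binary(const int *a, std::size_t n, int key)
    {
        std::size_t lo = 0, hi = n;   // half-open search range [lo, hi)
        while (lo < hi) {
            std::size_t mid = lo + (hi - lo) / 2;
            if (a[mid] == key)
                return true;
            if (a[mid] < key)
                lo = mid + 1;
            else
                hi = mid;
        }
        return false;
    }

Five minutes of work versus an afternoon of testing boundary cases, for a speedup that only matters if the array is large and the lookups are frequent. Most of the time, the linear scan wins that argument.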
All engineering is tradeoffs. It always has been. General Motors is willing to spend a million dollars on design to remove one dollar from the manufacturing cost of a car, because they produce a lot more than a million cars per year. On the other hand, if your entire production run is 200 units (and I've worked on such products), that kind of spending would be insanity. Sometimes it's worth increasing the manufacturing cost of the product to save engineering cost.
And that's what's been going on in software. Programmers use approaches in their code which are safe and sure, which can be designed and written quickly and with high confidence that they'll work. As it happens, many such approaches also make the resulting code larger or run slower, but if the computer is blazingly fast and has a huge amount of memory anyway, it's no big deal.
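For instance (another contrived sketch; the names are mine): bounds-checked element access in C++ spends a few cycles and a little code size on every access, in exchange for the certainty that a bad index gets caught instead of silently corrupting memory.

    #include <cstddef>
    #include <vector>

    // "Safe and sure": every access is bounds-checked, and a bad
    // index throws std::out_of_range instead of scribbling over
    // random memory. The checks cost code size and a few cycles each.
    int sum_checked(const std::vector<int> &v)
    {
        int total = 0;
        for (std::size_t i = 0; i < v.size(); i++)
            total += v.at(i);
        return total;
    }

    // The lean version: as small and fast as the compiler can make
    // it, with nothing standing between a bad index and disaster.
    int sum_unchecked(const int *p, std::size_t n)
    {
        int total = 0;
        for (std::size_t i = 0; i < n; i++)
            total += p[i];
        return total;
    }

In this particular loop the check can never fire, which is exactly the point: you pay for it whether you need it or not, and on a blazingly fast machine you mostly don't notice that you did.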
With most kinds of products, timely delivery is critical. It doesn't matter how nifty your code is if you don't ship. Something that's finished is better than something which slips forever.
There are numerous ways in which this tradeoff plays out. For example, most embedded code is written in C, which doesn't hold your hand and pretty much requires you to do everything yourself. More modern languages like C++ and Java offer considerable assets to the programmer, automatically handling things a C programmer would have to do directly. On the other hand, you have to load a much bigger runtime system (consuming memory), and a lot of the code generated by the language runs less efficiently than what a careful C programmer would have written by hand.
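Here's a sketch of that difference (the function names are invented for illustration; the first version is plain C, which also compiles as C++):

    #include <stdlib.h>
    #include <string.h>
    #include <string>

    /* The C way: the programmer owns every byte. Forget the +1 for
       the terminator, the NULL check, or the eventual free(), and you
       have a bug; nothing checks any of it for you. */
    char *join_c(const char *a, const char *b)
    {
        char *s = (char *)malloc(strlen(a) + strlen(b) + 1);
        if (s == NULL)
            return NULL;   /* the caller must also remember to free(s) */
        strcpy(s, a);
        strcat(s, b);
        return s;
    }

    // The C++ way: the runtime tracks length and storage and cleans
    // up after itself. Faster to write and much harder to get wrong,
    // at the price of a bigger runtime and some allocations a careful
    // C programmer might have avoided.
    std::string join_cpp(const std::string &a, const std::string &b)
    {
        return a + b;
    }

The one-line version is the one that ships on time.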