USS Clueless - Make the software more efficient

Stardate 20020110.0623

(On Screen): Winterspeak says that it shouldn't be critical that Macs still haven't broken the GHz speed barrier, because Apple and its software vendors should be able to work around that by making their software more efficient. I'm afraid it's not that simple.

Apple will break 1 GHz. Unless Motorola goes out of business outright, or dumps the silicon business entirely, it will continue to work on its processors and they will get faster. I don't believe we're in an outright stall again. The problem is more that Moto isn't moving as fast as AMD and Intel are, and that the PPC is falling behind.

But the thought that software really could be more efficient and shouldn't need as much CPU power as it does is a recurring theme among users. "I used to use a 50 MHz 486, and the use experience was just as snappy as it is on this 900 MHz Pentium III. Where the hell are all the cycles going, anyway?" I hear things like that all the time.

Part of it is nostalgia. The use experience on the 486 actually wasn't as zippy, and that old software was a lot more clunky anyway. However, it is true that older apps written in the days of 50 MHz and less were more efficient than modern apps are. It is not true that modern app developers could return to that level of efficiency. Software professionals know this, but it's difficult to explain why to the laity. This is, unfortunately, an example of "Nothing is impossible for the man who doesn't have to do it himself."

The effort of writing software doesn't scale linearly with the size of the package being written. In other words, an app which is twice as large takes more than twice as much work to create. The growth is superlinear, and the exact exponent varies; in some cases it's as bad as quadratic, where twice as large is four times as much work. But there are shortcuts which can be taken to lower that exponent.
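
A toy calculation makes the shape of that curve concrete. This is only a sketch: the sizes, exponents, and constant below are assumptions chosen for illustration, not measurements of any real project.

    # Toy power-law model of development effort: effort = k * size**exponent.
    # The numbers are made up; the point is how fast effort grows once the
    # exponent climbs above 1.
    def effort(size_kloc, exponent, k=1.0):
        return k * size_kloc ** exponent

    for exponent in (1.0, 1.5, 2.0):
        small = effort(100, exponent)   # a 100,000-line app
        large = effort(200, exponent)   # an app twice that size
        print(f"exponent {exponent}: doubling size multiplies effort by {large / small:.1f}x")

Doubling the size costs 2x the effort at exponent 1.0, about 2.8x at 1.5, and 4x at 2.0.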

If we were to write modern software packages to the same standards as that old stuff, it would take way too long and cost far too much. It could be done, but it couldn't be sold. It's commercially impossible. Adobe, for instance, could make a version of Photoshop which ran at that purported level of efficiency, but it would take ten years and they'd have to sell it for $3000 per copy. By the time it came out it would be obsolete and customers would laugh at the price.

The only way to get that exponential scaling of effort under control is to recognize that it's possible to trade off design-time expense for run-time expense. When the computers are faster and when they have more memory, then the programmers can take shortcuts which decrease the expense at design time but result in programs which are larger and require more cycles to run than might absolutely be necessary. By doing this, it becomes possible to deliver the product in a reasonable time frame and at a reasonable cost.
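
A tiny illustration of that tradeoff, sketched in Python since the original names no particular language: the first version is the one a programmer dashes off in a minute; the second takes more thought but wastes fewer cycles at run time. Multiply that choice across a million-line product and it becomes the tradeoff described above.

    # Quick to write: build a report by repeated string concatenation.
    # Each += may copy everything accumulated so far, so in the worst case
    # the work grows quadratically with the number of records.
    def report_quick(records):
        text = ""
        for name, value in records:
            text += f"{name}: {value}\n"
        return text

    # More design-time effort: collect the pieces and join them once.
    # Linear work at run time, but the programmer had to know why.
    def report_tuned(records):
        return "".join(f"{name}: {value}\n" for name, value in records)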

From the point of view of the software vendors, an inefficient program which delivers on time and can sell at a reasonable price is better than an efficient one which arrives late and costs too much. The market has shown that the customers agree.

But there's more to it than that: we expect our computers to do more now than we did ten years ago. Despite our nostalgia, if we go back and try to reuse the programs we used ten years ago, we find them intolerable. The user interfaces are horrible (by modern standards) and they are distinctly lacking in features we have grown used to.

And there are entire categories of programs we use now which are only possible because of high-powered CPUs. Rendering programs like Lightwave or Maya exist only because modern CPUs are blazingly fast.

Another example of that is the imminent switch from MPEG2 to MPEG4 as the standard for video compression. MPEG4 creates substantially smaller files with little quality loss, but it does so at the expense of considerably more CPU load at playback time. MPEG4 couldn't be used five years ago because the CPUs weren't fast enough to decode it in real time.

MP3 audio is another example. It is computationally fiendish, far more so than most people realize. Codecs like MP3 are generally based on Fourier analysis: sections of the sound are analyzed at coding time, and what is actually stored are the Fourier coefficients which describe the sound in that segment. At playback time the sound is resynthesized from those coefficients. The sound on a DVD is stored the same way, and because of that it's possible to play a DVD back at a reduced rate with the sound slowed down but without the sound dropping in pitch: the player synthesizes the correct frequencies but stretches the time parameters. (The result sounds rather odd, quite frankly.) And that's nothing compared to what's involved in the video codec.
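
The analyze-then-resynthesize idea can be shown in a few lines of Python with numpy. To be clear about the hedge: real MP3 uses an MDCT filter bank plus a psychoacoustic model, so this FFT round trip is only a toy demonstration of the principle, not the actual codec.

    # Toy transform coding: analyze a block into frequency coefficients,
    # keep only the strongest ones, and resynthesize at "playback" time.
    import numpy as np

    rate = 44100
    t = np.arange(1024) / rate
    block = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1320 * t)

    coeffs = np.fft.rfft(block)                 # coding time: Fourier analysis
    weakest = np.argsort(np.abs(coeffs))[:-32]  # all but the 32 largest bins
    coeffs[weakest] = 0                         # discard them (the lossy step)

    resynth = np.fft.irfft(coeffs, n=len(block))  # playback: resynthesis
    print("peak error after round trip:", float(np.max(np.abs(block - resynth))))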

These are computationally difficult problems. Software DVD playback isn't practical for any CPU running less than about 400 MHz.
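
A rough back-of-the-envelope calculation shows where a figure like that comes from. The per-pixel cost below is an assumption (a round guess covering the inverse DCT, motion compensation, and color conversion); the frame size and rate are the standard NTSC DVD numbers.

    # Rough arithmetic: the work needed to decode NTSC DVD video in real time.
    width, height, fps = 720, 480, 30            # NTSC DVD frame size and rate
    pixels_per_second = width * height * fps     # about 10 million
    ops_per_pixel = 30                           # assumed: IDCT, motion comp, color conversion
    print(f"{pixels_per_second * ops_per_pixel / 1e6:.0f} million operations per second")
    # Roughly 300 million operations per second, before the audio codec
    # and the operating system take their share.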

Customer demand for more powerful apps, and for apps which chew more CPU power, is going to continue to rise. There isn't any way to rationalize a clock-rate stall; it's commercial suicide.

Winterspeak suggests that one out for Apple is to standardize on multi-processor configurations if Moto can't deliver faster processors. There are good sides and bad sides to that. One problem is that many kinds of apps can't easily take advantage of SMP.
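
Amdahl's law puts a number on that problem. The parallel fractions below are made up for illustration, but the formula itself is standard: only the parallelizable share of the work gets divided across the CPUs.

    # Amdahl's law: with a fraction p of the work parallelizable across n
    # CPUs, the best possible speedup is 1 / ((1 - p) + p / n).
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.1, 0.5, 0.9):
        print(f"{int(p * 100)}% parallelizable, 2 CPUs: {speedup(p, 2):.2f}x")
    # 10% -> 1.05x, 50% -> 1.33x, 90% -> 1.82x: a second processor only
    # pays off for code that was written to exploit it.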

By far the
