Stardate 20021119.1924 (Captain's log): Bob writes:
The evangelists don't have performance to hang onto at all, but I still love OS X. I've installed Jaguar (finally), and it's much faster, cleaner, and adds features. The hard numbers make me sad, but I can't go back to a PC. Microsoft has burned all of their bridges with me in the unsupportability of the various versions of Windows. Now that I'm a freelancer, I just don't have the time to fix Windows anymore. On the company's dime, it was another story. I'd gladly troubleshoot networking problems, font issues, or configuration glitches. I was getting paid, so who cares?
Isn't that the point? Apple still controls the hardware and the OS, so it's supposed to work perfectly together. The fact that Apple can't imagine employing an Intel processor (or a clone) is, in my opinion, shortsighted, but not a decision that can be sustained.
I say it over and over. OS X is UNIX. UNIX runs on Intel processors. Therefore...
The critical difference between Apple and its competitors is indeed the fact that it controls the entire architecture. It's also true that there are certain advantages to be had from that. The problem is that there are also substantial disadvantages associated with it, and in the long run those disadvantages are more important.
This industry began small, as most do, and in the early days desktop computers were often thought of as being little more than toys. Indeed, the early systems were sold primarily either as toys similar to video games, or as educational products for kids, or as high-end geek toys for the kind of people who in earlier times might have become radio hams or stereo freaks or hot car buffs; people who like to tinker and have a lot of spare money.
That's when Apple got its start, with Jobs and Wozniak cobbling together the original Apple II. You also saw things like the Radio Shack Color Computer, and the Atari 800, and such abortions as the TI-99/4A (which failed mostly because of inept marketing and a terrible product concept).
But you also saw the beginnings, the hint, of a different approach to the industry, the open hardware architecture. In this approach (made possible originally by Digital Research through its CP/M operating system for the 8080 and Z-80) no single company controls the whole thing.
It's true that the closed-architecture systems used components made by others; this wasn't a case of a single company actually creating everything starting only with raw materials. But you did have a single company which made the majority of the architectural decisions, and it was the only company which sold the end product. For example, the Radio Shack Color Computer (the "CoCo") was based on the Motorola 6809E processor, and Radio Shack and Motorola worked together to design the 6883 bus controller chip which was the real heart of the design. As part of that deal, RS was given exclusive rights to purchase the 6883 for (I believe) the first year. And RS developed the entire firmware package for the system, and ultimately there were no clones.
But with the CP/M system, there was nothing proprietary (except the OS, which anyone could buy). The CP/M OS could be configured to run on many different hardware architectures, and for a while there were several companies making CP/M systems.
The entire industry changed when a visionary arrived. It is fashionable to decry Bill Gates, and to criticize nearly everything about him, and to talk about him as evil incarnate and an accidental billionaire, but in fact he is the single most influential figure in the history of the computer industry.
I've written before about how I consider Bill Gates and Michael Dell to be the two most innovative people in the industry, and I still think so. And the reason they're influential and also scorned by many is that their innovations were not in engineering. (Many if not most of those critics are engineering chauvinists, and completely discount the importance of anything else, such as marketing and business models.)
Gates arrived earlier than Dell and is by far the more influential, because Gates was the first to see, and the best at embracing, the truth that computer software is critically different as a product because it costs virtually nothing to manufacture and distribute compared to its value. With hardware you're lucky if you can sell at a 3:1 markup over "cost of sales" (a term of art which means the incremental cost to the company of producing one unit and getting it to a customer or retailer). But in software markups of as high as 500:1 are not uncommon.
Software can be very expensive to produce, but nearly all the cost is one-time, and that means that the single most important thing to achieve with software is volume. Software is far more subject to economy of scale than any other kind of product we know of.
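To put rough numbers on that (a toy illustration with invented figures, not anyone's actual books), here's the shape of the curve in a few lines of Python:

# Toy illustration of software economics: a one-time development cost
# amortized over shipped units. All figures are invented for illustration.

def per_unit_cost(development_cost, marginal_cost, units):
    """Total cost per copy once the one-time cost is spread over 'units'."""
    return development_cost / units + marginal_cost

DEV_COST = 500_000_000   # hypothetical one-time engineering cost, dollars
MARGINAL = 5             # hypothetical cost to duplicate and ship one copy

for units in (1_000_000, 10_000_000, 100_000_000):
    print(f"{units:>11,} copies -> ${per_unit_cost(DEV_COST, MARGINAL, units):,.2f} each")

The marginal cost barely matters; the number of copies sold dominates everything else.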
Moreover, software is extremely subject to network effect, which is where the perceived value of each unit to its buyer rises as a function of how many other people also have bought it.
Some products are not enhanced by network effect. The value of a pill which cures me of a horrible disease is not increased by knowing that there are a lot of other people also being cured by it; I'm not concerned with them, I'm concerned with me.
But with many kinds of products, the more of them there are out there being used, the more valuable each of them is to its owner, and software is an extreme case of that.
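One common way to make that intuition concrete is Metcalfe's law, which values a network by the number of possible pairwise connections among its users. The argument here doesn't depend on that exact formula, but a quick sketch (purely illustrative) shows how steeply value can climb with the size of the installed base:

# Metcalfe-style sketch of the network effect: value grows roughly with
# the number of possible connections among users, n*(n-1)/2.
# Purely illustrative; in practice the effect shows up as shared file
# formats, shared expertise, and the amount of software available.

def pairwise_connections(users):
    return users * (users - 1) // 2

for n in (10, 1_000, 100_000):
    print(f"{n:>7,} users -> {pairwise_connections(n):>13,} possible connections")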
What Gates saw in the early 1980's was that with software, the more units you could sell, not only would the per-unit cost of production fall but the perceived value to each customer would rise. And from the very beginning, Gates has always made sure that the one central goal of everything Microsoft did was volume. (For instance, he accepted a lower price from IBM for a non-exclusive license for DOS over the higher price IBM offered for an exclusive license. Gates wanted to keep open the option of selling DOS to other companies.) No one else saw how critical this was and made such a commitment to it until after Microsoft proved beyond any doubt just how important it truly was, and by that point Microsoft had established itself so strongly in the industry as to be unassailable.
CP/M established the concept of an open architecture system, but Digital Research didn't really survive the transition to 16-bit processors. CP/M-86 ran on the original PC, but ultimately DOS was the winner mostly because Gates was aggressive about licensing it broadly and selling it cheap, forgoing short-term revenue in order to work for long-term volume and far more revenue far into the future. (Another unusual thing about Gates is that he takes the long view.)
The PC hardware architecture was open, and compatible machines could be purchased from many sources and would run the same software. In the hardware, too, this gained benefit from both economy of scale and from network effect (as well as the general benefits of competition), and the PC architecture became more and more dominant.
The big disadvantage of an open architecture is that it's going to be somewhat clunky. When no one controls it, no one can enforce hard decisions on the industry and you tend to carry around a lot of baggage. (Modern PCs are still largely saddled with an interrupt architecture which comes from IBM's decision to use an Intel interrupt controller chip designed in the 1970's for the 8085.) To some extent the architecture of the PC has always been inelegant, and that was unavoidable. All of the competing closed architectures in the 1980's (the Amiga, the Atari ST and the Mac) were better from a strict engineering standpoint.
But from a commercial standpoint, the benefits of an open architecture far outweighed any engineering details. The big problem with a closed architecture, especially one with a proprietary operating system, is that it can't really achieve the volume that an open architecture can.
And as operating systems became larger, more elaborate, and explosively more expensive to produce and maintain, what you ended up with was a lot of companies selling elegant computers while saddled with structural expenses which forced them to charge premium prices, and those prices didn't deliver a perceived bang-per-buck ratio that most consumers found acceptable.
Each of the architectures attracted a core group of zealots; that always happens. But it's the non-zealots who ultimately decide whether a given system will live or die, because as volume drops, the company either has to hike prices further to pay back its fixed costs or it begins to lose money. So Atari and Commodore died, and for a long time Apple was gravely ill.
In terms of engineering elegance, it's easy to argue that having a single company controlling the architecture will usually result in a cleaner system. But from the standpoint of market forces, that approach is deeply crippled. Operating systems today are grossly expensive to develop, enhance and maintain. The days when the entire OS could fit on a single floppy disk are long gone, and the cost of software rises exponentially with size. In its entire history, Microsoft (the world's largest software company) has only really developed four operating systems, with each going through many incarnations. (Just for reference: DOS, Windows (3.1/95/98/ME), NT/2K/XP, CE.) DOS was primitive and extremely low in features and capability, but each of the others has represented an investment of billions of dollars, and the only way that can be commercially viable is if shipment volume of each is huge so that the engineering expense can be amortized without each copy being unreasonably expensive.
Open hardware architecture, in the form of the many companies who supported the PC, competed with the closed-architecture companies in the free market and killed all but one of them off. Apple is the sole survivor of an earlier time, limping badly but still in the race. But the market effects which killed Commodore and Atari are continuing to hurt Apple, and they're only going to get worse in the long run. Apple survived because it was stronger than Commodore and Atari, not because it was fundamentally different.
And it's no longer the case that Apple's products will be more elegant from an engineering point of view simply because it controls the architecture, because its low volume is also affecting its suppliers. Because Motorola got tired of losing money on high performance PPCs, Apple has had to live through two clock-rate stalls. (We're in the second one now.)
The most recent system it released, the 1.25 GHz dual G4, is an appalling piece of junk from an engineering standpoint, and reveals the deep trouble that Apple is in due to low volume. The problem they face is that the G4 front side bus was designed for processors a fifth as fast as the modern ones, and it doesn't have the bandwidth needed to keep a modern processor fed with data. But Apple can't change that; only Motorola can, and that would cost Motorola a lot of engineering expense which Apple couldn't ultimately pay back and for which there are no other significant customers. If Moto did that work in a timely fashion and charged Apple a break-even price for the resulting chips, Apple's products would cost so much that Apple would die. But for Moto to charge a price to Apple which permitted Apple to sell its products at a price competitive with commodity PCs means Moto would take a loss. Either way means death to someone.
That's because Apple can only pay for its mammoth expense in developing its own proprietary operating system by selling its hardware at a greater markup than anyone else in the industry. When you buy a new Mac, part of what you're paying for is the cost of development of the OS.
That's true for the PC, too, but the cost of the OS is distributed over 25 times as many PCs, so your contribution to the Microsoft-NT-reimbursement-fund is far lower for a modern WinXP system than it is for a new Mac with OSX on it.
That is the irreducible fact of Apple's business model: they're financing OS development out of hardware sales, and when hardware volume is low, then the surcharge for OS development per hardware unit sold must be high. (Revenue from sales of OS upgrades to existing users is small; the majority of the money for the OS comes from hardware sales.)
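The arithmetic of that surcharge is brutal. Using the roughly 25:1 unit ratio mentioned above and an invented development budget (the real figures aren't public, so treat this purely as an illustration):

# Hypothetical OS development budget amortized over machines sold,
# using the roughly 25:1 PC-to-Mac unit ratio cited above.
# The budget and unit counts are invented for illustration.

OS_BUDGET = 1_000_000_000        # dollars per year, hypothetical
PC_UNITS  = 125_000_000          # hypothetical annual PC shipments
MAC_UNITS = PC_UNITS // 25       # the 25:1 ratio from the text

print(f"OS surcharge per PC:  ${OS_BUDGET / PC_UNITS:,.2f}")
print(f"OS surcharge per Mac: ${OS_BUDGET / MAC_UNITS:,.2f}")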
If Apple's volume were higher, many of its problems would have solutions. With higher volume, it would be a more attractive customer for CPU companies and could get cutting-edge processors for its systems at a more reasonable price. With higher volume it could charge a lower OS-surcharge per unit. With higher volume its products would be more subject to network effect and would be more valuable to its customers. Volume would solve everything.
The problem is that higher volume is also out of reach; everything about Apple's business plan works against a dramatic increase in its sales volume. In essence, it needs the benefits of volume to already be in effect in order to increase its volume, but since they aren't, it really can't. It's Catch-22, and though it doesn't necessarily mean imminent death, it does condemn Apple to stagnation and nicheness.
And perversely it also means that Apple has less and less control over its own architecture, as it must try to make do with commodity components which are no longer designed to its specifications. That's what the newest Mac makes clear.
Without a new processor from Motorola which actually redesigns the front side bus, the only way Apple could pump more data through the pins of the chip was to increase the base clock rate. The bus protocol can't be changed; it remains single-data-rate. But by boosting the voltage and running the part hot (with great big fans and lots of noise), Apple was able to run it single-data-rate at 167 MHz instead of 133 MHz, a gain of about 25%. To find memory which could run in such a weird mode, they used PC2700 DDR (nominally 333 MHz) and threw away half its bandwidth: the memory is double-data-rate, but it's only used as single-data-rate when talking to the CPU. That was the only solution, because ordinary single-data-rate SDRAM won't run at those speeds.
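The bandwidth arithmetic is easy to check. This is a sketch using nominal 133 and 167 MHz clocks and the G4's 64-bit data path, counting only peak theoretical throughput (sustained throughput is lower):

# Back-of-the-envelope peak bandwidth for the G4's 64-bit front side bus
# versus the PC2700 DDR memory behind it. Nominal clocks, no overhead.

BUS_WIDTH_BYTES = 8                       # 64-bit bus

def peak_mb_per_s(clock_mhz, transfers_per_clock=1):
    return clock_mhz * transfers_per_clock * BUS_WIDTH_BYTES

old_fsb = peak_mb_per_s(133)              # single-data-rate at 133 MHz
new_fsb = peak_mb_per_s(167)              # single-data-rate at 167 MHz
ddr333  = peak_mb_per_s(167, 2)           # what PC2700 DDR could deliver

print(f"133 MHz SDR bus: {old_fsb:>5,.0f} MB/s")
print(f"167 MHz SDR bus: {new_fsb:>5,.0f} MB/s (+{new_fsb / old_fsb - 1:.0%})")
print(f"PC2700 DDR:      {ddr333:>5,.0f} MB/s (half of it wasted on an SDR bus)")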
The result is an engineering monstrosity, which still can't compete in terms of processor power, and it's about the best Apple can do. Moto may well manage to boost the internal clock rate of the processors another step or two, but the gains will be minor and will to a great extent be nullified by the fact that even at 167 MHz the bus still can't really provide data fast enough (especially for a duallie). Faster processors will spend more of their cycles waiting for memory, doing nothing useful.
Bob suggests switching to Intel. On the face of it this is attractive, but it's yet another case of something which looks good as an engineering solution, but not so good in terms of market realities.
By using the same processors as the PC, Apple gains and keeps parity in terms of computing power. Of course this also means that Apple is then in a toe-to-toe fight with Microsoft, which is a lot bigger than Apple and which has t