She's apparently a fruitcake; she's had every single one of her attempts summarily dismissed by the arbitrators. Doesn't this woman notice even the faintest bit of irony in the fact that she is using belligerent legal proceedings to wrangle with another non-profit Christian organization over the ownership of the phrase "Love Thy Neighbor"? (discuss)
The OWL project, if approved, will produce a telescope with a 100-meter main mirror. The mirror and the assembly which holds it and its secondary optics will weigh a whopping 13,500 tons (about the displacement of a USN cruiser). In fact, it will be so huge that some of its secondary mirrors will be as large as the largest scopes in existence now. They're completely serious about this and their work looks practical, albeit immense and spectacularly expensive. It remains to be seen whether they can convince the governments of Europe to fund the project, but given their undeniable success with the VLT, there's a chance. OWL stands for "OverWhelmingly Large" and they mean every word of it. Go get 'em, boys! (discuss)
Corporations are not "clean" or "dirty". Corporations are profitable or unprofitable, and that's the only characteristic about them which matters. That happens to be the law, interestingly, and if a corporation operates clean and unprofitable where dirty (legal, but dirty) would make it profitable, its stockholders can sue. "What I don't understand is why a mainstream company like Yahoo would stay in the business of pornography." That from Patrick Trueman of the American Family Association (oh, well). Surely he knows the answer. (discuss)
But this is about a different word. When I was a kid, "gay" meant "happy". It had nothing to do with sexual persuasion. (When I was a kid no-one was willing to admit in polite society that there was any alternative sexual persuasion.) When I was a young adult, the term came to mean "male homosexual" (and later "any homosexual"), about the time that most people became comfortable with the idea that there were such beasts (without them actually being beasts). But no-one can control language; words mean what they're used to mean and new meanings appear all the time. Language is a work in progress. Kids have now adopted a new meaning for "gay" as a general insult, and I don't think that it has anything to do with sexual persuasion either. It's possible that it derived from that but there's no longer any connection. I'm not sure just where it did come from, really; but in practical use for young people now, saying "that's gay" is similar to saying "that sucks". The two links given above both contain examples of that usage of the word gay but I've seen plenty of others. I first noticed this maybe two years ago. I suspect the homosexual community will object to this usage, but there's not really a great deal they can do about it. (discuss)
But there are a couple of issues here. First, it may not be true. There are copies and then there are copies. One way of copying a CD involves a "raw copy", which not all systems permit. But if it can be done this way, then the copy is bit-for-bit identical to the original and should perform exactly the same way. If the original sounded fine then the copy will, too. The second issue is whether copying software, even in non-raw mode (e.g. conversion to MP3), could detect this kind of copy protection and remove it. There are distinct limits on what they can insert before it becomes something that a traditional dumb CD player can't correct, so within limits it should be possible to make an MP3 generation program smart enough to do a filtering pass on the data to remove crap spikes like this system adds. I don't anticipate that it will be long before such programs become available.
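To make the idea concrete, here's a toy sketch of such a filtering pass. The threshold and the neighbor-averaging rule are my own assumptions for illustration, not anything any actual ripper does:

```python
# Toy despike pass over PCM samples: if a sample jumps far away from
# both of its neighbors, treat it as an injected noise spike and
# replace it with the average of those neighbors.
def despike(samples, threshold=8000):
    cleaned = list(samples)
    for i in range(1, len(samples) - 1):
        prev, cur, nxt = samples[i - 1], samples[i], samples[i + 1]
        if abs(cur - prev) > threshold and abs(cur - nxt) > threshold:
            # Crude repair, much like a dumb CD player's error concealment.
            cleaned[i] = (prev + nxt) // 2
    return cleaned

# A spike of roughly +20000 at index 2 gets smoothed away:
print(despike([100, 120, 20120, 140, 160]))  # [100, 120, 130, 140, 160]
```

(discuss)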
First, this crack is totally passive. It's possible to crack the crypto simply by listening, which means that there's no way for the network to know it is being attacked. Second, the 40-bit key used by 802.11 can be cracked in fifteen minutes. Third and worst is that the crack scales linearly with the number of bits of the key. In other words, going to a 128 bit key would only increase the crack time to about an hour.

What should have happened is that each base station should have shipped from the factory with a unique public and private key burned into its flash memory. Then what would happen upon opening a session is that the remote would query for the public key, which would be sent in the clear. The remote would calculate a session key, encrypt it and send it to the base station which would decrypt it, and then you'd use DES or AES thereafter. Then the only issue is how randomly the session keys were chosen by the remote, which is a much less serious problem. But even if a given session key were cracked, that would do no good for any other sessions since each would use its own. And if the public key for any given base station were cracked it wouldn't endanger any other base station.

Of course, that wasn't possible because 802.11 was designed back in the dark days when the US Government was imposing controls on strong encryption, which is why it is using a 40-bit key in the first place. But there's a difference between the size of the key and the actual strength of the encryption. The DVD CSS uses a 40 bit key but because of screwups it's actually only 26 bits strong (and can be cracked by brute force in a couple of seconds). Now it turns out that the 802.11 crypto is even weaker. If there's a lesson to be learned here it's that private citizens really do need strong crypto, and government controls on same do more harm than good. (But we knew that already.)
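For illustration only, here's a minimal sketch of that kind of session setup, using Python's third-party cryptography package. The library, key sizes, and cipher choices are my own assumptions; this has nothing to do with the actual 802.11 design:

```python
from os import urandom
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# At the factory: burn a unique keypair into each base station.
station_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
station_public = station_private.public_key()

# Opening a session: the remote gets the public key (sent in the clear),
# picks a random session key, and sends it back encrypted.
session_key = urandom(32)  # a 256-bit AES key chosen by the remote
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = station_public.encrypt(session_key, oaep)

# The base station unwraps it; both ends now share the session key,
# and cracking one session tells you nothing about any other.
assert station_private.decrypt(wrapped, oaep) == session_key

# All subsequent traffic is symmetrically encrypted under that key.
iv = urandom(16)
encryptor = Cipher(algorithms.AES(session_key), modes.CTR(iv)).encryptor()
ciphertext = encryptor.update(b"a data frame") + encryptor.finalize()
```

(discuss)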
I wish I could help, but I don't know how I can. If I were religious I could pray for her and her family. If I were superstitious I could wish them luck. But as a mechanistic atheist I don't believe in those things, so all I can do is sit here and feel dread about the future of a little girl I've never met but care a great deal about. (discuss)
One example of supremely bad taste is going away. (That's good.) But it's transforming itself into an even worse example of bad taste. (That's not.) (discuss) Update: At least they're not going to clone Bert Parks. (I promise that's the last "clone" joke.)
Obviously, a network like this can be designed well or badly, and you could get the same performance out of networks 50% different in expense. Customers wanted optimized network designs, and BBN had a group whose job was to design networks for potential customers in hopes of making the low bid. They used genetic programming, a relatively new technique then. They would sit down and manually design a potential network, and then put it into a special program that ran on a dedicated high speed workstation. What that would do is to successively mutate the network, evaluate all the children and retain only the best few, then mutate again, and so on until someone told it to stop. Usually they'd run it for three days or so (representing thousands of generations).

There were two invariant results. First, the winning network bore no resemblance to anything a human would design. Second, the winning network would massively outperform the best human design for less money. If you didn't know the mechanistic source of the design, you might describe some of them as "ingenious".

As a problem in genetic programming, this one was quite circumscribed. The "genome" was relatively small, consisting of the number of nodes, their placements (out of no more than a few hundred potential choices, most of which were cities) and the interconnections between them. Part of the model was a description of the expected traffic; that didn't mutate, but it did feed the evaluation heuristic which attempted to determine how well a candidate design would carry that particular traffic pattern. This was about the limit of what was possible with the compute resources available at the time, since the high speed workstation on which this ran used a 25 MHz 68030.

But as we all know, the cost and power of computing has come a long way since then, and it's interesting to read that the same essential approach is now being used, with thousands of times more compute power, to actually design computer algorithms. This is really exciting, and it's the first practical example I've ever seen of a program which really can solve a problem for us without us having to tell it how to do so. All programming until now has involved humans doing the design. In this case we're actually seeing working code which no human described ahead of time. It will be interesting to see how widespread this becomes as computers continue to get cheaper and faster.
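Here's a toy sketch of that mutate-evaluate-cull loop. Everything in it (the cities, the cost heuristic, the mutation rule) is my own invention for illustration; BBN's actual tool was far more elaborate:

```python
import random

# Six hypothetical city locations on a plane.
CITIES = [(0, 0), (5, 2), (3, 7), (8, 8), (1, 5), (9, 1)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def cost(links):
    # Toy evaluation heuristic: total link length, plus a heavy penalty
    # for every city left with no link at all. (The real evaluator
    # modeled how well the design carried an expected traffic pattern.)
    total = sum(dist(CITIES[i], CITIES[j]) for i, j in links)
    touched = {n for link in links for n in link}
    return total + 100 * (len(CITIES) - len(touched))

def random_design():
    # A "genome" here is just a set of point-to-point links.
    return {(i, j) for i in range(len(CITIES))
            for j in range(i + 1, len(CITIES)) if random.random() < 0.4}

def mutate(links):
    child = set(links)
    i, j = sorted(random.sample(range(len(CITIES)), 2))
    child ^= {(i, j)}  # toggle one link on or off
    return child

# Mutate, evaluate all the children, retain only the best few, repeat.
population = [random_design() for _ in range(20)]
for generation in range(1000):
    children = [mutate(parent) for parent in population for _ in range(5)]
    population = sorted(population + children, key=cost)[:20]

print(cost(population[0]), sorted(population[0]))
```

(discuss)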
But the article doesn't say who the buyer is. There's long been speculation that it might be Sony, which would make sense. But that's not the only possibility, especially if Sony has already given up on its network appliance, and I'd like to suggest another. Network appliances are not a viable business, but PDAs are, especially PDAs merged with cell phones. I suggest the following possibility: Be will be purchased by a major cell phone company to produce a competitor for Symbian and Palm and WinCE for use in smart cell phones. None of those three is totally satisfactory; each has its flaws. It is also an emerging business and looks like the wave of the future for the cell phone industry, and it's the only place where a "net appliance" really does make sense. Be has already shown that it is capable of porting its code to different platforms (from the PPC to the x86) and should have no difficulty in porting to ARM (the de facto standard for cell phones). Their code is fast and efficient and should have no difficulty running on underpowered portable devices, especially if the screen is small. The GUI would have to be reworked but that's not a serious problem. So my dark horse for an acquisition is my former employer, Qualcomm. (I have no inside information on this; it's been months since I've talked to anyone there. I'm just speculating.) (discuss)
A bunch of refugees from Netscape have formed a new company called Kontiki and are working on a P2P system. They are not repeating the "new economy" mistake of punting the business model. So they've concentrated up-front on worrying about how it's going to make money, but I don't think they've thought deeply enough about it. Their answer is that it would be a cut-rate competitor for Akamai. Akamai is a hosting company which has server farms placed all over the US (and probably other places in the world) with high bandwidth net connections, which will host big files for other companies. The idea is to distribute bandwidth problems, but so far it hasn't been a commercial success. The alternative for companies who want to distribute large files (e.g. video files or large flash files) has been to host them on their own servers, but it's possible to really get hammered that way. Kontiki's answer is to use a P2P system to distribute the load, at minimal cost to the content provider.

All well and good, except why would individuals want to let their computers (and more important, their bandwidth) be used for distribution this way? The problem with a P2P network is that its end-users won't pay for it. They have become accustomed to using P2P for free. Also, if they're going to spend long periods on the system (hours per week, which is essential for it to work) then it's going to be because they can trade pretty much anything on it that they want (i.e. MP3s and porn). But if it's a commercial system then it's going to have to be paid for by corporations. Advertising is out; the revenues aren't high enough. Direct sponsorship by corporate content producers would come with the string that the system be designed to prevent data piracy. That means that from the point of view of the potential users the system would be crippled because it wouldn't permit them to trade what they want on it.

So you can build a system which will be used heavily but which no-one will pay for, or a system which corporations will pay for but which no-one will use. But even if the users of a P2P system were willing to pay for it (obviating the need for corporate sponsors), they'll only use it if it permits unrestricted file sharing, and in that case the system will be sued out of existence. Which is why the future of P2P is going to be freeware running a distributed model, so there are no central servers. That means no-one has to pay for central servers, and there are no large targets to sue, so it can be designed to permit unrestricted content distribution, which will make it popular with the users. With such a system in place, even if inefficient, why would users ever want to adopt a commercial system with data restriction? The essence of successful P2P is copyright violation. (discuss)
"Jailed for not being funny." "Anything for a laugh." And so on. My take is simply this: anyone this stupid really belongs in jail. I don't think there's anything humorous about it at all. (discuss)
By the same token, I would object to a transplantation to any person with incurable cancer. My father died of cancer and among other things it consumed his liver. At that time transplants weren't possible, but even if they had been, a transplant for him would have been useless because his pancreas was also gone and he had cancer elsewhere in his body. He ultimately died because of liver failure, but if it hadn't been that it would have been something else. So though I have a great deal of sympathy for people who are HIV positive, I also don't think that they are suitable candidates for transplantation surgery. It is true that modern drug treatment can extend life for such people, but even with that they're still going to die sooner (and become disabled sooner) than would someone who doesn't have the disease. It's not that I think that a transplant can't help someone with HIV; it's that I think that same organ would help someone else even more, and there aren't enough to go around. (discuss)
Of course, a "refer" doesn't necessarily mean a page with a link to me. What it sometimes means is the page that you (my loyal crew) were at just before coming here by clicking on the shortcut you have for me, and I must say that some of you have very strange reading habits. I still haven't been able to figure out just what "www.hungryhippo.com" is or why anyone would look there, given that it comes up as a nonexistent page for me. For a while I wondered if someone was just trying to play games with my mind. (Now I bet I get to see a lot of really weird ones. Spare me.)

I reset the statistics every few days, and I last reset them about a week ago. I registered with Google to be on their list of places to crawl, and I've noticed since then that several other crawlers have found me. I know that some people object to that, but I don't mind, and I find a lot of hits from search engines now, including some I've never heard of: search.rediff.com, for example. In the last week, I've gotten nearly 140 refers from Google alone, which is fine by me. (I have no idea what they were searching for, though I could probably find out by getting into Linux and actually looking at Apache's log file. I suspect quite a few of them were looking for CDMA information, since my CDMA FAQ is top of the list of hits for the phrase "CDMA FAQ".)

But I get a bit of a charge out of finding an all-new refer, and today I got a doozie: My entry on Yahoo has gone live and I've already gotten about 15 refers from there. Of course, once I'm no longer "new" and sink down to the end of the page I'll probably never see another one. Jeeze, that page is long. (Why, oh why, didn't I name this AAA Clueless instead of USS Clueless?)
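If I ever do go digging through that Apache log, it would look something like this sketch. I'm assuming the standard "combined" log format, and the log path is hypothetical:

```python
import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

referers = Counter()
searches = Counter()
# Combined log format: ... "REQUEST" status bytes "REFERER" "USER-AGENT"
line_re = re.compile(r'"[^"]*" \d+ \S+ "(?P<ref>[^"]*)"')

with open("/var/log/apache/access.log") as log:
    for line in log:
        m = line_re.search(line)
        if not m or m.group("ref") in ("-", ""):
            continue  # no referer recorded for this hit
        url = urlparse(m.group("ref"))
        referers[url.netloc] += 1
        if "google" in url.netloc:
            # Google carries the search terms in the "q" query parameter.
            for q in parse_qs(url.query).get("q", []):
                searches[q] += 1

print(referers.most_common(10))
print(searches.most_common(10))
```

(discuss)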
The comparable thing for Apple is application software. There exist PC makers whose market share is smaller than Apple's (Micron PC, for instance) but their products use the same software as Dell and Compaq and all other PCs. They use the same gas stations. But Apple's product needs its own; its apps are not compatible. This means that Apple, unique among manufacturers of desktop computers, needs an entirely separate retail support structure of stores and of ISVs to create products for those stores. That, in turn, means that the absolute installed base of Apple's computers has to be large enough to create a market big enough to sustain that structure, irrespective of how large any competing systems are. By being different, Apple also has to go it alone. It can't leverage off of overall market mass or share the expense of the support structure with other companies. It may well be (some might claim) that Mac software is nitromethane, but regardless of performance that still makes it rare and expensive, or at least at risk of becoming rare and expensive. (This is what killed OS/2, for instance.) Apple is unique among desktop computer makers in that a 3% share really may spell trouble. It remains to be seen whether that really is adequate for an incompatible architecture. The ISVs have been very nervous about it for a long time, and if they abandon the platform then it is walking dead. (discuss)
The reason is that modern warfare isn't won by weapons; it's won by logistics. An artillery piece without ammunition is a useless tube of steel. A tank without fuel is a statue. "Logistics" refers to the problem of getting adequate supplies to the front, which sounds easy and isn't. Anyone can field an army, but a modern army without supplies is useless, and modern warfare consumes supplies at a ferocious rate. The miracle of Operation Desert Storm wasn't that we were able to field six divisions or that they did as well as they did, but that we were able to keep them supplied with food, water, fuel and ammunition. But while those divisions only represented about a third of our field strength, it took every bit of logistics support we had, which is worrisome. The only National Guard units which were called up for Desert Storm were transportation units.

This is known as the "tooth-to-tail" problem, and a modern army requires a huge tail for every tooth. To field an infantry division of 18,000 men it can take upwards of 100,000 men behind the line (extending all the way back to the mother country) moving supplies and performing other support functions and coordinating everything. And behind that there may be a million civilians working on farms or in factories or driving trucks or operating trains. That's why it is a truism that "amateurs discuss tactics, but professionals discuss logistics." If you've got supply and your enemy doesn't, strategy and tactics become simple.

Is the new Iranian anti-armor missile a good weapon? Probably not. One reason is that high tech weapons like this can't be produced on an as-needed basis. They simply take too long; if you go to war, you fight with what you have, which means you have to stockpile them ahead of time. That is enormously expensive. And once you've stockpiled them, you have to maintain them so that they're still useful when you finally need them. The US can afford to do this, but Iran can't. If this missile is as good as they claim it is (about which I have my doubts) then Iran would have to build and maintain several hundred of them before they became militarily significant, and if they try to do this they will either destroy their economy or starve the rest of their military.

Also, weapons like this are vulnerable. The backbone of US military doctrine since World War II has been to achieve and maintain air supremacy in any theater where US ground forces operate. This grants you an enormous advantage: you can scout easily and see where your enemy's forces are while preventing him from doing the same to you; you can attack his logistics while keeping your own safe; and you can launch air attacks against his ground forces while keeping your own safe. Desert Storm represented the ultimate expression of this tactic and the lopsided casualty figures speak for themselves.

That's a rich man's war. It's a way of spending money to save soldiers' lives. The US is willing to do this and is one of the few countries capable of doing so. In WWII the Germans learned that their forces would always be smothered by artillery fire during any operation on an American front, because the Americans were willing and able to expend immense amounts of artillery munitions in an effort to save American soldiers. German artillery was just as good as American artillery but never had the amount of ammunition that American artillery had. Which brings up the fact that a high tech anti-armor missile is a rich man's weapon.
The US is quite willing to fire million dollar missiles to take out half-million-dollar tanks (with some missiles not hitting). Iran doesn't have those kinds of resources. If the US in WWII was fighting a rich man's war, then the USSR was using the opposite tactic. For the USSR supplies were dear but men were cheap, so they used tactics which were effective but which resulted in huge casualties to the Red Army. They won against the Germans while taking five casualties for every one they inflicted on the Germans, but that's because they could afford to sustain losses like that and the Germans could not. Equally, the US won against the Germans because the US could afford to spend three times the supply per soldier.

If the Iranians build these missiles they'll have to work out how to protect them against weeks of air assault before any opportunity arises to use them against ground attack. (That's true against Israel, too.) So they have to accept that they'll lose perhaps three quarters of their supply of these missiles without firing them, and that the remaining missiles won't have a 100% success rate. Are they willing or even capable of spending that kind of money on an anti-armor defense? Not a chance. Or are they willing and able to field an air force capable of defending these missiles on the ground? Unlikely. Absent that, these weapons become like the Iraqi Scuds: an annoyance but not militarily significant.

In a logistical war, the goal is to use your logistics advantage in such a way as to present your enemy with an unsolvable problem. That was the reason why in WWII there was fighting in Italy. Italy was a place where an attacking force could not be ignored by the Germans. If a substantial British-American force operated there without opposition, it could move up to the top of the nation and invade France, or Germany itself through Austria. But as long as it was opposed by a substantial German force it was not a threat. So since Germany was up against a manpower limit and the US much less so, British and American forces in Italy tied down Germans who could have been better used in France or the USSR. And indeed by late 1944, the Germans really were in the situation of being "one army short", which is one of the big reasons they lost. If you can sustain 2500 miles of front but your enemy can only sustain 2000 miles, then you want to have 2500 miles of front, because then your enemy will be undefended somewhere.

A weapon is only useful if the nation which fields it has the logistics to support it. Is this new Iranian missile a good weapon? It may be. Is it a good missile for the Iranian military? Not a chance. Of course, it is impolite to correct an opponent when he's making a mistake. If this missile is a logistics blunder for the Iranians, then the best thing for our military people to do is to act worried about it and to encourage the Iranians to deploy it. (Besides which, this may give our military an excuse to pry more money out of Congress.) (discuss)

Update: It's been pointed out to me that I screwed up. The picture in the news article of the missile lying on a truck bed isn't the anti-armor missile about which I thought I was writing. In fact the article doesn't say how big the anti-armor missile is, but it's likely an infantry weapon on the order of TOW. Oh, well; all that work about something they weren't actually doing. A front-line infantry weapon would be much more reasonable for Iran's military situation.
It also would be less of a threat for a number of reasons, not least of which is that the US Army has been facing these kinds of weapons from the Warsaw Pact for 25 years and I think probably has worked out doctrine for dealing with them. Also, a small missile armed with a shaped charge warhead (along the lines of TOW) isn't really a threat to a tank equipped with Chobham armor such as the US M1. Chobham armor was designed precisely to defend against shaped charges. That's why NATO tanks fire APDS rounds against enemy tanks now rather than HEAT.
I have several objections to this concept. First, at the rate that cost-of-computing is falling, centralizing it doesn't make sense. For nearly anything except mondo-bit-crunching, it's not that expensive now to buy hardware to solve your problem. Second, any centralized system will be vulnerable to attack and any failure in it will have massive consequences. As an engineer, I don't like single points of failure. Distributed systems are inherently more resilient. Third, I'm afraid that if some poor or politically-motivated choices (e.g. Java) are made in establishing this model, it will institutionalize those choices. IBM's done it before; we're still living with the consequences of their choice of the x86 over the 68K for the original PC. (discuss)
They crow about the fact that they can stay one step ahead of the pirates. Actually, they can't. The reason is that deploying new countermeasures of this kind (such as changing the size and shape of the noise spike which is injected) is a slow process, whereas changing the code which defeats it is rapid. A new countermeasure is deployed on new CDs which are distributed in physical form in huge quantities, whereas the crack will be software distributed on the internet. (discuss)
Update 20010802: Seems it was never really a threat in the first place.
That's the problem: fabs have reached the point where almost no company can afford to operate them exclusively for its own products. One of the few which does this successfully is AMD, but they've done it by staying starved. AMD makes its own chips, but it also contracts outside for a substantial part of its production, so it is saturating its own organic capacity, and even if it faces an economic downturn in business its own fab will remain saturated. Other major semiconductor companies like IBM and Intel create a lot of chips for themselves, but they're also major foundries and because of this can spread the capital expense around.

Until now, Motorola has been designing its own chips for things like cell phones and using them exclusively in-house. Their new plan is to also sell those chips outside, but the industry is skeptical about this kind of conflict of interest (I know about this first hand) and I'm not sure I believe that this is actually going to bring in $billions in new chip business. Equally, the PPC is reaching the end of its useful life, being supplanted in embedded by the ARM (which Moto recently licensed) and on the desktop by the amazingly resilient x86 architecture (which is also at the end of its useful life, soon to be replaced by Itanium and Sledgehammer and ARM). Volume in the PPC isn't high enough to justify the awesome expense which would be required to keep it competitive, so it has been languishing.

If there's any solution to Moto's semiconductor woes, part of that is going to have to be getting into the foundry business big time (as Intel is) to subsidize the capital cost of equipment. It may also include a radical change in business model, and abandonment of entire product lines or even businesses, and a substantial deemphasis on in-house designing. With the recent announcement by Palm that it would port its OS to the ever-present ARM, Moto will no longer have the inside track on the PDA business. Moto is going to make ARM-based PDA chips, but it will be competing with Intel and probably with Palm itself, who could go to foundries and make its own chips. I see no future for Motorola in the PDA chip business and I predict they'll be completely out of it (except as a foundry) within two years.

I don't know what the complete answer might be for Moto's chip business (or even if there is an answer) but they are going to have to completely rethink their business model. Their current business model is the one which was successful in the 1980's, back when all chip companies owned their own fabs. That was possible when a fab cost less than $100 million. When a modern bleeding-edge fab can come in above $3 billion, it's no longer viable. (discuss)
Is this really all that much better? Do you really want to follow the advice of an analyst who isn't willing or able to invest his own money based on his advice? Perhaps another route could be followed here. Right now SEC regulations require that all significant "insider" stock trades be published. I think the best answer would be to permit analysts to trade whatever they want with their own money, but to force them to reveal all trades concerning stocks they officially follow as analysts. Their votes with their wallets would be the most sincere analysis they'd be doing; if they issue a "strong buy" on a stock but sell their own holdings, it isn't too difficult to figure out their real opinion. (discuss)
In an ideal world there would never be any "missing persons". In an ideal world, everyone who died would instantly be found and their remains instantly identified. We're not living in an ideal world, and in fact hundreds of people in the US go missing every year and are never seen again. With millions of people dying every year, it's simply how things are. In the UK right now, there is a boy in foster care, attending school. He was discovered wandering the streets. Despite having him right there to interview, the police can't figure out who he is. That's the way it is in the real world; this isn't any mystery novel where some genius figures out every puzzle and solves every problem.

Chandra Levy was last seen about three months ago, and her parents want to know what happened to her. That's completely understandable, and if I were them I would, too. It happens to be the case that they are wealthy, and they've hired a publicist whose job it is to keep her story on the front pages by trickling out new developments about her on a regular basis. They're doing this for three reasons: to embarrass and pressure Representative Condit (whom they suspect of being complicit in her disappearance), to keep her picture on the newspapers of America in hopes someone will recognize her and provide a lead, and to light a fire under the DC police so that they'll use a disproportionate amount of their resources on this case. So far only the second one has failed, which is unfortunate because it was the most important one. But they've been notably successful at toasting the DC police, who have undertaken foolish things like a mass search of DC parks even though they don't have any reason to believe that she might be found in them.

The DC police are starting to make "probably can never be solved" noises and have called off the search, and that is as it should be. They'll keep working on this case at a reduced level, but if they haven't been able to turn up a lead in the two months they've been working hard on it, then they probably never will by straightforward detective work. Further concerted effort is going to be a waste. It's going to take a fluke; someone will stumble on unknown human remains someday, and tests might identify them as belonging to her.

This is not the answer that her parents want to hear. I'm sorry for them. But Chandra Levy was not the only person in DC who needs the services of the DC police, and it's time for them to get back to work on all the other cases in their jurisdiction. It's also time for the newspapers of America to stop lunging for the bait every time the Levys' publicist tosses another hook in the water. This story is now old until and unless something really important develops -- like discovery of a corpse. (discuss)
The other thing this would ban is deliberate attempts to harvest human eggs to be fertilized in order to create cells for fetal research. It would not ban the use of spare fetuses left over from in vitro fertilization clinics, which is the primary source now for fetal cell research. What's not clear to me is how it would affect research on adult stem cells; it may also make that illegal. The margin of victory in the House on this bill was much greater than the Republican majority, and I know that some Republicans opposed this bill, so a great many Democrats must have voted for it. The Senate Majority leader has stated that he favors this bill. It remains to be seen how many Representatives and Senators will vote the other way when the issue of fetal stem cell research finally comes before them. The indications are that it will not go the same way. In the meantime, our President still hasn't issued a decision on fetal stem cell research, and I wish he'd get on with it. What is he waiting for? (discuss) Update 20010801: It looks as if the House vote was a show-vote, an opportunity for Representatives to take a stance for the benefit of the voters back home, secure in the knowledge that the other chamber will reject it. This happens now and again, and it makes more sense of this. The Washington Post speculates that there is a good chance this bill will be defeated in the Senate.
The future is now, or at least getting there. They are actually working on doing exactly that, at least with voices. By taking on the order of 40 hours of spoken material from a given person, they are working on a technique where fragments could be pieced together to create whatever spoken speech you want in that voice. I'm a little skeptical that they would have the ability to control intonation and expression this way; they might be able to create the word sequences they want (or even synthesize words which are not among those recorded) but controlling emphasis would be harder. Still, it's not impossible and they're just getting started.

There have been several movies now which were completely computer-synthesized, most famously "Toy Story". With the release this year of "Final Fantasy", though, that process has taken a substantial leap, because this is the first time that they've actually tried to create realistic looking protagonists. No-one will be fooled, but it's an amazing achievement, and if you look at how far they've come in just ten years it becomes evident that within another ten it will be possible to create synthesized movies indistinguishable from reality. Actually, in some areas they are doing those kinds of things now; computer-synthesized special effects are nearly ubiquitous in action films nowadays, and it's become the tool-of-choice for many kinds of TV advertisements. In some cases it's become quite difficult to tell where reality ends and synthesis starts.

But until now computerized animated films have always used real human voice-actors. Read the credits for "Toy Story" and you'll see Tom Hanks and Tim Allen. Look at the credits for "Shrek" and you see Mike Meyers and Eddie Murphy. Will that be the case in 10 years? One of the problems that producers of family-story sit-coms have is that child actors grow up. Over the course of a five year run, adult actors don't really change all that much, but kids will change quite dramatically, and that cute four year old becomes an obnoxious nine year old. But not if they are synthesized. New Shirley Temple movies, anyone?

There are two future steps to watch for. First will be the emergence of a TV show where the actors are all digitized from real humans. The humans will come in and be paid for an extended capture session, where their bodies and movements are digitized and their voices recorded. They'll then be handed a big check and no longer be needed. In this, as in so much else, new technology will demand new law. I think what will eventually emerge is a licensing scheme where the original actor will be paid a royalty each time their synth is used, and may be able to specify within limits what kind of material the synth can be used in, e.g. "no porn".

This will lead to the second step: the emergence of a completely synthesized star, not based on any specific human at all. There's already been interesting work where pictures of a large number of people's faces were captured and averaged, to create a synthetic face not belonging to any of them. Interestingly, if you start with forty faces and do this, the resulting average face is considered "good looking" by people no matter who the original forty faces were. After a reasonable number of these actors have been digitized, someone will average them to create a completely new synthetic person unlike anyone who actually exists (except, perhaps, by coincidence) and may do the same thing with their voices.
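The averaging itself is conceptually trivial. Here's a toy sketch, assuming forty pre-aligned face photos of identical size (the file names are hypothetical); real morphing systems align facial landmarks first rather than averaging raw pixels:

```python
import numpy as np
from PIL import Image

# Forty hypothetical, pre-aligned, same-sized face photos.
paths = [f"faces/face{i:02d}.png" for i in range(40)]
stack = np.stack([np.asarray(Image.open(p).convert("RGB"), dtype=np.float64)
                  for p in paths])

# The "average face" is just the per-pixel mean of the whole stack.
mean_face = stack.mean(axis=0).astype(np.uint8)
Image.fromarray(mean_face).save("average_face.png")
```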
Or perhaps they'll just hire forty people off the street to come in and be digitized. Issue a casting call for forty Chinese women and create a generic beautiful Chinese synth actress. Or forty AA men. Do the same thing three times with the same ethnic group and you'll end up with three synths, all different, all attractive. And such a completely synthetic actor could make movies essentially forever without aging or being hurt by doing his own stunts. Of course, advances in the state of the art could make a given version of the mesh obsolete, but you'd retain the original film used for the digitization and could create new meshes as necessary to keep up, while retaining the distinctive look of your synth.

Makeup and costuming are obviously no problem, and if you need to age the person for a part, you no longer mess with rubber prostheses; just apply an aging-algorithm to the mesh and you're set. (For that matter, with proper analysis it might be possible to do the opposite and take an adult and produce a child version.) If you've got a female synth and she isn't built right for the part you've cast her into, just twist a dial and grow the relevant parts. No more need for falsies or creative support clothing. Need a hunchback or a scar? Does the part require an amputation? It's all just ones and zeros. Want to cast your star into an ethnic role they don't match? No problem. Take your nordic mesh and run the "oriental" algorithm on it.

Which brings up an interesting question about the audience: how much of the experience of enjoying these kinds of entertainments comes from the fact that we grow to know and like specific actors? Will we enjoy these kinds of things as much if the actors in them are different each time, or if they don't exist at all? I think we will. One of the most beloved characters in the history of cinema is Bugs Bunny, who clearly has no reality outside our minds. At the time he was created he had to borrow his voice from the great Mel Blanc, but in future such a character would not need to, because his voice could be synthesized right along with his visage. Who's to say that a completely synthetic character in future could not become just as beloved as Bugs, or Charlie Brown? (And how many kids have you seen carrying around "Woody" and "Buzz" dolls?)

I don't think that this will make real human actors and actresses obsolete in film, but synths will arise and rival human actors in popularity within thirty years. Some may be realistic and idealized, some may be caricatures and some may be