  USS Clueless

             Voyages of a restless mind

Stardate 20011010.1553 (On Screen): Sean writes:

There was a piece on NPR this morning about Internet security and how hard it's going to be to achieve: software is designed for efficiency, and security issues are plugged only as they occur, which leaves lots of holes open for cyberterrorists. The only way to fix it is to get designers to build in security from the start.

But I'm not sure. Hasn't there been success in some quarters with mobilizing hackers to help with security? Couldn't we do that now? Some of the motivation would be patriotism; some could be rewards.

Security issues are a subset of the more general problem of finding bugs. It sounds as if NPR reached someone who knew what they were talking about: in fact the best way to prevent bugs is to work from a clean design. It is virtually impossible to take an existing unclean design and really find and fix all the bugs. For one thing, it's impractical to actually thoroughly test any non-trivial software package. The number of test cases becomes astronomical; it would literally take until the Sun explodes to try them all.
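
To put a rough number on "astronomical", here's a back-of-the-envelope sketch (the figures are mine, purely illustrative):

    # Illustrative arithmetic (mine, not from the entry): how long would it
    # take to exhaustively test even a tiny function, at a very generous
    # rate of one billion test cases per second?

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def years_to_test(input_bits, tests_per_second=10**9):
        """Years needed to try every combination of input_bits bits of input."""
        return 2 ** input_bits / tests_per_second / SECONDS_PER_YEAR

    # A function taking just two 32-bit integers: 2**64 cases.
    print(years_to_test(64))    # about 585 years

    # Add a third 32-bit argument: 2**96 cases -- roughly 2.5 trillion years,
    # hundreds of times the Sun's remaining lifetime of ~5 billion years.
    print(years_to_test(96))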

For another, as a general rule, for every two bugs you fix, you create one new one -- and it may be more serious than the ones you just fixed. During the final stages of a project, the testing staff will report bugs as they're found, and what we usually do is evaluate each bug on the basis of risk and reward: how serious is it, and what is the chance that fixing it will create something new? If it's not serious but the risk is high, we usually make the decision to deliberately not fix it. Any bug with low risk (e.g. fixing a misspelling in some text) will usually get fixed if time allows. When you have a bug which is important and also risky, you have to search your soul and make a call. (That's when software managers earn their pay.)
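
A minimal sketch of that triage rule as a decision table (the names and categories are hypothetical, just to make the policy concrete):

    # A hypothetical sketch of the severity-vs-risk triage described above.
    # The categories and rules are illustrative, not any real tracker's API.

    from enum import Enum

    class Level(Enum):
        LOW = 1
        HIGH = 2

    def triage(severity: Level, regression_risk: Level) -> str:
        """Decide whether a late-cycle bug gets fixed, deferred, or escalated."""
        if regression_risk is Level.LOW:
            return "fix if time allows (e.g. a misspelling in some text)"
        if severity is Level.LOW:
            return "deliberately leave it alone; the fix risks worse bugs"
        # Serious bug, risky fix: the judgment call managers are paid for.
        return "escalate: weigh the bug against the chance of creating a new one"

    print(triage(Level.LOW, Level.HIGH))    # leave it alone
    print(triage(Level.HIGH, Level.HIGH))   # escalate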

None of this has anything to do with who is actually doing it (hackers or hired people); it's simply a mathematical fact that full testing can't be done in any reasonable amount of time.

It is, however, possible to create highly reliable programs, but doing so requires using rigorous design procedures and maintaining a lot of discipline. Far and away the most important thing to do if you want true reliability is to freeze the performance specification. "Feature creep" is easily the biggest source of problems; it makes it almost impossible to create and stick to a clean design. The second most important thing to do is to not start coding too soon. On a well run project, you won't write a single line of production code until at least half the project duration has passed. But these things are rarely done; they're expensive and they make management extremely nervous. (The third most important thing is to not try to be clever. "Straightforward" is better than "nifty", because "nifty" is usually fragile.)
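
To make the "straightforward over nifty" point concrete, here's a classic illustration of my own choosing (not an example from the entry). Both functions flatten a list of lists:

    from functools import reduce

    def flatten_nifty(lists):
        # Clever one-liner. It's correct only if you remember the [] seed
        # (without it, an empty input raises TypeError), and the repeated
        # list concatenation does O(n^2) copying as the result grows.
        return reduce(lambda a, b: a + b, lists, [])

    def flatten_straightforward(lists):
        # Plain loop: obvious at a glance, easy to test, safe to modify.
        result = []
        for sub in lists:
            result.extend(sub)
        return result

    print(flatten_straightforward([[1, 2], [3], []]))   # [1, 2, 3]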

As to applying hackers to the problem, I think that would be worse than useless. The kind of discipline needed to really do it well is diametrically opposed to the hacker approach to life and code, which is almost totally undisciplined. When was the last time that hackers wrote the user manual before they started writing code? But that's what you really need to do, because the "user manual" is the detailed performance specification.
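
One lightweight way to picture "manual before code" (a sketch of mine, not something the entry prescribes): write and freeze the documented contract before the body exists, and let its examples double as acceptance tests:

    # A small sketch of the "manual first" idea: the docstring -- the user
    # manual -- is agreed on before the function body is written, and its
    # worked examples are executable as tests.

    def clamp(value, low, high):
        """Return value limited to the closed range [low, high].

        The documented behavior below is the frozen spec; the body must
        satisfy it, not the other way around.

        >>> clamp(5, 0, 10)
        5
        >>> clamp(-3, 0, 10)
        0
        >>> clamp(42, 0, 10)
        10
        """
        return max(low, min(value, high))

    if __name__ == "__main__":
        import doctest
        doctest.testmod()   # the "manual" runs as the acceptance tests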
