1.5 Summary
In this first chapter, we hope we've challenged you
with some new ideas about security vulnerabilities. We particularly
hope that you may now consider that the blame for security
vulnerabilities belongs, to some degree, to all of us who buy and use
the seriously flawed programs available today.
This point of view does not minimize the
responsibility of software producers for security quality. They
should be held to the highest standards and hung out to dry if they
fail. But it does in fact "take two to
tango," and customers (particularly the U.S.
government, so far as we know the biggest software customer in the
world) bear some responsibility to demand secure software.
Those among us who produce software, of course, have a special
responsibility and a unique opportunity to improve matters. Our
discipline has not reached the state of understanding and sound
practice exemplified by those bridge builders shown on the cover of
this book, but the folks driving their virtual vehicles over our
structures rely on us nevertheless to keep them safe.
In Chapter 2, we'll exhibit the
most important architectural principles and engineering concepts you
can employ to make your software as secure as possible. In that
chapter, we'll try to pass along some distilled
security wisdom from the generation of coders that built the
Internet.
Questions
Have you ever written a program section with a security hole? Really?
How do you know? And, if you are sure you haven't,
why haven't you? (For one way a hole can hide in plain sight, see the
short C sketch following these questions.)
Do programmers writing code today know more about security than
programmers writing code 30 years ago?
If you accept the principle of writing code that is
"just secure enough" for your own
applications, do you think it is socially responsible for software
vendors to do the same?
Visualize one of your favorite programs. What is it? Are you seeing a
series of lines on a computer screen or piece of paper? Or is the
"program" the series of
machine-language instructions? Is it perhaps the algorithm or
heuristic, or maybe the very input-to-output transformations that do
the useful work? Now consider: in which of these various forms do
most vulnerabilities appear? Also, will the same bug-fighting
techniques succeed in all of these instantiations?
Which are more dangerous: cars without seat belts or Internet-capable
programs with bad security? If the former, for how long will that be
true? Is that within the lifetime of software you are working on, or
will work on some day?
Suppose you were responsible for the security of a web server. Which
would make you feel safer: keeping the server in a room around the
corner from your office or keeping it in another office building
(also owned by your company) halfway around the world? Why? Would it make a
difference if you "knew"—had
physically met—one or more workers in that remote building?
Are the people you know more trustworthy than those you
don't?
Are you and your friends better engineers than we are?
What are you doing to make the software you use more secure?
Can you think of a safe way for software vendors to ensure that their
customers install security patches? Should the process be automated?
Should vendors be launching patch-installation worms that exploit a
vulnerability in order to install a fix for it?
Should software vendors be shielded from product liability?
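To make the first question concrete, here is a small C routine of our
own invention; the function names and buffer size are ours, chosen only
for the sketch. The unsafe version compiles cleanly and behaves
correctly on every friendly input its author is likely to try. The hole
appears only when an attacker supplies a name longer than the buffer.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical greeting routine: looks harmless and works in
     * every casual test. But if `name` comes from an untrusted
     * source and exceeds 63 bytes, strcpy() writes past the end of
     * buf -- a classic stack buffer overflow. */
    void greet(const char *name)
    {
        char buf[64];
        strcpy(buf, name);            /* no bounds check: the hole */
        printf("Hello, %s!\n", buf);
    }

    /* One safer variant: bound the copy and guarantee that the
     * result is NUL-terminated, whatever the input length. */
    void greet_safely(const char *name)
    {
        char buf[64];
        snprintf(buf, sizeof buf, "%s", name);
        printf("Hello, %s!\n", buf);
    }

    int main(void)
    {
        greet("world");               /* safe only because the input is short */
        greet_safely("world");        /* safe for any input length */
        return 0;
    }

Spotting the difference between these two functions side by side is
easy; spotting it in a ten-thousand-line program, before an attacker
does, is the hard part the question is pointing at.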