3.6 Summary
As this chapter has shown, there is more to designing a secure
application than merely "being
careful" and "avoiding
mistakes." In fact, we have known many experienced
and capable programmers who first came to believe in the need for
rigorous software engineering techniques when given the
responsibility for maintaining security-sensitive code. It can be a
humbling experience.
We hope that we have impressed on you the need for methodical
security needs assessment as part of the design stage of any
application software project. We also hope that
you'll find useful our pointers to rigorous methods
you can use to select appropriate security technologies and controls.
Most importantly, we hope you will agree that designing errors out at
the start is the best hope for security. Files that are never created
can never be read or changed inappropriately. Treating all users the
same—in fact, paying no attention to user
identification—can (if appropriate) be much safer than relying
on inadequate authentication. A password that is not required, and
never coined, cannot be lent, stolen, or compromised.
We've found that such simplifications can be made
feasible more often than is generally understood. It is always
worthwhile to look for these opportunities.
In the next chapter, we turn from architecture and design to the
struggle for well-executed code. The best designs, of course, can be
subverted or compromised by poor implementation. Perhaps the insights
you've gained here (and the thorough appreciation
for the complexity of secure design) will give you extra impetus to
aspire to zero-defect implementation.
Questions

- What is the difference between design and architecture?
- In our discussion of risk mitigation options, one of the possibilities in the scenario we sketched involved taking your e-commerce server offline to avoid the loss of a day's worth of online purchase records. But that response may be too severe for the particular threat we postulated. Can you think of a similar threat that would justify that preemptive action?
- Why, when we were considering whether your application could withstand a request cascade, did we ask whether you had decided on a stateless design? (Hint: remember the SYN flood attacks we've been discussing? A brief sketch following these questions illustrates the connection.)
- Does the idea of performing a thorough risk assessment of your application seem like too much trouble to you?
- Why do you think explaining your design problems to an empty chair helps you come up with a solution? What can this teach you about how the security design process works (when it works)?
- Why might adopting a security model for your application that is unrelated to the way "users" think of it enhance the security of your application?
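To make the stateless-design question concrete, here is a minimal sketch in Python. Everything in it (the handler names, the session limit, and the token scheme) is an illustrative assumption rather than a detail from the chapter's scenario. A server that must remember something about every client that begins a conversation can have that memory exhausted by a flood of bogus openings, much as half-open connections exhaust a TCP listener during a SYN flood; a stateless design carries the needed context in each request, so there is nothing for the flood to fill up.

    # Sketch only: names, limits, and the token scheme are assumptions,
    # not details from the chapter's scenario.
    import hmac
    import hashlib

    SECRET_KEY = b"server-only-secret"   # assumed server-side key
    MAX_PENDING = 10_000                 # assumed resource limit

    pending_sessions = {}                # per-client state an attacker can flood

    def stateful_handle(client_id, request):
        """Multi-step exchange that stores per-client state on first contact."""
        if client_id not in pending_sessions:
            if len(pending_sessions) >= MAX_PENDING:
                # The table fills with bogus "first contacts" and real
                # clients are refused: the request cascade succeeds.
                raise RuntimeError("session table exhausted")
            pending_sessions[client_id] = {"step": 1}
        pending_sessions[client_id]["last_request"] = request

    def make_token(client_id):
        """Issue a token the client must return with each request."""
        return hmac.new(SECRET_KEY, client_id.encode(), hashlib.sha256).hexdigest()

    def stateless_handle(client_id, request):
        """Validate a request using only what travels with it; no table to fill."""
        expected = make_token(client_id)
        return hmac.compare_digest(request.get("token", ""), expected)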