
6.3 Good Practices Through the Lifecycle

In the following sections, we broadly divide tools and testing methods into three categories: design-time, implementation-time, and operations-time. These broad categories should fit easily into just about any development methodology you decide to use.

6.3.1 At Design Time

Because design flaws in software can be so costly and time-consuming to repair, taking the extra effort to review software designs for security flaws is time well spent. That's the good news. The bad news is that there aren't a lot of tools and methodologies you can use to automate or streamline the process. As a result, most of the recommendations we make in this section are procedural in nature. What's more, the ones that involve automated tools and languages (e.g., formal methods analysis) can take a great deal of extra time, effort, and money. For this reason, we recommend the use of such tools and languages only for the most critical niche of software design projects—for example, those that can impact public safety.

In reviewing software designs for security flaws, your chief weapons are knowledge and experience. The following recommendations are primarily intended to best exploit the aggregated experiences of the design team, their peers, and others in the design review process.

Perform a design review

Regardless of how formal or informal your software development process is, we hope that at a bare minimum, you document your application's design before proceeding to implement it in source code. This is the first point at which you should review the security of the software. We've found that an effective way of doing this is to draw a process flow diagram, or flowchart, of how the software will work. The diagram should depict the application's process flow, major decision points, and so forth. To review the diagram, the reviewer should look at the design from the perspective of how it implements the architectural principles we discussed in Chapter 2.

For example, what privileges are required by the software at each step of the process? How does the design lend itself to modularization of code and compartmentalization of privileges? What communications among processes or computers are needed in implementing the software? What might an attacker do to misuse the design in unintended ways? A thorough review of the design can spot fatal flaws before they get implemented; mistakes such as flawed assumptions of trust should be readily visible in the design.

What might this type of design review have told the TCP designers about their design's ability to withstand an attack like the SYN flood? As we've seen, initiating a TCP session follows a pretty rigid protocol. Do you think that the design of the TCP stack would have been different if the designers had reviewed it under the assumption that an attacker might deliberately try to break their code by not following the protocol? We'll never know, of course, but our point is that this is exactly how you should approach design reviews: think like an attacker.

Conduct desk checks and peer review

Desk checks and peer reviews are simply methods for performing design reviews. That is, have other people read your design documents, looking for problems. The reason for this is simple: it isn't sufficient to review your own design work. It's always best to have someone else conduct an impartial review of the design. Note that we're not referring to a code-level review here, but rather a review of the design of the software.

Understand the psychology of hunting flaws

Going out and searching for problems is not always well received. In fact, you have to genuinely want to find flaws in order to find them. It isn't enough to use an independent team. It's vital to build a culture in which finding and fixing flaws is viewed as a positive thing for your business, because the end result is better-quality software applications. Participants need to be encouraged and nurtured, which is by no means an easy or trivial accomplishment.

Use scorecards

The use of scorecards can make peer reviews more effective. Scorecards are conceptually similar to checklists, which we discuss later in this section; unlike checklists, however, scorecards are used by the reviewer, in essence, to grade the security of an application. Their principal benefits are that they give the reviewer a consistent, and perhaps even quantifiable, list of criteria on which to score an application. Like checklists, they reduce the likelihood of human error through inadvertent omissions of key questions and criteria.

Use formal methods, if appropriate

For applications with a critical need for safety, designs can be described and mathematically verified using a technique known as formal methods. In the formal methods process, the design is described in a high-level mathematical language such as Z (pronounced "zed"). Once you have described the design at this level, you can use software tools similar to compilers to look at the design and analyze it for flaws.

A classic example used in many software engineering academic programs involves analysis of the design of a traffic light system. In the formal description of the design, the designer articulates certain safety scenarios that must never occur, such as giving both roads a green light at a traffic signal. The formal analysis of the design methodically tests every possible combination of logic to verify that such a logical scenario can never exist. This type of formal analysis can eliminate, or at least drastically reduce the potential for, human error in the design by pinpointing logical fallacies and other such problems.
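You don't need a full Z toolchain to see the exhaustive-checking idea at work. The following C sketch is ours, not a formal-methods tool, and the four-phase controller it models is hypothetical; it simply enumerates every state a simple two-road signal controller can be in and verifies that the unsafe condition (both roads green) can never arise:

    /* Exhaustive safety check for a two-road traffic signal: enumerate
       every phase of a (hypothetical) four-phase controller and verify
       that the unsafe state -- both roads green -- never occurs. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef enum { RED, YELLOW, GREEN } Light;

    /* Each road's light is a pure function of the controller phase. */
    static Light road_a(int phase) {
        switch (phase) { case 0: return GREEN; case 1: return YELLOW; default: return RED; }
    }
    static Light road_b(int phase) {
        switch (phase) { case 2: return GREEN; case 3: return YELLOW; default: return RED; }
    }

    int main(void) {
        for (int phase = 0; phase < 4; phase++) {   /* every reachable state */
            if (road_a(phase) == GREEN && road_b(phase) == GREEN) {
                printf("UNSAFE: both roads green in phase %d\n", phase);
                return EXIT_FAILURE;
            }
        }
        printf("Safety property holds in all phases.\n");
        return EXIT_SUCCESS;
    }

A real formal-methods tool does essentially this over a vastly larger state space, and with mathematical rigor rather than brute enumeration.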

Returning once again to our TCP SYN flood problem, we believe that formal methods could have been useful in modeling some of the TCP session negotiation protocol at a design level. As with modeling the finite states of a traffic light design, the TCP team could have used a tool such as Z to model the design and then test it exhaustively for logic loopholes. Because the vast majority of popular applications on the Internet use TCP, formal methods could well have been justified as a means of testing the design had they been available at the time.

Use attack graphs

One method that escaped from academia in the 1990s and has rapidly gained a following is the use of attack graphs, which represent paths through a system that end in a "successful" attack. It's a formalization of the "What bad thing can happen?" reasoning we've discussed throughout the book. Hand-drawn attack trees can be very useful, and, if your circumstances allow, we recommend that you explore the automated generation of attack graphs, derived from a formal security model, as a tool for automated design review.
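To make the idea concrete, here is a minimal attack graph sketched in C; the privilege states and exploit steps are entirely hypothetical. A depth-first search answers the key question: does any path lead from the attacker's initial foothold to the goal?

    /* A toy attack graph: nodes are attacker privilege states, edges are
       exploit steps, and a depth-first search asks whether any path leads
       from the initial foothold to the goal. All states and steps are
       hypothetical. */
    #include <stdio.h>

    #define N 5
    enum { REMOTE = 0, USER_SHELL, DB_ACCESS, ROOT, GOAL };

    static const char *name[N] =
        { "remote", "user-shell", "db-access", "root", "goal" };
    static int edge[N][N];   /* edge[i][j]: an exploit step leads from i to j */
    static int seen[N];

    static int reaches_goal(int node) {
        if (node == GOAL) return 1;
        seen[node] = 1;
        for (int j = 0; j < N; j++)
            if (edge[node][j] && !seen[j] && reaches_goal(j)) {
                /* Edges print goal-end first, as the recursion unwinds. */
                printf("  step: %s -> %s\n", name[node], name[j]);
                return 1;
            }
        return 0;
    }

    int main(void) {
        edge[REMOTE][USER_SHELL] = 1;    /* e.g., overflow in a network daemon */
        edge[USER_SHELL][DB_ACCESS] = 1; /* e.g., credentials readable on disk */
        edge[USER_SHELL][ROOT] = 1;      /* e.g., local privilege escalation */
        edge[DB_ACCESS][GOAL] = 1;       /* e.g., dump of sensitive tables */
        if (reaches_goal(REMOTE))
            printf("An attack path exists; redesign until none does.\n");
        return 0;
    }

Automated attack-graph generators do the same search over models with thousands of states, which is exactly why they are worth exploring when circumstances allow.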

Analyze the deployment environment

We covered operations factors in depth in Chapter 5. It's a good idea to start considering operating environment issues during the design phase of an application. For example, look at which network protocols are currently in use in the intended operational environment to ensure that your application will be able to work within that environment. It's possible that your application design may require the use of additional network protocols, either locally or over longer-haul network connections.

In many large enterprise environments, you will need to go through various administrative application and approval processes to get new network protocols approved in a production data center. It's best to start that process now, making sure that you are including your organization's security personnel in the design process.

Analyze network protocol usage

As with the previous practice, we recommend that you start looking at which network protocols make the most sense for your application during this stage of the development process. In some cases, you will have a good amount of flexibility in deciding which protocols to use for the operation and maintenance of your application; in others, you won't.

For example, if your application makes use of a third-party application platform (such as a database), then you might not have much choice about which protocols the platform uses to speak with other computers on the network. In that case, it's important to consider the security aspects of the network protocols during the design phase. If the provided protocols aren't sufficiently secure by themselves, then you may need to encapsulate them using other means, such as an IPSec-based Virtual Private Network (VPN) solution. The important point is that you be cognizant of the security issues with regard to the network protocols that you're designing into the application system as a whole; then, you can make informed decisions on how to ensure an adequate level of security around them.

Use checklists

As we've pointed out before, aviators have understood the benefits of checklists for decades. In designing software, it's all too easy to forget one (or more) key security issues. And don't restrict yourself to checklists only at design time; use checklists for each phase of your development methodology. In some highly formal development environments, it may even be a good idea to get independent sign-off on each of the checklists. Either way, make sure to develop a series of checklists and modify them over time, as needs develop and new concepts emerge. (For a good example, see the sidebar SAG: Twenty Questions.)

Automate the process

To the extent that it's feasible, it's a very good idea to apply automation to the design process. This may be as simple as providing your software designers with powerful tools for exchanging ideas and experiences, collaborating with others, and so on. We feel that automating, or at least facilitating, knowledge transfer in this way can only improve the design team's overall efficiency and effectiveness.

We are happy to report that the state of the art of secure application design has recently advanced far enough to make this practical. We are aware of two software projects, conducted roughly simultaneously and now in production, that have successfully automated the process of risk assessment and countermeasure selection. We discuss them later in this chapter, in Section 6.4, as the two go hand in hand in the context of designing secure code.

SAG: Twenty Questions

As an example of the kind of checklists we've found useful in the past, we present here one such scheme we've used. Its purpose is to facilitate a quick assessment of the security of an application. The name "SAG" stands for Security At a Glance. (You can also look at it as an instance of the old parlor game, Twenty Questions.)

One of the best uses for such checklists is to assign numeric weights to the questions, so that the person who answers them can calculate a security score. This checklist was in fact originally developed (as a CGI script) for just that purpose; a minimal sketch of the calculation appears after the list. The original weights are in parentheses just before each question. You'll want to adjust the scores for your own purposes. We know they don't add up to 100, but that's another story! It's the relative values of the questions that are important.

  1. (5 points) The consequences of the most severe security breach imaginable, in terms of damage to the corporation or restoration costs, would be less than $N million.

  2. (5 points) The application's end of life (EOL) is scheduled to occur in the next N months.

  3. (4 points) There are fewer than N users of this application system.

  4. (3 points) The percentage of application system users who are employees is greater than N%.

  5. (4 points) More than N% of the application system users have been explicitly told about the sensitivity of the information or the risk presented to the corporation in the event of a security breach.

  6. (3 points) Security training courses that are unique to this application system are available and mandatory for all system users and support personnel.

  7. (3 points) The application system administrators are trained to recognize "social engineering" attacks, and they have effective policies and procedures to defend against such attacks.

  8. (3 points) A current application system security policy exists and has been distributed to system users and support personnel.

  9. (2 points) A plan for detecting attacks or misuse of the application system has been developed; a staff has been assigned to test it; and a team meets periodically to discuss and update it.

  10. (4 points) Procedures, roles, and responsibilities for disaster recovery have been defined; training exists; and testing of recovery plans has occurred.

  11. (4 points) Appropriate configuration management processes are in place to protect the application system source code in development and lifecycle updates.

  12. (0 points) This application system requires a password for users to gain access.

  13. (4 points) All user ID logins are unique (i.e., no group logins exist).

  14. (4 points) This application system uses role-based access control.

  15. (3 points) This application system uses other techniques in addition to Unix system password/application logon for authentication/authorization.

  16. (4 points) With this application system, passwords are never transmitted across the network (WAN) in cleartext.

  17. (3 points) Encryption is used to protect data when it is transferred between servers and clients.

  18. (1 point) Database audits on application system data are performed periodically and frequently.

  19. (3 points) System configuration audits are performed periodically and frequently.

  20. (2 points) Account authorization and privilege assignments are checked at least once a month to ensure that they remain consistent with end-user status.
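As an illustration of the scoring mentioned above, here is a minimal C sketch of the calculation (the original was a CGI script, which we don't reproduce). The weights follow the list; the answers are placeholders to be filled in during a review:

    /* Weighted SAG score: each "yes" answer earns that question's weight. */
    #include <stdio.h>

    #define QUESTIONS 20

    int main(void) {
        /* Weights for questions 1-20, as listed in the sidebar. */
        static const int weight[QUESTIONS] =
            { 5, 5, 4, 3, 4, 3, 3, 3, 2, 4, 4, 0, 4, 4, 3, 4, 3, 1, 3, 2 };
        static const int answer[QUESTIONS] = { 0 };  /* 1 = yes, 0 = no */
        int score = 0, max = 0;

        for (int i = 0; i < QUESTIONS; i++) {
            max += weight[i];
            if (answer[i]) score += weight[i];
        }
        printf("SAG score: %d out of a possible %d\n", score, max);
        return 0;
    }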

6.3.2 At Implementation Time

Testing of source code is intended primarily to detect a finite set of known implementation shortcomings in programs. That's not to say that the tools and methodologies we discuss in this section can't or won't ever detect design flaws. They may well do so. But their main function is to detect implementation flaws. When they do point out design flaws, your development process should be flexible enough to accommodate going back to the design and rectifying the situation there.

The good news (as we've briefly discussed in Chapter 4) is that numerous tools exist for automating—at least to some degree—the testing process. What's more, reviewing software for flaws is a reasonably well-known and well-documented discipline. This is no doubt because software security flaws are arguably easiest to spot in source code.

There are two main ways that you can check a program for coding flaws: statically and dynamically. We discuss dynamic testing tools in Section 6.3.3 (though we recommend you use them throughout development) and focus on statically checking software in the following list. The approaches mentioned here range from relatively simple tools that look for known problems to more formal means of scouring code for flaws.

Use static code checkers

There are a number of different commercial and noncommercial static code checkers; Table 6-1 lists some of those available.

Table 6-1. Static code checkers

RATS
    Scans C, C++, Perl, Python, and PHP source files for common security flaws. Released under the GNU General Public License (GPL).
    www.securesoftware.com/download_rats.htm

Splint
    Secure Programming Lint, from the University of Virginia's Computer Science department. Scans C source code for security vulnerabilities and programming mistakes. Freely available under the GNU General Public License.
    www.splint.org

Uno
    Named for the three common flaws it detects: use of uninitialized variables, nil-pointer references, and out-of-bounds array indexing. Although not specifically designed as a security checker, it can be used to scan C source code for common software defects. Developed by Gerard Holzmann and freely available from Bell Labs.
    spinroot.com/gerard/

What all of these tools have in common is that they parse through and scan software source code and screen it for potential security pitfalls such as buffer overflows.

The process here is conceptually similar to that of an anti-virus scanning product. That is, the code checker looks through the source code for any of a series of known and previously defined problem conditions. As such, you are likely to get some false positives and—more dangerously—false negatives. We recommend that you use these tools on all source code that goes into an application—just be aware that such tools are not panaceas. There is no doubt that many a buffer overflow could have been avoided had more developers been using this kind of screening tool and methodology.
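To make this concrete, here is the canonical sort of flaw these tools flag, alongside the bounded alternative they would steer you toward (the function names are ours):

    #include <stdio.h>
    #include <string.h>

    void risky(const char *input) {
        char buf[64];
        strcpy(buf, input);     /* flagged: unbounded copy into a fixed buffer */
        printf("%s\n", buf);
    }

    void safer(const char *input) {
        char buf[64];
        strncpy(buf, input, sizeof(buf) - 1);  /* bounded copy */
        buf[sizeof(buf) - 1] = '\0';           /* guarantee termination */
        printf("%s\n", buf);
    }

    int main(void) {
        safer("hello, world");  /* risky() is shown only as the bad example */
        return 0;
    }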

Use independent verification and validation methodology, if appropriate

In Chapter 4 we briefly discussed independent verification and validation (IV&V) approaches that you can use during the implementation stage. Although there are very few environments or applications that justify this degree of scrutiny,[1] now is the time to employ it if that's what your application demands.

[1] Although there are not a great many texts on IV&V, Robert O. Lewis' book, Independent Verification and Validation: A Life Cycle Engineering Process for Quality Software, Interscience, 1992, is a good starting point. See also www.comp-soln.com/IVV_whitepaper.pdf.

Use checklists

In the previous section, we discussed design-time checklists and noted that it's important to develop checklists for each phase of development. Checklists for code implementers are important in that process, although they're bound to be relatively difficult to develop. Naturally, implementation checklists are going to be quite language-specific, as well as specific to a particular development environment and culture. A good starting point is to have a checklist of tests that each software module and/or product goes through, and to have the testing team sign off on each test.

6.3.3 At Operations Time

The task of securing software is not done until the system is securely deployed. In addition to the design-time and coding-time approaches we've already discussed, there are many tools and procedures that you can use to test the security of a production environment. Entire books could be (and have been) written on the topic of testing and maintaining the security of production application environments. In this section, we limit ourselves to listing a series of recommended approaches that focus directly on spotting and fixing security flaws in the software.

Use runtime checkers

Earlier in Section 6.3.2, we discussed software tools that can analyze software source code statically. Runtime (dynamic) checkers provide a more empirical approach to the same issue. They typically run at an abstraction layer between the application software and the operating system. They work by intercepting system calls and such, and they screen each call for correctness before passing it to the operating system to be executed.

The classic example of this approach involves intercepting common system calls and screening them for potential buffer overflow situations before allowing the calls to take place. Screening at this point can help prevent such errors from occurring during the execution of your software. Note, though, that there is an overhead burden associated with this approach, and in some cases the performance degradation is unacceptable. In those cases, it may still be acceptable to run this type of checking during preproduction testing of the software. If you do so, it's probably a good idea to leave the hooks for these calls in the software for later debugging and testing purposes. (Depending on your needs, you'll probably want to remove any and all debugging hooks before the code goes into production operation.)
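To illustrate the interception mechanism, here is a minimal sketch (ours, not Libsafe's actual code) of a strcpy() interposer for Linux and similar systems. Built as a shared library and activated with LD_PRELOAD, its strcpy() is found before libc's; a real checker would also inspect the stack frame to bound the copy, where this one merely logs:

    /* shim.c -- build with: cc -shared -fPIC -o shim.so shim.c -ldl
       then run:  LD_PRELOAD=./shim.so ./your_application */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <string.h>

    char *strcpy(char *dest, const char *src) {
        /* Find the real strcpy() in the next library on the search path. */
        char *(*real_strcpy)(char *, const char *) =
            (char *(*)(char *, const char *))dlsym(RTLD_NEXT, "strcpy");
        fprintf(stderr, "intercepted strcpy() of %lu bytes\n",
                (unsigned long)(strlen(src) + 1));
        return real_strcpy(dest, src);
    }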

Available runtime checking tools include those listed in Table 6-2.

Table 6-2. Runtime code checkers

Libsafe
    Attempts to prevent buffer overflows during software execution on many Linux platforms. Freely available from Avaya, in source code and binary executable formats, under the GNU Lesser General Public License.
    www.research.avayalabs.com/project/libsafe

PurifyPlus
    Commercially available runtime checker from IBM's Rational Software. Includes a module that detects software flaws such as memory leaks. Versions are available for Windows, Unix, and Linux environments.
    www.rational.com/products/pqc/index.jsp

Immunix tools
    Three tools from WireX Communications, Inc., shipped as part of their "Immunix" version of Linux, are worth investigating: StackGuard, FormatGuard, and RaceGuard. They provide runtime support for preventing buffer overflows and other common security coding flaws. Much of Immunix (now a commercial product) was developed as a DARPA-funded research project; the tools mentioned here are available as GPL software.
    www.immunix.org

Use profilers

For several years, academic researchers have been conducting some intriguing research on the use of software profilers for security purposes. The concept of a profiler is that the behavior of a program (e.g., its system calls, the files it reads and writes) gets defined and then monitored for anomalies. The definition of an application's standard behavior can be performed either statically, by the developer during development of the software, or empirically, by observing a statistical profile of the application during (presumably) safe operating conditions. In either case, the software is then monitored continuously for any anomalies from its normal mode of operation. These anomalies could indicate malicious use of the application or the presence of a computer virus, worm, or other malicious software attacking the application.

Note that this form of statistical (or static) profiling differs from the similar methods used for intrusion detection per se only insofar as the profiles are maintained for system applications rather than for individual users.
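The heart of the profiling idea fits in a few lines. In this C sketch (ours; the event names are hypothetical), normal behavior is recorded as a set of allowed events, and anything observed outside the profile raises an alarm, much as a tool like Janus does at the system-call layer:

    #include <stdio.h>
    #include <string.h>

    /* The application's recorded "normal" behavior. */
    static const char *profile[] = { "open:/var/app/data", "read", "write", "close" };
    #define NPROF (sizeof(profile) / sizeof(profile[0]))

    static int in_profile(const char *event) {
        for (size_t i = 0; i < NPROF; i++)
            if (strcmp(profile[i], event) == 0)
                return 1;
        return 0;
    }

    int main(void) {
        /* Events observed at runtime; the third is anomalous. */
        const char *observed[] = { "open:/var/app/data", "read", "exec:/bin/sh" };
        for (size_t i = 0; i < sizeof(observed) / sizeof(observed[0]); i++)
            if (!in_profile(observed[i]))
                fprintf(stderr, "ALERT: anomalous event: %s\n", observed[i]);
        return 0;
    }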

Available profiling tools include those listed in Table 6-3.

Table 6-3. Profiling tools

Papillon
    Written specifically for Sun's Solaris Operating Environment (Versions 8 and 9). Attempts to screen and prevent attacks by system users.
    www.roqe.org/papillon/

Janus
    A policy enforcement and general-purpose profiling tool, used for "sandboxing" untrusted applications by restricting the system calls they can make. Currently supports Linux and is freely available. Developed by David Wagner and Tal Garfinkel at the University of California at Berkeley.
    www.cs.berkeley.edu/~daw/janus/

Gprof
    Included as part of the GNU binutils collection of tools. Produces an execution profile of what functions get called, and so on, from C, Pascal, or FORTRAN77 program source code.
    www.gnu.org
    www.gnu.org/manual/gprof-2.9.1/html_mono/gprof.html

Include penetration testing in the QA cycle

Almost all organizations that undertake software development projects use some form of quality assurance (QA) methodology. Doing penetration testing of applications during the QA process can be highly beneficial, especially because QA is normally done before the software is deployed into production. Ideally, the testing should use available tools as well as manual processes to detect potential design and implementation flaws in the software. There are a number of tools, both free and commercial, available for performing various types of network-based vulnerability scans. Tools include those listed in Table 6-4.

Table 6-4. Penetration testing tools

Nmap
    Perhaps the most widely used network port scanner. Written by Fyodor and freely available under the terms of the GNU General Public License.
    www.nmap.org

Nessus
    Performs vulnerability testing; it essentially picks up where Nmap leaves off. Originally developed by Renaud Deraison and kept up to date by Renaud and an ever-growing community of users. Also freely available under the GPL.
    www.nessus.org

ISS Internet Scanner
    No doubt the most popular of many commercial products for doing vulnerability scans at a network level. ISS (the company) also sells a wide range of other security products, including a host-based vulnerability scanner and intrusion detection tools.
    www.iss.net
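At the heart of a port scanner like Nmap is a simple loop: attempt a connection to each port and note which ones answer. Here is a minimal TCP connect-scan sketch in C for Unix-like systems; the loopback target is an assumption, and you should of course scan only hosts you're authorized to test:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        const char *target = "127.0.0.1";           /* assumed: scan localhost */
        for (int port = 1; port <= 1024; port++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                continue;
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons((unsigned short)port);
            inet_pton(AF_INET, target, &addr.sin_addr);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                printf("port %d open\n", port);     /* full connect succeeded */
            close(fd);
        }
        return 0;
    }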

Use black-box testing or fault-injection tools

Most penetration tests analyze the security of a network and/or of an operating system. However, application-specific tests are now becoming available as well. The field of application vulnerability scanners is by no means as mature as that of system scanners, but a few such tools do exist. By their nature, application scanners are more specific to a particular type of application (e.g., web-based, database) than are network-based vulnerability scanners, which are specific to a network protocol suite.

Most present-day application-level testing has been primarily empirical, most often performed by a process referred to as black-box testing. (For an entertaining description of the first systematic black-box testing of Unix utilities, see Barton Miller's article about the Fuzz program, "An Empirical Study of the Reliability of UNIX Utilities.") In this process, the tester (whether human or automated) tries to make the application fail by deliberately providing faulty inputs, parameters, and so on. By doing so, common mistakes such as buffer overflows, cross-site scripting, and SQL injection can be discovered, preferably before the software is actually used in production.

A related technique is fault injection. First used as a precise technique in the mid-1990s, fault injection uses code mutation, real or simulated hardware failures, and other modifications to induce stress in the system and determine its robustness.
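In the spirit of Miller's Fuzz, a black-box test driver can be remarkably simple. The C sketch below hammers an input-handling routine with random byte strings; parse_record() is a hypothetical stand-in for whatever function your application exposes, so replace the stub with the real thing:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Stub standing in for the function under test; link the real one. */
    static int parse_record(const char *buf, size_t len) {
        (void)buf; (void)len;
        return 0;
    }

    int main(void) {
        srand((unsigned)time(NULL));
        char buf[512];
        for (int trial = 0; trial < 100000; trial++) {
            size_t len = (size_t)(rand() % (int)sizeof(buf));
            for (size_t i = 0; i < len; i++)
                buf[i] = (char)(rand() % 256);   /* arbitrary bytes, NULs included */
            parse_record(buf, len);
            /* A crash or hang here -- caught in a debugger or via a core
               dump -- marks a bug worth fixing before an attacker finds it. */
        }
        return 0;
    }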

Some available application scanners are listed in Table 6-5.

Table 6-5. Application scanning tools

Appscan
    Application scanner (for web-based applications) that works by attempting various fault-injection functions. Commercially available from Sanctum.
    www.sanctuminc.com

whisker
    CGI scanner that checks web-based applications for common CGI flaws. Freely available from "Rain Forest Puppy."
    www.securiteam.com/tools/3R5QHQAPPY.html

ISS Database Scanner
    Scans a select group of database server applications (including MS-SQL, Sybase, and Oracle) for common flaws. Commercially available from Internet Security Systems.
    www.iss.net

Use a network protocol analyzer

Software developers have long appreciated the use of dynamic debugging tools for tracing through their software one instruction at a time, in order to find out the cause of a bug. We recommend that you use a network protocol analyzer as a similar debugging—and security verification—tool when testing software.

For example, there is no substitute for actually verifying what takes place during a session key exchange, or for confirming that usernames and passwords are indeed encrypted as they're sent across the network. Run a network protocol analyzer, watch everything that traverses the network, and then validate that the application design and implementation are actually doing what the developer intended. Although it's laborious and not a lot of fun, this methodology should be in every software developer's toolbox.
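Tools such as tcpdump and Ethereal let you do this interactively, but the check can also be scripted. Here is a minimal libpcap sketch that watches live traffic and flags any packet carrying a telltale cleartext marker; the device name and marker string are assumptions, and you'll need to compile with -lpcap and run with capture privileges:

    #include <pcap.h>
    #include <stdio.h>
    #include <string.h>

    static void handler(u_char *user, const struct pcap_pkthdr *h,
                        const u_char *bytes) {
        (void)user;
        /* Naive scan of the raw packet for a cleartext marker. */
        for (unsigned i = 0; i + 8 <= h->caplen; i++)
            if (memcmp(bytes + i, "password", 8) == 0) {
                printf("cleartext marker seen in a %u-byte packet\n", h->caplen);
                return;
            }
    }

    int main(void) {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_open_live("eth0", 1600, 1, 1000, errbuf); /* assumed device */
        if (p == NULL) {
            fprintf(stderr, "pcap: %s\n", errbuf);
            return 1;
        }
        pcap_loop(p, -1, handler, NULL);   /* capture until interrupted */
        pcap_close(p);
        return 0;
    }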

Take advantage of intrusion detection systems

Although intrusion detection system (IDS) tools aren't commonly considered part of secure coding practice, the careful use of such tools can greatly enhance the security of your application. Look for tools that let you customize and create your own signatures for particular attacks. Then deploy the tools so that they're looking for attacks that are highly specific to your application: attempts to reach the crown jewels of your application and its data, for example (e.g., someone trying to modify user access authorization tables without permission to do so). By tailoring IDS tools in this way, you can greatly reduce the rate of false alarms. (Many popular IDS tools have been criticized for producing far too many such alarms.)

Do open source monitoring

Open source monitoring (OSM) is another example of something that's not conventionally considered a function of secure coding, yet it can be highly effective at finding security problems or pitfalls in an application. The process of OSM is to scan publicly accessible information sources—particularly web sites and discussion forums on the Internet—for indications of security exposures. These can fall into several categories, such as:

  • Attackers discussing vulnerabilities in an application (a particular concern if you are using third-party software).

  • Vendor announcements of new product vulnerabilities that can directly or indirectly impact the security of the application.

  • Material (e.g., design notes or vulnerability lists) that could undermine the security of third-party packages (like libraries) that you use in your application.

  • Discussions on forums that indicate active threats targeted at your industry or the technologies that you're deploying for this application.

Use checklists

Just as we've seen in the previous stages of application development, it's useful to assemble and follow a checklist approach for validating security measures at operations time. This is true even (in fact, especially) if a different team of people is responsible for the day-to-day operations of the equipment supporting the application.
