3.5 Case Studies

This section includes several case studies: a relic from the mid-1980s implementing role-based access control, a couple of classic wrapper programs, a secure mail delivery system, and the 802.11 wireless LAN security design. We've carefully selected these examples—from real experiences—to give you insight into how others have approached difficult design problems.

3.5.1 Case 1: Access Control Executive

The Access Control Executive (ACE) was a software system that Mark codesigned and coengineered in the mid-1980s. It provided key security services to many famous museums in Europe (and many other tightly secured locations). We include it here as an example of a well-thought-out mental model. We called the software the Access Control Executive because it ran as a background process and controlled access to all system resources. The ACE was consulted before any utility was successfully initiated; before any file was (through an application) opened, written, or closed; and before any vault (let's say) was opened. It gave a ruling on whether the action was to be permitted, denied, or modified, and this ruling was communicated back to the caller in real time.[6]
By design, each application program had to ask the ACE for permission to do anything risky. Yet none of them was burdened with specific code to do this. Instead, at initialization each invoked a single routine that "signed on" to the ACE, and at the same time each one modified, on the fly, its copy of a small part of the system library controlling input and output, program invocation, and a few other operations. An alternate scheme we considered was to build the applications themselves with a modified version of the library. This would have obviated the need for any changes to the application source code at all! We rejected the idea because we worried that it would burden future maintainers of the code, forcing them to understand and follow highly specialized build techniques for all the applications.

Another feature that distinguished the ACE was what we called the capability cube. Part of the specification of the system was that the customer had to be able to control all actions on the basis of five parameters of the request:
We decided to treat these decision combinations as a five-dimensional matrix, a kind of cube. Once that security model had been selected, it was easy then to write utilities to build, maintain, and inspect the cube. Because the cube was a sparse matrix, we added utilities to compress and expand it as needed to conserve memory and disk space. We wrote a utility to control the user/role table, too, and also supplied a way to build a map showing the relationship between physical terminal locations.[7]
When the ACE got a request, it simply consulted the cube to see if an action was permitted, and relayed the information as a decision. The ACE also implemented a peculiar twist on the playpen technique we discussed earlier. We tried to model it after the construction of Mayan temples, which remained strong even after multiple earthquakes. As we understood it, the Mayans made their temple walls several meters thick, for strength; but much of the inside was a rubble-filled cavity. As earthquakes shook the walls (as they still do), the rubble jumped and dissipated much of the kinetic energy. Aspiring to do the same, we engineered our system so that it would, under attack, collapse inwardly in stages, appearing at each phase to have been utterly compromised but in reality still mounting defenses against the attacker.[8]
Another notable aspect of the project was the way its design goals were specified. Our contracted-for goal was to protect critical resources for specified periods of time:
This case study teaches several lessons. The following are especially important:
3.5.2 Case 2: AusCERT Overflow Wrapper

The overflow.c program is a wrapper that (as far as we can tell) works. AusCERT, the famous Australian security team, released it in 1997. Here's what the documentation at ftp://ftp.auscert.org.au/pub/auscert/tools/overflow_wrapper says about it:
Here is what the code looks like (with most of the comments/documentation removed):

    static char Version[] = "overflow_wrapper-1.1 V1.1 13-May-1997";

    #include <stdio.h>
    #include <syslog.h>

    /*
     * This wrapper will exit without executing REAL_PROG when
     * given any command line arguments which exceed MAXARGLEN in length.
     */

    main(argc,argv,envp)
    int argc;
    char *argv[];
    char *envp[];
    {
        int i;

        for (i=0; i<argc; i++)
        {
            if (strlen(argv[i]) > MAXARGLEN)
            {
                fprintf(stderr,"You have exceeded the argument \
    length ...Exiting\n");
    #ifdef SYSLOG
                syslog(LOG_DAEMON|LOG_ERR,"%.32s: possible buffer \
    overrun attack by uid %d\n", argv[0], getuid());
    #endif
                exit(1);
            }
        }
        execve(REAL_PROG, argv, envp);
        perror("execve failed");
        exit(1);
    }

Breathtakingly simple, isn't it? It aspires to a significantly more modest functionality than smrsh, as you will soon see. There is no complex character smashing. It seems to us to be foolproof, an excellent example both of sound design and satisfactory implementation. This case study teaches us an important lesson about the value of simplicity in a design.

3.5.3 Case 3: Sendmail Restricted Shell

The smrsh program is a Unix utility written by an expert programmer. It was created as a security retrofit for Sendmail. The idea was to compensate for Sendmail's many security design flaws by restricting the programs to which Sendmail itself can pass control. Here is the official description:
That assessment turned out to be too optimistic. So too were these comments within the program:
Despite these comments—and despite the fact that smrsh was explicitly designed to prevent security compromises resulting from manipulation of the shells that Sendmail invoked—smrsh was found to have two such vulnerabilities. Actually, early versions of the utility did not have the bugs; they were introduced during code maintenance, and discovered in the version of smrsh released with Sendmail 8.12.6. If you are comfortable reading C code, take a look at this extract from the buggy version. (Even those who don't know C may get some helpful general impressions from this code.) Remember: this is a wrapper you are looking at. The goal is to sanitize input and restrict which programs can be run as a consequence of a command-line parameter.

    /*
    **  Disallow special shell syntax. This is overly restrictive,
    **  but it should shut down all attacks.
    **  Be sure to include 8-bit versions, since many shells strip
    **  the address to 7 bits before checking.
    */

    if (strlen(SPECIALS) * 2 >= sizeof specialbuf)
    {
    #ifndef DEBUG
        syslog(LOG_ERR, "too many specials: %.40s", SPECIALS);
    #endif /* ! DEBUG */
        exit(EX_UNAVAILABLE);
    }
    (void) sm_strlcpy(specialbuf, SPECIALS, sizeof specialbuf);
    for (p = specialbuf; *p != '\0'; p++)
        *p |= '\200';
    (void) sm_strlcat(specialbuf, SPECIALS, sizeof specialbuf);

    /*
    **  Do a quick sanity check on command line length.
    */

    if (strlen(par) > (sizeof newcmdbuf - sizeof CMDDIR - 2))
    {
        (void) sm_io_fprintf(smioerr, SM_TIME_DEFAULT,
                             "%s: command too long: %s\n", prg, par);
    #ifndef DEBUG
        syslog(LOG_WARNING, "command too long: %.40s", par);
    #endif /* ! DEBUG */
        exit(EX_UNAVAILABLE);
    }

    q = par;
    newcmdbuf[0] = '\0';
    isexec = false;

    while (*q != '\0')
    {
        /*
        **  Strip off a leading pathname on the command name. For
        **  example, change /usr/ucb/vacation to vacation.
        */

        /* strip leading spaces */
        while (*q != '\0' && isascii(*q) && isspace(*q))
            q++;
        if (*q == '\0')
        {
            if (isexec)
            {
                (void) sm_io_fprintf(smioerr, SM_TIME_DEFAULT,
                                     "%s: missing command to exec\n", prg);
    #ifndef DEBUG
                syslog(LOG_CRIT, "uid %d: missing command to exec",
                       (int) getuid());
    #endif /* ! DEBUG */
                exit(EX_UNAVAILABLE);
            }
            break;
        }

        /* find the end of the command name */
        p = strpbrk(q, " \t");
        if (p == NULL)
            cmd = &q[strlen(q)];
        else
        {
            *p = '\0';
            cmd = p;
        }

        /* search backwards for last / (allow for 0200 bit) */
        while (cmd > q)
        {
            if ((*--cmd & 0177) == '/')
            {
                cmd++;
                break;
            }
        }

        /* cmd now points at final component of path name */

        /* allow a few shell builtins */
        if (strcmp(q, "exec") == 0 && p != NULL)
        {
            addcmd("exec ", false, strlen("exec "));
            /* test _next_ arg */
            q = ++p;
            isexec = true;
            continue;
        }
        else if (strcmp(q, "exit") == 0 || strcmp(q, "echo") == 0)
        {
            addcmd(cmd, false, strlen(cmd));
            /* test following chars */
        }
        else
        {
            char cmdbuf[MAXPATHLEN];

            /*
            **  Check to see if the command name is legal.
            */

            if (sm_strlcpyn(cmdbuf, sizeof cmdbuf, 3, CMDDIR,
                            "/", cmd) >= sizeof cmdbuf)
            {
                /* too long */
                (void) sm_io_fprintf(smioerr, SM_TIME_DEFAULT,
                                     "%s: %s not available for sendmail programs (filename too long)\n",
                                     prg, cmd);
                if (p != NULL)
                    *p = ' ';
    #ifndef DEBUG
                syslog(LOG_CRIT, "uid %d: attempt to use %s (filename too long)",
                       (int) getuid(), cmd);
    #endif /* ! DEBUG */
                exit(EX_UNAVAILABLE);
            }

    #ifdef DEBUG
            (void) sm_io_fprintf(smioout, SM_TIME_DEFAULT,
                                 "Trying %s\n", cmdbuf);
    #endif /* DEBUG */
            if (access(cmdbuf, X_OK) < 0)
            {
                /* oops.... crack attack possiblity */
                (void) sm_io_fprintf(smioerr, SM_TIME_DEFAULT,
                                     "%s: %s not available for sendmail programs\n",
                                     prg, cmd);
                if (p != NULL)
                    *p = ' ';
    #ifndef DEBUG
                syslog(LOG_CRIT, "uid %d: attempt to use %s",
                       (int) getuid(), cmd);
    #endif /* ! DEBUG */
                exit(EX_UNAVAILABLE);
            }

This code excerpt is reminiscent of many programs we've written and others we've never finished writing: the logic became so intricate that we, having studied hundreds of similar efforts, despaired of ever being confident of its correctness and searched out another approach. At any rate, there is a problem. At a time when the code was running on and protecting thousands of sites around the world, the following "security advisory" was issued by a group called "SecuriTeam":
Of course, it's not remarkable that the code has these simple errors. (A patch was developed and released within days by sendmail.org.) And we don't hold it up as an example simply for the jejune thrill of exhibiting a wrapper program that can itself be subverted by command-line manipulation. Rather, we cite this example because we think it makes our case. Without a sound underlying security design—and unless maintenance is consistent with that design and subjected to the same level of quality control—even the most expert and motivated programmer can produce a serious vulnerability. As a matter of fact, we could have shown off a few of our own bugs and made the same point!

3.5.4 Case 4: Postfix Mail Transfer Agent

Wietse Venema at IBM's Thomas J. Watson Research Center set out to write a replacement for the problematic Sendmail Mail Transfer Agent (MTA[9]). In so doing, he created an extraordinary example of the design of a secure application.[10] With permission from Dr. Venema, we quote here (with minor editing for style consistency) his security design discussion, explaining the architectural principles that he followed for the Postfix mailer. (The descriptions of Postfix were excerpted from material at Dr. Venema's web site, http://www.porcupine.org/. Note that we've given this case study more space than most of the others because the lessons it teaches are so significant.)
By definition, mail software processes information from potentially untrusted sources. Therefore, mail software must be written with great care, even when it runs with user privileges and even when it does not talk directly to a network. Postfix is a complex system. The initial release has about 30,000 lines of code (after deleting the comments). With a system that complex, the security of the system should not depend on a single mechanism. If it did, one single error would be sufficient to compromise the entire mail system. Therefore, Postfix uses multiple layers of defense to control the damage from software and other errors. Postfix also uses multiple layers of defense to protect the local system against intruders. Almost every Postfix daemon can run in a chroot jail with fixed low privileges. There is no direct path from the network to the security-sensitive local delivery programs—an intruder has to break through several other programs first. Postfix does not even trust the contents of its own queue files or the contents of its own IPC messages. Postfix filters sender-provided information before exporting it via environment variables. Last but not least, no Postfix program is setuid. [The setuid feature, which is described in more detail in Chapter 4, is a Unix mechanism for running a program with a prespecified user identification. This enables the programmer to ensure that the program runs with known privileges and permissions.] Postfix is based on semiresident, mutually cooperating processes that perform specific tasks for each other, without any particular parent-child relationship. Again, doing work in separate processes gives better insulation than using one big program. In addition, the Postfix approach has the following advantage: a service, such as address rewriting, is available to every Postfix component program, without incurring the cost of process creation just to rewrite one address. 
Postfix is implemented as a resident master server that runs Postfix daemon processes on demand (daemon processes to send or receive network mail messages, daemon processes to deliver mail locally, and so on). These processes are created up to a configurable number, and they are reused a configurable number of times and go away after a configurable amount of idle time. This approach drastically reduces process creation overhead while still providing the good insulation that comes from separate processes. As a result of this architecture, Postfix is easy to strip down to the bare minimum. Subsystems that are turned off cannot be exploited. Firewalls do not need local delivery. On client workstations, one disables both the SMTP listener and local delivery subsystems, or the client mounts the maildrop directory from a file server and runs no resident Postfix processes at all.

Now, let's move on to the next stage of the development lifecycle and discuss the design of Postfix itself.

3.5.4.1 Least privilege

As we described earlier, most Postfix daemon programs can be run at fixed low privilege in a jail environment using the Unix chroot function. This is especially true for the programs that are exposed to the network: the SMTP server and SMTP client. Although chroot, even when combined with low privilege, is no guarantee against system compromise, it does add a considerable hurdle. And every little bit helps.

3.5.4.2 Insulation

Postfix uses separate processes to insulate activities from each other. In particular, there is no direct path from the network to the security-sensitive local delivery programs. First an intruder has to break through multiple programs. Some parts of the Postfix system are multithreaded. However, all programs that interact with the outside world are single-threaded. Separate processes give better insulation than multiple threads within a shared address space.

3.5.4.3 Controlled environment

No Postfix mail delivery program runs under the control of a user process.
Instead, most Postfix programs run under the control of a resident master daemon that runs in a controlled environment, without any parent-child relationship to user processes. This approach eliminates exploits that involve signals, open files, environment variables, and other process attributes that the Unix system passes on from a possibly malicious parent to a child.

3.5.4.4 Use of profiles and privileges

No Postfix program is setuid. In our opinion, introducing the setuid concept was the biggest mistake made in Unix history. The setuid feature (and its weaker cousin, setgid) causes more trouble than it is worth. Each time a new feature is added to the Unix system, setuid creates a security problem: shared libraries, the /proc file system, multilanguage support, to mention just a few examples. So setuid makes it impossible to introduce some of the features that make Unix successors such as plan9 so attractive—for example, per-process filesystem namespaces.

Early in the process of designing Postfix, the maildrop queue directory was world-writable, to enable local processes to submit mail without assistance from a setuid or setgid command or from a mail daemon process. The maildrop directory was not used for mail coming in via the network, and its queue files were not readable for unprivileged users. A writable directory opens up opportunities for annoyance: a local user can make hard links to someone else's maildrop files so they don't go away and/or are delivered multiple times; a local user can fill the maildrop directory with garbage and try to make the mail system crash; and a local user can hard link someone else's files into the maildrop directory and try to have them delivered as mail. However, Postfix queue files have a specific format; less than one in 10^12 non-Postfix files would be recognized as a valid Postfix queue file.
Because of the potential for misbehavior, Postfix has now abandoned the world-writable maildrop directory and uses a small setgid postdrop helper program for mail submission.

3.5.4.5 Trust

As mentioned earlier, Postfix programs do not trust the contents of queue files or of the Postfix internal IPC messages. Queue files have no on-disk record for deliveries to sensitive destinations such as files or commands. Instead, programs, such as the local delivery agent, attempt to make security-sensitive decisions on the basis of first-hand information.

Of course, Postfix programs do not trust data received from the network, either. In particular, Postfix filters sender-provided data before exporting it via environment variables. If there is one lesson that people have learned from web site security disasters it is this one: don't let any data from the network near a shell. Filtering is the best we can do.

3.5.4.6 Large inputs

Postfix provides a number of defenses against large inputs:
3.5.4.7 Other defenses

Other Postfix defenses include:
[That concludes the "guest selection" authored by Dr. Venema. Thanks, Wietse!] This case study teaches several lessons. The following are especially important:
3.5.5 Case 5: TCP Wrappers

TCP Wrappers, also written (and given away) by the ubiquitous Wietse Venema, is a very popular tool for helping secure Unix systems. Although Dr. Venema has chosen to call it a "wrapper," we think it is better characterized as an example of interposition. In any event, the way it works is both elegant and simple. Figure 3-5 shows its operation.

Figure 3-5. TCP Wrappers

Whenever a new network connection is created, Unix's inetd is typically responsible for invoking the appropriate program to handle the interaction. In the old days, inetd would just start up the program—let's say the telnet handler—and pass the connection information to it. But when TCP Wrappers is installed, it gets invoked by inetd instead of the handler. It performs some sanity checking, logging, and so forth, and then—if continuing is consistent with the system's security policy—it passes the connection to the original target program. What does Dr. Venema say about it? His claims are modest:
Take a look at what a recent RAND/DARPA study on critical infrastructure protection issues ("The Day After... In Cyberspace II") says about this approach:
TCP Wrappers is an inspiring example of what is possible with intelligent security retrofitting. It teaches us several important lessons:
3.5.6 Case 6: 802.11 Wireless LAN Security Design Errors

The enormously popular IEEE 802.11 suite of protocols provides a standard for wireless local area networking over radio frequencies. One of the early security requirements for 802.11 was to provide security "equivalent" to that of a wired (i.e., private) local area network, so that only authorized devices and users could send or receive the data packets. To provide this security, the standard defined the Wired Equivalent Privacy (WEP) encryption and authentication mechanisms. Although the goals were admirable, WEP unfortunately turned out to be a perfect example of how not to design security, and we think the mistakes made in WEP provide valuable lessons to be learned.[11]
First, WEP was specified to be optional, and it was therefore allowable for manufacturers to ship access points with WEP turned off by default. Unfortunately, the vast majority of users simply never turn the WEP option on, perhaps due to laziness or fear of the unknown. Studies have shown that between one-third and two-thirds of all installed and active access points do not have WEP turned on, allowing attackers direct access to wireless networks.

Second, despite the efforts of the design committee, WEP has no fewer than four significant cryptographic design errors. To understand the errors, let's look at the basic design of WEP. An unencrypted 802.11 packet has two parts, a header and a body with data:

    [header] [data body]

WEP adds a 24-bit initialization vector (IV) in plaintext, encrypts the body, and appends a 32-bit CRC integrity check (also encrypted) as follows:

    [header] [24-bit IV] [encrypted body] [encrypted 32-bit CRC]

The encryption is done by taking up to 104 bits of a shared secret key and prepending the 24-bit plaintext IV, to form an encryption key of up to 128 bits. This key is used with the RC4 encryption algorithm to create a stream of bytes that are exclusive-ORed into the stream of body and check bytes. Authentication of the client station to the access point (AP) is done with a challenge-response protocol: the access point picks a random 128-bit challenge and sends it to the client station. The station then has to WEP-encrypt the challenge packet and send the encrypted version back to the access point. This overall design of WEP has the following major cryptographic errors:
This case study teaches several lessons. The following are especially important: