Most computers spend a lot of time doing nothing. If you start a system monitor tool and watch the CPU utilization, you'll see what I mean -- it's rare to see one hit 100%, even when you are running multiple programs.[1] There are just too many delays built in to software: disk accesses, network traffic, database queries, waiting for users to click a button, and so on. In fact, the majority of a modern CPU's capacity is often spent in an idle state; faster chips help speed up performance demand peaks, but much of their power can go largely unused.
Early on in computing, programmers realized that they could tap into such unused processing power, by running more than one program at the same time. By dividing up the CPU's attention among a set of tasks, its capacity need not go to waste while any given task is waiting for an external event to occur. The technique is usually called parallel processing, because tasks seem to be performed at once, overlapping and parallel in time. It's at the heart of modern operating systems, and gave rise to the notion of multiple active-window computer interfaces we've all grown to take for granted. Even within a single program, dividing processing up into tasks that run in parallel can make the overall system faster, at least as measured by the clock on your wall.
Just as importantly, modern software systems are expected to be responsive to users, regardless of the amount of work they must perform behind the scenes. It's usually unacceptable for a program to stall while busy carrying out a request. Consider an email-browser user interface, for example; when asked to fetch email from a server, the program must download text from a server over a network. If you have enough email and a slow enough Internet link, that step alone can take minutes to finish. But while the download task proceeds, the program as a whole shouldn't stall -- it still must respond to screen redraws, mouse clicks, etc.
Parallel processing comes to the rescue here too. By performing such long-running tasks in parallel with the rest of the program, the system at large can remain responsive no matter how busy some of its parts may be.
There are two built-in ways to get tasks running at the same time in Python -- process forks, and spawned threads. Functionally, both rely on underlying operating system services to run bits of Python code in parallel. Procedurally, they are very different in terms of interface, portability, and communication. At this writing, process forks don't work on Windows (more on this in a later note), but Python's thread support works on all major platforms. Moreover, there are additional Windows-specific ways to launch programs that are similar to forks.
In this chapter, which is a continuation of our look at system interfaces available to Python programmers, we explore Python's built-in tools for starting programs in parallel, as well as communicating with those programs. In some sense, we've already started doing so -- the os.system and os.popen calls introduced and applied in the prior chapter are a fairly portable way to spawn and speak with command-line programs too. Here, our emphasis is on introducing more direct techniques -- forks, threads, pipes, signals, and Windows-specific launcher tools. In the next chapter (and the remainder of this book), we use these techniques in more realistic programs, so be sure you understand the basics here before flipping ahead.
Forked processes are the traditional way to structure parallel tasks, and are a fundamental part of the Unix tool set. Forking is based on the notion of copying programs: when a program calls the fork routine, the operating system makes a new copy of that program in memory, and starts running that copy in parallel with the original. Some systems don't really copy the original program (it's an expensive operation), but the new copy works as if it were a literal copy.
After a fork operation, the original copy of the program is called the parent process, and the copy created by os.fork is called the child process. In general, parents can make any number of children, and children can create child processes of their own -- all forked processes run independently and in parallel under the operating system's control. It is probably simpler in practice than theory, though; the Python script in Example 3-1 forks new child processes until you type a "q" at the console.
# forks child processes until you type 'q'
import os

def child():
    print 'Hello from child', os.getpid()
    os._exit(0)                            # else goes back to parent loop

def parent():
    while 1:
        newpid = os.fork()
        if newpid == 0:
            child()
        else:
            print 'Hello from parent', os.getpid(), newpid
        if raw_input() == 'q': break

parent()
Python's process forking tools, available in the os module, are simply thin wrappers over standard forking calls in the C library. To start a new, parallel process, call the os.fork built-in function. Because this function generates a copy of the calling program, it returns a different value in each copy: zero in the child process, and the process ID of the new child in the parent. Programs generally test this result to begin different processing in the child only; this script, for instance, runs the child function in child processes only.[2]
Unfortunately, this won't work on Windows today; fork is at odds with the Windows model, and a port of this call is still in the works. But because forking is ingrained into the Unix programming model, this script works well on Unix and Linux:
[mark@toy]$ python fork1.py
Hello from parent 671 672
Hello from child 672
Hello from parent 671 673
Hello from child 673
Hello from parent 671 674
Hello from child 674
q
These messages represent three forked child processes; the unique identifiers of all the processes involved are fetched and displayed with the os.getpid call. A subtle point: the child process function is also careful to exit explicitly with an os._exit call. We'll discuss this call in more detail later in this chapter, but if it were not made, the child process would live on after the child function returns (remember, it's just a copy of the original process). The net effect is that the child would go back to the loop in parent and start forking children of its own (i.e., the parent would have grandchildren). If you delete the exit call and rerun, you'll likely have to type more than one "q" to stop, because multiple processes are running in the parent function.
In Example 3-1, each process exits very soon after it starts, so there's little overlap in time. Let's do something slightly more sophisticated to better illustrate multiple forked processes running in parallel. Example 3-2 starts up 10 copies of itself, each copy counting up to 10 with a one-second delay between iterations. The time.sleep built-in call simply pauses the calling process for a number of seconds (pass a floating-point value to pause for fractions of seconds).
############################################################
# fork basics: start 10 copies of this program running in
# parallel with the original; each copy counts up to 10
# on the same stdout stream--forks copy process memory,
# including file descriptors; fork doesn't currently work
# on Windows: use os.spawnv to start programs on Windows
# instead; spawnv is roughly like a fork+exec combination;
############################################################

import os, time

def counter(count):
    for i in range(count):
        time.sleep(1)
        print '[%s] => %s' % (os.getpid(), i)

for i in range(10):
    pid = os.fork()
    if pid != 0:
        print 'Process %d spawned' % pid
    else:
        counter(10)
        os._exit(0)

print 'Main process exiting.'
When run, this script starts 10 processes immediately and exits. All 10 forked processes check in with their first count display one second later, and every second thereafter. Child processes continue to run, even if the parent process that created them terminates:
[mark@toy]$ python fork-count.py
Process 846 spawned
Process 847 spawned
Process 848 spawned
Process 849 spawned
Process 850 spawned
Process 851 spawned
Process 852 spawned
Process 853 spawned
Process 854 spawned
Process 855 spawned
Main process exiting.
[mark@toy]$ [846] => 0
[847] => 0
[848] => 0
[849] => 0
[850] => 0
[851] => 0
[852] => 0
[853] => 0
[854] => 0
[855] => 0
[847] => 1
[846] => 1
...more output deleted...
The output of all these processes shows up on the same screen, because they all share the standard output stream. Technically, a forked process gets a copy of the original process's global memory, including open file descriptors. Because of that, global objects like files start out with the same values in a child process. But it's important to remember that global memory is copied, not shared -- if a child process changes a global object, it changes its own copy only. (As we'll see, this works differently in threads, the topic of the next section.)
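Here is a minimal sketch of that distinction on Unix (the variable name is made up for illustration); the child rebinds the global, but the parent's copy is untouched:

import os

value = 'old'                       # a global in the parent's memory

pid = os.fork()
if pid == 0:                        # in the child copy
    value = 'new'                   # rebinds the child's copy only
    print 'Child sees:', value      # prints: new
    os._exit(0)
else:
    os.wait()                       # let the child finish first
    print 'Parent sees:', value     # still prints: old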
In Examples 3-1 and 3-2, child processes simply ran a function within the Python program and exited. On Unix-like platforms, forks are often the basis of starting independently running programs that are completely different from the program that performed the fork call. For instance, Example 3-3 forks new processes until we type "q" again, but child processes run a brand new program instead of calling a function in the same file.
# starts programs until you type 'q'
import os

parm = 0
while 1:
    parm = parm+1
    pid = os.fork()
    if pid == 0:                                             # copy process
        os.execlp('python', 'python', 'child.py', str(parm)) # overlay program
        assert 0, 'error starting program'                   # shouldn't return
    else:
        print 'Child is', pid
        if raw_input() == 'q': break
If you've done much Unix development, the fork/exec combination will probably look familiar. The main thing to notice is the os.execlp call in this code. In a nutshell, this call overlays (i.e., replaces) the program running in the current process with another program. Because of that, the combination of os.fork and os.execlp means start a new process, and run a new program in that process -- in other words, launch a new program in parallel with the original program.
The arguments to os.execlp specify the program to be run by giving command-line arguments used to start the program (i.e., what Python scripts know as sys.argv). If successful, the new program begins running and the call to os.execlp itself never returns (since the original program has been replaced, there's really nothing to return to). If the call does return, an error has occurred, so we code an assert after it that will always raise an exception if reached.
There are a handful of os.exec variants in the Python standard library; some allow us to configure environment variables for the new program, pass command-line arguments in different forms, and so on. All are available on both Unix and Windows, and replace the calling program (i.e., the Python interpreter). exec comes in eight flavors, which can be a bit confusing unless you generalize (see the call sketches after this list):
The basic "v" exec form is passed an executable program's name, along with a list or tuple of command-line argument strings used to run the executable (that is, the words you would normally type in a shell to start a program).
The basic "l" exec form is passed an executable's name, followed by one or more command-line arguments passed as individual function arguments. This is the same as os.execv(program, (cmdarg1, cmdarg2,...)).
Adding a "p" to the execv and execl names means that Python will locate the executable's directory using your system search-path setting (i.e., PATH).
Adding an "e" to the execv and execl names means an extra, last argument is a dictionary containing shell environment variables to send to the program.
Adding both "p" and "e" to the basic exec names means to use the search-path, and accept a shell environment settings dictionary.
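To make the pattern concrete, here is a sketch of the call forms side by side (the interpreter path and environment dictionary are assumptions for illustration; each exec call replaces the calling process, so only one of these would ever really run):

import os

args = ('python', 'child.py', '1')      # argv-style words, program name first
env  = {'MODE': 'test'}                 # hypothetical environment settings

os.execv('/usr/bin/python', args)                             # "v": path, args tuple
os.execl('/usr/bin/python', 'python', 'child.py', '1')        # "l": individual args
os.execvp('python', args)                                     # "p": find program on PATH
os.execle('/usr/bin/python', 'python', 'child.py', '1', env)  # "e": env dict as last arg
os.execvpe('python', args, env)                               # "pe": PATH search + env dict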
So, when the script in Example 3-3 calls os.execlp, individually passed parameters specify a command line for the program to be run, and the word "python" maps to an executable file according to the underlying system search-path setting ($PATH). It's as if we were running a command of the form python child.py 1 in a shell, but with a different command-line argument on the end each time.
Just as when typed at a shell, the string of arguments passed to os.execlp by the fork-exec script in Example 3-3 starts another Python program file, shown in Example 3-4.
import os, sys
print 'Hello from child', os.getpid(), sys.argv[1]
Here is this code in action on Linux. It doesn't look much different from the original fork1.py, but it's really running a new program in each forked process. Observant readers may notice that the child process ID displayed is the same in the parent program and the launched child.py program -- os.execlp simply overlays a program in the same process:
[mark@toy]$ python fork-exec.py
Child is 1094
Hello from child 1094 1
Child is 1095
Hello from child 1095 2
Child is 1096
Hello from child 1096 3
q
There are other ways to start up programs in Python, including the os.system and os.popen we met in Chapter 2 (to start shell command lines), and the os.spawnv call we'll meet later in this chapter (to start independent programs on Windows); we further explore such process-related topics in more detail later in this chapter. We'll also discuss additional process topics in later chapters of this book. For instance, forks are revisited in Chapter 10, to deal with "zombies" -- dead processes lurking in system tables after their demise.
Threads are another way to start activities running at the same time. They sometimes are called "lightweight processes," and they are run in parallel like forked processes, but all run within the same single process. For applications that can benefit from parallel processing, threads offer big advantages for programmers:
Because all threads run within the same process, they don't generally incur a big startup cost to copy the process itself. The costs of both copying forked processes and running threads can vary per platform, but threads are usually considered less expensive in terms of performance overhead.
Threads can be noticeably simpler to program too, especially when some of the more complex aspects of processes enter the picture (e.g., process exits, communication schemes, and "zombie" processes covered in Chapter 10).
Also because threads run in a single process, every thread shares the same global memory space of the process. This provides a natural and easy way for threads to communicate -- by fetching and setting data in global memory. To the Python programmer, this means that global (module-level) variables and interpreter components are shared among all threads in a program: if one thread assigns a global variable, its new value will be seen by other threads. Some care must be taken to control access to shared global objects, but they are still generally simpler to use than the sorts of process communication tools necessary for forked processes, which we'll meet later in this chapter (e.g., pipes, streams, signals, etc.).
Perhaps most importantly, threads are more portable than forked processes. At this writing, os.fork is not supported on Windows at all, but threads are. If you want to run parallel tasks portably in a Python script today, threads are likely your best bet. Python's thread tools automatically account for any platform-specific thread differences, and provide a consistent interface across all operating systems.
Using threads is surprisingly easy in Python. In fact, when a program is started it is already running a thread -- usually called the "main thread" of the process. To start new, independent threads of execution within a process, we either use the Python thread module to run a function call in a spawned thread, or the Python threading module to manage threads with high-level objects. Both modules also provide tools for synchronizing access to shared objects with locks.
Since the basic thread module is a bit simpler than the more advanced threading module covered later in this section, let's look at some of its interfaces first. This module provides a portable interface to whatever threading system is available in your platform: its interfaces work the same on Windows, Solaris, SGI, and any system with an installed "pthreads" POSIX threads implementation (including Linux). Python scripts that use the Python thread module work on all of these platforms without changing their source code.
Let's start off by experimenting with a script that demonstrates the main thread interfaces. The script in Example 3-5 spawns threads until you reply with a "q" at the console; it's similar in spirit to (and a bit simpler than) the script in Example 3-1, but goes parallel with threads, not forks.
# spawn threads until you type 'q'
import thread

def child(tid):
    print 'Hello from thread', tid

def parent():
    i = 0
    while 1:
        i = i+1
        thread.start_new(child, (i,))
        if raw_input() == 'q': break

parent()
There are really only two thread-specific lines in this script: the import of the thread module, and the thread creation call. To start a thread, we simply call the thread.start_new function, no matter what platform we're programming on.[3] This call takes a function object and an arguments tuple, and starts a new thread to execute a call to the passed function with the passed arguments. It's almost like the built-in apply function (and like apply, it also accepts an optional keyword-arguments dictionary), but in this case, the function call begins running in parallel with the rest of the program.
Operationally speaking, the thread.start_new call itself returns immediately with no useful value, and the thread it spawns silently exits when the function being run returns (the return value of the threaded function call is simply ignored). Moreover, if a function run in a thread raises an uncaught exception, a stack trace is printed and the thread exits, but the rest of the program continues.
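Here is a minimal sketch of that last point; the failing function is made up for illustration, and only the spawned thread dies when it raises:

import thread, time

def fails():
    raise ValueError('bad thread')     # uncaught in the thread: a traceback
                                       # is printed, but only this thread dies

thread.start_new(fails, ())
time.sleep(1)                          # give the spawned thread a chance to run
print 'Main thread carries on.'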
In practice, though, it's almost trivial to use threads in a Python script. Let's run this program to launch a few threads; it can be run on both Linux and Windows this time, because threads are more portable than process forks:
C:\...\PP2E\System\Threads>python thread1.py
Hello from thread 1
Hello from thread 2
Hello from thread 3
Hello from thread 4
q
Each message here is printed from a new thread, which exits almost as soon as it is started. To really understand the power of threads running in parallel, we have to do something more long-lived in our threads. The good news is that threads are both easy and fun to play with in Python. Let's mutate the fork-count program of the prior section to use threads. The script in Example 3-6 starts 10 copies of its counter running in parallel threads.
##################################################
# thread basics: start 10 copies of a function
# running in parallel; uses time.sleep so that
# main thread doesn't die too early--this kills
# all other threads on both Windows and Linux;
# stdout shared: thread outputs may be intermixed
##################################################

import thread, time

def counter(myId, count):                  # this function runs in threads
    for i in range(count):
        #time.sleep(1)
        print '[%s] => %s' % (myId, i)

for i in range(10):                        # spawn 10 threads
    thread.start_new(counter, (i, 3))      # each thread loops 3 times

time.sleep(4)
print 'Main thread exiting.'               # don't exit too early
Each parallel copy of the counter function simply counts from zero up to two here. When run on Windows, all 10 threads run at the same time, so their output is intermixed on the standard output stream:
C:\...\PP2E\System\Threads>python thread-count.py
...some lines deleted...
[5] => 0
[6] => 0
[7] => 0
[8] => 0
[9] => 0
[3] => 1
[4] => 1
[1] => 0
[5] => 1
[6] => 1
[7] => 1
[8] => 1
[9] => 1
[3] => 2
[4] => 2
[1] => 1
[5] => 2
[6] => 2
[7] => 2
[8] => 2
[9] => 2
[1] => 2
Main thread exiting.
In fact, these threads' output is mixed arbitrarily, at least on Windows -- it may even be in a different order each time you run this script. Because all 10 threads run as independent entities, the exact ordering of their overlap in time depends on nearly random system state at large at the time they are run.
If you care to make this output a bit more coherent, uncomment (that is, remove the # before) the time.sleep(1) call in the counter function and rerun the script. If you do, each of the 10 threads now pauses for one second before printing its current count value. Because of the pause, all threads check in at the same time with the same count; you'll actually have a one-second delay before each batch of 10 output lines appears:
C:\...\PP2E\System\Threads>python thread-count.py
...some lines deleted...
[7] => 0
[6] => 0
 pause...
[0] => 1
[1] => 1
[2] => 1
[3] => 1
[5] => 1
[7] => 1
[8] => 1
[9] => 1
[4] => 1
[6] => 1
 pause...
[0] => 2
[1] => 2
[2] => 2
[3] => 2
[5] => 2
[9] => 2
[7] => 2
[6] => 2
[8] => 2
[4] => 2
Main thread exiting.
Even with the sleep synchronization active, though, there's no telling in what order the threads will print their current count. It's random on purpose -- the whole point of starting threads is to get work done independently, in parallel.
Notice that this script sleeps for four seconds at the end. It turns out that, at least on my Windows and Linux installs, the main thread cannot exit while any spawned threads are running; if it does, all spawned threads are immediately terminated. Without the sleep here, the spawned threads would die almost immediately after they are started. This may seem ad hoc, but isn't required on all platforms, and programs are usually structured such that the main thread naturally lives as long as the threads it starts. For instance, a user interface may start an FTP download running in a thread, but the download lives a much shorter life than the user interface itself. Later in this section, we'll see different ways to avoid this sleep with global flags, and will also meet a "join" utility in a different module that lets us wait for spawned threads to finish explicitly.
One of the nice things about threads is that they automatically come with a cross-task communications mechanism: shared global memory. For instance, because every thread runs in the same process, if one Python thread changes a global variable, the change can be seen by every other thread in the process, main or child. This serves as a simple way for a program's threads to pass information back and forth to each other -- exit flags, result objects, event indicators, and so on.
The downside to this scheme is that our threads must sometimes be careful to avoid changing global objects at the same time -- if two threads change an object at once, it's not impossible that one of the two changes will be lost (or worse, will corrupt the state of the shared object completely). The extent to which this becomes an issue varies per application, and is sometimes a nonissue altogether.
But even things that aren't obviously at risk may be at risk. Files and streams, for example, are shared by all threads in a program; if multiple threads write to one stream at the same time, the stream might wind up with interleaved, garbled data. Here's an example: if you edit Example 3-6, comment-out the sleep call in counter, and increase the per-thread count parameter from 3 to 100, you might occasionally see the same strange results on Windows that I did:
C:\...\PP2E\System\Threads>python thread-count.py | more
...more deleted...
[5] => 14
[7] => 14
[9] => 14
[3] => 15
[5] => 15
[7] => 15
[9] => 15
[3] => 16
[5] => 16
[7] => 16
[9] => 16
[3] => 17
[5] => 17
[7] => 17
[9] => 17
...more deleted...
Because all 10 threads are trying to write to stdout at the same time, once in a while the output of more than one thread winds up on the same line. Such an oddity in this script isn't exactly going to crash the Mars Lander, but it's indicative of the sorts of clashes in time that can occur when our programs go parallel. To be robust, thread programs need to control access to shared global items like this, such that only one thread uses them at a time.[4]
Luckily, Python's thread module comes with its own easy-to-use tools for synchronizing access to shared objects among threads. These tools are based on the concept of a lock -- to change a shared object, threads acquire a lock, make their changes, and then release the lock for other threads to grab. Lock objects are allocated and processed with simple and portable calls in the thread module, and are automatically mapped to thread locking mechanisms on the underlying platform.
For instance, in Example 3-7, a lock object created by thread.allocate_lock is acquired and released by each thread around the print statement that writes to the shared standard output stream.
##################################################
# synchronize access to stdout: because it is
# shared global, thread outputs may be intermixed
##################################################

import thread, time

def counter(myId, count):
    for i in range(count):
        mutex.acquire()
        #time.sleep(1)
        print '[%s] => %s' % (myId, i)
        mutex.release()

mutex = thread.allocate_lock()

for i in range(10):
    thread.start_new_thread(counter, (i, 3))

time.sleep(6)
print 'Main thread exiting.'
Python guarantees that only one thread can acquire a lock at any given time; all other threads that request the lock are blocked until a release call makes it available for acquisition. The net effect of the additional lock calls in this script is that no two threads will ever execute a print statement at the same point in time -- the lock ensures mutually exclusive access to the stdout stream. Hence, the output of this script is the same as the original thread-count.py, except that standard output text is never munged by overlapping prints.
The Global Interpreter Lock and Threads

Strictly speaking, Python currently uses a global interpreter lock mechanism, which guarantees that at most one thread is running code within the Python interpreter at any given point in time. In addition, to make sure that each thread gets a chance to run, the interpreter automatically switches its attention between threads at regular intervals (by releasing and acquiring the lock after a number of bytecode instructions), as well as at the start of long-running operations (e.g., on file input/output requests).

This scheme avoids problems that could arise if multiple threads were to update Python system data at the same time. For instance, if two threads were allowed to simultaneously change an object's reference count, the result may be unpredictable. This scheme can also have subtle consequences. In this chapter's threading examples, for instance, the stdout stream is likely corrupted only because each thread's call to write text is a long-running operation that triggers a thread switch within the interpreter. Other threads are then allowed to run and make write requests while a prior write is in progress.

Moreover, even though the global interpreter lock prevents more than one Python thread from running at the same time, it is not enough to ensure thread safety in general, and does not address higher-level synchronization issues at all. For example, if more than one thread might attempt to update the same variable at the same time, they should generally be given exclusive access to the object with locks. Otherwise, it's not impossible that thread switches will occur in the middle of an update statement's bytecode. Consider this code:

import thread, time
count = 0

def adder():
    global count
    count = count + 1     # concurrently update a shared global
    count = count + 1     # thread swapped out in the middle of this

for i in range(100):
    thread.start_new(adder, ())    # start 100 update threads
time.sleep(5)
print count

As is, this code fails on Windows due to the way its threads are interleaved (you get a different result each time, not 200), but works if lock acquire/release calls are inserted around the addition statements. Locks are not strictly required for all shared object access, especially if a single thread updates an object inspected by other threads. As a rule of thumb, though, you should generally use locks to synchronize threads whenever update rendezvous are possible, rather than relying on the current thread implementation.

Interestingly, the above code also works if the thread-switch check interval is made high enough to allow each thread to finish without being swapped out. The sys.setcheckinterval(N) call sets the frequency with which the interpreter checks for things like thread switches and signal handlers. This interval defaults to 10, the number of bytecode instructions before a switch; it does not need to be reset for most programs, but can be used to tune thread performance. Setting higher values means that switches happen less often: threads incur less overhead, but are less responsive to events.

If you plan on mixing Python with C, also see the thread interfaces described in the Python/C API standard manual. In threaded programs, C extensions must release and reacquire the global interpreter lock around long-running operations, to let other Python threads run.
Incidentally, uncommenting the time.sleep call in this version's counter function makes each output line show up one second apart. Because the sleep occurs while a thread holds the lock, all other threads are blocked while the lock holder sleeps. One thread grabs the lock, sleeps one second and prints; another thread grabs, sleeps, and prints, and so on. Given 10 threads counting up to 3, the program as a whole takes 30 seconds (10 x 3) to finish, with one line appearing per second. Of course, that assumes that the main thread sleeps at least that long too; to see how to remove this assumption, we need to move on to the next section.
Thread module locks are surprisingly useful. They can form the basis of higher-level synchronization paradigms (e.g., semaphores), and can be used as general thread communication devices.[5] For example, Example 3-8 uses a global list of locks to know when all child threads have finished.
##################################################
# uses mutexes to know when threads are done
# in parent/main thread, instead of time.sleep;
# lock stdout to avoid multiple prints on 1 line;
##################################################

import thread

def counter(myId, count):
    for i in range(count):
        stdoutmutex.acquire()
        print '[%s] => %s' % (myId, i)
        stdoutmutex.release()
    exitmutexes[myId].acquire()    # signal main thread

stdoutmutex = thread.allocate_lock()
exitmutexes = []

for i in range(10):
    exitmutexes.append(thread.allocate_lock())
    thread.start_new(counter, (i, 100))

for mutex in exitmutexes:
    while not mutex.locked(): pass
print 'Main thread exiting.'
A lock's locked method can be used to check its state. To make this work, the main thread makes one lock per child, and tacks them onto a global exitmutexes list (remember, the threaded function shares global scope with the main thread). On exit, each thread acquires its lock on the list, and the main thread simply watches for all locks to be acquired. This is much more accurate than naively sleeping while child threads run, in hopes that all will have exited after the sleep.
But wait -- it gets even simpler: since threads share global memory anyhow, we can achieve the same effect with a simple global list of integers, not locks. In Example 3-9, the module's namespace (scope) is shared by top-level code and the threaded function as before -- name exitmutexes refers to the same list object in the main thread and all threads it spawns. Because of that, changes made in a thread are still noticed in the main thread without resorting to extra locks.
####################################################
# uses simple shared global data (not mutexes) to
# know when threads are done in parent/main thread;
####################################################

import thread

stdoutmutex = thread.allocate_lock()
exitmutexes = [0] * 10

def counter(myId, count):
    for i in range(count):
        stdoutmutex.acquire()
        print '[%s] => %s' % (myId, i)
        stdoutmutex.release()
    exitmutexes[myId] = 1    # signal main thread

for i in range(10):
    thread.start_new(counter, (i, 100))

while 0 in exitmutexes: pass
print 'Main thread exiting.'
The main threads of both of the last two scripts fall into busy-wait loops at the end, which might become significant performance drains in tight applications. If so, simply add a time.sleep call in the wait loops to insert a pause between end tests and free up the CPU for other tasks. Even threads must be good citizens.
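For instance, the wait loop at the end of Example 3-9 might be recast like this (the quarter-second pause is an arbitrary choice):

import time                    # added to Example 3-9's imports

while 0 in exitmutexes:        # still a poll loop, but now yields the CPU
    time.sleep(0.25)           # pause briefly between end tests
print 'Main thread exiting.'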
Both of the last two counting thread scripts produce roughly the same output as the original thread-count.py -- albeit without stdout corruption, and with different random ordering of output lines. The main difference is that the main thread exits immediately after (and no sooner than!) the spawned child threads:
C:\...\PP2E\System\Threads>python thread-count-wait2.py
...more deleted...
[2] => 98
[6] => 97
[0] => 99
[7] => 97
[3] => 98
[8] => 97
[9] => 97
[1] => 99
[4] => 98
[5] => 98
[2] => 99
[6] => 98
[7] => 98
[3] => 99
[8] => 98
[9] => 98
[4] => 99
[5] => 99
[6] => 99
[7] => 99
[8] => 99
[9] => 99
Main thread exiting.
Of course, threads are for much more than counting. We'll put shared global data like this to more practical use in a later chapter, to serve as completion signals from child processing threads transferring data over a network, to a main thread controlling a Tkinter GUI user interface display (see Section 11.4 in Chapter 11).
The standard Python library comes with two thread modules -- thread , the basic lower-level interface illustrated thus far, and threading, a higher-level interface based on objects. The threading module internally uses the thread module to implement objects that represent threads and common synchronization tools. It is loosely based on a subset of the Java language's threading model, but differs in ways that only Java programmers would notice.[6] Example 3-10 morphs our counting threads example one last time to demonstrate this new module's interfaces.
#######################################################
# uses higher-level Java-like threading module object
# join method (not mutexes or shared global vars) to
# know when threads are done in parent/main thread;
# see library manual for more details on threading;
#######################################################

import threading

class mythread(threading.Thread):          # subclass Thread object
    def __init__(self, myId, count):
        self.myId  = myId
        self.count = count
        threading.Thread.__init__(self)
    def run(self):                         # run provides thread logic
        for i in range(self.count):        # still synch stdout access
            stdoutmutex.acquire()
            print '[%s] => %s' % (self.myId, i)
            stdoutmutex.release()

stdoutmutex = threading.Lock()             # same as thread.allocate_lock()
threads = []
for i in range(10):
    thread = mythread(i, 100)              # make/start 10 threads
    thread.start()                         # start run method in a thread
    threads.append(thread)

for thread in threads:
    thread.join()                          # wait for thread exits
print 'Main thread exiting.'
The output of this script is the same as that shown for its ancestors earlier (again, randomly distributed). Using the threading module is largely a matter of specializing classes. Threads in this module are implemented with a Thread object -- a Python class which we customize per application by providing a run method that defines the thread's action. For example, this script subclasses Thread with its own mythread class; mythread's run method is what will be executed by the Thread framework when we make a mythread and call its start method.
In other words, this script simply provides methods expected by the Thread framework. The advantage of going this more coding-intensive route is that we get a set of additional thread-related tools from the framework "for free." The Thread.join method used near the end of this script, for instance, waits until the thread exits (by default); we can use this method to prevent the main thread from exiting too early, rather than the time.sleep calls and global locks and variables we relied on in earlier threading examples.
The example script also uses threading.Lock to synchronize stream access (though this name is just a synonym for thread.allocate_lock in the current implementation). Besides Thread and Lock, the threading module also includes higher-level objects for synchronizing access to shared items (e.g., Semaphore, Condition, Event), and more; see the library manual for details. For more examples of threads and forks in general, see the following section and the examples in Part III.
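As a quick taste of those tools, here is one way the exit-signaling logic of the last few examples might look with Event objects instead of join calls or global flags; this is a sketch of the idea, not a listing from this book (stdout is left unlocked here, so prints may still intermix):

import threading

def counter(myId, count, done):
    for i in range(count):
        print '[%s] => %s' % (myId, i)      # unlocked: output may intermix
    done.set()                              # signal the main thread

events = []
for i in range(10):
    done = threading.Event()                # one event per spawned thread
    threading.Thread(target=counter, args=(i, 3, done)).start()
    events.append(done)

for done in events:
    done.wait()                             # block until each thread signals
print 'Main thread exiting.'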
As we've seen, unlike C, Python has no "main" function -- when we run a program, we simply execute all the code in the top-level file, from top to bottom (i.e., in the filename we listed in the command line, clicked in a file explorer, and so on). Scripts normally exit when Python falls off the end of the file, but we may also call for program exit explicitly with the built-in sys.exit function:
>>> sys.exit( ) # else exits on end of script
Interestingly, this call really just raises the built-in SystemExit exception. Because of this, we can catch it as usual to intercept early exits and perform cleanup activities; if uncaught, the interpreter exits as usual. For instance:
C:\...\PP2E\System>python
>>> import sys
>>> try:
...     sys.exit()              # see also: os._exit, Tk().quit()
... except SystemExit:
...     print 'ignoring exit'
...
ignoring exit
>>>
In fact, explicitly raising the built-in SystemExit exception with a Python raise statement is equivalent to calling sys.exit. More realistically, a try block would catch the exit exception raised elsewhere in a program; the script in Example 3-11 exits from within a processing function.
def later():
    import sys
    print 'Bye sys world'
    sys.exit(42)
    print 'Never reached'

if __name__ == '__main__': later()
Running this program as a script causes it to exit before the interpreter falls off the end of the file. But because sys.exit raises a Python exception, importers of its function can trap and override its exit exception, or specify a finally cleanup block to be run during program exit processing:
C:\...\PP2E\System\Exits>python testexit_sys.py
Bye sys world

C:\...\PP2E\System\Exits>python
>>> from testexit_sys import later
>>> try:
...     later()
... except SystemExit:
...     print 'Ignored...'
...
Bye sys world
Ignored...

>>> try:
...     later()
... finally:
...     print 'Cleanup'
...
Bye sys world
Cleanup

C:\...\PP2E\System\Exits>
It's possible to exit Python in other ways too. For instance, within a forked child process on Unix we typically call the os._exit function instead of sys.exit, threads may exit with a thread.exit call, and Tkinter GUI applications often end by calling something named Tk( ).quit( ). We'll meet the Tkinter module later in this book, but os and thread exits merit a look here. When os._exit is called, the calling process exits immediately rather than raising an exception that could be trapped and ignored, as shown in Example 3-12.
def outahere():
    import os
    print 'Bye os world'
    os._exit(99)
    print 'Never reached'

if __name__ == '__main__': outahere()
Unlike sys.exit, os._exit is immune to both try/except and try/finally interception:
C:\...\PP2E\System\Exits>python testexit_os.py
Bye os world

C:\...\PP2E\System\Exits>python
>>> from testexit_os import outahere
>>> try:
...     outahere()
... except:
...     print 'Ignored'
...
Bye os world

C:\...\PP2E\System\Exits>python
>>> from testexit_os import outahere
>>> try:
...     outahere()
... finally:
...     print 'Cleanup'
...
Bye os world
Both the sys and os exit calls we just met accept an argument that denotes the exit status code of the process (it's optional in the sys call, but required by os). After exit, this code may be interrogated in shells, and by programs that ran the script as a child process. On Linux, we ask for the "status" shell variable's value to fetch the last program's exit status; by convention a nonzero status generally indicates some sort of problem occurred:
[mark@toy]$ python testexit_sys.py
Bye sys world
[mark@toy]$ echo $status
42
[mark@toy]$ python testexit_os.py
Bye os world
[mark@toy]$ echo $status
99
In a chain of command-line programs, exit statuses could be checked along the way as a simple form of cross-program communication. We can also grab hold of the exit status of a program run by another script. When launching shell commands, it's provided as the return value of an os.system call, and the return value of the close method of an os.popen object; when forking programs, the exit status is available through the os.wait and os.waitpid calls in a parent process. Let's look at the shell commands case first:
[mark@toy]$ python
>>> import os
>>> pipe = os.popen('python testexit_sys.py')
>>> pipe.read()
'Bye sys world\012'
>>> stat = pipe.close()          # returns exit code
>>> stat
10752
>>> hex(stat)
'0x2a00'
>>> stat >> 8
42
>>> pipe = os.popen('python testexit_os.py')
>>> stat = pipe.close()
>>> stat, stat >> 8
(25344, 99)
When using os.popen, the exit status is actually packed into specific bit positions of the return value, for reasons we won't go into here; it's really there, but we need to shift the result right by eight bits to see it. Commands run with os.system send their statuses back through the Python library call:
>>> import os
>>> for prog in ('testexit_sys.py', 'testexit_os.py'):
...     stat = os.system('python ' + prog)
...     print prog, stat, stat >> 8
...
Bye sys world
testexit_sys.py 10752 42
Bye os world
testexit_os.py 25344 99
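If you'd rather not remember the bit layout, the os module also provides portable accessors for decoding wait-style status codes on Unix; a minimal sketch:

import os

stat = os.system('python testexit_sys.py')
if os.WIFEXITED(stat):                        # did the child exit normally?
    print 'exit code:', os.WEXITSTATUS(stat)  # same as stat >> 8 here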
To learn how to get the exit status from forked processes, let's write a simple forking program: the script in Example 3-13 forks child processes and prints child process exit statuses returned by os.wait calls in the parent, until a "q" is typed at the console.
############################################################
# fork child processes to watch exit status with os.wait;
# fork works on Linux but not Windows as of Python 1.5.2;
# note: spawned threads share globals, but each forked
# process has its own copy of them--exitstat always the
# same here but will vary if we start threads instead;
############################################################

import os
exitstat = 0

def child():                           # could os._exit a script here
    global exitstat                    # change this process's global
    exitstat = exitstat + 1            # exit status to parent's wait
    print 'Hello from child', os.getpid(), exitstat
    os._exit(exitstat)
    print 'never reached'

def parent():
    while 1:
        newpid = os.fork()             # start a new copy of process
        if newpid == 0:                # if in copy, run child logic
            child()                    # loop until 'q' console input
        else:
            pid, status = os.wait()
            print 'Parent got', pid, status, (status >> 8)
            if raw_input() == 'q': break

parent()
Running this program on Linux (remember, fork also didn't work on Windows as I wrote the second edition of this book) produces the following results:
[mark@toy]$ python testexit_fork.py
Hello from child 723 1
Parent got 723 256 1
Hello from child 724 1
Parent got 724 256 1
Hello from child 725 1
Parent got 725 256 1
q
If you study this output closely, you'll notice that the exit status (the last number printed) is always the same -- the number 1. Because forked processes begin life as copies of the process that created them, they also have copies of global memory. Because of that, each forked child gets and changes its own exitstat global variable, without changing any other process's copy of this variable.
In contrast, threads run in parallel within the same process and share global memory. Each thread in Example 3-14 changes the single shared global variable exitstat.
############################################################
# spawn threads to watch shared global memory change;
# threads normally exit when the function they run returns,
# but thread.exit() can be called to exit calling thread;
# thread.exit is the same as sys.exit and raising SystemExit;
# threads communicate with possibly locked global vars;
############################################################

import thread
exitstat = 0

def child():
    global exitstat                    # process global names
    exitstat = exitstat + 1            # shared by all threads
    threadid = thread.get_ident()
    print 'Hello from child', threadid, exitstat
    thread.exit()
    print 'never reached'

def parent():
    while 1:
        thread.start_new_thread(child, ())
        if raw_input() == 'q': break

parent()
Here is this script in action on Linux; the global exitstat is changed by each thread, because threads share global memory within the process. In fact, this is often how threads communicate in general -- rather than exit status codes, threads assign module-level globals to signal conditions (and use thread module locks to synchronize access to shared globals if needed):
[mark@toy]$ /usr/bin/python testexit_thread.py
Hello from child 1026 1
Hello from child 2050 2
Hello from child 3074 3
q
Unlike forks, threads run on Windows today too; this program works the same there, but thread identifiers differ -- they are arbitrary but unique among active threads, and so may be used as dictionary keys to keep per-thread information:
C:\...\PP2E\System\Exits>python testexit_thread.py
Hello from child -587879 1
Hello from child -587879 2
Hello from child -587879 3
q
Speaking of exits, a thread normally exits silently when the function it runs returns, and the function return value is ignored. Optionally, the thread.exit function can be called to terminate the calling thread explicitly. This call works almost exactly like sys.exit (but takes no return status argument), and works by raising a SystemExit exception in the calling thread. Because of that, a thread can also prematurely end by calling sys.exit, or by directly raising SystemExit. Be sure not to call os._exit within a thread function, though -- doing so hangs the entire process on my Linux system, and kills every thread in the process on Windows!
When used well, exit status can be used to implement error-detection and simple communication protocols in systems composed of command-line scripts. But having said that, I should underscore that most scripts do simply fall off the end of the source to exit, and most thread functions simply return; explicit exit calls are generally employed for exceptional conditions only.
As we saw earlier, when scripts spawn threads -- tasks that run in parallel within the program -- they can naturally communicate by changing and inspecting shared global memory. As we also saw, some care must be taken to use locks to synchronize access to shared objects that can't be updated concurrently, but it's a fairly straightforward communication model.
Things aren't quite as simple when scripts start processes and programs. If we limit the kinds of communications that can happen between programs, there are many options available, most of which we've already seen in this and the prior chapters. For example, the following can all be interpreted as cross-program communication devices:
Command-line arguments
Standard stream redirections
Pipes generated by os.popen calls
Program exit status codes
Shell environment variables
Even simple files
For instance, sending command-line options and writing to input streams lets us pass in program execution parameters; reading program output streams and exit codes gives us a way to grab a result. Because shell variable settings are inherited by spawned programs, they provide another way to pass context in. Pipes made by os.popen and simple files allow even more dynamic communication -- data can be sent between programs at arbitrary times, not only at program start and exit.
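For example, because assignments to os.environ are passed along to programs spawned afterward (on platforms that support os.putenv), a parent can hand settings down without touching the command line at all; a minimal sketch, with made-up variable and script names:

# parent.py: settings made here are inherited by spawned programs
import os
os.environ['MODE'] = 'test'           # hypothetical variable name
os.system('python readenv.py')        # spawned program sees the setting

# readenv.py: the spawned program's side of the exchange
# import os
# print 'MODE =', os.environ.get('MODE', '(unset)')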
Beyond this set, there are other tools in the Python library for doing IPC -- Inter-Process Communication. Some vary in portability, and all vary in complexity. For instance, in Chapter 10 of this text we will meet the Python socket module, which lets us transfer data between programs running on the same computer, as well as programs located on remote networked machines.
In this section, we introduce pipes -- both anonymous and named -- as well as signals -- cross-program event triggers. Other IPC tools are available to Python programmers (e.g., shared memory; see module mmap), but not covered here for lack of space; search the Python manuals and web site for more details on other IPC schemes if you're looking for something more specific.
Pipes, another cross-program communication device, are made available in Python with the built-in os.pipe call. Pipes are unidirectional channels that work something like a shared memory buffer, but with an interface resembling a simple file on each of two ends. In typical use, one program writes data on one end of the pipe, and another reads that data on the other end. Each program sees only its own end of the pipe, and processes it using normal Python file calls.
There is more to pipes within the operating system, though. For instance, calls to read a pipe will normally block the caller until data becomes available (i.e., is sent by the program on the other end), rather than returning an end-of-file indicator. Because of such properties, pipes are also a way to synchronize the execution of independent programs.
Pipes come in two flavors -- anonymous and named. Named pipes (sometimes called "fifos") are represented by a file on your computer. Anonymous pipes only exist within processes, though, and are typically used in conjunction with process forks as a way to link parent and spawned child processes within an application -- parent and child converse over shared pipe file descriptors. Because named pipes are really external files, the communicating processes need not be related at all (in fact, they can be independently started programs).
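We'll concentrate on anonymous pipes in this section, but just so you can see the difference, here is what a named-pipe sketch might look like on Unix (the fifo filename is arbitrary; the writer and reader would normally be two separately started scripts):

import os

fifoname = '/tmp/pipefifo'                 # an arbitrary path for the fifo file
if not os.path.exists(fifoname):
    os.mkfifo(fifoname)                    # create the named pipe file

# writer script: opening for 'w' blocks until a reader opens the other end
# pipe = open(fifoname, 'w')
# pipe.write('Spam 000\n')

# reader script: readline blocks until a writer sends data
# pipe = open(fifoname, 'r')
# print pipe.readline()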
Since they are more traditional, let's start with a look at anonymous pipes. To illustrate, the script in Example 3-15 uses the os.fork call to make a copy of the calling process as usual (we met forks earlier in this chapter). After forking, the original parent process and its child copy speak through the two ends of a pipe created with os.pipe prior to the fork. The os.pipe call returns a tuple of two file descriptors -- the low-level file identifiers we met earlier -- representing the input and output sides of the pipe. Because forked child processes get copies of their parents' file descriptors, writing to the pipe's output descriptor in the child sends data back to the parent on the pipe created before the child was spawned.
import os, time

def child(pipeout):
    zzz = 0
    while 1:
        time.sleep(zzz)                            # make parent wait
        os.write(pipeout, 'Spam %03d' % zzz)       # send to parent
        zzz = (zzz+1) % 5                          # goto 0 after 4

def parent():
    pipein, pipeout = os.pipe()                    # make 2-ended pipe
    if os.fork() == 0:                             # copy this process
        child(pipeout)                             # in copy, run child
    else:                                          # in parent, listen to pipe
        while 1:
            line = os.read(pipein, 32)             # blocks until data sent
            print 'Parent %d got "%s" at %s' % (os.getpid(), line, time.time())

parent()
If you run this program on Linux (pipe is available on Windows today, but fork is not), the parent process waits for the child to send data on the pipe each time it calls os.read. It's almost as if the child and parent act as client and server here -- the parent starts the child and waits for it to initiate communication.[7] Just to tease, the child keeps the parent waiting one second longer between messages with time.sleep calls, until the delay has reached four seconds. Instead of ever reaching 005, the zzz delay counter then rolls back down to 000 and the cycle starts again:
[mark@toy]$ python pipe1.py
Parent 1292 got "Spam 000" at 968370008.322
Parent 1292 got "Spam 001" at 968370009.319
Parent 1292 got "Spam 002" at 968370011.319
Parent 1292 got "Spam 003" at 968370014.319
Parent 1292 got "Spam 004Spam 000" at 968370018.319
Parent 1292 got "Spam 001" at 968370019.319
Parent 1292 got "Spam 002" at 968370021.319
Parent 1292 got "Spam 003" at 968370024.319
Parent 1292 got "Spam 004Spam 000" at 968370028.319
Parent 1292 got "Spam 001" at 968370029.319
Parent 1292 got "Spam 002" at 968370031.319
Parent 1292 got "Spam 003" at 968370034.319
If you look closely, you'll see that when the child's delay counter hits 004, the parent ends up reading two messages from the pipe at once -- the child wrote two distinct messages, but they were close enough in time to be fetched as a single unit by the parent. Really, the parent blindly asks to read at most 32 bytes each time, but gets back whatever text is available in the pipe (when it becomes available at all). To distinguish messages better, we can mandate a separator character in the pipe. An end-of-line makes this easy, because we can wrap the pipe descriptor in a file object with os.fdopen, and rely on the file object's readline method to scan up through the next \n separator in the pipe. Example 3-16 implements this scheme.
# same as pipe1.py, but wrap pipe input in stdio file object
# to read by line, and close unused pipe fds in both processes

import os, time

def child(pipeout):
    zzz = 0
    while 1:
        time.sleep(zzz)                            # make parent wait
        os.write(pipeout, 'Spam %03d\n' % zzz)     # send to parent
        zzz = (zzz+1) % 5                          # roll to 0 at 5

def parent():
    pipein, pipeout = os.pipe()                    # make 2-ended pipe
    if os.fork() == 0:                             # in child, write to pipe
        os.close(pipein)                           # close input side here
        child(pipeout)
    else:                                          # in parent, listen to pipe
        os.close(pipeout)                          # close output side here
        pipein = os.fdopen(pipein)                 # make stdio input object
        while 1:
            line = pipein.readline()[:-1]          # blocks until data sent
            print 'Parent %d got "%s" at %s' % (os.getpid(), line, time.time())

parent()
This version has also been augmented to close the unused end of the pipe in each process (e.g., after the fork, the parent process closes its copy of the output side of the pipe written by the child); programs should close unused pipe ends in general. Running with this new version returns a single child message to the parent each time it reads from the pipe, because they are separated with markers when written:
[mark@toy]$ python pipe2.py
Parent 1296 got "Spam 000" at 968370066.162
Parent 1296 got "Spam 001" at 968370067.159
Parent 1296 got "Spam 002" at 968370069.159
Parent 1296 got "Spam 003" at 968370072.159
Parent 1296 got "Spam 004" at 968370076.159
Parent 1296 got "Spam 000" at 968370076.161
Parent 1296 got "Spam 001" at 968370077.159
Parent 1296 got "Spam 002" at 968370079.159
Parent 1296 got "Spam 003" at 968370082.159
Parent 1296 got "Spam 004" at 968370086.159
Parent 1296 got "Spam 000" at 968370086.161
Parent 1296 got "Spam 001" at 968370087.159
Parent 1296 got "Spam 002" at 968370089.159
Pipes normally only let data flow in one direction -- one side is input, one is output. What if you need your programs to talk back and forth, though? For example, one program might send another a request for information, and then wait for that information to be sent back. A single pipe can't generally handle such bidirectional conversations, but two pipes can -- one pipe can be used to pass requests to a program, and another can be used to ship replies back to the requestor.[8]
The module in Example 3-17 demonstrates one way to apply this idea to link the input and output streams of two programs. Its spawn function forks a new child program, and connects the input and output streams of the parent to the output and input streams of the child. That is:
When the parent reads from its standard input, it is reading text sent to the child's standard output.
When the parent writes to its standard output, it is sending data to the child's standard input.
The net effect is that the two independent programs communicate by speaking over their standard streams.
############################################################
# spawn a child process/program, connect my stdin/stdout
# to child process's stdout/stdin -- my reads and writes
# map to output and input streams of the spawned program;
# much like popen2.popen2 plus parent stream redirection;
############################################################

import os, sys

def spawn(prog, *args):                       # pass progname, cmdline args
    stdinFd  = sys.stdin.fileno()             # get descriptors for streams
    stdoutFd = sys.stdout.fileno()            # normally stdin=0, stdout=1

    parentStdin, childStdout  = os.pipe()     # make two ipc pipe channels
    childStdin,  parentStdout = os.pipe()     # pipe returns (inputfd, outputfd)
    pid = os.fork()                           # make a copy of this process
    if pid:
        os.close(childStdout)                 # in parent process after fork:
        os.close(childStdin)                  # close child ends in parent
        os.dup2(parentStdin,  stdinFd)        # my sys.stdin copy  = pipe1[0]
        os.dup2(parentStdout, stdoutFd)       # my sys.stdout copy = pipe2[1]
    else:
        os.close(parentStdin)                 # in child process after fork:
        os.close(parentStdout)                # close parent ends in child
        os.dup2(childStdin,  stdinFd)         # my sys.stdin copy  = pipe2[0]
        os.dup2(childStdout, stdoutFd)        # my sys.stdout copy = pipe1[1]
        args = (prog,) + args
        os.execvp(prog, args)                 # new program in this process
        assert 0, 'execvp failed!'            # os.exec call never returns here

if __name__ == '__main__':
    mypid = os.getpid()
    spawn('python', 'pipes-testchild.py', 'spam')    # fork child program

    print 'Hello 1 from parent', mypid               # to child's stdin
    sys.stdout.flush()                               # subvert stdio buffering
    reply = raw_input()                              # from child's stdout
    sys.stderr.write('Parent got: "%s"\n' % reply)   # stderr not tied to pipe!

    print 'Hello 2 from parent', mypid
    sys.stdout.flush()
    reply = sys.stdin.readline()
    sys.stderr.write('Parent got: "%s"\n' % reply[:-1])
The spawn function in this module does not work on Windows -- remember, fork isn't yet available there today. In fact, most of the calls in this module map straight to Unix system calls (and may be arbitrarily terrifying at first glance to non-Unix developers). We've already met some of these (e.g., os.fork), but much of this code depends on Unix concepts we don't have time to address well in this text. In simple terms, though, here is a brief summary of the system calls demonstrated in this code:
os.fork copies the calling process as usual, and returns the child's process ID in the parent process only.
os.execvp overlays a new program in the calling process; it's just like the os.execlp used earlier but takes a tuple or list of command-line argument strings (collected with the *args form in the function header).
os.pipe returns a tuple of file descriptors representing the input and output ends of a pipe, as in earlier examples.
os.close(fd) closes descriptor-based file fd.
os.dup2(fd1,fd2) copies all system information associated with the file named by file descriptor fd1 to the file named by fd2.
In terms of connecting standard streams, os.dup2 is the real nitty-gritty here. For example, the call os.dup2(parentStdin,stdinFd) essentially assigns the parent process's stdin file to the input end of one of the two pipes created; all stdin reads will henceforth come from the pipe. By connecting the other end of this pipe to the child process's copy of the stdout stream file with os.dup2(childStdout,stdoutFd), text written by the child to its stdout winds up being routed through the pipe to the parent's stdin stream.
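If the os.dup2 call still seems mysterious, it may help to see it at work in isolation. The following is a minimal sketch, not part of this chapter's examples, that reroutes a script's print statements into a file by copying that file's descriptor over the one used by stdout (the file name log.txt here is made up for illustration):

# a minimal dup2 sketch; log.txt is a hypothetical file name
import os, sys

fd = os.open('log.txt', os.O_WRONLY | os.O_CREAT)    # descriptor for a real file
os.dup2(fd, sys.stdout.fileno())                     # stdout now refers to log.txt
print 'this text lands in log.txt, not on the console'
sys.stdout.flush()                                   # push it past stdio buffers

The parent and child in Example 3-17 do exactly this, except that the descriptors copied over stdin and stdout belong to pipes, not to a file on disk.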
To test this utility, the self-test code at the end of the file spawns the program shown in Example 3-18 in a child process, and reads and writes standard streams to converse with it over two pipes.
import os, time, sys
mypid     = os.getpid()
parentpid = os.getppid()
sys.stderr.write('Child %d of %d got arg: %s\n' % (mypid, parentpid, sys.argv[1]))
for i in range(2):
    time.sleep(3)           # make parent process wait by sleeping here
    input = raw_input()     # stdin tied to pipe: comes from parent's stdout
    time.sleep(3)
    reply = 'Child %d got: [%s]' % (mypid, input)
    print reply             # stdout tied to pipe: goes to parent's stdin
    sys.stdout.flush()      # make sure it's sent now else blocks
Here is our test in action on Linux; its output is not incredibly impressive to read, but represents two programs running independently and shipping data back and forth through a pipe device managed by the operating system. This is even more like a client/server model (if you imagine the child as the server). The text in square brackets in this output went from the parent process, to the child, and back to the parent again -- all through pipes connected to standard streams:
[mark@toy]$ python pipes.py
Child 797 of 796 got arg: spam
Parent got: "Child 797 got: [Hello 1 from parent 796]"
Parent got: "Child 797 got: [Hello 2 from parent 796]"
These two processes engage in a simple dialog, but it's already enough to illustrate some of the dangers lurking in cross-program communications. First of all, notice that both programs need to write to stderr to display a message -- their stdout streams are tied to the other program's input stream. Because processes share file descriptors, stderr is the same in both parent and child, so status messages show up in the same place.
More subtly, note that both parent and child call sys.stdout.flush after they print text to the stdout stream. Input requests on pipes normally block the caller if there is no data available, but it seems that shouldn't be a problem in our example -- there are as many writes as there are reads on the other side of the pipe. By default, though, sys.stdout is buffered, so the printed text may not actually be transmitted until some time in the future (when the stdio output buffers fill up). In fact, if the flush calls are not made, both processes will get stuck waiting for input from the other -- input that is sitting in a buffer and is never flushed out over the pipe. They wind up in a deadlock state, both blocked on raw_input calls waiting for events that never occur.
Keep in mind that output buffering is really a function of the file objects used to access pipes, not of pipes themselves (pipes do queue up output data, but never hide it from readers!). In fact, it occurs in this example only because we copy the pipe's information over to sys.stdout -- a built-in file object that uses stdio buffering by default. However, such anomalies can also occur when using other cross-process tools, such as the popen2 and popen3 calls introduced in Chapter 2.
In general terms, if your programs engage in two-way dialogs like this, there are at least three ways to avoid buffer-related deadlock problems:
As demonstrated in this example, manually flushing output pipe streams by calling the file flush method is an easy way to force buffers to be cleared.
It's possible to use pipes in unbuffered mode -- either use low-level os module calls to read and write pipe descriptors directly, or (on most systems) pass a buffer size argument of zero to os.fdopen to disable stdio buffering in the file object used to wrap the descriptor (see the sketch after this list). For fifos, described in the next section, do the same for open.
Simply use the -u Python command-line flag to turn off buffering for the sys.stdout stream.
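To illustrate the second technique, here is a brief sketch, again not part of the chapter's example files, that wraps a raw pipe descriptor in an unbuffered file object with os.fdopen; text written this way is shipped over the pipe immediately, with no flush calls required:

# an unbuffered pipe wrapper sketch; buffer size 0 disables stdio buffering
import os

readfd, writefd = os.pipe()
pipeout = os.fdopen(writefd, 'w', 0)     # third argument: buffer size of zero
pipein  = os.fdopen(readfd, 'r')
pipeout.write('sent right away\n')       # no flush call needed here
print pipein.readline()[:-1]             # prints: sent right away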
The last technique merits a few more words. Try this: delete all the sys.stdout.flush calls in both Example 3-17 and Example 3-18 (files pipes.py and pipes-testchild.py), and change the parent's spawn call in pipes.py to this (i.e., add a -u command-line argument):
spawn('python', '-u', 'pipes-testchild.py', 'spam')
Then start the program with a command line like this: python -u pipes.py. It will work as it did with manual stdout flush calls, because stdout will be operating in unbuffered mode. Deadlock in general, though, is a bigger problem than we have space to address here; on the other hand, if you know enough to want to do IPC in Python, you're probably already a veteran of the deadlock wars.
On some platforms, it is also possible to create a pipe that exists as a file. Such files are called "named pipes" (or sometimes, "fifos"), because they behave just like the pipes created within the previous programs, but are associated with a real file somewhere on your computer, external to any particular program. Once a named pipe file is created, processes read and write it using normal file operations. Fifos are unidirectional streams, but a set of two fifos can be used to implement bidirectional communication just as we did for anonymous pipes in the prior section.
Because fifos are files, they are longer-lived than in-process pipes and can be accessed by programs started independently. The unnamed, in-process pipe examples thus far depend on the fact that file descriptors (including pipes) are copied to child processes. With fifos, pipes are accessed instead by a filename visible to all programs, regardless of any parent/child process relationships. Because of that, they are better suited as IPC mechanisms for independent client and server programs; for instance, a perpetually running server program may create and listen for requests on a fifo that can be accessed later by arbitrary clients not forked by the server.
In Python, named pipe files are created with the os.mkfifo call, available today on Unix-like platforms and Windows NT (but not on Windows 95/98). This only creates the external file, though; to send and receive data through a fifo, it must be opened and processed as if it were a standard file. Example 3-19 is a derivation of the pipe2.py script listed earlier, written to use fifos instead of anonymous pipes.
#########################################################
# named pipes; os.mkfifo not available on Windows 95/98;
# no reason to fork here, since fifo file pipes are
# external to processes -- shared fds are irrelevant;
#########################################################

import os, time, sys
fifoname = '/tmp/pipefifo'                      # must open same name

def child():
    pipeout = os.open(fifoname, os.O_WRONLY)    # open fifo pipe file as fd
    zzz = 0
    while 1:
        time.sleep(zzz)
        os.write(pipeout, 'Spam %03d\n' % zzz)
        zzz = (zzz+1) % 5

def parent():
    pipein = open(fifoname, 'r')                # open fifo as stdio object
    while 1:
        line = pipein.readline()[:-1]           # blocks until data sent
        print 'Parent %d got "%s" at %s' % (os.getpid(), line, time.time())

if __name__ == '__main__':
    if not os.path.exists(fifoname):
        os.mkfifo(fifoname)                     # create a named pipe file
    if len(sys.argv) == 1:
        parent()                                # run as parent if no args
    else:                                       # else run as child process
        child()
Because the fifo exists independently of both parent and child, there's no reason to fork here -- the child may be started independently of the parent, as long as it opens a fifo file by the same name. Here, for instance, on Linux the parent is started in one xterm window, and then the child is started in another. Messages start appearing in the parent window only after the child is started:
[mark@toy]$ python pipefifo.py
Parent 657 got "Spam 000" at 968390065.865
Parent 657 got "Spam 001" at 968390066.865
Parent 657 got "Spam 002" at 968390068.865
Parent 657 got "Spam 003" at 968390071.865
Parent 657 got "Spam 004" at 968390075.865
Parent 657 got "Spam 000" at 968390075.867
Parent 657 got "Spam 001" at 968390076.865
Parent 657 got "Spam 002" at 968390078.865

[mark@toy]$ file /tmp/pipefifo
/tmp/pipefifo: fifo (named pipe)
[mark@toy]$ python pipefifo.py -child
For lack of a better analogy, signals are a way to poke a stick at a process. Programs generate signals to trigger a handler for that signal in another process. The operating system pokes too -- some signals are generated on unusual system events and may kill the program if not handled. If this sounds a little like raising exceptions in Python, it should; signals are software-generated events, and the cross-process analog of exceptions. Unlike exceptions, though, signals are identified by number, are not stacked, and are really an asynchronous event mechanism controlled by the operating system, outside the scope of the Python interpreter.
In order to make signals available to scripts, Python provides a signal module that allows Python programs to register Python functions as handlers for signal events. This module is available on both Unix-like platforms and Windows (though the Windows version defines fewer kinds of signals to be caught). To illustrate the basic signal interface, the script in Example 3-20 installs a Python handler function for the signal number passed in as a command-line argument.
##########################################################
# catch signals in Python; pass signal number N as a
# command-line arg, use a "kill -N pid" shell command
# to send this process a signal; most signal handlers
# restored by Python after caught (see network scripting
# chapter for SIGCHLD details); signal module available
# on Windows, but defines only a few signal types there;
##########################################################

import sys, signal, time

def now():
    return time.ctime(time.time())              # current time string

def onSignal(signum, stackframe):               # python signal handler
    print 'Got signal', signum, 'at', now()     # most handlers stay in effect

signum = int(sys.argv[1])
signal.signal(signum, onSignal)                 # install signal handler
while 1:
    signal.pause()                              # wait for signals (or: pass)
There are only two signal module calls at work here:
signal.signal takes a signal number and function object, and installs that function to handle that signal number when it is raised. Python automatically restores most signal handlers when signals occur, so there is no need to recall this function within the signal handler itself to re-register the handler. That is, except for SIGCHLD, a signal handler remains installed until explicitly reset (e.g., by setting the handler to SIG_DFL to restore default behavior, or to SIG_IGN to ignore the signal; see the sketch after this list). SIGCHLD behavior is platform-specific.
signal.pause makes the process sleep until the next signal is caught. A time.sleep call is similar but doesn't work with signals on my Linux box -- it generates an interrupted system call error. A busy while 1: pass loop here would pause the script too, but may squander CPU resources.
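As noted in the first point above, handlers can also be uninstalled explicitly when needed; this quick sketch uses SIGUSR1 only as an example signal number:

# resetting and ignoring a handler explicitly; SIGUSR1 is just an example
import signal

signal.signal(signal.SIGUSR1, signal.SIG_IGN)    # ignore this signal from now on
signal.signal(signal.SIGUSR1, signal.SIG_DFL)    # restore the default behavior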
Here is what this script looks like running on Linux: a signal number to watch for (12) is passed in on the command line, and the program is made to run in the background with a & shell operator (available in most Unix-like shells):
[mark@toy]$ python signal1.py 12 &
[1] 809
[mark@toy]$ ps
  PID TTY          TIME CMD
  578 ttyp1    00:00:00 tcsh
  809 ttyp1    00:00:00 python
  810 ttyp1    00:00:00 ps
[mark@toy]$ kill -12 809
[mark@toy]$ Got signal 12 at Fri Sep 8 00:27:01 2000
kill -12 809
[mark@toy]$ Got signal 12 at Fri Sep 8 00:27:03 2000
kill -12 809
[mark@toy]$ Got signal 12 at Fri Sep 8 00:27:04 2000
[mark@toy]$ kill -9 809          # signal 9 always kills the process
Inputs and outputs are a bit jumbled here, because the process prints to the same screen used to type new shell commands. To send the program a signal, the kill shell command takes a signal number and a process ID to be signalled (809); every time a new kill command sends a signal, the process replies with a message generated by a Python signal handler function.
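Shell commands aren't the only way to send a signal, by the way -- on Unix-like platforms, a Python script can poke another process directly with the os.kill call. The process ID in this sketch is made up, and signal numbers vary per platform (12 happens to be SIGUSR2 on my Linux machine):

# sending a signal from Python; the process ID 809 is hypothetical
import os, signal

os.kill(809, 12)                # same effect as a "kill -12 809" shell command
os.kill(809, signal.SIGUSR2)    # better: use the symbolic signal name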
The signal module also exports a signal.alarm function for scheduling a SIGALRM signal to occur at some number of seconds in the future. To trigger and catch timeouts, set the alarm and install a SIGALRM handler as in Example 3-21.
##########################################################
# set and catch alarm timeout signals in Python;
# time.sleep doesn't play well with alarm (or signal
# in general on my Linux PC), so call signal.pause
# here to do nothing until a signal is received;
##########################################################

import sys, signal, time

def now():
    return time.ctime(time.time())

def onSignal(signum, stackframe):               # python signal handler
    print 'Got alarm', signum, 'at', now()      # most handlers stay in effect

while 1:
    print 'Setting at', now()
    signal.signal(signal.SIGALRM, onSignal)     # install signal handler
    signal.alarm(5)                             # do signal in 5 seconds
    signal.pause()                              # wait for signals
Running this script on Linux causes its onSignal handler function to be invoked every five seconds:
[mark@toy]$ python signal2.py
Setting at Fri Sep 8 00:27:53 2000
Got alarm 14 at Fri Sep 8 00:27:58 2000
Setting at Fri Sep 8 00:27:58 2000
Got alarm 14 at Fri Sep 8 00:28:03 2000
Setting at Fri Sep 8 00:28:03 2000
Got alarm 14 at Fri Sep 8 00:28:08 2000
Setting at Fri Sep 8 00:28:08 2000
Generally speaking, signals must be used with cautions not made obvious by the examples we've just seen. For instance, some system calls don't react well to being interrupted by signals, and only the main thread can install signal handlers and respond to signals in a multithreaded program.
When used well, though, signals provide an event-based communication mechanism. They are less powerful than data streams like pipes, but are sufficient in situations where you just need to tell a program that something important has occurred, without passing along any details about the event itself. Signals are sometimes also combined with other IPC tools. For example, an initial signal may inform a program that a client wishes to communicate over a named pipe -- the equivalent of tapping someone's shoulder to get their attention before speaking. Most platforms reserve one or more SIGUSR signal numbers for user-defined events of this sort.
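To make the shoulder-tap idea more concrete, here is a rough sketch of the client's half of such a dialog; the server's process ID, the fifo's name, and the message format here are all hypothetical:

# a hypothetical shoulder-tap client: signal first, then talk over a fifo
import os, signal

serverpid = 657                                    # made-up server process ID
os.kill(serverpid, signal.SIGUSR1)                 # tap: wake the server up
pipeout = os.open('/tmp/pipefifo', os.O_WRONLY)    # speak: send the real request
os.write(pipeout, 'request: status\n')
os.close(pipeout)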
Suppose, just for a moment, that you've been asked to write a big Python book, and want to provide a way for readers to easily start the book's examples on just about any platform that Python runs on. Books are nice, but it's awfully fun to be able to click on demos right away. That is, you want to write a general and portable launcher program in Python, for starting other Python programs. What to do?
In this chapter, we've seen how to portably spawn threads, but these are simply parallel functions, not external programs. We've also learned how to go about starting new, independently running programs, with both the fork/exec combination, and tools for launching shell commands such as os.popen. Along the way, though, I've also been careful to point out numerous times that the os.fork call doesn't work on Windows today, and os.popen fails in Python release 1.5.2 and earlier when called from a GUI program on Windows; either of these constraints may be improved by the time you read this book (e.g., 2.0 improves os.popen on Windows), but they weren't quite there yet as I wrote this chapter. Moreover, for reasons we'll explore later, the os.popen call is prone to blocking (pausing) its caller in some scenarios.
Luckily, there are other ways to start programs in the Python standard library, albeit in platform-specific fashion:
The os.spawnv and os.spawnve calls launch programs on Windows, much like a fork/exec call combination on Unix-like platforms.
The os.system call can be used on Windows to launch a DOS start command, which opens (i.e., runs) a file independently based on its Windows filename associations, as though it were clicked.
Tools in the Python win32all extensions package provide other, less standardized ways to start programs (e.g., the WinExec call).
Of these, the spawnv call is the most complex, but also the most like forking programs in Unix. It doesn't actually copy the calling process (so shared descriptor operations won't work), but can be used to start a Windows program running completely independent of the calling program. The script in Example 3-22 makes the similarity more obvious -- it launches a program with a fork/exec combination in Linux, or an os.spawnv call on Windows.
############################################################
# start up 10 copies of child.py running in parallel;
# use spawnv to launch a program on Windows (like fork+exec)
# P_OVERLAY replaces, P_DETACH makes child stdout go nowhere
############################################################

import os, sys

for i in range(10):
    if sys.platform[:3] == 'win':
        pypath = r'C:\program files\python\python.exe'
        os.spawnv(os.P_NOWAIT, pypath, ('python', 'child.py', str(i)))
    else:
        pid = os.fork()
        if pid != 0:
            print 'Process %d spawned' % pid
        else:
            os.execlp('python', 'python', 'child.py', str(i))
print 'Main process exiting.'
Call os.spawnv with a process mode flag, the full directory path to the Python interpreter, and a tuple of strings representing the DOS command line with which to start a new program. The process mode flag is defined by Visual C++ (whose library provides the underlying spawnv call); commonly used values, one of which is illustrated in the sketch after this list, include:
P_OVERLAY: spawned program replaces calling program, like os.exec
P_DETACH: starts a program with full independence, without waiting
P_NOWAIT: runs the program without waiting for it to exit; returns its handle
P_WAIT: runs the program and pauses until it finishes; returns its exit code
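For instance, here is a sketch, not among this chapter's example files, that uses P_WAIT to run the child.py script shown in a moment and collect its exit code; the interpreter path is borrowed from Example 3-22 and is an assumption for illustration:

# a P_WAIT sketch: block until the child exits, then report its exit code;
# the interpreter path here is an assumption
import os

pypath = r'C:\program files\python\python.exe'
code = os.spawnv(os.P_WAIT, pypath, ('python', 'child.py', '0'))
print 'child exit code:', code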
Run a dir(os) call to see other process flags available, and either run a few tests or see VC++ documentation for more details; things like standard stream connection policies vary between the P_DETACH and P_NOWAIT modes in subtle ways. Here is this script at work on Windows, spawning 10 independent copies of the child.py Python program we met earlier in this chapter:
C:\...\PP2E\System\Processes>type child.py
import os, sys
print 'Hello from child', os.getpid(), sys.argv[1]

C:\...\PP2E\System\Processes>python spawnv.py
Hello from child -583587 0
Hello from child -558199 2
Hello from child -586755 1
Hello from child -562171 3
Main process exiting.
Hello from child -581867 6
Hello from child -588651 5
Hello from child -568247 4
Hello from child -563527 7
Hello from child -543163 9
Hello from child -587083 8
Notice that the copies print their output in random order, and the parent program exits before all children do; all these programs are really running in parallel on Windows. Also observe that the child program's output shows up in the console box where spawnv.py was run; when using P_NOWAIT standard output comes to the parent's console, but seems to go nowhere when using P_DETACH instead (most likely a feature, when spawning GUI programs).
The os.spawnve call works the same as os.spawnv, but accepts an extra fourth dictionary argument to specify a different shell environment for the spawned program (which, by default, inherits all the parent's settings).
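For instance, a caller might pass a pared-down set of shell settings to its children; this is a minimal sketch, with a made-up MYMODE variable and the same assumed interpreter path as earlier:

# a spawnve sketch: give the child an explicit shell environment
import os

env = {'MYMODE': 'test',                                  # made-up setting
       'SYSTEMROOT': os.environ.get('SYSTEMROOT', '')}    # often needed on Windows
pypath = r'C:\program files\python\python.exe'
os.spawnve(os.P_NOWAIT, pypath, ('python', 'child.py', '0'), env)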
The os.system and os.popen calls can be used to start command lines on Windows just as on Unix-like platforms (but with the portability caveats about popen mentioned earlier). On Windows, though, the DOS start command combined with os.system provides an easy way for scripts to launch any file on the system, using Windows filename associations. Starting a program file this way makes it run as independently as its starter. Example 3-23 demonstrates these launch techniques.
############################################################
# start up 5 copies of child.py running in parallel;
# - on Windows, os.system always blocks its caller,
#   and os.popen currently fails in GUI programs
# - using DOS start command pops up a DOS box (which goes
#   away immediately when the child.py program exits)
# - running child-wait.py with DOS start, 5 independent
#   DOS console windows pop up and stay up (1 per program)
# DOS start command uses filename associations to know
# to run Python on the file, as though double-clicked in
# Windows Explorer (any filename can be started this way);
############################################################

import os, sys

for i in range(5):
    #print os.popen('python child.py ' + str(i)).read()[:-1]
    #os.system('python child.py ' + str(i))
    #os.system('start child.py ' + str(i))
    os.system('start child-wait.py ' + str(i))
print 'Main process exiting.'
Uncomment one of the lines in this script's for loop to experiment with these schemes on your computer. On mine, when run with either of the first two calls in the loop uncommented, I get the following sort of output -- the text printed by five spawned Python programs:
C:\...\PP2E\System\Processes>python dosstart.py
Hello from child -582331 0
Hello from child -547703 1
Hello from child -547703 2
Hello from child -547651 3
Hello from child -547651 4
Main process exiting.
The os.system call usually blocks its caller until the spawned program exits; reading the output of an os.popen call has the same blocking effect (the reader waits for the spawned program's output to be complete). But with either of the last two statements in the loop uncommented, I get output that simply looks like this:
C:\...\PP2E\System\Processes>python dosstart.py
Main process exiting.
In both cases, I also see five new and completely independent DOS console windows appear on my display; when the third line in the loop is uncommented, all the DOS boxes go away right after they appear; when the last line in the loop is active, they remain on the screen after the dosstart program exits because the child-wait script pauses for input before exit.
To understand why, you first need to know how the DOS start command works in general. Roughly, a DOS command line of the form start command works as if command were typed in the Windows "Run" dialog box available in the Start button menu. If command is a filename, it is opened exactly as if its name had been double-clicked in the Windows Explorer file selector GUI.
For instance, the following three DOS commands automatically start Internet Explorer on file index.html, my registered image viewer program on file uk-1.jpg, and my sound media player program on file sousa.au. Windows simply opens the file with whatever program is associated to handle filenames of that form. Moreover, all three of these programs run independently of the DOS console box where the command is typed:
C:\temp>start c:\stuff\website\public_html\index.html
C:\temp>start c:\stuff\website\public_html\uk-1.jpg
C:\...\PP2E\System\Processes>start ..\..\Internet\Ftp\sousa.au
Now, because the start command can run any file and command line, there is no reason it cannot also be used to start an independently running Python program:
C:\...\PP2E\System\Processes>start child.py 1
Because Python is registered to open names ending in .py when it is installed, this really does work -- script child.py is launched independently of the DOS console window, even though we didn't provide the name or path of the Python interpreter program. Because child.py simply prints a message and exits, though, the result isn't exactly satisfying: a new DOS window pops up to serve as the script's standard output, and immediately goes away when the child exits (it's that Windows "flash feature" described earlier!). To do better, add a raw_input call at the bottom of the program file to wait for a key press before exiting:
C:\...\PP2E\System\Processes>type child-wait.py
import os, sys
print 'Hello from child', os.getpid(), sys.argv[1]
raw_input("Press <Enter>")        # don't flash on Windows

C:\...\PP2E\System\Processes>start child-wait.py 2
Now the child's DOS window pops up and stays up after the start command has returned. Pressing the Enter key in the pop-up DOS window makes it go away.
Since we know that Python's os.system and os.popen can be called by a script to run any command line that can be typed at a DOS shell prompt, we can also start independently running programs from a Python script by simply running a DOS start command line. For instance:
C:\...\PP2E>python
>>> import os
>>>
>>> cmd = r'start c:\stuff\website\public_html\index.html'    # start IE browser
>>> os.system(cmd)                                            # runs independent
0
>>> file = r'gui\gifs\pythonPowered.gif'                      # start image viewer
>>> os.system('start ' + file)                                # IE opens .gif for me
0
>>> os.system('start ' + 'Gui/gifs/PythonPowered.gif')        # fwd slashes work too
0
>>> os.system(r'start Internet\Ftp\sousa.au')                 # start media bar
0
The four Python os.system calls here start whatever web-page browser, image viewer, and sound player are registered on your machine to open .html, .gif, and .au files (unless these programs are already running). The launched programs run completely independent of the Python session -- when running a DOS start command, os.system does not wait for the spawned program to exit. For instance, Figure 3-1 shows the .gif file handler in action on my machine, generated by both the second and third os.system calls in the preceding code.
Now, since we also know that a Python program can be started from a command line, this yields two ways to launch Python programs:
C:\...\PP2E>python
>>> os.system(r'python Gui\TextEditor\textEditor.pyw')    # start and wait
0
>>> os.system(r'start Gui\TextEditor\textEditor.pyw')     # start, go on
0
When running a python command, the os.system call waits (blocks) for the command to finish. When running a start command it does not -- the launched Python program (here, PyEdit, a text editor GUI we'll meet in Chapter 9) runs independently of the os.system caller. And finally, that's why the following call in dosstart.py generates a new, independent instance of child-wait.py:
C:\...\PP2E\System\Processes>python
>>> os.system('start child-wait.py 1')
0
When run, this call pops up a new, independent DOS console window to serve as the standard input and output streams of the child-wait program. It truly is independent -- in fact, it keeps running if we exit both this Python interpreter session and the DOS console box where the command was typed.[9] An os.popen call can launch a start command too; but since it normally starts commands independently anyhow, the only obvious advantages of start here are the pop-up DOS box, and the fact that Python need not be in the system search path setting:
>>> file = os.popen('start child-wait.py 1')    # versus: python child-wait...
>>> file.read()
'Hello from child -413849 1\012Press <Enter>'
Which scheme to use, then? Using os.system or os.popen to run a python command works fine, but only if your users have added the python.exe directory to their system search path setting. Running a DOS start command is often a simpler alternative to both running python commands and calling the os.spawnv function, since filename associations are automatically installed along with Python, and os.spawnv requires a full directory path to the Python interpreter program (python.exe). On the other hand, running start commands with os.system calls can fail on Windows for very long command-line strings:
>>> os.system('start child-wait.py ' + 'Z'*425)    # okay- 425 Zs in dos popup
0
>>> os.system('start child-wait.py ' + 'Z'*450)    # fails- msg, not exception
Access is denied.
0
>>> os.popen('python child-wait.py ' + 'Z'*500).read()    # works if PATH set
>>> os.system('python child-wait.py ' + 'Z'*500)          # works if PATH set
>>> pypath = r'C:\program files\python\python.exe'        # this works too
>>> os.spawnv(os.P_NOWAIT, pypath, ('python', 'child-wait.py', 'Z'*500))
As a rule of thumb, use os.spawnv if your commands are (or may be) long. For instance, we'll meet a script in Chapter 4 that launches web browsers to view HTML files; even though a start command applied to an HTML file will automatically start a browser program, this script instead must use os.spawnv to accommodate potentially long directory paths in HTML filenames.
For more information on other Windows-specific program launcher tools, see O'Reilly's Python Programming on Win32. Other schemes are even less standard than those shown here, but are given excellent coverage in that text.
With all these different ways to start programs on different platforms, it can be difficult to remember what tools to use in a given situation. Moreover, some of these tools are called in ways that are complicated enough to easily forget (for me, at least). I write scripts that need to launch Python programs often enough that I eventually wrote a module to try and hide most of the underlying details. While I was at it, I made this module smart enough to automatically pick a launch scheme based on the underlying platform. Laziness is the mother of many a useful module.
Example 3-24 collects many of the techniques we've met in this chapter in a single module. It implements an abstract superclass, LaunchMode, which defines what it means to start a Python program, but doesn't define how. Instead, its subclasses provide a run method that actually starts a Python program according to a given scheme, and (optionally) define an announce method to display a program's name at startup time.
###############################################################
# launch Python programs with reusable launcher scheme classes;
# assumes 'python' is on your system path (but see Launcher.py)
###############################################################

import sys, os, string
pycmd = 'python'                # assume it is on your system path

class LaunchMode:
    def __init__(self, label, command):
        self.what  = label
        self.where = command
    def __call__(self):                          # on call, ex: button press callback
        self.announce(self.what)
        self.run(self.where)                     # subclasses must define run()
    def announce(self, text):                    # subclasses may redefine announce()
        print text                               # methods instead of if/elif logic
    def run(self, cmdline):
        assert 0, 'run must be defined'

class System(LaunchMode):                        # run shell commands
    def run(self, cmdline):                      # caveat: blocks caller
        os.system('%s %s' % (pycmd, cmdline))    # unless '&' added on Linux

class Popen(LaunchMode):                         # caveat: blocks caller
    def run(self, cmdline):                      # since pipe closed too soon
        os.popen(pycmd + ' ' + cmdline)          # 1.5.2 fails in Windows GUI

class Fork(LaunchMode):
    def run(self, cmdline):
        assert hasattr(os, 'fork')               # for linux/unix today
        cmdline = string.split(cmdline)          # convert string to list
        if os.fork() == 0:                       # start new child process
            os.execvp(pycmd, [pycmd] + cmdline)  # run new program in child

class Start(LaunchMode):
    def run(self, cmdline):                      # for windows only
        assert sys.platform[:3] == 'win'         # runs independent of caller
        os.system('start ' + cmdline)            # uses Windows associations

class Spawn(LaunchMode):                         # for windows only
    def run(self, cmdline):                      # run python in new process
        assert sys.platform[:3] == 'win'         # runs independent of caller
        #pypath = r'C:\program files\python\python.exe'
        try:                                                  # get path to python
            pypath = os.environ['PP2E_PYTHON_FILE']           # run by launcher?
        except KeyError:                                      # if so configs env
            from Launcher import which, guessLocation
            pypath = which('python.exe', 0) or guessLocation('python.exe', 1, 0)
        os.spawnv(os.P_DETACH, pypath, ('python', cmdline))   # P_NOWAIT: dos box

class Top_level(LaunchMode):
    def run(self, cmdline):                            # new window, same process
        assert 0, 'Sorry - mode not yet implemented'   # tbd: need GUI class info

if sys.platform[:3] == 'win':
    PortableLauncher = Spawn         # pick best launcher for platform
else:                                # need to tweak this code elsewhere
    PortableLauncher = Fork

class QuietPortableLauncher(PortableLauncher):
    def announce(self, text):
        pass

def selftest():
    myfile  = 'launchmodes.py'
    program = 'Gui/TextEditor/textEditor.pyw ' + myfile    # assume in cwd

    raw_input('default mode...')
    launcher = PortableLauncher('PyEdit', program)
    launcher()                                             # no block

    raw_input('system mode...')
    System('PyEdit', program)()                            # blocks

    raw_input('popen mode...')
    Popen('PyEdit', program)()                             # blocks

    if sys.platform[:3] == 'win':
        raw_input('DOS start mode...')                     # no block
        Start('PyEdit', program)()

if __name__ == '__main__':
    selftest()
Near the end of the file, the module picks a default class based on the sys.platform attribute: PortableLauncher is set to a class that uses spawnv on Windows and one that uses the fork/exec combination elsewhere. If you import this module and always use its PortableLauncher attribute, you can forget many of the platform-specific details enumerated in this chapter.
To run a Python program, simply import the PortableLauncher class, make an instance by passing a label and command line (without a leading "python" word), and then call the instance object as though it were a function. The program is started by a call operation instead of a method, so that the classes in this module can be used to generate callback handlers in Tkinter-based GUIs. As we'll see in the upcoming chapters, button presses in Tkinter invoke a callable object with no arguments; by registering a PortableLauncher instance to handle the press event, we can automatically start a new program from another program's GUI.
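For instance, here is a sketch of the sort of GUI hookup we'll code for real in later chapters; it assumes launchmodes.py is on the module search path, that Tkinter is installed, and that the program path passed in exists on your machine:

# registering a launcher instance as a Tkinter button-press callback;
# assumes launchmodes.py is importable (paths here may vary)
from Tkinter import Button, mainloop
import launchmodes

launch = launchmodes.PortableLauncher('PyEdit', 'Gui/TextEditor/textEditor.pyw')
Button(None, text='Start PyEdit', command=launch).pack()   # launcher called on press
mainloop()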
When run standalone, this module's selftest function is invoked as usual. On both Windows and Linux, all classes tested start a new Python text editor program (the upcoming PyEdit GUI program again) running independently with its own window. Figure 3-2 shows one in action on Windows; all spawned editors open the launchmodes.py source file automatically, because its name is passed to PyEdit as a command-line argument. As coded, both System and Popen block the caller until the editor exits, but PortableLauncher (really, Spawn or Fork) and Start do not:[10]
C:\...\PP2E>python launchmodes.py
default mode...
PyEdit
system mode...
PyEdit
popen mode...
PyEdit
DOS start mode...
PyEdit
As a more practical application, this file is also used by launcher scripts designed to run examples in this book in a portable fashion. The PyDemos and PyGadgets scripts at the top of this book's examples directory tree (view CD-ROM content online at http://examples.oreilly.com/python2) simply import PortableLauncher, and register instances to respond to GUI events. Because of that, these two launcher GUIs run on both Windows and Linux unchanged (Tkinter's portability helps too, of course). The PyGadgets script even customizes PortableLauncher to update a label in a GUI at start time:
class Launcher(launchmodes.PortableLauncher):    # use wrapped launcher class
    def announce(self, text):                    # customize to set GUI label
        Info.config(text=text)
We'll explore these scripts in Part II (but feel free to peek at the end of Chapter 8 now). Because of this role, the Spawn class in this file uses additional tools to search for the Python executable's path -- required by os.spawnv. It calls two functions exported by a file Launcher.py to find a suitable python.exe, whether or not the user has added its directory to their system PATH variable's setting. The idea is to be able to start Python programs even if the Python interpreter's directory isn't listed in the shell search-path variables on the local machine. Because we're going to meet Launcher.py in Chapter 4, though, I'm going to postpone further details for now.
In this and the prior chapters, we've met most of the commonly used system tools in the Python library. Along the way, we've also learned how to use them to do useful things like start programs, process directories, and so on. The next two chapters are something of a continuation of this topic -- they use the tools we've just met to implement scripts that do useful and more realistic system-level work, so read on for the rest of this story.
Still, there are other system-related tools in Python that appear even later in this text. For instance:
Sockets (used to communicate with other programs and networks) are introduced in Chapter 10.
Select calls (used to multiplex among tasks) are also introduced in Chapter 10 as a way to implement servers.
File locking calls in the fcntl module appear in Chapter 14.
Regular expressions (string pattern matching used by many text processing tools) don't appear until Chapter 18.
Moreover, things like forks and threads are used extensively in the Internet scripting chapters: see the server implementations in Chapter 10 and the FTP and email GUIs in Chapter 11. In fact, most of this chapter's tools will pop up constantly in later examples in this book -- about what one would expect of general-purpose, portable libraries.
Last but not necessarily least, I'd like to point out one more time that there are many additional tools in the Python library that don't appear in this book at all -- with some 200 library modules, Python book authors have to pick and choose their topics frugally! As always, be sure to browse the Python library manuals early and often in your Python career.
[1] To watch on Windows, click the Start button, select Programs/Accessories/System Tools/System Monitor, and monitor Processor Usage. The graph rarely climbed above 50% on my laptop machine while writing this (at least until I typed while 1: pass in a Python interactive session console window).
[2] At least in the current Python implementation, calling os.fork in a Python script actually copies the Python interpreter process (if you look at your process list, you'll see two Python entries after a fork). But since the Python interpreter records everything about your running script, it's okay to think of fork as copying your program directly. It really will, if Python scripts are ever compiled to binary machine code.
[3] This call is also available as thread.start_new_thread, for historical reasons. It's possible that one of the two names for the same function may become deprecated in future Python releases, but both appear in this text's examples.
[4] For a more detailed explanation of this phenomenon, see The Global Interpreter Lock and Threads.
[5] They cannot, however, be used to directly synchronize processes. Since processes are more independent, they usually require locking mechanisms that are more long-lived and external to programs. In Chapter 14, we'll meet an fcntl.flock library call that allows scripts to lock and unlock files, and so is ideal as a cross-process locking tool.
[6] But in case this means you: Python's lock and condition variables are distinct objects, not something inherent in all objects, and Python's Thread class doesn't have all the features of Java's. See Python's library manual for further details.
[7] We will clarify the notions of "client" and "server" in Chapter 10. There, we'll communicate with sockets (which are very roughly like bidirectional pipes for networks), but the overall conversation model is similar. Named pipes (fifos), described later, are a better match to the client/server model, because they can be accessed by arbitrary, unrelated processes (no forks are required). But as we'll see, the socket port model is generally used by most Internet scripting protocols.
[8] This really does have real-world applications. For instance, I once added a GUI interface to a command-line debugger for a C-like programming language by connecting two processes with pipes. The GUI ran as a separate process that constructed and sent commands to the existing debugger's input stream pipe and parsed the results that showed up in the debugger's output stream pipe. In effect, the GUI acted like a programmer typing commands at a keyboard. By spawning command-line programs with streams attached by pipes, systems can add new interfaces to legacy programs.
[9] And remember, if you want to start a Python GUI program this way and not see the new DOS standard stream console box at all, simply name the script child-wait.pyw ; the "w" on the end tells the Windows Python port to avoid the DOS box. For DOS jockeys: the start command also allows a few interesting options: /m (run minimized), /max (run maximized), /r (run restored -- the default), and /w (don't return until the other program exits -- this adds caller blocking if you need it). Type start /? for help. And for any Unix developers peeking over the fence: you can also launch independent programs with os.system -- append the & background operator to the command line.
[10] This is fairly subtle. Technically, Popen only blocks its caller because the input pipe to the spawned program is closed too early, when the os.popen call's result is garbage-collected in Popen.run; os.popen normally does not block (in fact, assigning its result here to a global variable postpones blocking, but only until the next Popen object run frees the prior result). On Linux, adding a & to the end of the constructed command line in the System and Popen.run methods makes these objects no longer block their callers when run. Since the fork/exec, spawnv, and system/start schemes seem at least as good in practice, these Popen block states have not been addressed. Note too that the Start scheme does not generate a DOS console pop-up window in the self-test, only because the text editor program file's name ends in a .pyw extension; starting .py program files with os.system normally creates the console pop-up box.