i want to be able to recv() data without having to pause and wait for data. so for the first time ever, i turn to threads, which brings me to my questions. the plan is one thread continuously recv()ing data and writing it to the window (in this case the terminal. ill make a gui later), and then another continuously waiting for input, which it then send()s to the server. |
A) Threads won't make the socket itself any faster, but they let the rest of the program keep running during IO operation times. |
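The two-thread layout described above can be sketched with the socket and threading modules. This is a minimal sketch, not a full client: the host, port, and line-based plain-text protocol are assumptions for illustration.

```python
import socket
import threading

def receiver(sock):
    # Continuously recv() data and write it to the terminal.
    while True:
        data = sock.recv(4096)
        if not data:              # empty bytes: the peer closed the connection
            break
        print(data.decode(), end="")

def main(host="127.0.0.1", port=9000):  # hypothetical server address
    sock = socket.create_connection((host, port))
    # Background thread handles incoming data...
    t = threading.Thread(target=receiver, args=(sock,), daemon=True)
    t.start()
    # ...while the main thread blocks on user input and send()s it.
    try:
        for line in iter(input, ""):
            sock.sendall(line.encode() + b"\n")
    finally:
        sock.close()
```

The daemon flag means the receiver thread won't keep the process alive once the input loop ends; swapping print() for a GUI callback later only changes the body of receiver().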
B) CPython threads are real OS threads, but the Global Interpreter Lock (GIL) lets only one thread execute Python bytecode at a time. So you won't gain any processing power using threading in python for CPU-bound work, but it is still good for IO bottlenecks, because a thread blocked on IO releases the GIL. |
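A small sketch of why threads still help for IO-bound work in CPython: time.sleep stands in for any blocking IO call that releases the GIL (the 0.2-second delay and thread count are arbitrary choices for the demo).

```python
import threading
import time

def wait_io(results, i):
    # time.sleep stands in for a blocking IO call; it releases the GIL,
    # so the other threads keep running while this one waits.
    time.sleep(0.2)
    results[i] = True

def run_concurrent(n=4):
    results = [False] * n
    start = time.perf_counter()
    threads = [threading.Thread(target=wait_io, args=(results, i))
               for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # All four waits overlap, so total time is ~0.2 s, not ~0.8 s.
    return results, time.perf_counter() - start
```

Replace time.sleep with a CPU-bound loop and the overlap disappears, because the GIL is held while bytecode runs.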
If you know how multi-threading works in one language you should be able to transfer that knowledge to any language, really. If you are just looking for how to use multi-threading in python, I would check out the threading module, and you might also want to check out the multiprocessing module. The python documentation is a good place to start. |
DTSCode wrote: |
what is that exactly? |
alright, and one last question: what's the difference between the following: -threads -processes -services (i think that's a windows thing though) -applications |
CONCURRENCY WITH MULTIPLE PROCESSES
The first way to make use of concurrency within an application is to divide the application into multiple, separate, single-threaded processes that are run at the same time, much as you can run your web browser and word processor at the same time. These separate processes can then pass messages to each other through all the normal interprocess communication channels (signals, sockets, files, pipes, and so on), as shown in figure 1.3. One downside is that such communication between processes is often either complicated to set up or slow or both, because operating systems typically provide a lot of protection between processes to avoid one process accidentally modifying data belonging to another process. Another downside is that there's an inherent overhead in running multiple processes: it takes time to start a process, the operating system must devote internal resources to managing the process, and so forth.

Of course, it's not all downside: the added protection operating systems typically provide between processes and the higher-level communication mechanisms mean that it can be easier to write safe concurrent code with processes rather than threads. Indeed, environments such as that provided for the Erlang programming language use processes as the fundamental building block of concurrency to great effect.

Using separate processes for concurrency also has an additional advantage: you can run the separate processes on distinct machines connected over a network. Though this increases the communication cost, on a carefully designed system it can be a cost-effective way of increasing the available parallelism and improving performance.

CONCURRENCY WITH MULTIPLE THREADS
The alternative approach to concurrency is to run multiple threads in a single process. Threads are much like lightweight processes: each thread runs independently of the others, and each thread may run a different sequence of instructions.
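The interprocess message passing described in the excerpt (pipes, sockets, and so on) can be sketched in python, matching the language used in the rest of the thread, with multiprocessing.Pipe. The process layout and the message text are illustrative assumptions.

```python
from multiprocessing import Process, Pipe

def worker(conn):
    # The child process sends one message back through its end of the pipe.
    conn.send("hello from the child process")
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    msg = parent_end.recv()   # blocks until the child has sent
    p.join()
    print(msg)
```

Note the protection the excerpt mentions: the child cannot touch the parent's variables directly, so everything it wants the parent to see must travel through the pipe.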
But all threads in a process share the same address space, and most of the data can be accessed directly from all threads: global variables remain global, and pointers or references to objects or data can be passed around among threads. Although it's often possible to share memory among processes, this is complicated to set up and often hard to manage, because memory addresses of the same data aren't necessarily the same in different processes. Figure 1.4 shows two threads within a process communicating through shared memory.

The shared address space and lack of protection of data between threads makes the overhead associated with using multiple threads much smaller than that from using multiple processes, because the operating system has less bookkeeping to do. But the flexibility of shared memory also comes with a price: if data is accessed by multiple threads, the application programmer must ensure that the view of data seen by each thread is consistent whenever it is accessed. The issues surrounding sharing data between threads and the tools to use and guidelines to follow to avoid problems are covered throughout this book, notably in chapters 3, 4, 5, and 8. The problems are not insurmountable, provided suitable care is taken when writing the code, but they do mean that a great deal of thought must go into the communication between threads.

The low overhead associated with launching and communicating between multiple threads within a process compared to launching and communicating between multiple single-threaded processes means that this is the favored approach to concurrency in mainstream languages including C++, despite the potential problems arising from the shared memory. In addition, the C++ Standard doesn't provide any intrinsic support for communication between processes, so applications that use multiple processes will have to rely on platform-specific APIs to do so.
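The consistency requirement the excerpt describes is what locks are for. A minimal python sketch (matching the language used elsewhere in the thread; the counter and iteration counts are arbitrary): a threading.Lock serializes updates to shared data so none are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:            # only one thread may update counter at a time
            counter += 1

threads = [threading.Thread(target=add_many, args=(50_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: with the lock, no updates are lost
```

Without the lock, `counter += 1` is a read-modify-write that two threads can interleave, so the final total can come up short; the lock makes each update atomic with respect to the others.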
This book therefore focuses exclusively on using multithreading for concurrency, and future references to concurrency assume that this is achieved by using multiple threads. Having clarified what we mean by concurrency, let's now look at why you would use concurrency in your applications. |