Common Usage
Each thread in Python executes in a separate system-level thread (for example, a POSIX thread or a Windows thread) that is fully managed by the host operating system. Once started, a thread runs independently until its target function returns. You can query a thread object to see whether it is still executing by calling t.is_alive(). You can also join a thread to the current thread with t.join() and wait for it to terminate; the Python interpreter continues executing the code after the join only once the joined thread has finished.
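As a minimal sketch of these operations (the countdown target function is just an illustration), a thread can be started, polled with is_alive(), and joined:

```python
import time
from threading import Thread

def countdown(n):
    # Illustrative target function: counts down with a short delay
    while n > 0:
        print('T-minus', n)
        n -= 1
        time.sleep(0.1)

t = Thread(target=countdown, args=(3,))
t.start()
print(t.is_alive())   # True while countdown is still running
t.join()              # block until the thread terminates
print(t.is_alive())   # False once it has finished
```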
daemon
For tasks that need to run for a long time, or background work that should run all the time, use daemon threads (also known as background threads).

Example:

```python
t = Thread(target=func, args=(1,), daemon=True)
t.start()
```

Daemon threads are not meant to be joined; they are destroyed automatically when the main thread terminates.
Summary:

Daemon threads are not meant to be joined; they are destroyed automatically when the main thread terminates. Beyond the operations shown, there is not much else you can do with a thread: you cannot terminate it, signal it, adjust its scheduling, or perform any other high-level operations. If you need these features, you have to build them yourself. For example, if you need to terminate a thread, that thread must be programmed to poll for an exit flag at chosen points.

If a thread performs a blocking operation such as I/O, termination by polling makes coordination between threads tricky: a thread that blocks indefinitely on an I/O operation may never return to check whether it has been asked to terminate. To handle this correctly, the thread needs to be carefully programmed with timeout loops.
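A sketch of this cooperative-termination pattern, assuming a worker that reads jobs from a queue (the StoppableWorker class and its method names are illustrative, not a standard API): the worker polls a stop flag, and the blocking read uses a timeout so the flag check is reached regularly.

```python
import queue
import threading
import time

class StoppableWorker:
    def __init__(self):
        self._running = True

    def terminate(self):
        self._running = False

    def run(self, in_q):
        while self._running:
            try:
                # Timeout loop: never block forever on the queue
                item = in_q.get(timeout=0.2)
            except queue.Empty:
                continue          # timed out: loop back and re-check the flag
            print('processing', item)

w = StoppableWorker()
q = queue.Queue()
t = threading.Thread(target=w.run, args=(q,))
t.start()
q.put('job-1')
time.sleep(0.3)
w.terminate()                     # ask the worker to exit at its next poll
t.join()
```

The same idea applies to blocking socket I/O: set a timeout with sock.settimeout() so the recv call returns periodically and the flag can be checked.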
Inter-Thread Communication
queue
The safest way for one thread to send data to another is to use a Queue from the queue library. Let's look at a usage example first; here is a simple producer-consumer model:
```python
import random
import time
from queue import Queue
from threading import Thread

# Sentinel object used to signal shutdown
_sentinel = object()

def producer(out_q):
    n = 10
    while n:
        time.sleep(1)
        data = random.randint(0, 10)
        out_q.put(data)
        print("The producer produced the data {0}".format(data))
        n -= 1
    # Put the sentinel on the queue to indicate completion
    out_q.put(_sentinel)

def consumer(in_q):
    while True:
        data = in_q.get()
        if data is _sentinel:
            # Put the sentinel back so other consumers also see it
            in_q.put(_sentinel)
            break
        print("Consumers consumed {0}".format(data))

q = Queue()
t1 = Thread(target=consumer, args=(q,))
t2 = Thread(target=producer, args=(q,))
t1.start()
t2.start()
```
The code above uses a special sentinel value, _sentinel, so that the consumer terminates when it receives this value (and puts it back on the queue so any other consumers also see it).

There are a few things to note about Queue:

Queue objects already contain all the locking they need, so they can be safely shared by as many threads as you wish. However, methods such as q.qsize(), q.full(), and q.empty() are not reliable in a multithreaded setting: the state they report may already have changed by the time you act on it.

Thread communication with a queue is a one-way and somewhat unpredictable process: in general, there is no way to know when the receiving thread actually gets an item and starts working on it. Queues do provide basic completion features, though: q.task_done() and q.join().

If a thread needs to be notified as soon as another thread has finished processing a particular item, you can pair the data being sent with an Event and put the two on the queue together.
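A minimal sketch of this data-plus-Event pairing (the shutdown marker and function names are illustrative): the producer attaches a fresh Event to each item and waits on it; the consumer sets the event once the item has been processed.

```python
from queue import Queue
from threading import Thread, Event

def producer(out_q):
    for data in range(3):
        evt = Event()
        out_q.put((data, evt))
        # Block until the consumer signals that this item is done
        evt.wait()
        print('item', data, 'was processed')
    out_q.put((None, None))          # shutdown marker

def consumer(in_q):
    while True:
        data, evt = in_q.get()
        if evt is None:
            break
        # ... process data ...
        evt.set()                    # notify the producer immediately

q = Queue()
t1 = Thread(target=consumer, args=(q,))
t2 = Thread(target=producer, args=(q,))
t1.start()
t2.start()
t1.join()
t2.join()
```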
About Event in Thread
Threads have a very important property: each one runs independently, and its internal state is unpredictable. If other threads in the program need to decide their next action based on the state of some thread, thread synchronization becomes a tricky problem.
Solution:
Using Event from the threading library
The Event object contains a signal flag that can be set by a thread, allowing other threads to wait for that event to occur.
In the initialized state, the signal flag in the event object is set to false.
If a thread is waiting for an event object and the flag of the event is false, the thread will be blocked until the flag is true.
A thread that sets the flag of an event object to true wakes up all threads waiting for this event object.
Let's understand this through a code example:

```python
import time
from threading import Thread, Event

def countdown(n, started_evt):
    print("countdown starting")
    # set() flips the event's internal flag to True
    started_evt.set()
    while n > 0:
        print("T-minus", n)
        n -= 1
        time.sleep(2)

# A freshly created Event has its flag initialized to False
started_evt = Event()
print("Launching countdown")
t = Thread(target=countdown, args=(10, started_evt))
t.start()
# wait() blocks until the event's flag becomes True
started_evt.wait()
print("countdown is running")
```
As the output shows, "countdown is running" is printed only after the thread has called set().

Event objects are best used for one-time events: create one, let some thread wait on it, and once it has been set to True, discard it. Although an Event can be reset with the clear() method, there is no safe way to guarantee that clearing and reusing it will not race with other threads; missed events, deadlocks, and similar problems can result.

An important feature of the Event object is that setting it to True wakes up all threads waiting on it. If you only want to wake a single thread, a Condition or a Semaphore is usually a better fit.

Similar to Event, the threading library also provides Condition.
About Condition in threads
A quote from the official documentation about Condition:

"A condition variable is always associated with some kind of lock; this can be passed in or one will be created by default. The lock is part of the condition object: you don't have to track it separately."

"The wait() method releases the lock, and then blocks until another thread awakens it by calling notify() or notify_all(). Once awakened, wait() re-acquires the lock and returns."
It is important to note, however:

notify() and notify_all() do not release the lock. This means the woken thread does not return from wait() immediately; it only resumes once the notifying thread has released the lock.
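A minimal sketch of the wait()/notify() handshake, assuming a simple one-item handoff (the items list and function names are illustrative):

```python
import threading

cv = threading.Condition()
items = []

def consumer():
    with cv:
        # wait() releases the lock while blocked, then re-acquires it;
        # the while loop re-checks the predicate in case notify() fired
        # before we started waiting
        while not items:
            cv.wait()
        print('got', items.pop())

def producer():
    with cv:
        items.append(42)
        # notify() wakes a waiter, but the waiter cannot return from
        # wait() until this thread leaves the with block and releases
        # the lock
        cv.notify()

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
```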
We can implement a periodic timer with a Condition object; every time the timer ticks, other threads can detect the tick. The code example is as follows:

```python
import threading
import time

class PeriodicTimer:
    """A timer that other threads can synchronize on"""
    def __init__(self, interval):
        self._interval = interval
        self._flag = 0
        self._cv = threading.Condition()

    def start(self):
        t = threading.Thread(target=self.run)
        t.daemon = True
        t.start()

    def run(self):
        while True:
            time.sleep(self._interval)
            with self._cv:
                # Flip the flag so waiters can detect that a tick occurred
                self._flag ^= 1
                self._cv.notify_all()

    def wait_for_tick(self):
        """Block until the next tick of the timer"""
        with self._cv:
            last_flag = self._flag
            while last_flag == self._flag:
                self._cv.wait()

# Two independent tasks that each run once per timer tick
def countdown(nticks):
    while nticks > 0:
        ptimer.wait_for_tick()
        print('T-minus', nticks)
        nticks -= 1

def countup(last):
    n = 0
    while n < last:
        ptimer.wait_for_tick()
        print('Counting', n)
        n += 1

ptimer = PeriodicTimer(5)
ptimer.start()
threading.Thread(target=countdown, args=(10,)).start()
threading.Thread(target=countup, args=(5,)).start()
```
On the use of locks in threads
To safely use mutable objects in multiple threads, you need to use the Lock object in the threading library
Let's start by looking at a basic use of locks:
```python
import threading

class SharedCounter:
    def __init__(self, initial_value=0):
        self._value = initial_value
        self._value_lock = threading.Lock()

    def incr(self, delta=1):
        # The with block acquires the lock on entry and releases it on exit
        with self._value_lock:
            self._value += delta

    def decr(self, delta=1):
        with self._value_lock:
            self._value -= delta
```
The Lock object is used together with a with statement to guarantee mutually exclusive execution, so that only one thread at a time can execute the code inside the with block. The with statement automatically acquires the lock before the block runs and releases it when the block exits.
Thread scheduling is inherently nondeterministic, so incorrect use of locking in a multithreaded program can lead to random data corruption or other strange errors; these are known as race conditions.

You may still see some "old Python programmers" calling _value_lock.acquire() and _value_lock.release() explicitly, but the with statement is clearly more convenient and less error-prone: with explicit calls, you can never guarantee that you won't forget to release the lock just once, for example when an exception is raised.
To avoid deadlocks, programs that use locking mechanisms should be written so that each thread holds only one lock at a time.
The threading library also provides other synchronization primitives, such as the RLock and Semaphore objects, but their use cases are more specialized.

An RLock (re-entrant lock) can be acquired multiple times by the same thread. It is mainly used to implement locking and synchronization based on the "monitor" pattern: while the lock is held, only one thread at a time may use the complete functions or methods of the class, as shown in the example below:
```python
import threading

class SharedCounter:
    # A single class-level re-entrant lock shared by all instances
    _lock = threading.RLock()

    def __init__(self, initial_value=0):
        self._value = initial_value

    def incr(self, delta=1):
        with SharedCounter._lock:
            self._value += delta

    def decr(self, delta=1):
        with SharedCounter._lock:
            # Safe to re-acquire the lock via incr() because it is re-entrant
            self.incr(-delta)
```
The lock in this example is a class variable, i.e., a class-level lock that is shared by all instances, which ensures that only one thread at a time can call a method of this class. Unlike standard locks, a method that already holds a lock does not need to acquire the lock again when it calls a method that also applies the lock, such as the decr method in the example above.
A characteristic of this approach is that a single lock is used no matter how many instances of the class exist, so it is more memory-efficient when a large number of counters are needed.

The downside: lock contention becomes a problem when the program uses many threads and updates the counters frequently.
A Semaphore object is a synchronization primitive based on a shared counter. If the counter is nonzero, the with statement decrements it by one and the thread is allowed to proceed; when the with block finishes, the counter is incremented by one. If the counter is zero, the thread blocks until another thread finishes and increments the counter. Using a semaphore where a plain lock would do is not recommended, since the extra complexity can hurt program performance. Semaphores are better suited to programs that need signaling or limits between threads, for example limiting the amount of concurrency in a piece of code:
```python
import requests
from threading import Semaphore

# Allow at most five threads to fetch URLs at the same time
_fetch_url_sema = Semaphore(5)

def fetch_url(url):
    with _fetch_url_sema:
        return requests.get(url)
```
Locking mechanism to prevent deadlocks
In multithreaded programs, a large share of deadlock problems is caused by threads trying to acquire multiple locks at once. For example, if a thread acquires a first lock and then blocks while acquiring a second, it can prevent other threads from making progress and freeze the entire program.

One solution: assign a unique id to each lock in the program, and then only allow multiple locks to be acquired in ascending id order.
```python
import threading
from contextlib import contextmanager

# Thread-local storage recording which locks this thread already holds
_local = threading.local()

@contextmanager
def acquire(*locks):
    # Sort the locks by object id to enforce a global acquisition order
    locks = sorted(locks, key=lambda x: id(x))

    acquired = getattr(_local, 'acquired', [])
    if acquired and max(id(lock) for lock in acquired) >= id(locks[0]):
        raise RuntimeError("Lock Order Violation")

    acquired.extend(locks)
    _local.acquired = acquired
    try:
        for lock in locks:
            lock.acquire()
        yield
    finally:
        # Release in reverse order of acquisition
        for lock in reversed(locks):
            lock.release()
        del acquired[-len(locks):]

x_lock = threading.Lock()
y_lock = threading.Lock()

def thread_1():
    while True:
        with acquire(x_lock, y_lock):
            print("Thread-1")

def thread_2():
    while True:
        with acquire(y_lock, x_lock):
            print("Thread-2")

t1 = threading.Thread(target=thread_1)
t1.daemon = True
t1.start()

t2 = threading.Thread(target=thread_2)
t2.daemon = True
t2.start()
```
Because the locks are sorted, they are acquired in a fixed order regardless of the order in which the caller requested them.

threading.local() is used here to hold, for each thread, the list of locks it has already acquired. The same mechanism can be used to store any per-thread state that should be invisible to other threads.
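As a small illustration of that last point (the attribute name value and the worker function are hypothetical), each thread can stash private state on a threading.local object without interfering with the others:

```python
import threading

# Each thread gets its own independent attributes on a threading.local
# instance; `value` is just an illustrative attribute name.
_state = threading.local()

def worker(n, results):
    _state.value = n          # private to this thread
    # Every thread assigns to the same attribute name, yet each
    # reads back only the value it stored itself
    results[n] = _state.value

results = {}
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # each thread saw only its own value
```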
Summary

That is the whole of this article's example-based look at threads in Python concurrent programming; I hope it helps you. Interested readers can refer to other related topics on this site, and if anything here falls short, please feel free to leave a comment pointing it out. Thank you for your support!