Overview
Threads are not protected from each other! Since they share the same address space (that of the process), data from one thread can overwrite the data of another thread.
These are also known as lightweight processes.
Threads differ from Processes in that they share the user process's address space. They have their own stack pointer and stack, program counter, and registers, but can access the memory of the parent process and even that of other threads.
Benefits
- Responsiveness - Part of a program can keep running (e.g. responding to the user) while another thread is blocked or busy
- Resource Sharing - Using shared memory to access data across threads
- Economy - Because threads share memory, creating and switching them is cheaper in both time and memory than doing the same with processes
- Scalability - Multicore CPU utilization
Shared Data
- Process instructions (text region of memory)
- Open files (descriptors)
- Signals and signal handlers
- Current working directory
- User and group id
Thread Specific Data
- Thread id
- Registers
- Stack pointer/stack
- Signal mask - (a thread can handle signals for the entire process, hiding them from the other threads)
- Scheduling properties
- Return value
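A minimal sketch of this split, assuming POSIX threads: a global variable is visible to every thread, while a local variable lives on each thread's private stack. The unsynchronized increment of the global is deliberate and also shows why one thread can clobber shared data used by another.
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;   // global: shared by every thread in the process

void *runner(void *param) {
    (void)param;
    int local = 0;        // lives on this thread's own stack
    local++;
    shared_counter++;     // unsynchronized on purpose: this is a data race
    printf("local = %d, shared = %d\n", local, shared_counter);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, runner, NULL);
    pthread_create(&t2, NULL, runner, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}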
Thread Types
User Level Threads
User level threads operate on the principle that the underlying OS does not support threading.
This means that user-level code (the program or a thread library) manages the stack pointer, registers, etc. for each thread. It is very easy to manage these resources incorrectly and catastrophically crash the program.
Advantages
- Efficient space usage and higher speed
- Low cost of switching
- Low cost of scheduling
- OS independent
Disadvantages
- One thread blocked on I/O blocks all threads [1]
- Difficult to take advantage of SMP (symmetrical multiprocessing)
Kernel Level Threads
Kernel level threads need support from the underlying OS.
Advantages
- Removes disadvantages of #User Level Threads
- Threads can be treated as individually schedulable
Disadvantages
- Greater overhead, since thread creation, scheduling, and switching involve the kernel (system calls and kernel data structures)
Relationships Between Thread Types
Since many OSes support threads nowadays, there are multiple models for mapping #User Level Threads to #Kernel Level Threads.
Many-to-One
This maps many user level threads to one kernel level thread.
If one user thread blocks, every thread in the process is blocked, since the OS only sees one schedulable entity. The process can also only run on one CPU at a time.
One-to-One
This maps each user level thread to its own kernel level thread.
It allows a blocked thread not to interfere with the other threads, as each thread is treated as its own schedulable entity by the OS.
Linux only supports this model with pthreads.
Many-to-Many
Also notated as M:M.
This maps many user level threads to a smaller or equal number of kernel level threads.
It allows for less OS overhead when trying to create threads, while still providing the benefits of the #One-to-One model.
The mapping must be done by a user level library, which adds some overhead of its own.
Two-Level
This is mostly the same as #Many-to-Many with the added functionality of control over how user threads are mapped to kernel threads.
Given a program that needs many worker threads to process data and one thread to generate data, you can map the generator thread to its own kernel level thread while mapping the worker threads onto the other kernel level threads. This ensures that the generator thread continues uninterrupted.
Thread Cancellation
Asynchronous Cancellation
With asynchronous cancellation, one thread immediately terminates the target thread. This comes with a few caveats:
- Allocated resources may not be freed easily
- Status of shared data may remain ill-defined
Deferred Cancellation
With deferred cancellation, the target thread periodically checks whether it should terminate and then terminates itself, so orderly cancellation can be achieved. However, failure to check the cancellation status may cause issues.
Signal Handling
Refer to Processes#Signals
Signal handling can be done by threads. There are two basic types of signals:
- Synchronous: Generated by some event in the process
- Asynchronous: Generated by some event outside the process
In Unix-like systems you can apply a signal mask to threads to have them handle signals sent to the process they belong to. There are a few different ways to handle these signals.
- Deliver the signal to the thread to which the signal applies
  This is somewhat difficult, as you may have to find out which thread that is.
- Deliver the signal to every thread in the process
  This can be useful, but if only one action needs to occur there are better methods.
- Deliver the signal to certain threads in the process
- Assign a specific thread to receive all signals for the process
This is usually the best and simplest implementation for handling signals in a multithreaded process.
Implicit Threading
Writing multi-threaded programs is difficult to do correctly, and doing it incorrectly can cause latency and performance issues.
The solution is to use compiler directives and runtime libraries to help manage threads (semi) automatically.
Thread Pools
This is a runtime library that manages the use of multiple user threads and how they are mapped to kernel threads.
Thread pools create a number of kernel threads up front, typically up to the number of logical processors in the system. When the user creates a new thread (i.e. submits work), it is added to the pool, where it waits to be assigned to a kernel thread.
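A rough sketch of the idea, assuming POSIX threads; the worker count, task count, and do_task() function are made up for illustration. A fixed set of worker threads is created up front and repeatedly pulls pending work until none is left.
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define NUM_TASKS   16

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;      // shared index of the next task to hand out

static void do_task(int id) {
    printf("processing task %d\n", id);   // stand-in for real work
}

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int task = (next_task < NUM_TASKS) ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (task < 0) break;   // no work left, this worker exits
        do_task(task);
    }
    return NULL;
}

int main(void) {
    pthread_t workers[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}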
OpenMP
This is a set of compiler directives and a runtime library included with GCC (and other compilers). It supports parallel programming in shared-memory environments.
OpenMP manages the threads and shared memory for the user: the programmer marks a parallel region with a directive, OpenMP creates a team of threads to execute it, and data used in the region can be declared shared or private to control how the threads access it.
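A small example of the directive style (compile with gcc -fopenmp; the loop is just a stand-in workload): the pragma asks OpenMP to split the loop across a team of threads, and the reduction clause tells it how to combine each thread's partial sum safely.
#include <omp.h>
#include <stdio.h>

int main(void) {
    long sum = 0;
    // OpenMP creates a team of threads for this region and divides the
    // iterations among them; "reduction" gives each thread a private
    // copy of sum and combines the copies at the end.
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 1000; i++)
        sum += i;
    printf("sum = %ld\n", sum);
    return 0;
}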
Programming
Threads can be created using the pthreads API. This is a POSIX standard library.
Pthreads is a user-level API; it does utilize OS Structures#System Calls, but the functions programmers end up using are not system calls themselves.
Example:
#include <pthread.h>
#include <stdio.h>

void *runner(void *param); // pointer to the function for the thread to run

int main(int argc, char *argv[]) {
    pthread_t tid;        // thread identifier
    pthread_attr_t attr;  // thread attributes
    // Default attributes for the thread
    pthread_attr_init(&attr);
    // Start running the thread, passing argv[1] as its single argument
    pthread_create(&tid, &attr, runner, argv[1]);
    // Wait for the thread to finish
    pthread_join(tid, NULL);
    return 0;
}

// The thread simply prints the argument it was given
void *runner(void *param) {
    printf("param: %s\n", (char *)param);
    return NULL;
}
When initializing a thread using pthreads, you can only provide one argument to the function the thread runs. This can be mitigated by passing a struct as an argument to the thread.
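For example, a sketch of the struct trick (the worker_args struct and its fields are made up for illustration): bundle everything the thread needs into one struct and pass its address as the single void * argument.
#include <pthread.h>
#include <stdio.h>

// Hypothetical struct bundling the arguments the thread needs
struct worker_args {
    int    id;
    double scale;
};

void *runner(void *param) {
    struct worker_args *args = param;   // unpack the single void * argument
    printf("thread %d, scale %.2f\n", args->id, args->scale);
    return NULL;
}

int main(void) {
    struct worker_args args = { .id = 1, .scale = 2.5 };
    pthread_t tid;
    pthread_create(&tid, NULL, runner, &args); // pass the struct by address
    pthread_join(tid, NULL);                   // args must outlive the thread
    return 0;
}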
Linux refers to threads as tasks. To create a task we can create a kernel level thread (see #Kernel Level Threads) with the clone() syscall.
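A sketch of what that can look like with the glibc clone() wrapper; the specific flag set and stack size here are illustrative, not the exact combination pthreads uses. The CLONE_* flags make the new task share memory, filesystem info, open files, and signal handlers with its parent, which is what makes it behave like a thread.
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

#define STACK_SIZE (1024 * 1024)

// Entry point of the new task; it shares the parent's address space
int child_fn(void *arg) {
    printf("cloned task says: %s\n", (char *)arg);
    return 0;
}

int main(void) {
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL) return 1;
    // Sharing flags make the new task thread-like; SIGCHLD lets the
    // parent wait for it with waitpid()
    int flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | SIGCHLD;
    int pid = clone(child_fn, stack + STACK_SIZE, flags, "hello");
    if (pid == -1) return 1;
    waitpid(pid, NULL, 0);   // wait for the cloned task to finish
    free(stack);
    return 0;
}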
System Call Semantics
fork()
Some systems provide two different versions of fork(), but Linux duplicates only the thread that called fork(). That is, the resulting child process will have only one thread.
exec()
In Linux, calling exec() from any thread will wipe the entire process image, therefore overwriting all threads.
It is usually not required to call fork() or exec() in a multithreaded program. If done, it is usually because you call exec() right after a fork().
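A sketch of that pattern (the second thread and the echo command are just placeholders): the forked child contains a copy of only the calling thread, and it calls exec() immediately, which replaces its whole image anyway.
#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

void *runner(void *arg) {
    (void)arg;
    sleep(1);                  // stand-in for ongoing work in a second thread
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, runner, NULL);

    pid_t pid = fork();        // the child contains only this (calling) thread
    if (pid == 0) {
        // exec() replaces the whole process image, so the usual pattern
        // is to call it immediately after fork()
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        _exit(1);              // only reached if exec fails
    }
    waitpid(pid, NULL, 0);
    pthread_join(tid, NULL);
    return 0;
}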
Unix Thread Cancellation
Also refer to the man pages for pthread_cancel
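A minimal sketch of deferred cancellation with pthreads (the worker loop is a stand-in for real work): the cancellation request only takes effect when the worker reaches a cancellation point such as pthread_testcancel() or sleep().
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

// Deferred cancellation: the request is acted on only at a
// cancellation point such as pthread_testcancel() or sleep()
void *worker(void *arg) {
    (void)arg;
    for (;;) {
        /* ... do one unit of work ... */
        pthread_testcancel();   // explicit cancellation point
        sleep(1);               // sleep() is also a cancellation point
    }
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(2);                   // let the worker run for a while
    pthread_cancel(tid);        // request deferred cancellation
    pthread_join(tid, NULL);    // completes once the worker cancels
    printf("worker cancelled\n");
    return 0;
}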
Reference
- Kulkarni, Prasad. Various Lectures, The University of Kansas, 2024.
Related
This is because the underlying OS does not know whether the process has threads running inside it. On a time-slicing OS, execution of all threads is paused once the process's time slice runs out. ↩︎